
Re: [AUDITORY] Software for internet-based auditory testing



Moving listening tests from the lab to the micro-task labor market of Amazon Mechanical Turk speeds data collection and reduces investigator effort. However, it also reduces the amount of control investigators have over the testing environment, adding new variability and potential biases to the data. In this work, we compare multiple-stimulus listening tests performed in a lab environment to the same tests performed in a web environment on a population drawn from Mechanical Turk. If you want to read more about this work, here is our publication on that topic.


M. Cartwright, B. Pardo, G. Mysore and M. Hoffman, "Fast and Easy Crowdsourced Perceptual Audio Evaluation," Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, March 20-25, 2016.


http://music.cs.northwestern.edu/publications/cartwright_etal_icassp2016.pdf


Best wishes,


Bryan Pardo



From: AUDITORY - Research in Auditory Perception <AUDITORY@xxxxxxxxxxxxxxx> on behalf of Samuel Mehr <sam@xxxxxxxxxxxxxxx>
Sent: Monday, October 2, 2017 11:59 PM
To: AUDITORY@xxxxxxxxxxxxxxx
Subject: Re: Software for internet-based auditory testing
 
Dear Dick,

Lots of folks do successful audio-based experiments on Turk and I generally find it to be a good platform for the sort of work you're describing (which is not really what I do, but experimentally is similar enough for the purposes of your question). I've done a few simple listening experiments of the form "listen to this thing, answer some questions about it", and the results directly replicate parallel in-person experiments in my lab, even when Turkers geolocate to lots of far-flung countries. I require subjects to wear headphones and validate that requirement with this great task from Josh McDermott's lab:

Woods, K. J. P., Siegel, M. H., Traer, J., & McDermott, J. H. (2017). Headphone screening to facilitate web-based auditory experiments. Attention, Perception, & Psychophysics, 1–9. https://doi.org/10.3758/s13414-017-1361-2
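(A rough sketch of the idea behind that screener, not the actual Woods et al. stimuli or parameters: an antiphase tone nearly cancels when the two channels sum acoustically over loudspeakers, but stays audible over headphones, so listeners without headphones fail a "pick the quietest tone" task.)

```python
import math

def antiphase_tone(freq_hz=200.0, dur_s=0.1, sr=8000):
    """Left/right channels of a pure tone presented 180 degrees out of phase."""
    n = int(dur_s * sr)
    left = [math.sin(2 * math.pi * freq_hz * t / sr) for t in range(n)]
    right = [-s for s in left]  # invert polarity of the right channel
    return left, right

left, right = antiphase_tone()

# Over loudspeakers the two channels sum in the air, so the tone cancels;
# over headphones each ear hears its own channel at full level.
mono_sum = [l + r for l, r in zip(left, right)]
```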

In a bunch of piloting, passing the headphone screener correlates positively with a bunch of other checks on Turker compliance, things like "What color is the sky? Please answer incorrectly, on purpose" and "Tell us honestly how carefully you completed this HIT". Basically, if you have a few metrics in an experiment that capture variance on some dimension related to participant quality, you should be able to tell easily which Turkers are actually doing good work and which aren't. Depending on how your ethics approval is set up, you can either pay everyone and filter out bad subjects afterwards, or require them to pass some level of quality control to receive payment.
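(For illustration, combining a few such checks into a pass/fail filter might look like the sketch below; the field names and thresholds are made up, not from any particular study.)

```python
def passes_quality_checks(participant):
    """Return True if a participant clears all compliance checks."""
    checks = [
        participant["passed_headphone_screener"],       # e.g. Woods et al. (2017) task
        participant["sky_color_answer"] != "blue",      # asked to answer incorrectly on purpose
        participant["self_reported_care"] >= 4,         # hypothetical 1-5 Likert scale
    ]
    return all(checks)

participants = [
    {"passed_headphone_screener": True,  "sky_color_answer": "green", "self_reported_care": 5},
    {"passed_headphone_screener": False, "sky_color_answer": "blue",  "self_reported_care": 2},
]

# Keep only participants who pass every check.
kept = [p for p in participants if passes_quality_checks(p)]
```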

best
Sam


-- 
Samuel Mehr
Department of Psychology
Harvard University



On Tue, Oct 3, 2017 at 8:57 AM, Richard F. Lyon <dicklyon@xxxxxxx> wrote:
Five years on, are there any updates on experience using Mechanical Turk and such for sound perception experiments?

I've never conducted psychoacoustic experiments myself (other than informal ones on myself), but now I think I have some modeling ideas that need to be tuned and tested with corresponding experimental data.  Is MTurk the way to go?  If it is, are IRB approvals still needed? I don't even know if that applies to me; probably my company has corresponding approval requirements.

I'm interested in things like SNR thresholds for binaural detection and localization of different types of signals and noises -- 2AFC tests whose relative results across conditions would hopefully not be strongly dependent on level or headphone quality.  Are there good MTurk task structures that motivate people to do a good job on these, e.g. by making their space quieter, paying attention, getting more pay as the task gets harder, or just getting to do more similar tasks, etc.?  Can the pay depend on performance?  Or just cut them off when the SNR has been lowered to threshold, so that people with lower thresholds stay on and get paid longer?
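(The cut-off-at-threshold idea above is essentially an adaptive staircase. A minimal sketch of a standard 2-down-1-up rule for a 2AFC task, which converges on roughly 70.7% correct, follows; the step size and starting SNR are arbitrary placeholders.)

```python
def staircase_update(snr_db, correct, consecutive_correct, step_db=2.0):
    """Return (new_snr, new_consecutive_correct) after one 2AFC trial."""
    if correct:
        consecutive_correct += 1
        if consecutive_correct == 2:        # two correct in a row -> make it harder
            return snr_db - step_db, 0
        return snr_db, consecutive_correct
    return snr_db + step_db, 0              # any error -> make it easier

# Simulate an idealized listener who is always correct above -6 dB SNR:
snr, streak = 10.0, 0
for _ in range(20):
    correct = snr > -6.0
    snr, streak = staircase_update(snr, correct, streak)
# The track descends and then oscillates around the -6 dB threshold.
```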

If anyone in academia has a good setup for human experiments and an interest in collaborating on binaural model improvements, I'd love to discuss that, too, either privately or on the list.

Dick


On Tue, Nov 13, 2012 at 3:36 AM, Kevin Tang <kevin.tang.10@xxxxxxxxx> wrote:
Dear all,

I would recommend "Experigen", a framework for creating phonology experiments. The platform can be used for all kinds of experiments, but at the moment it doesn't support reaction-time measurement.

https://github.com/tlozoot/experigen

All the best,
Kevin

------------------------------
Kevin Tang
Doctoral candidate
UCL (University College London)
Division of Psychology and Language Sciences
Department of Linguistics
------------------------------



On 12 November 2012 22:35, Jieun Oh <jieun5@xxxxxxxxxxxxxxxxxx> wrote:
Hello,

I recently presented (at this year's ICMPC conference) a paper on using online crowdsourcing (e.g. Amazon Mechanical Turk) for conducting music perception experiments. I've attached the paper for your information, which includes, in Section IV, examples of how the MIR community has been using this approach in recent years. 

Online crowdsourcing as a research methodology definitely has pros and cons, but I feel that there is a growing potential for it, especially if the study requires mass participation and can be designed as a short/simple task.

Best, Jieun



On Sat, Nov 10, 2012 at 6:32 AM, Sam Mathias <smathias@xxxxxxxxxx> wrote:
During my PhD I created this online test using adobe flash: www.auditorytest.zxq.net

I used it to search for potential participants who were poor at pitch discrimination. [Mathias, Micheyl, & Bailey (2010) JASA, 127, 3026-3037]. I found Flash to be highly versatile, with good functionality for auditory playback, and not too difficult to learn. However, I think support for this software is really dwindling these days (being replaced by HTML5?), so it may be obsolete soon. It was also horrendously expensive: I coded it with the 30-day trial version so I didn't have to pay for it.

Sam


On 9 November 2012 14:34, Robert Zatorre <robert.zatorre@xxxxxxxxx> wrote:
>
> Dear list
>
> Several times the list has received requests for participation in web-based experiments. We would like to implement something along these lines, and I am wondering if any of you who have experience with it have recommendations (for or against) software to use. We are looking for something reasonably inexpensive and simple to program that would allow us to present audio stimuli and collect behavioral responses, ideally with response times although that may not be so simple I realize.
>
> Any advice would be welcome. Thank you in advance
>
> Robert Zatorre
>
>
>
> -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>
> Robert J. Zatorre, Ph.D.
> Montreal Neurological Institute
> 3801 University St.
> Montreal, QC Canada H3A 2B4
> phone: 1-514-398-8903
> fax: 1-514-398-1338
> e-mail: robert.zatorre@xxxxxxxxx
> web site: www.zlab.mcgill.ca




--
Dr. Samuel R. Mathias
Neural Mechanisms of Human Communication
Max Planck Institute for Human Cognitive and Brain Sciences
Stephanstraße 1
04103 Leipzig, Germany
Tel: +49 341 9940 2479