Re: Online listening tests and psychoacoustics experiments with large N (Pierre Divenyi)


Subject: Re: Online listening tests and psychoacoustics experiments with large N
From:    Pierre Divenyi  <pdivenyi@xxxxxxxx>
Date:    Mon, 2 Jul 2007 16:33:30 -0700
List-Archive: <http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

I see a looming danger, like a tiger hiding in the shade, ready to jump on any US investigator actually running online listening experiments. The tiger is called the IRB. I mean, how in the world (pray tell) would the investigator ensure protection of the unsuspecting web subject who takes part in even 15 minutes of listening? Those not living under the tutelage of Institutional Review Boards may have no idea what it takes to get approval even for our obviously unthreatening listening experiments, or how serious the consequences of even the slightest infringement of their often arbitrary rules can be. I am sure many of our colleagues have a few personally experienced horror stories to tell. For those boards, large-N studies simply spell "NO".

Pierre

At 12:24 PM 7/2/2007, Henkjan Honing wrote:

>Some comments on earlier postings on the topic of online listening
>experiments:
>
>Online listening experiments are indeed not novel. However, it has
>only recently become possible to be reasonably sure of good-quality
>stimulus presentation at the user's end (i.e., high-quality sound,
>reliable timing, and reasonable download times, e.g., using formats
>like MPEG-4). So nothing really new, except that it is only recently
>that this type of online listening experiment can run on most
>computers (thanks to several internet standards).
>
>Nevertheless, the real challenge in this type of listening experiment
>is how to control for attention. Interestingly, this is no different
>from experiments performed in the lab. In an online experiment, too,
>one needs to make sure people are paying attention and actually doing
>what you instructed them to do (such as complying with a request to
>use headphones).
>Our solution (for the moment) is, next to the standard tricks, to
>make these online listening experiments as engaging as possible! For
>instance, by using screen-casts [1] instead of having listeners read
>instructions from the screen (or from paper in the lab), designing a
>doable, short experiment (15 min max.) that is challenging and/or fun
>to do, etc. All this so that we can assume serious and genuinely
>interested listeners. In addition, as suggested in an earlier
>posting, listeners might actually behave more naturally and/or feel
>less 'stressed' when doing some of these experiments at home.
>I am confident that online listening studies will become an
>increasingly reliable source for empirical research. Besides becoming
>a serious alternative to some types of lab-based experiments, they
>might even avoid some of the traps of lab-based studies, such as
>results biased by the typical psychology-student subject pool (see
>[2] for an argument in the visual domain, but also [3], which argues
>strongly against web-based experiments; actually, [4] was refused on
>that basis by that journal :-).
>
>In short, there are still plenty of weaknesses in using this type of
>online data collection. However, I believe it generally does not
>create more problems than an 'ordinary' lab situation does.
>
>Henkjan Honing
>
>
>[1] www.musiccognition.nl/e4/
>[2] McGraw, K. O., et al. (2000). The integrity of web-delivered
>experiments: Can you trust the data? Psychological Science, 11(6),
>502-506.
>[3] Mehler, J. (1999). Editorial. Cognition, 71, 187-189.
>[4] http://www.hum.uva.nl/mmm/abstracts/honing-2005d.html
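A side note on Honing's stimulus-delivery point: in a modern browser, the Web Audio API can decode a compressed stimulus (e.g., AAC in an MPEG-4 container) and schedule it on the audio hardware clock, which covers both the sound-quality and the timing concerns. Below is a minimal TypeScript sketch, not code from the experiments discussed; the file URL is a placeholder, and it assumes the page has already received a user click (required by browser autoplay policies before audio may start).

    // Fetch a stimulus, decode it, and schedule playback on the audio
    // clock rather than with setTimeout. The URL is a placeholder.
    const ctx = new AudioContext();

    async function playStimulusAt(url: string, delaySeconds: number): Promise<void> {
      const response = await fetch(url);
      const encoded = await response.arrayBuffer();
      // decodeAudioData handles whatever codecs the browser supports
      // (WAV, AAC/MP4, ...), decoding to raw PCM in memory.
      const buffer = await ctx.decodeAudioData(encoded);
      const source = ctx.createBufferSource();
      source.buffer = buffer;
      source.connect(ctx.destination);
      // start() takes a time on the AudioContext clock, so onsets are
      // scheduled with sample accuracy, independent of page activity.
      source.start(ctx.currentTime + delaySeconds);
    }

    // Example: play a (hypothetical) stimulus exactly one second from now.
    playStimulusAt("stimuli/trial01.m4a", 1.0);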
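Likewise, for the attention and headphone point: one of the "standard tricks" (not necessarily the one used in [1]) is a dichotic screening trial, playing a tone in one ear only and asking which side it was on, a task listeners on loudspeakers cannot do reliably. A sketch under the same assumptions, reusing the AudioContext from above; window.prompt stands in for a proper response screen:

    // Play a 1 kHz tone for half a second in one ear only. A listener
    // ignoring the headphone instruction (e.g., on laptop speakers)
    // will be near chance on the left/right judgment.
    function playToneInEar(ear: "left" | "right"): void {
      const osc = ctx.createOscillator();
      osc.frequency.value = 1000;
      const panner = ctx.createStereoPanner();
      panner.pan.value = ear === "left" ? -1 : 1;
      osc.connect(panner).connect(ctx.destination);
      osc.start();
      osc.stop(ctx.currentTime + 0.5);
    }

    function passesHeadphoneCheck(trials = 4): boolean {
      let correct = 0;
      for (let i = 0; i < trials; i++) {
        const ear: "left" | "right" = Math.random() < 0.5 ? "left" : "right";
        playToneInEar(ear); // scheduled audio keeps playing while prompt() blocks
        const answer = (window.prompt("Which ear was the tone in? (left/right)") ?? "")
          .trim().toLowerCase();
        if (answer === ear) correct++;
      }
      // Require a perfect score before admitting the listener to the
      // experiment proper; the same gate doubles as an attention check.
      return correct === trials;
    }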

