Some comments on earlier postings on the topic of online listening
experiments:
Online listening experiments are indeed not novel. However, it has
only recently become possible to be reasonably sure about good-quality
stimulus presentation at the user's end (i.e., high-quality sound,
reliable timing, and reasonable download times, e.g., using formats
like MPEG-4). So nothing really new, except that it is only recently
that this type of online listening experiment can run on most
computers (thanks to several internet standards).
Nevertheless, the real challenge in this type of listening experiment
is how to control for attention. Interestingly, this is no different
from experiments performed in the lab. In an online experiment as
well, one needs to make sure people are paying attention and actually
doing what you instructed them to do (such as complying with a
request to use headphones).
Our solution (for the moment) is, next to the standard tricks, to
make these online listening experiments as engaging as possible! For
instance, by using screencasts [1] instead of having participants
read instructions from the screen (or from paper in the lab), by
designing a short, doable experiment (15 min max.) that is
challenging and/or fun to do, etc. All of this so that we can assume
serious and genuinely interested listeners.
In addition, and as suggested in an earlier posting, listeners might
actually behave more naturally and/or feel less 'stressed' when doing
some of these experiments at home.
I am confident that online listening studies will become an
increasingly reliable source for empirical research. Besides becoming
a serious alternative to some types of lab-based experiments, they
might even avoid some of the traps of lab-based studies, such as the
bias introduced by the typical psychology-student participant pool
(see [2] for an argument in the visual domain, but also [3], which
argues strongly against web-based experiments; actually, [4] was
rejected on that basis by that very journal :-).
In short, there are still plenty of weaknesses in this type of online
data collection. However, I believe that they generally do not create
more problems than arise in an 'ordinary' lab situation.
Henkjan Honing
[1] www.musiccognition.nl/e4/
[2] McGraw, K. O., et al. (2000). The Integrity of Web-Delivered
Experiments: Can You Trust the Data? Psychological Science, 11(6),
502-506.
[3] Mehler, J. (1999). Editorial. Cognition, 71, 187-189.
[4] http://www.hum.uva.nl/mmm/abstracts/honing-2005d.html