Re: AUDITORY Digest - 2 Dec 2007 to 3 Dec 2007 (#2007-275) (Jim Abbott )


Subject: Re: AUDITORY Digest - 2 Dec 2007 to 3 Dec 2007 (#2007-275)
From:    Jim Abbott  <jfabbott@xxxxxxxx>
Date:    Tue, 4 Dec 2007 06:53:16 -0500
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

The measured latencies seem to be proportional to four times the buffer
length divided by the sampling rate, where the constant of
proportionality depends somewhat on the type of interface:

    Interface type    L / (4N/f)
    PCI               1.8 +- .2,  1.1,       or 1.0
    FireWire          3.5 +- .1,  4.5 +- .6, or 3.8 +- .8
    USB               2.5 +- .1

Thanks for the interesting data; I've been wondering about this for a
while.

Best,
Jim

+++++++++++++++++++++++++++++++++++++++++++++++
James F. Abbott, Ph.D.
Cooper Union Audio Lab
51 Astor Place
New York, NY 10003
Phone: +1-212-353-4002
FAX: +1-212-353-4341
jfabbott@xxxxxxxx

On Dec 4, 2007, at 12:26 AM, AUDITORY automatic digest system wrote:

> There are 3 messages totalling 295 lines in this issue.
>
> Topics of the day:
>
>   1. Experiments with large N (2)
>   2. low-latency audio I/O for Windows: a report
>
> ----------------------------------------------------------------------
>
> Date: Mon, 3 Dec 2007 13:42:14 -0500
> From: Robert Zatorre <robert.zatorre@xxxxxxxx>
> Subject: Re: Experiments with large N
>
> Huge samples are very nice if you can get 'em, though such is not
> always the case, alas.
>
> So one thing that I would like to see from people who do have
> gigantic N is to do some analyses to determine at what point the data
> reach some asymptote. In other words, if you've collected 1,000,000
> people, at what earlier point in your sampling could you have stopped
> and come to the identical conclusions with valid statistics?
>
> Obviously, the answer to this question will be different for different
> types of studies with different types of variance and so forth. But
> having the large N allows one to perform this calculation, so that
> next time one does a similar study, one could reasonably stop after
> reaching a smaller and more manageable sample size.
>
> Has anybody already done this for those large samples that were
> recently discussed? It would be really helpful for those who cannot
> always collect such samples.
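The subsample-until-asymptote analysis Robert describes above can be
sketched in a few lines of Python. Everything here is illustrative: the
ratings are synthetic, and the 0.02 tolerance and 500-subject step size
are arbitrary choices, not recommendations.

```python
# Sketch of Robert's proposed analysis: given a large sample, look at
# ever-larger prefixes of it (in collection order) and report the first
# size at which the running estimate lands within tolerance of the
# full-sample estimate. Data are synthetic, standing in for ratings.
import random

random.seed(1)
population = [random.gauss(3.5, 1.0) for _ in range(100_000)]  # fake ratings

def mean(xs):
    return sum(xs) / len(xs)

full_estimate = mean(population)

def smallest_stable_n(data, final, tol=0.02, step=500):
    # First n (in steps of `step`) whose prefix mean is within `tol`
    # of the full-sample value; guaranteed to return by n = len(data).
    for n in range(step, len(data) + 1, step):
        if abs(mean(data[:n]) - final) < tol:
            return n
    return len(data)

n_needed = smallest_stable_n(population, full_estimate)
print(f"full-sample mean = {full_estimate:.3f}, stable by n = {n_needed}")
```

A real version of this would of course track the study's actual test
statistic (and its significance), not just a running mean.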
>
> Best
>
> Robert
> -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>
> Robert J. Zatorre, Ph.D.
> Montreal Neurological Institute
> 3801 University St.
> Montreal, QC Canada H3A 2B4
> phone: 1-514-398-8903
> fax: 1-514-398-1338
> e-mail: robert.zatorre@xxxxxxxx
> web site: www.zlab.mcgill.ca
>
>
> Malcolm Slaney wrote:
>> This music paper has 380k subjects :-)
>> http://cobweb.ecn.purdue.edu/~malcolm/yahoo/Slaney2007(SimilarityByUserRatingISMIR).pdf
>>
>> While Ben Marlin collected another 30k subjects for this
>> music-recommendation study.
>> http://cobweb.ecn.purdue.edu/~malcolm/yahoo/Marlin2007(UserBiasUncertainty).pdf
>>
>> The underlying data for both papers is available for academic
>> researchers (fully anonymized, both by song and by user). Send me
>> email if you want more information.
>>
>> - Malcolm
>>
>> On Dec 1, 2007, at 5:43 PM, Matt Wright wrote:
>>
>>> Trevor Cox recently published the results of an online experiment
>>> about listeners' ratings of sound files on a six-point scale ("not
>>> horrible", "bad", "really bad", "awful", "really awful", and
>>> "horrible"). To date he has 130,000 subjects (!) and about 1.5
>>> million data points:
>>>
>>> http://www.sea-acustica.es/WEB_ICA_07/fchrs/papers/ppa-09-003.pdf
>>>
>>> Here's the website for his experiment: http://www.sound101.org
>>>
>>> Clearly this is related to the "effect of visual stimuli on the
>>> horribleness of awful sounds" that Kelly Fitz pointed out.
>>>
>>> -Matt
>>>
>>>
>>> On Jun 29, 2007, at 12:32 AM, Massimo Grassi wrote:
>>>> So far it looks like the experiment with the largest N (513!) is
>>>> "The role of contrasting temporal amplitude patterns in the
>>>> perception of speech" (Healy and Warren, JASA), but I haven't yet
>>>> checked the methodology to see whether it is a between- or
>>>> within-subject design.
>
> ------------------------------
>
> Date: Mon, 3 Dec 2007 13:58:55 -0500
> From: "J. Devin McAuley" <mcauley@xxxxxxxx>
> Subject: Re: Experiments with large N
>
> This issue nicely highlights the need to report effect size measures.
> With a large enough sample, even the smallest of effects will show up
> as reliable! :)
>
> Best regards,
> Devin
>
>> -----Original Message-----
>> From: AUDITORY - Research in Auditory Perception
>> [mailto:AUDITORY@xxxxxxxx] On Behalf Of Robert Zatorre
>> Sent: Monday, December 03, 2007 1:42 PM
>> To: AUDITORY@xxxxxxxx
>> Subject: Re: [AUDITORY] Experiments with large N
>>
>> [snip -- message quoted in full above]
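Devin's point is easy to see numerically. For two equal-sized groups,
the two-sample z statistic is z = d * sqrt(n/2), where d is the
standardized effect size (Cohen's d), so significance at any fixed
nonzero d is just a matter of collecting enough subjects. A minimal
sketch (the d = 0.05 value is an arbitrary stand-in for a "trivially
small" effect):

```python
# With effect size d held fixed, the two-sample z statistic grows as
# sqrt(n), so even a negligible effect becomes "significant" for
# sufficiently large samples.
import math

def z_stat(d, n_per_group):
    # Two-sample z for equal group sizes and unit-variance data:
    # z = (m1 - m2) / (sd * sqrt(2/n)) = d * sqrt(n / 2)
    return d * math.sqrt(n_per_group / 2)

d = 0.05  # a standardized effect most would call negligible
for n in (100, 10_000, 1_000_000):
    verdict = "significant" if z_stat(d, n) > 1.96 else "not significant"
    print(f"n per group = {n:>9,}: z = {z_stat(d, n):6.2f} ({verdict})")
```

Hence the need to report d (or a similar measure) alongside p: the
p-value confounds effect size with sample size.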
>
> ------------------------------
>
> Date: Mon, 3 Dec 2007 14:16:16 -0800
> From: "Freed, Dan" <DFreed@xxxxxxxx>
> Subject: low-latency audio I/O for Windows: a report
>
> Dear Auditory List Members:
>
> In October I posted a request for information about low-latency audio
> interface devices for use with Windows. I received many helpful
> responses. Over the last two months I've had the opportunity to
> acquire several devices and measure their latencies. Since latency
> information is generally not reported (or is incorrectly reported) in
> manufacturer specifications, I'm posting my measurement results here.
>
> By "latency", I mean the total end-to-end delay imposed by the device
> and its driver, from analog input to analog output. This doesn't
> include any additional delay imposed by the software signal
> processing (filter group delay, FFT blocking delay, etc.).
>
> Latency was measured by presenting a pulse train to the analog input,
> viewing the analog input and output on a dual-trace oscilloscope,
> comparing the traces, and visually estimating the delay. Tests were
> performed under Windows XP. The PC was running a simple 1-channel
> input-to-output copying program that accesses the device through the
> ASIO driver interface. Each device was tested at multiple sampling
> rates. At each sampling rate, testing was performed using the
> shortest buffer length that the device supports for that sampling
> rate, so the measurements are best-case results.
>
> Results are shown below. Sampling rates are in kHz, latencies are in
> ms. Buffer length in samples is shown in parentheses. Note that
> latencies under 3 ms are achievable at the higher sampling rates with
> some devices.
>
> EDIROL UA-1EX [USB device, $80]
> 32 kHz:    14.2 ms (96)
> 44.1 kHz:  11.5 ms (96)
> 48 kHz:    12.0 ms (112)
>
> M-AUDIO FIREWIRE SOLO [FireWire device, $172]
> 44.1 kHz:   9.0 ms (64)
> 48 kHz:     8.2 ms (64)
> 88.2 kHz:   6.5 ms (64)
> 96 kHz:     6.1 ms (64)
>
> ECHO AUDIOFIRE 4 [FireWire device, $300]
> 32 kHz:     8.0 ms (32)
> 44.1 kHz:   6.0 ms (32)
> 48 kHz:     5.5 ms (32)
> 88.2 kHz:   3.6 ms (32)
> 96 kHz:     3.4 ms (32)
>
> RME FIREFACE 400 [FireWire device, $1000]
> [at some sampling rates, a 48-sample buffer caused bus errors, so a
> 64-sample buffer was used instead]
> 32 kHz:    10.4 ms (64)
> 44.1 kHz:   6.6 ms (48)
> 48 kHz:     6.0 ms (48)
> 64 kHz:     6.2 ms (64)
> 88.2 kHz:   4.5 ms (64)
> 96 kHz:     4.2 ms (64)
> 128 kHz:    4.2 ms (64)
> 176.4 kHz:  2.9 ms (64)
> 192 kHz:    2.5 ms (48)
>
> M-AUDIO DELTA 44 [PCI device, $200]
> 16 kHz:    16.2 ms (64)
> 22.05 kHz: 11.7 ms (64)
> 24 kHz:    10.7 ms (64)
> 32 kHz:     8.1 ms (64)
> 44.1 kHz:   5.9 ms (64)
> 48 kHz:     5.5 ms (64)
> 88.2 kHz:   3.0 ms (64)
> 96 kHz:     2.7 ms (64)
>
> ECHO LAYLA 3G [PCI device, $500]
> 32 kHz:     8.8 ms (64)
> 44.1 kHz:   6.4 ms (64)
> 48 kHz:     5.9 ms (64)
> 64 kHz:     4.2 ms (64)
> 88.2 kHz:   3.1 ms (64)
> 96 kHz:     2.8 ms (64)
>
> RME MULTIFACE II + HDSP PCI [PCI device, $1049]
> 32 kHz:     6.5 ms (32)
> 44.1 kHz:   4.7 ms (32)
> 48 kHz:     4.3 ms (32)
> 64 kHz:     3.8 ms (32)
> 88.2 kHz:   2.8 ms (32)
> 96 kHz:     2.5 ms (32)
>
> Caveat: running at high sampling rates with short buffer lengths
> increases the risk of dropouts. I did some limited listening tests in
> all of the above testing conditions and never heard any dropouts, but
> I offer no guarantees.
>
> I'd be happy to answer any questions about my measurements. I hope
> this information is useful.
>
> Dan Freed
> Senior Engineer
> Dept. of Human Communication Sciences & Devices
> House Ear Institute
> 2100 W. Third St.
> Los Angeles, CA 90057 USA
> Phone: +1-213-353-7084
> Fax: +1-213-413-0950
> Email: dfreed@xxxxxxxx
>
> ------------------------------
>
> End of AUDITORY Digest - 2 Dec 2007 to 3 Dec 2007 (#2007-275)
> *************************************************************
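To connect Dan's numbers back to the L proportional to 4N/f observation
at the top of this message: recomputing k = L / (4N/f) for the three PCI
devices gives roughly constant per-device values. With f in kHz and N in
samples, 4N/f comes out directly in ms. The two rates per device below
are just a spot check against Dan's table, not a full fit.

```python
# Recompute Jim's per-device constant k = L / (4N/f) from a subset of
# Dan Freed's PCI measurements (f in kHz, N in samples, L in ms).
measurements = {
    # device: list of (f_kHz, N_samples, latency_ms)
    "M-AUDIO DELTA 44":           [(44.1, 64, 5.9), (96.0, 64, 2.7)],
    "ECHO LAYLA 3G":              [(44.1, 64, 6.4), (96.0, 64, 2.8)],
    "RME MULTIFACE II + HDSP":    [(44.1, 32, 4.7), (96.0, 32, 2.5)],
}

for device, rows in measurements.items():
    ks = [latency / (4 * n / f) for f, n, latency in rows]
    print(f"{device:25s} k = " + ", ".join(f"{k:.2f}" for k in ks))
```

The Delta 44 comes out near 1.0, the Layla 3G near 1.1, and the
Multiface II near 1.6-1.9, consistent with the PCI row of the table
above.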


This message came from the mail archive
http://www.auditory.org/postings/2007/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University