Re: Fw: sursound: speaker phones
Re: Jens's reply:
I agree. I would also point out that in diphthongs such as the one in the
English word "I", the vowel is carried by a rapid temporal change in timbre
from "ah" to "ee". If the first wavefront were the sole determinant of the
timbre, we'd hear this word as "ah" (even outside the southern United States,
where it is actually pronounced "ah").
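
For readers who want to see the point in numbers, here is a minimal Python
sketch (NumPy and SciPy assumed). It reduces the vowel to two sinusoidal
"formants" gliding from /a/-like values (about 730 and 1090 Hz) to /i/-like
values (about 270 and 2290 Hz) - textbook figures used purely for
illustration, not measurements of anyone's speech - and prints the prominent
spectral peaks of the first and last 25 ms; spectral_peaks is just a small
helper defined for the sketch.

    # Sketch: the diphthong in "I" as a rapid change of timbre.
    # All formant values, durations and amplitudes are illustrative.
    import numpy as np
    from scipy.signal import find_peaks

    fs = 16000
    dur = 0.25                                   # 250 ms diphthong
    t = np.arange(int(fs * dur)) / fs
    frac = t / dur                               # 0 at onset, 1 at offset

    f1 = 730 + (270 - 730) * frac                # F1 glide: /a/ -> /i/
    f2 = 1090 + (2290 - 1090) * frac             # F2 glide: /a/ -> /i/
    x = (np.sin(2 * np.pi * np.cumsum(f1) / fs)
         + 0.7 * np.sin(2 * np.pi * np.cumsum(f2) / fs))

    def spectral_peaks(frame, fs):
        # Frequencies of the prominent peaks in the frame's magnitude spectrum.
        spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        peaks, _ = find_peaks(spec, height=spec.max() * 0.3)
        return np.round(np.fft.rfftfreq(len(frame), 1 / fs)[peaks])

    onset = x[: int(0.025 * fs)]                 # first 25 ms: "ah"-like
    offset = x[-int(0.025 * fs):]                # last 25 ms: "ee"-like
    print("onset peaks (Hz): ", spectral_peaks(onset, fs))
    print("offset peaks (Hz):", spectral_peaks(offset, fs))

The onset frame peaks near the /a/ formants and the offset frame near the /i/
formants; only the trajectory between them carries the diphthong, which is
exactly what a rule fixing timbre at the first instant would throw away.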
Al
-------------------------------------------------
Albert S. Bregman, Emeritus Professor
Dept of Psychology, McGill University
1205 Docteur Penfield Avenue
Montreal, QC, Canada H3A 1B1
Tel: +1 (514) 398-6103
Fax: +1 (514) 398-4896
bregman@hebb.psych.mcgill.ca
-------------------------------------------------
----- Original Message -----
From: Jens Blauert <blauert@IKA.RUHR-UNI-BOCHUM.DE>
To: <AUDITORY@LISTS.MCGILL.CA>
Sent: 26-Oct-00 6:14 AM
Subject: Re: Fw: sursound: speaker phones
> There is no such thing as a "law of the first wavefront" for timbre.
> Timbre is determined by the incoming energy arriving over a period of up to
> 100 ms - like loudness.
> So, something must be wrong with the statement below.
>
> Jens Blauert
>
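
To put rough numbers on the integration argument, here is a small Python/NumPy
sketch. It takes an idealized direct sound plus a single reflection arriving
10 ms later at half amplitude - both values are illustrative assumptions, not
data from this thread - and compares the spectrum seen by a 2 ms window around
the first wavefront with the spectrum seen by a 100 ms window:

    # Sketch: first-wavefront spectrum vs. energy integrated over ~100 ms.
    # The 10 ms delay and 0.5 echo gain are illustrative values.
    import numpy as np

    fs = 16000
    h = np.zeros(int(0.100 * fs))           # 100 ms analysis window
    h[0] = 1.0                              # direct sound (first wavefront)
    h[int(0.010 * fs)] = 0.5                # one reflection, 10 ms later

    # Spectrum of the full 100 ms vs. only the first 2 ms (before the echo),
    # both evaluated on the same frequency grid.
    spec_100ms = 20 * np.log10(np.abs(np.fft.rfft(h)))
    spec_2ms = 20 * np.log10(np.abs(np.fft.rfft(h[: int(0.002 * fs)], n=len(h))))

    print("100 ms window ripple: %.1f dB" % (spec_100ms.max() - spec_100ms.min()))
    print("  2 ms window ripple: %.1f dB" % (spec_2ms.max() - spec_2ms.min()))
    # ~9.5 dB of comb-filter ripple (notches every 100 Hz) over 100 ms,
    # essentially flat over the first 2 ms.

If the ear integrates over something like 100 ms, the reflection's comb-filter
ripple is part of the effective spectrum and can be heard as coloration; a
strict first-wavefront rule would predict the flat 2 ms spectrum instead.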
> >
> > The answer is binaural and monaural decolorization: the first wavefront
> > determines the timbre. With recordings the correct binaural and monaural
> > cues are missing, and thus it sounds hollow/colored. A reference for the
> > binaural decolorization:
> >
> > P. M. Zurek, "Measurements of Binaural Echo Suppression," J. Acoust. Soc.
> > Am., vol. 66, pp. 1750-1757 (1979 Dec.).
> >
> > John Beerends
> > KPN Research
> >
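
A back-of-the-envelope illustration of the binaural side of this, in
Python/NumPy: with one dominant reflection, the comb-filter notches fall at
different frequencies at the two ears, whereas a single table microphone fixes
one comb in the channel it transmits. The delays (3.2 ms at the mic, 3.0 and
3.4 ms at the ears) and the helper notch_freqs are made up for the sketch;
none of the figures come from Zurek (1979).

    # Sketch: notch positions of "direct sound + one reflection" for a single
    # table microphone and for the two ears. All delays are illustrative.
    import numpy as np

    def notch_freqs(delay_s, n=4):
        # Notches of "direct + echo" lie at odd multiples of 1/(2*delay).
        return np.array([(2 * k + 1) / (2 * delay_s) for k in range(n)])

    pickups = [("table mic", 0.0032), ("left ear", 0.0030), ("right ear", 0.0034)]
    for name, tau in pickups:
        print("%-9s first notches (Hz): %s" % (name, np.round(notch_freqs(tau))))

Binaural hearing can compare the two differently notched ear signals and
suppress the coloration (Zurek's binaural echo suppression); once a
speakerphone has mixed everything into one channel, both ears receive the same
notches and the voice keeps its hollow quality.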
> >
> > -----Original Message-----
> > From: James W. Beauchamp [mailto:j-beauch@UX1.CSO.UIUC.EDU]
> > Sent: Wednesday, October 25, 2000 20:44
> > To: AUDITORY@LISTS.MCGILL.CA
> > Subject: Re: Fw: sursound: speaker phones
> >
> >
> > While we're on the subject of sound localization, can someone explain
> > why speaker phones always sound like you're "talking through a tube" to
> > the person on the other end of the line? I'm radiating a sound which
> > is picked up by a diaphragm on a table and then directly transmitted
> > to someone's ear via a small speaker. How is this substantially
> > different from my talking to a hole in the table with someone's ear
> > directly underneath?
> >
> > Here is a related problem: Suppose I wish to record someone talking in
> > the front of the room, and I am in the back of the room. When I am
> > actually there listening, the speech is as clear as a bell; I ignore
> > all environmental sounds and echoes. To (roughly) simulate the pressures
> > at the ears, I take the headphone of my Walkman, put it on, and use it
> > as a stereo microphone. Later, when I play it back through the headphones,
> > the basic sound is there, but now the echoes and environmental sounds swamp
> > out the speaker, who is rendered barely audible. Does using really good
> > mics help? (Cheap actual mics don't seem to improve the situation.)
> >
> > If we understand what the problem is, how do we correct for it? E.g.,
> > why aren't there better speaker phones? (Maybe there are, for a price.)
> >
> > I realize that this problem is being worked on in the context of hearing
> > aids -- my neighbor has to take his off in order to hear a conversation
> > when there's more than 1 other person talking -- and in tele-conferencing
> > applications.
> >
> > Jim Beauchamp
> > Univ. of Ill. U-C
> >
>