Re: speaker phones and listening in reverberation
Let me have a try at summarizing these issues in order to keep the
discussion from going off in about eight directions.
James Beauchamp's original question had two parts. The first asked why
speakerphones sound like you're "talking through a tube" to the person on
the other end of the line.
The answer, I think, has to do with direct-to-reverberant ratio. With a
handset phone the distance from mouth to microphone is a few cm, while with
a speakerphone it can be on the order of a meter or more. That makes for a
very large difference in direct-to-reverberant ratio, and hence for the
pronounced reverberant quality of speakerphones.
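To put rough numbers on this, here is a back-of-the-envelope sketch in
Python. It assumes a simple point-source model (direct level falling as
1/r^2) and a diffuse reverberant field set by a room constant; the room
constant of 5 m^2 is only an illustrative guess for a smallish room, not a
measurement.

import math

def direct_to_reverb_db(r_m, room_constant_m2=5.0):
    """Direct-to-reverberant ratio (dB) at source-microphone distance r_m (meters)."""
    direct = 1.0 / (4.0 * math.pi * r_m ** 2)  # direct field falls off as 1/r^2
    reverb = 4.0 / room_constant_m2            # diffuse reverberant field, roughly constant
    return 10.0 * math.log10(direct / reverb)

for r in (0.05, 1.0):  # handset (~5 cm) vs. speakerphone (~1 m)
    print(f"{100 * r:3.0f} cm: {direct_to_reverb_db(r):+5.1f} dB")
# With these assumptions: roughly +16 dB at 5 cm vs. -10 dB at 1 m,
# i.e. about a 26 dB drop in direct-to-reverberant ratio (20*log10(100/5)).

Whatever the exact room constant, going from a few cm to a meter costs a few
tens of dB of direct-to-reverberant ratio.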
The second part of the question had to do with actual binaural listening in
a reverberant room versus listening to binaural recordings made at the
listener's location in the same room. Assuming the recordings preserved
interaural cues reasonably well, the only factor I can think of that might
make intelligibility much better for actual listening than for recorded
listening is lipreading, which is a BIG factor.
James' question also implied, though, that subjective quality (not just
intelligibility) was much poorer for recorded than live listening. I don't
know of any work on that question -- whether there is something like a
"contextual echo suppression". It might make an interesting study.
John Beerends commented on binaural echo suppression, which was not raised
by James' question directly but which is certainly a related phenomenon.
Koenig's original subjective observation (1950, JASA 22, 61-62) noted that
listening to binaural dummy-head recordings had much less reverberant
quality than listening to one channel only (which James could do too with
his recordings). My study described a signal-detection advantage that
seemed to reflect binaural echo suppression.
Jens Blauert clarified Beerends' statement about a "law of the first
wavefront for timbre." It is true that Koenig's observation of binaural
echo suppression and my study both used long-duration sounds and did not
implicate "first wavefront" or "precedence" effects. However, Jens' comment
triggered a memory of a recent study by Hartmann, Litovsky, and colleagues
that found a precedence effect with click stimuli processed through
median-plane HRTFs. Assuming those stimuli are essentially diotic, this
suggests that there can be a first-wavefront effect based on spectral-shape
cues, one that manifests as a subjective timbre difference. (The authors
should correct any misinterpretations I've made, and provide the reference.)
Patrick Zurek
Sensimetrics Corp.
At 12:14 PM 10/26/00 +0200, you wrote:
There is no such thing as a "law of the first wavefront" for timbre.
Timbre is determined by the incoming energy over up to 100 ms - like loudness.
So, something must be wrong with the statement below.
Jens Blauert
>
> The answer is binaural and monaural decolorization: the first wavefront
> determines the timbre. With recordings the correct binaural and monaural
> cues are missing, and thus it sounds hollow/colored. A ref. for the
> binaural decolorization:
>
> P. M. Zurek, "Measurements of Binaural Echo Suppression", J. Acoust. Soc.
> Am., vol. 66, pp. 1750-1757 (1979 Dec.).
>
> John Beerends
> KPN Research
>
>
> -----Original Message-----
> From: James W. Beauchamp [mailto:j-beauch@UX1.CSO.UIUC.EDU]
> Sent: Wednesday, October 25, 2000 20:44
> To: AUDITORY@LISTS.MCGILL.CA
> Subject: Re: Fw: sursound: speaker phones
>
>
> While we're on the subject of sound localization, can someone explain
> why speaker phones always sound like you're "talking through a tube" to
> the person on the other end of the line? I'm radiating a sound which
> is picked up by a diaphragm on a table and then directly transmitted
> to someone's ear via a small speaker. How is this substantially
> different from my talking to a hole in the table with someone's ear
> directly underneath?
>
> Here is a related problem: Suppose I wish to record someone talking in
> the front of the room, and I am in the back of the room. When I am
> actually there listening, the speech is as clear as a bell; I ignore
> all environmental sounds and echoes. To (roughly) simulate the pressures
> at the ears, I take the headphone of my Walkman, put it on, and use it
> as a stereo microphone. Later, when I play it back through the headphones,
> the basic sound is there, but now the echoes and environmental sounds swamp
> out the speaker, who is rendered barely audible. Does using really good
> mics help? (Cheap actual mics don't seem to improve the situation.)
>
> If we understand what the problem is, how do we correct for it? E.g.,
> why aren't there better speaker phones? (Maybe there are, for a price.)
>
> I realize that this problem is being worked on in the context of hearing
> aids -- my neighbor has to take his off in order to hear a conversation
> when there's more than one other person talking -- and in teleconferencing
> applications.
>
> Jim Beauchamp
> Univ. of Ill. U-C
>