
Re: AW: Cochlear nonlinearity & TTS



From that point of view, the cochlea might be better described as an acoustic feature extractor than as a compression system. The latter would imply that decompression occurs at some point, yielding an (approximate) replica of the acoustic signal somewhere in the CNS; I would guess this does not happen (what would be the point?).

If we agree that the brain is only interested in specific features of the acoustic signal (those we perceive as pitch, loudness, location, timbre, etc.), there is no need for a compression/decompression system. It would be more efficient to extract those features as accurately as possible, without regard to whether the operations are invertible.

Whether it is possible in principle to reconstruct the acoustic signal from the nerve output is a different question. You would certainly need the spikes from all nerve fibers (or at least, the more, the better), and you won't be able to get it exactly right unless you have a priori knowledge about the signal.
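
To make the population idea concrete, here is a minimal sketch in which everything is invented: a bank of rectifying bandpass "fibers" producing Poisson spikes, and a plain ridge-regression decoder fitted to lagged spike counts. This is not any published reconstruction method, just an illustration that a linear read-out fusing all fibers can recover at least part of the stimulus:

import numpy as np

rng = np.random.default_rng(0)
fs = 2000                                  # sample rate (Hz)
stim = rng.standard_normal(fs)             # 1 s of white-noise "sound"

# Fiber bank: crude Gaussian bandpass (via FFT masking), half-wave
# rectification, Poisson spikes. All parameters are made up.
n_fibers = 40
f = np.fft.rfftfreq(stim.size, 1 / fs)
spec = np.fft.rfft(stim)
rates = []
for cf in np.linspace(50, 800, n_fibers):
    band = np.exp(-0.5 * ((f - cf) / (0.2 * cf + 1)) ** 2)
    rates.append(np.maximum(np.fft.irfft(spec * band, stim.size), 0))
rates = np.array(rates)
spikes = rng.poisson(5 * rates / rates.max(axis=1, keepdims=True))

# Decode: ridge regression from lagged spike counts back to the stimulus.
lags = range(-10, 11)
X = np.stack([np.roll(spikes, lag, axis=1) for lag in lags])
X = X.reshape(len(lags) * n_fibers, -1).T          # (samples, features)
w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(X.shape[1]), X.T @ stim)
recon = X @ w

print(f"relative error: {np.linalg.norm(stim - recon) / np.linalg.norm(stim):.2f}")

More fibers and more lags should drive the error down (the "the more, the better" point), while the half-wave rectification is exactly the kind of operation that makes the map from any single fiber non-invertible.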

Erik

--
Erik Larsen, Ph.D. candidate
Speech and Hearing Bioscience and Technology
Harvard-MIT Division of Health Sciences and Technology
Cambridge MA 02139
http://web.mit.edu/shbt

David Mountain wrote:
I believe that the available data suggest that the cochlear amplifier only
has a significant effect over a small region at and basal to the peak of
the traveling wave.  See for example:

Cody AR (1992) Acoustic lesions in the mammalian cochlea: implications for
the spatial distribution of the 'active process'. Hear Res 62(2):166-72.
PMID: 1429258

I would rephrase Ramdas's question as whether the transformation between
sound and auditory-nerve firing pattern is invertible.  I think the
answer, in the exact sense, is that it is not invertible.  It is my
belief that we are dealing with a compression system with some loss, but
one that preserves the features of biological relevance.  Modern audio
compression schemes take advantage of this fact and throw away acoustic
information that is not perceived, or is only barely detectable, by the
listener.
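
As a toy illustration of that perceptual-coding idea (my own caricature, not a real codec or psychoacoustic model), one can discard every spectral component that falls more than a fixed amount below the strongest component in its neighborhood, with a crude absolute floor standing in for the limits of audibility:

import numpy as np

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(1)
# A loud tone, a much quieter tone nearby in frequency, and faint noise.
x = (np.sin(2 * np.pi * 440 * t) + 0.005 * np.sin(2 * np.pi * 460 * t)
     + 0.001 * rng.standard_normal(t.size))

X = np.fft.rfft(x)
mag = np.abs(X)
f = np.fft.rfftfreq(x.size, 1 / fs)

# Hypothetical rule: a bin survives only if it is within 40 dB of the
# strongest bin within +/- 50 Hz, and above a crude absolute floor.
floor = mag.max() * 10 ** (-60 / 20)
keep = np.zeros(mag.size, dtype=bool)
for i in range(mag.size):
    nearby = mag[np.abs(f - f[i]) <= 50].max()
    keep[i] = mag[i] > max(nearby * 10 ** (-40 / 20), floor)

y = np.fft.irfft(np.where(keep, X, 0), x.size)     # discard "masked" bins

print(f"bins kept: {keep.sum()} of {keep.size}")
print(f"SNR after discarding: {10 * np.log10(np.sum(x**2) / np.sum((x - y)**2)):.1f} dB")

The quiet tone and the noise floor are thrown away, yet the reconstruction error should stay tens of dB below the signal, which is the sense in which such schemes discard barely detectable information.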


--------------------------------------------------------------------

David C. Mountain, Ph.D.
Professor of Biomedical Engineering

Boston University
44 Cummington St.
Boston, MA 02215

Email:   dcm@xxxxxx
Website: http://www.bu.edu/dbin/bme/faculty/?prof=dcm
Phone:   (617) 353-4343
FAX:     (617) 353-6766
Office:  ERB 413
On Thu, 18 Jan 2007, Ramdas Kumaresan wrote:

Navid, Richard and the listees,

I have heard a lot of speculation about the cochlear amplifier over the
years. One of the questions that I have wondered about as a signal
processing engineer is: with all the sophisticated nonlinearities,
delays, amplifiers, filters, etc. that are present in the auditory
periphery, how does it "represent" an acoustic signal in the neural
spike patterns that emanate from it? (I guess everyone wonders about
that.)

Is it possible to reconstruct the acoustic signal if you were able to
measure/monitor the spike patterns put out by all the auditory nerve
fibers? What is the reconstruction "algorithm"? (I know about Egbert
de Boer's reconstruction method for a single nerve fiber.) Isn't the
information about the signal distributed across many, many nerve
fibers? Shouldn't the reconstruction take information from all nerve
fibers and fuse it to reconstruct the signal? Just wondering aloud. RK
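
For readers unfamiliar with the single-fiber method RK mentions, here is a toy simulation in the spirit of de Boer's reverse correlation, with a made-up gammatone-like fiber rather than real data: under a white-noise stimulus, the spike-triggered average of the stimulus segment preceding each spike recovers the fiber's linear "revcor" filter up to a scale factor.

import numpy as np

rng = np.random.default_rng(2)
fs = 10000
stim = rng.standard_normal(fs * 10)        # 10 s of white noise

# Hypothetical fiber: gammatone-like filter, half-wave rectification,
# Poisson spike generation. None of the numbers are physiological.
n = np.arange(int(0.02 * fs))
true_filt = n**3 * np.exp(-2 * np.pi * 200 * n / fs) * np.cos(2 * np.pi * 1000 * n / fs)
true_filt /= np.abs(true_filt).max()

drive = np.convolve(stim, true_filt)[:stim.size]
spikes = rng.poisson(0.05 * np.maximum(drive, 0))  # spike counts per sample

# Reverse correlation: average the stimulus preceding each spike.
L = true_filt.size
revcor = np.zeros(L)
for i in np.nonzero(spikes)[0]:
    if i >= L:
        revcor += spikes[i] * stim[i - L + 1:i + 1][::-1]
revcor /= spikes.sum()

# Up to scale, the spike-triggered average should match the filter.
print(f"correlation with true filter: {np.corrcoef(revcor, true_filt)[0, 1]:.2f}")

Fusing across fibers, as RK suggests, would mean combining many such estimates, each covering a different frequency band.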





Richard F. Lyon wrote:

At 9:17 AM -0800 1/16/07, Navid Shahnaz wrote:

Thank you Reinhart for your clarification. Does the cochlear
amplifier work on both sides of the excitation pattern peak on the
BM? Or does the amplifier operate more efficiently at a place just
apical to the point of disturbance created by the travelling wave?
Operationally that may be an ideal point, as it is less likely to
saturate the amplifier, given the sharp slope of the travelling wave
on the apical side.
Cheers
Navid

Navid,

Both Monita and Reinhart have given good explanations, but let me add
a bit.

The way I think of it, the active amplification is active everywhere,
but it competes with the passive loss mechanisms, and is only
significant at low enough levels.  The passive loss mechanism (damping)
increases rapidly in the apical direction as a sine wave travels past
its characteristic place.  Because of the active gain, the response to
a sine wave can travel further before it damps out: from the "passive
peak" that Reinhart mentions, the peak response location can move
further apical, by up to about half an octave's worth of place, when
the active amplification is significant, to the "active peak".  The
"net" amplification is positive (in dB per mm, or whatever) before the
response peak and negative after it, pretty much by definition of a
peak.  That net includes the active gain, which saturates, and the
passive loss, which doesn't, so it's level dependent.
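
A toy numerical sketch of that picture (the gain and loss profiles below are my own arbitrary choices, not Dick's model): accumulate log-amplitude along the place axis at a rate equal to a passive term that turns steeply negative past the characteristic place, plus an active gain that compresses at high amplitude. The peak is where the net rate crosses zero, so it sits more apical (the "active peak") at low levels and moves basally toward the "passive peak" as level rises:

import numpy as np

x = np.linspace(0, 10, 1000)            # place along the BM (mm)
dx = x[1] - x[0]

# Passive growth/decay rate (dB/mm): about +1 basal to the
# characteristic place (~7 mm here), falling steeply to -4 beyond it.
passive = 1 - 5 / (1 + np.exp(-2 * (x - 7)))

def response(level_db):
    """Accumulate log-amplitude along place with a saturating active gain."""
    amp, out = level_db, []
    for p in passive:
        # Active gain: ~3 dB/mm at low amplitude, compressing above ~40 dB.
        active = 3 / (1 + 10 ** ((amp - 40) / 20))
        amp += (p + active) * dx
        out.append(amp)
    return np.array(out)

for level in (20, 50, 80):
    r = response(level)
    print(f"{level:2d} dB input: peak at {x[r.argmax()]:.2f} mm, "
          f"peak level {r.max():.1f} dB")

Run as written, the peak place should shift basally and the peak level grow compressively as the input level rises, which is the level dependence described above.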

In addition to the saturation that reduces the active gain at high
level, there is also efferent control that turns down the gain in
response to afferent response level and possibly other central control
signals.  This effect of efferent control of mechanical gain has been
directly demonstrated, but I don't recall exactly who/when/where to
cite right now.

Dick



'The scientist does not study Nature because it is useful;
he studies it because he delights in it, and he delights in it
because it is beautiful. If Nature were not beautiful, it
would not be worth knowing, and if Nature were not worth
knowing, life would not be worth living.'

- Henri Poincaré