Information loss (Re: Analytical approach to temporal coding...)



It's an interesting question to work out the constraints imposed by
the auditory-nerve (AN) spike representation, and how much of the
original sound waveform information remains available to subsequent
processing.  But if the answer
is "all of it" (as I understand Ramdas Kumaresan's post to be
arguing), that's actually, for me, a disappointing result.

The function of auditory scene analysis -- and audition in general --
is to extract *useful* information (defined according to the goals of
the organism), and throw away the rest.  It may be that, ultimately,
the cochlea doesn't contribute much to this higher-level processing,
i.e. it is obliged to provide a near-complete representation because
the selection of relevant over irrelevant detail can't be implemented
at such a low level of analysis.

But for me the real interest lies in finding the representations that
*do* throw away information, that sacrifice the ability to make a full
reconstruction, and then seeing which parts are kept.  That's the kind
of description we need to help us build intelligent automatic sound
analysis systems.
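
To make that concrete, here is a minimal sketch (purely illustrative;
the test tone, sample rate, and the numpy/scipy calls are assumptions
for the example, not anything from the thread).  A magnitude
spectrogram is one such lossy description: it keeps the spectral
envelope over time but throws away the phase, so the original waveform
can no longer be exactly reconstructed from it.

  # Illustrative sketch: a deliberately lossy representation.
  import numpy as np
  from scipy.signal import stft

  fs = 16000                              # assumed sample rate (Hz)
  t = np.arange(fs) / fs
  x = np.sin(2 * np.pi * 440 * t)         # 1-second 440 Hz test tone

  # Short-time Fourier transform: still (near-)invertible at this point.
  f, frames, Z = stft(x, fs=fs, nperseg=512)

  # Keep only the magnitude: the phase, and with it the ability to make
  # an exact reconstruction, is discarded, while the spectral envelope
  # over time (the "useful" part for many descriptions) is kept.
  lossy = np.abs(Z)
  print(lossy.shape)                      # (freq bins, time frames)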

-- DAn Ellis <dpwe@ee.columbia.edu> http://www.ee.columbia.edu/~dpwe/
   Dept. of Elec. Eng., Columbia Univ., New York NY 10027 (212) 854-8928