Re: Get lost, Mr. Cochlea!! --- The Brain (Jont Allen)


Subject: Re: Get lost, Mr. Cochlea!! --- The Brain
From:    Jont Allen  <jba(at)RESEARCH.ATT.COM>
Date:    Tue, 27 Feb 2001 00:01:19 -0500

Yadong,

This is all very cute, and I don't want to be accused of not having a
sense of humor (clearly you do, and it is refreshing), but there is a
thing called masking. Information is lost in the early auditory stages,
due to neural coding. The auditory nerve signal is not about zero
crossings. Even zero crossings are not exact, and would have jitter.
But masking is NOT timing jitter. The ear IS similar to a floating-point
converter. The ear does not have an infinite dynamic range or
signal-to-noise ratio. This limited dynamic range shows up as masking.
Do you disagree?

Jont

Yadong Wang wrote:
> Dear Dr. Dan Ellis, and Dr. Hershey:
>
> > It's an interesting question to figure out the constraints imposed by
> > the AN spike representation and how much of the original sound waveform
> > information is available to subsequent processing. But if the answer
> > is "all of it" (as I understand Ramdas Kumaresan's post to be
> > arguing), that's actually, for me, a disappointing result.
>
> Why is it "a disappointing result"? Just because our model keeps all
> the information in the first stage?
>
> Maybe you got the wrong idea from our paper. As you said:
>
> > "The function of auditory scene analysis -- and audition in general --
> > is to extract *useful* information (defined according to the goals of
> > the organism), and throw away the rest."
>
> OK, you are right only on this point. Then who "extracts *useful*
> information and throws away the rest"? The cochlea, the neuron, or the
> brain? Definitely it is the brain which does all that high-level
> processing. The cochlea, as the brain's window to the sounds outside,
> has to provide as much information as possible to the brain, which is
> the cochlea's high-level processing center. Faced with such a task, the
> cochlea has two options:
>
> 1) To provide a "near-complete representation", ... and send it to the
> brain. If the brain happens to learn the truth, that the cochlea
> "doesn't" send it all the information the cochlea gets from the middle
> ear, what will happen?
>
> "Get lost, Mr. Cochlea!!" These are the only words our dear brain
> will say to its "not-hard-working man".
>
> 2) To provide a "complete representation", ... and send it to the
> brain anyway. If the brain happens to learn the truth, that the cochlea
> "does" send it all the information the cochlea gets from the middle
> ear, what will happen then?
>
> "Come on, Mr. Cochlea!! You did a great job. I will ask the aorta
> to send you more fresh blood, and you will be allowed to retire when
> you are in your fifties, I promise." These are the words our dear brain
> will say to its "hard-working man".
>
> If you happened to be a cochlea, which stratagem would you prefer, "1"
> or "2"? I would definitely choose "2" myself, to keep my only job and
> get what I need to survive!
>
> "Oh, gee! My dear Lord, I have survived for millions of years." (Quoted
> from my left cochlea, as I am writing this email.)
>
> > ... It may be that, ultimately,
> > the cochlea doesn't contribute much to this higher-level processing,
> > i.e. it is obliged to provide a near-complete representation because
> > the useful selection between relevant and irrelevant detail can't be
> > implemented at such a low level of analysis.
>
> Since "the useful selection between relevant and irrelevant detail
> can't be implemented at such a low level of analysis", then why "is it
> obliged to provide a near-complete representation"???????
>
> Are you kidding? I just cannot figure out what's going on in this logic.
> "What do you want, Mr. Cochlea?", the brain will say, angrily. "You are
> destined for 'such a low level of analysis'. HOW HAREBRAINED YOU ARE!
> Give me all the information you have, or get lost!!!"
>
> > But for me the real interest lies in finding the representations that
> > *do* throw away information, that sacrifice the ability to make full
> > reconstructions, then seeing which parts are kept. That's the kind of
> > description we need to help us build intelligent automatic sound
> > analysis systems.
>
> As a computer program, maybe it will work. (But I am not sure!) If the
> cochlea dares do that ("*do* throw away information"), however, I am
> pretty sure that the brain will kill the cochlea anyway!!!
>
> -Yadong                A Made-In-Brain toward A Man-Made-Brain
>
> -----------------------------------------------------------------------
> Yadong Wang
> Dept of Electrical & Computer Eng.
> Univ of Rhode Island
> Kelly Hall, 4 East Alumni Ave.
> Kingston, RI, 02881
>
> Email: ydwang(at)ele.uri.edu
> Phone: 401-874-5392 (O)
>        401-789-7742 (H)
> Fax  : 401-782-6422
> URL  : http://www.CoZeC.com

--
Jont B. Allen
AT&T Labs-Research, Shannon Laboratory, E161
180 Park Ave., Florham Park NJ 07932-0971
973/360-8545 voice, x7111 fax
http://www.research.att.com/~jba
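[Editor's note: Jont's floating-point-converter analogy can be illustrated with a small sketch. The Python below is my own illustration, not from either poster; the function `quantize_float` and all its parameter values are invented for the example. It rounds each sample onto a floating-point grid with a limited number of mantissa bits, so the quantization step scales with signal magnitude. A weak tone that survives quantization on its own falls below the quantization step set by a simultaneous strong tone and is effectively erased, which is one way a limited signal-to-noise ratio can "show up as masking".]

```python
import math

def quantize_float(x, mantissa_bits):
    """Round x onto a floating-point grid with the given number of
    mantissa bits, mimicking a converter with limited precision.
    The quantization step is proportional to the sample's magnitude."""
    if x == 0.0:
        return 0.0
    exp = math.floor(math.log2(abs(x)))
    step = 2.0 ** (exp - mantissa_bits)
    return round(x / step) * step

# A strong tone plus a much weaker one, 80 dB down.
n = 64
strong = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
weak = [1e-4 * math.sin(2 * math.pi * 13 * t / n) for t in range(n)]

# Quantize the weak tone alone, and the mixture, with 8 mantissa bits.
alone = [quantize_float(w, mantissa_bits=8) for w in weak]
mixed = [quantize_float(s + w, mantissa_bits=8) for s, w in zip(strong, weak)]

# Alone, the weak tone's quantization step follows its own small
# magnitude, so it is represented almost exactly. In the mixture, the
# step is set by the strong tone (~2**-10 near the peaks), which is
# larger than the weak tone's whole amplitude (1e-4): the weak tone
# drowns in the strong tone's quantization noise.
err_alone = max(abs(a - w) for a, w in zip(alone, weak))
err_mixed = max(abs(m - (s + w)) for m, s, w in zip(mixed, strong, weak))
```

Printing `err_alone` and `err_mixed` shows the representation error of the weak component is orders of magnitude larger in the presence of the strong tone than alone, even though the converter is "floating point" in both cases.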


This message came from the mail archive
http://www.auditory.org/postings/2001/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University