Dear Dick,
While linear system theories seem to work reasonably well with 
mechanical systems, I believe they fail when applied to biological 
systems. Consider that even Helmholtz had to appeal to non-linear 
processes (never really described) in the auditory system to account 
for the "missing fundamental" and "combination tones". Both of these 
psycho-acoustical phenomena are well established, yet explanations 
for pitch perception are either spectrally based or temporally 
based, with some throwing in learning and cognition to avoid the 
harder conclusion that maybe this field needs a new paradigm. Such a 
paradigm should provide a better model, one that explains frequency 
(sound!) analysis in a fashion such that nothing is missing and 
parameter values can be calculated to explain pitch salience, a 
subject that seems never to be discussed in pitch perception models.
Furthermore, such a new approach should also be able to explain why 
the cochlea is the shape it is, which as far as I can see has never 
been touched upon by existing signal processing methods. Finally, 
are these missing components "illusions" that are filled in, so to 
speak, by our higher-level cognitive capabilities? It is remarkable 
that this so-called filling-in process is robust enough to be more 
or less common to everyone, and therefore one wonders whether all 
the other illusions are really not illusions but may have a 
perfectly good basis for their existence. If they were "illusions", 
one would expect a fair amount of variation in the psycho-acoustic 
experimental results, I would think.
I myself gave up on linear systems early in my study of this field 
and have felt that other systems, e.g. switching systems, may offer 
better explanatory capability in the future, especially when it 
comes to showing some commonality of signal processing between the 
visual and the auditory systems. That said, I am quite happy to 
admit that I do not consider myself an expert in linear system 
theory.
Regards,
Randy Randhawa
On 8/2/2011 1:49 PM, Richard F. Lyon wrote:
At 5:55 PM +0300 8/2/11, ita katz wrote:
The periodicity is determined by the least-common-multiple of the 
periodicities of the present harmonics, so if (for example) a 
sound is composed of sines of frequencies 200Hz, 300Hz, and 400Hz, 
the periods are 5msec, 3 1/3msec, and 2.5msec, so the 
least-common-multiple is 10msec (2 periods of 5msec, 3 periods of 
3.33msec, and 4 periods of 2.5msec), which is of course the 
periodicity of the sum of the sines, or in other words 100Hz. 
(Actually, it is the same as the greatest-common-divisor of the 
frequencies.)
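
As a concrete illustration of that arithmetic (a rough Python sketch 
added for clarity, with made-up variable names, not part of the 
original message): the greatest-common-divisor of the component 
frequencies gives the fundamental, and its reciprocal is the 
least-common-multiple of the periods.

    from math import gcd
    from functools import reduce

    freqs_hz = [200, 300, 400]                  # components from the example
    fundamental_hz = reduce(gcd, freqs_hz)      # greatest common divisor -> 100
    common_period_ms = 1000.0 / fundamental_hz  # LCM of the periods -> 10.0 msec
    print(fundamental_hz, common_period_ms)     # 100 10.0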
Ita, that explanation is sort of OK, but as written it implies that 
the auditory system has the ability to do number-theory operations 
on periods (or frequencies), and it depends on there being harmonics 
present and separately measurable.
It would be much more robust to say that "The pitch is determined 
based on an approximately common periodicity of outputs of the 
cochlea," which I believe is consistent with your intent.
Why is this better?  First, it doesn't say the periodicity is 
determined; what is determined is the pitch (even that is a bit of a 
stretch, but let's go with it).  Second, it doesn't depend on 
whether the signal is periodic, that is, whether harmonics exist. 
Third, it doesn't depend on being able to isolate and separately 
characterize components, harmonic or otherwise.  Fourth, it doesn't 
need "multiples" (or divisors), but relies on the property of 
periodicity that a signal with a given period is also periodic at 
multiples of that period, so it only needs to look for "common" 
periodicities--which doesn't require any arithmetic, just simple 
neural circuits.  Fifth, it admits approximation, so that things 
like "the strike note of a chime" and noise-based pitch can be 
accommodated.  Sixth, it recognizes that the cochlea has a role in 
pitch perception.  It's still not complete or perfect, but I think 
presents a better picture of how it actually works, in a form that 
can be realistically modeled.
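
To make that concrete, here is a minimal sketch (my own illustration, 
assuming a crude bandpass filterbank in place of the cochlea and 
summed autocorrelation in place of the "simple neural circuits"; it 
is not a complete model): filter the sound into a few channels, 
autocorrelate each channel's output, pool across channels, and read 
the pitch off the strongest common lag.

    import numpy as np
    from scipy.signal import butter, lfilter

    fs = 16000                             # sample rate, Hz
    t = np.arange(0, 0.2, 1.0 / fs)
    # Harmonics at 200, 300, 400 Hz -- no energy at the 100 Hz fundamental.
    x = sum(np.sin(2 * np.pi * f * t) for f in (200.0, 300.0, 400.0))

    def bandpass(sig, lo, hi, fs):
        # Second-order Butterworth bandpass: a crude stand-in for one channel.
        b, a = butter(2, [lo / (fs / 2.0), hi / (fs / 2.0)], btype="band")
        return lfilter(b, a, sig)

    # A handful of channels; a real cochlear model would have many more.
    bands = [(150, 250), (250, 350), (350, 450)]
    summary = np.zeros(len(x))
    for lo, hi in bands:
        y = bandpass(x, lo, hi, fs)
        ac = np.correlate(y, y, mode="full")[len(y) - 1:]  # lags >= 0
        summary += ac / ac[0]              # normalize, then pool across channels

    # Skip the zero-lag peak; look for the strongest common lag up to 20 msec.
    min_lag = int(fs / 1000)               # ignore lags shorter than 1 msec
    best_lag = min_lag + int(np.argmax(summary[min_lag:int(0.02 * fs)]))
    print("estimated pitch: %.1f Hz" % (fs / float(best_lag)))

On the 200/300/400 Hz example above, the strongest common lag should 
come out near 10 msec, i.e. a pitch of about 100 Hz, even though the 
signal has no energy at 100 Hz.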
Is this "tortured use of existing signal processing techniques" as 
Randy puts it?  I don't think so.  Is it "a unique way to do 
frequency analysis and to meet the dictum in biology that 'form 
follows function'"?  Sure, why not?  But why call it "frequency 
analysis"?  How about "a unique way to do sound analysis" (if by 
"unique" we mean common to many animals)?
I do have some sympathy for Randy's concern that we are far from a 
complete understanding, and that hearing aids are not as good as 
they would be if we understood hearing better, but yes, he sounds 
way too harsh in overblowing it so.  I'm wondering what's behind 
that, and 
whether it's just confusion about all the confusing literature on 
pitch perception, which I agree is a complicated mess -- or is the 
problem, indicated by Randy's previous posts, just that he doesn't 
understand basic linear systems and signal processing, and that's 
why it all seems "tortured"?
Dick