Kaernbach and Demany provided psychophysical evidence against these models.
You are quite right that purely frequency-based models cannot account for all audible features of sound.
> Your poster says that the spectra were estimated using FFT, and the next sentence says using a gammatone filterbank. Which is it? Or both? Oh, I see, one says the algorithm and the other the model. Why would you choose an algorithm that doesn't match the model? Why treat these as conceptually different things? An algorithm is a computational model, is it not?
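As an aside, the distinction raised here can be made concrete in code. The sketch below is mine, not the poster's pipeline: the test signal, sample rate, channel spacing, filter order, and the standard Glasberg & Moore (1990) ERB formula are all illustrative assumptions.

```python
# A minimal sketch (assumptions throughout: signal, sample rate, channel
# spacing, filter order) contrasting the two spectral estimates in question.
import numpy as np
from scipy.signal import fftconvolve

fs = 16000                                    # sample rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# Estimate 1: FFT magnitude spectrum: fine-grained, linear frequency axis.
spectrum = np.abs(np.fft.rfft(x)) / len(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# Estimate 2: gammatone filterbank: smoothed excitation on an ERB-like axis.
def erb(f):
    """Equivalent rectangular bandwidth, Glasberg & Moore (1990)."""
    return 24.7 * (4.37 * f / 1000 + 1)

def gammatone_ir(fc, fs, dur=0.05, order=4):
    """Impulse response t**(n-1) * exp(-2*pi*b*ERB(fc)*t) * cos(2*pi*fc*t)."""
    t = np.arange(0, dur, 1 / fs)
    b = 1.019                                 # bandwidth factor, 4th order
    g = (t ** (order - 1) * np.exp(-2 * np.pi * b * erb(fc) * t)
         * np.cos(2 * np.pi * fc * t))
    return g / np.sqrt(np.sum(g ** 2))        # unit-energy normalisation

centers = np.geomspace(100, 4000, 32)         # 32 channels (assumed)
excitation = [np.sqrt(np.mean(fftconvolve(x, gammatone_ir(fc, fs),
                                          mode="same") ** 2))
              for fc in centers]              # per-channel RMS output

print(f"FFT peak: {freqs[np.argmax(spectrum)]:.0f} Hz")
print(f"Filterbank peak channel: {centers[np.argmax(excitation)]:.0f} Hz")
```

The FFT returns a fine-grained, linearly spaced spectrum of the waveform, while the filterbank returns a smoothed excitation pattern on an auditory frequency axis, which is one concrete way the 'algorithm' and the 'model' can come apart.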
While I support the caveat that purely frequency-based models are insufficient, I do not expect a simple mathematical solution at all, because the multipolar cells within the CN do not respond synchronously to the frequency that stimulated the IHCs. Neurons are generally too slow to convey all audible frequencies directly: chopper rates in the kHz range are impossible because of the refractory period, which at roughly 1 ms caps sustained firing near 1 kHz. The auditory nerve and cochlear nucleus therefore perform something like downsampling, and under such a code harmony, and octave unison in particular, are quite natural phenomena. We need not look for a 'learned' basis for them.
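To put a number on that limit, here is a minimal sketch, not a biophysical model: a threshold unit with an assumed 1 ms absolute refractory period driven by pure tones; the threshold, simulation rate, and durations are likewise illustrative.

```python
# A minimal sketch, not a biophysical model: a threshold unit with an
# assumed 1 ms absolute refractory period, driven by pure tones.
import numpy as np

fs = 100_000              # simulation rate in Hz (assumed)
refr = int(0.001 * fs)    # absolute refractory period: 1 ms = 100 samples

def firing_rate(freq, dur=0.1):
    """Spikes/s of a unit firing on upward crossings of 0.9 (assumed
    threshold), but never twice within the refractory period."""
    t = np.arange(int(dur * fs)) / fs
    x = np.sin(2 * np.pi * freq * t)
    crossings = np.where((x[1:] >= 0.9) & (x[:-1] < 0.9))[0] + 1
    count, last = 0, -refr
    for i in crossings:
        if i - last >= refr:          # outside refractoriness: spike
            count, last = count + 1, i
    return count / dur

for f in (250, 500, 1000, 2000, 4000):
    print(f"{f:5d} Hz tone -> {firing_rate(f):5.0f} spikes/s")
# Prints ~250, ~500, ~1000, ~1000, ~1000: above ~1 kHz the unit can only
# fire on every 2nd, 4th, ... cycle, i.e. it subsamples the waveform.
```

At 2 kHz the unit can fire only on every other cycle, so its interspike intervals match the period of a 1 kHz tone; in that sense the downsampled codes of a tone and its lower octave coincide, which is one way to read the claim that octave unison is natural.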
Here I have no idea what you think you're responding to.