Dear Emad,
The only criterion that makes sense to me is that a
cochlear model should replicate well-known psychoacoustic experiments without
having to apply "special adjustments" for each experiment. For example, the
two-tone interference experiments (missing fundamental, combination
tones, masking, and so on) should all be explainable by the same model. Likewise
for many other phenomena, such as rippled noise, whispered speech,
real-time pitch detection, or detection of tones without periodic repetition,
none of which should require exotic computations. Furthermore, the model should be capable
of separating sounds in terms of awareness and attention.
As far as I know, no current model comes
anywhere close to doing even one of these things. So you have an open field to
pursue.
Here's a hint: Helmholtz was wrong, Seebeck was
right.
Best regards,
John Bates