
Re: Time and Space



                                                      June 11, 1997

Neil Todd wrote
> One of the basic principles of any kind of theoretical work is
> that you start out your models as simple as possible, to at
> least get them up and running, so that you can make some
> comparisons with the data. Then when the model breaks down you
> learn something. That's how theoretical science makes progress.

Quite true! But then, I'm also a physicist specialized in
(electronic) modelling, so that must make me suspect ;-)

I have been looking around for an auditory model that gets at
least the gross features right in, say, auditory profile analysis
for complex sounds. A good model is not necessarily a model that
gets all the features right, but one that is clear and testable
about what it can (does) and what it cannot (does not) represent.
A physical analogy is Newtonian mechanics: still highly useful
at speeds far below the speed of light, although clearly "wrong"
with respect to relativistic (and many other) effects.

I need a model in order to better estimate/predict information
loss in auditory display schemes like the one demonstrated at
http://ourworld.compuserve.com/homepages/Peter_Meijer/javoice.htm
which maps arbitrary visual information to an auditory
representation. I'd like to push these time-varying complex
sounds through a computational auditory model to see how much
information is lost at the model's output. That would make it
much easier to play with the (spatio-temporal) mapping
parameters to get better results, perhaps making use of Gerald
Langner's findings as well.
Doing these experiments (only) with people would be a real pain
in view of, e.g., the variability in human performance. Is there
any decently specified model around that at least gets the major
perceptual features right: JNDs, missing fundamental perception,
and the main (temporal and frequency) masking effects? Temporal
masking may be a bit hard because of learning effects, but I
would still like a model that stands some chance of making valid
predictions about the perception of arbitrary complex sounds -
even if it covers only peripheral auditory processing, thereby
sidestepping a number of hard issues about neural (cortical)
plasticity.
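
To make the kind of experiment I have in mind a bit more
concrete, a rough sketch in Python/NumPy follows below. It is
emphatically not a validated auditory model: the "peripheral
model" here is just a magnitude spectrogram with some spectral
and temporal smearing as a crude surrogate for frequency and
temporal masking, the image-to-sound mapping is only a toy
stand-in (rows to frequencies, columns scanned over time), and
all function names and parameter values are my own inventions
for illustration. The point is merely to show how one could
compare how distinguishable two arbitrary inputs remain at the
model's output, as a crude proxy for information loss.

# Illustrative sketch only - not a validated auditory model.
# Map two toy "images" to time-varying complex tones, run both
# through a crude peripheral stand-in, and see how distinguishable
# the two sounds remain at the output.
import numpy as np

fs = 22050                      # sample rate in Hz
dur = 1.0                       # sound duration in seconds

def image_to_sound(image, fs=fs, dur=dur):
    """Toy image-to-sound mapping: each pixel row drives one sinusoid,
    columns are scanned left-to-right over the sound's duration."""
    n_rows, n_cols = image.shape
    t = np.arange(int(fs * dur)) / fs
    freqs = np.linspace(200.0, 5000.0, n_rows)   # one frequency per row
    col = np.minimum((t / dur * n_cols).astype(int), n_cols - 1)
    amps = image[:, col]                         # row amplitudes over time
    sound = np.sum(amps * np.sin(2 * np.pi * freqs[:, None] * t), axis=0)
    return sound / (np.max(np.abs(sound)) + 1e-12)

def crude_peripheral_model(sound, n_fft=512, hop=128,
                           spectral_smear=3, temporal_smear=5):
    """Crude stand-in for a peripheral model: magnitude spectrogram,
    then box-filter smearing across frequency and across time."""
    win = np.hanning(n_fft)
    frames = []
    for start in range(0, len(sound) - n_fft, hop):
        frames.append(np.abs(np.fft.rfft(win * sound[start:start + n_fft])))
    spec = np.array(frames).T                    # (freq bins, time frames)
    kf = np.ones(spectral_smear) / spectral_smear
    kt = np.ones(temporal_smear) / temporal_smear
    spec = np.apply_along_axis(lambda x: np.convolve(x, kf, 'same'), 0, spec)
    spec = np.apply_along_axis(lambda x: np.convolve(x, kt, 'same'), 1, spec)
    return spec

def distinguishability(a, b):
    """1 - normalized correlation: 0 means identical, 1 means unrelated."""
    a, b = a.ravel(), b.ravel()
    a, b = a - a.mean(), b - b.mean()
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

rng = np.random.default_rng(0)
img_a = rng.random((32, 64))                     # two arbitrary "images"
img_b = rng.random((32, 64))

snd_a, snd_b = image_to_sound(img_a), image_to_sound(img_b)
out_a, out_b = crude_peripheral_model(snd_a), crude_peripheral_model(snd_b)

print("input  distinguishability:", distinguishability(img_a, img_b))
print("output distinguishability:", distinguishability(out_a, out_b))

Replacing crude_peripheral_model by a real auditory model is
exactly the step where I would need something with some
consensus behind it.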

At the moment I'm still rather hesitant to try this with any of
the existing suite of auditory models, because there may be no
consensus about what the results would mean when it comes to
making convincing arguments pro or con. If only the model's
maker believed the results, the exercise would probably be a
waste of effort on my part. Am I too pessimistic in this
respect? Is there consensus about the validity of any model for
the kinds of auditory profile analysis and related purposes I'm
after?

Best wishes,

Peter Meijer