The vOICe to AIM (Peter Meijer)


Subject: The vOICe to AIM
From:    Peter Meijer <meijer@NATLAB.RESEARCH.PHILIPS.COM>
Date:    Mon, 16 Jun 1997 09:28:02 +0200

June 16, 1997

I'd like to thank all of you who responded, publicly or privately, to my inquiry on auditory models (subject was "Re: Time and Space"). So far, it seems that consensus about the validity of any model for the kinds of auditory profile analysis that I am after is rather unlikely. Of course, that does not automatically imply that no model is up to the job, only that there is no widespread agreement about it yet. I personally do not take a position on this issue, because I simply do not know how good or bad the available auditory models are.

As an auditory model testbench, I have now put a 1.05 second, 20 kHz sample rate, 16-bit mono .wav sound file on my site (see URL below). The actual frequencies present in this sound lie in the range [500 Hz, 5 kHz], and the sound was synthesized by my Java application from the arti2.gif test image at my site (again, see URL below).

In order to get at least some idea of what a contemporary (and readily available) auditory model would make of my complex sounds, I have now run a few experiments using Roy Patterson's AIM model. First draft results can be found on my new page at URL

   http://ourworld.compuserve.com/homepages/Peter_Meijer/aumodel.htm

where I used the AIM default settings as much as possible in generating auditory spectrograms, basilar membrane motion plots and neural activity patterns from the AIM model. In other words, I did not search for a special corner in parameter space to squeeze out seemingly best results: I only adapted the AIM output gain to get proper plotting ranges.

I'm sure the results will be controversial, but they illustrate how an auditory model may in principle be used to investigate auditory processing of the auditory images generated via The vOICe image-to-sound mapping. I hope this will, in the longer run, help bridge the gap between the large available body of "microscopic" knowledge of auditory processing and perception and the "macroscopic" level of complex sound processing required for the mental reconstruction of sonified imagery. The vOICe mapping might serve as a research vehicle for bringing many issues in auditory perception together within a single, concrete and conceptually simple framework. Results from other auditory models may be added at a later stage, when such models become available to me.

Best wishes,

Peter Meijer
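For readers who want a concrete feel for a column-by-column image-to-sound conversion of this general flavour, the minimal Java sketch below scans an image left to right over about a second, maps vertical position to frequency (here 500 Hz to 5 kHz, exponentially spaced) and pixel brightness to amplitude, and writes a 20 kHz, 16-bit mono .wav file. It is only an illustration under those assumptions, not the actual Java application that produced the posted sound; the class name, output file name and parameter values are placeholders.

    import javax.imageio.ImageIO;
    import javax.sound.sampled.*;
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayInputStream;
    import java.io.File;

    public class ImageToSoundSketch {
        public static void main(String[] args) throws Exception {
            // Illustrative parameters, chosen to match the posted test sound:
            // 20 kHz sample rate, about 1.05 s per scan, 500 Hz to 5 kHz.
            final int sampleRate = 20000;
            final double duration = 1.05;
            final double fLow = 500.0, fHigh = 5000.0;

            BufferedImage img = ImageIO.read(new File(args[0]));  // e.g. a small test image
            int cols = img.getWidth(), rows = img.getHeight();
            int samplesPerCol = (int) (duration * sampleRate) / cols;
            double[] signal = new double[samplesPerCol * cols];

            // Row index -> frequency, exponentially spaced from bottom (500 Hz) to top (5 kHz).
            double[] freq = new double[rows];
            for (int r = 0; r < rows; r++) {
                double frac = (rows - 1 - r) / (double) (rows - 1);
                freq[r] = fLow * Math.pow(fHigh / fLow, frac);
            }

            // Scan columns left to right; each column is a time slice in which every
            // bright pixel contributes a sinusoid at its row's frequency.
            double[] phase = new double[rows];
            for (int c = 0; c < cols; c++) {
                double[] amp = new double[rows];
                for (int r = 0; r < rows; r++) {
                    int rgb = img.getRGB(c, r);
                    amp[r] = (((rgb >> 16) & 0xFF) + ((rgb >> 8) & 0xFF) + (rgb & 0xFF)) / (3.0 * 255.0);
                }
                for (int n = 0; n < samplesPerCol; n++) {
                    double sum = 0.0;
                    for (int r = 0; r < rows; r++) {
                        sum += amp[r] * Math.sin(phase[r]);
                        phase[r] += 2.0 * Math.PI * freq[r] / sampleRate;
                    }
                    signal[c * samplesPerCol + n] = sum / rows;
                }
            }

            // Write the result as a 16-bit mono little-endian WAV file.
            byte[] pcm = new byte[signal.length * 2];
            for (int i = 0; i < signal.length; i++) {
                int s = (int) Math.max(-32768, Math.min(32767, Math.round(signal[i] * 32767)));
                pcm[2 * i] = (byte) (s & 0xFF);
                pcm[2 * i + 1] = (byte) ((s >> 8) & 0xFF);
            }
            AudioFormat fmt = new AudioFormat(sampleRate, 16, 1, true, false);
            try (AudioInputStream ais =
                     new AudioInputStream(new ByteArrayInputStream(pcm), fmt, signal.length)) {
                AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File("sketch.wav"));
            }
        }
    }

Run on a small greyscale test image, this produces a short mono WAV file of the same format (20 kHz, 16-bit) as the sample mentioned above.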

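As a rough pointer to what the front end of an auditory model computes, the sketch below implements a single gammatone channel, the kind of bandpass filter commonly used to approximate basilar membrane motion at one cochlear place. The fourth-order gammatone shape and the Glasberg and Moore ERB bandwidth are standard choices in the literature; this is not AIM's own code or its default parameter set, and the plots on the page above come from AIM itself, not from this sketch.

    public class GammatoneSketch {
        // One 4th-order gammatone channel: a rough approximation of basilar membrane
        // motion at the place tuned to centre frequency fc (Hz), sample rate fs (Hz).
        static double[] gammatoneChannel(double[] x, double fs, double fc) {
            double erb = 24.7 * (4.37 * fc / 1000.0 + 1.0);  // Glasberg & Moore ERB in Hz
            double b = 1.019 * erb;                           // gammatone bandwidth parameter
            int len = (int) (0.030 * fs);                     // 30 ms truncated impulse response
            double[] h = new double[len];
            for (int i = 0; i < len; i++) {
                double t = i / fs;
                h[i] = Math.pow(t, 3) * Math.exp(-2.0 * Math.PI * b * t)
                     * Math.cos(2.0 * Math.PI * fc * t);
            }
            double[] y = new double[x.length];                // direct convolution (unnormalized gain)
            for (int n = 0; n < x.length; n++) {
                double acc = 0.0;
                for (int k = 0; k < len && k <= n; k++) acc += h[k] * x[n - k];
                y[n] = acc;
            }
            return y;
        }

        public static void main(String[] args) {
            double fs = 20000.0;               // matches the 20 kHz rate of the test sound
            double[] click = new double[400];  // 20 ms click input
            click[0] = 1.0;
            double[] out = gammatoneChannel(click, fs, 1000.0);  // channel centred at 1 kHz
            for (int n = 0; n < out.length; n += 40)
                System.out.printf("%5.1f ms  %+.4f%n", 1000.0 * n / fs, out[n]);
        }
    }

A model such as AIM runs a bank of such channels with centre frequencies spaced along the ERB scale and applies further stages to the filter outputs before producing neural activity patterns and auditory images.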

This message came from the mail archive
http://www.auditory.org/postings/1997/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University