
                         AIMR8.2

                AUDITORY IMAGE MODEL: RELEASE 8.2

                         May 1997

In October 1995, we published a short paper on the Auditory Image
Model (AIM) and the software package that we use to run the model.

Patterson, R.D., Allerhand, M., and Giguere, C., (1995). "Time-domain
   modelling of peripheral auditory processing: A modular architecture
   and a software platform," J. Acoust. Soc. Am. 98, 1890-1894.

AIM converts a digitised sound into a multi-channel neural activity
pattern (NAP) like that produced by the cochlea in response to the
sound.  It then applies strobed temporal integration or
autocorrelation to each channel of the NAP to convert it into
something more like the auditory image we hear when presented with the
sound.

You can read about AIM R8 on our WWW page, and pick up the sources there,
        http://www.mrc-apu.cam.ac.uk/aim/

or pick up the source code and documentation by anonymous ftp from
        ftp.mrc-apu.cam.ac.uk           directory pub/aim

The ReadMe.First file explains how to acquire the source code, how
to compile it, and how to get started.

The paper describes AIM Release 7.1 (AIM R7). The purpose of this
letter is to announce the first public release of AIM R8 (Release 8.2,
May 1997).  There was also a beta version of AIMR8 (R8.1, dated
August 1996).  There are three changes to AIM itself: a) We have
introduced a bank of low-pass filters at the output of the cochlea
simulation to improve the simulation of loss of phase locking at high
frequencies.  b) We have introduced the option of power compression at
the output of the gammatone auditory filterbank. c) We have changed
the nonlinearity of the transmission-line filterbank to a square-root
function. We have also developed a Matlab interface for AIM, and
written a document describing how to use the AIM software to create a
Meddis and Hewitt (1991) model of pitch perception
(docs/aimMeddisHewitt).


=======================================================================
CHANGES TO THE AUDITORY IMAGE MODEL ITSELF

The software package contains two basic types of AIM: functional and
physiological. The modifications are as follows:

1a. Power Compression Option: compress=0.5

The compression in the functional version of AIMR7 is logarithmic, and
this is the appropriate function to bridge between gammatone filters
with their exponential tails and two-dimensional thresholding -- the
process that simulates neural transduction in functional AIM.  There
are now data to indicate, however, that a power compressor with an
exponent of 0.5 (applied to amplitude values) is a better
representation of auditory compression over quite a wide dynamic
range.  Accordingly, the 'compress' option has been modified to accept
the arguments 'log', 'off', or a value between 0 and 1 indicating the
degree of power compression required. The default is 'log'.

The file bin/aimR8demo contains a script that shows how to set up
functional AIM with square root compression rather than logarithmic
compression.
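
As a rough illustration of these compression options (a minimal sketch
in Python rather than AIM code, with function names of our own
invention), the fragment below applies logarithmic, power-law, or no
compression to a set of amplitude values, mirroring the arguments
accepted by the 'compress' option:

    import numpy as np

    def compress_amplitudes(x, mode="log", eps=1e-6):
        """Illustrative compression of filterbank amplitude values.
        mode may be "log", "off", or a power exponent between 0 and 1,
        mirroring AIM's 'compress' option. (Sketch only, not AIM code.)"""
        x = np.abs(np.asarray(x, dtype=float))
        if mode == "off":
            return x
        if mode == "log":
            return np.log(x + eps)       # logarithmic compression
        return x ** float(mode)          # power law, e.g. 0.5 = square root

    # Square-root compression halves level differences expressed in dB:
    # an amplitude ratio of 100 (40 dB) compresses to a ratio of 10 (20 dB).
    print(compress_amplitudes([1.0, 100.0], mode=0.5))   # -> [ 1. 10.]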

1b. Nonlinearity of the transmission line filterbank

The basilar membrane nonlinearity of the physiological version has
been changed from an inverse function, 1/(1+x), to a square-root
function, sqrt(1/(1+x)), to improve the simulation of auditory
nonlinearities and two-tone suppression effects. The change in
nonlinearity has virtually no effect on the main travelling wave for a
wide range of input levels.

There are no changes to the relevant options (with suffix _tlf).
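
For comparison, the old and new nonlinearities can be written out as
follows (a minimal sketch in Python; the function names are ours and
do not appear in the AIM sources):

    import numpy as np

    def tl_gain_old(x):
        """AIMR7 basilar-membrane nonlinearity: inverse function 1/(1+x)."""
        return 1.0 / (1.0 + x)

    def tl_gain_new(x):
        """AIMR8 nonlinearity: square-root function sqrt(1/(1+x))."""
        return np.sqrt(1.0 / (1.0 + x))

    x = np.array([0.1, 1.0, 10.0])
    print(tl_gain_old(x))   # ~[0.909  0.500  0.091]
    print(tl_gain_new(x))   # ~[0.953  0.707  0.302]

Both functions are close to unity at low input values and diverge only
at higher levels, where the square root compresses less severely.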


2. Loss of Phase Locking: stages_idt=2 tup_idt=0.133ms

In the functional version of AIM, the instruction to simulate the
neural activity pattern produced by a sound (gennap) has been modified
to improve the simulation of 'loss of phase locking' as a function of
frequency in the region above 1200 Hz. AIMR8 applies a two-stage, or
second-order, lowpass filter with a time constant of 0.133 ms to the
output of each NAP channel. The filtering results in a 12 dB/octave
rolloff in temporal resolution in the region above 1200 Hz. Previously
there was no loss of phase locking. The relevant options and defaults
are 'stages_idt=2' and 'tup_idt=0.133ms'. Set 'stages_idt=off' to
revert to AIMR7.

The lowpass filter can also be used with the Meddis haircell module in
the physiological version of AIM to increase the rate of loss of phase
locking from its original 6 dB per octave above 1050 Hz to a more
realistic 12 dB/octave above about 1100 Hz. The relevant options and
defaults are 'stages_idt=1' and 'tup_idt=0.133ms'.  The issues are
described in docs/aimMeddisHewitt.
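
The effect of these options can be pictured as a cascade of
first-order lowpass filters applied to each NAP channel (a minimal
sketch in Python, assuming a simple one-pole smoother per stage; the
discrete-time filter inside AIM may be implemented differently):

    import numpy as np

    def lowpass_cascade(nap_channel, fs, tau=0.133e-3, stages=2):
        """Cascade of first-order lowpass filters on one NAP channel.
        Each stage adds a 6 dB/octave rolloff above the corner frequency
        1/(2*pi*tau), about 1200 Hz for tau = 0.133 ms, so stages=2 gives
        the 12 dB/octave rolloff of functional AIMR8 and stages=1 the
        extra stage used with the Meddis haircell module.  (Sketch only;
        not the AIM source code.)"""
        a = np.exp(-1.0 / (fs * tau))            # one-pole coefficient
        y = np.asarray(nap_channel, dtype=float)
        for _ in range(stages):
            out = np.empty_like(y)
            state = 0.0
            for n, sample in enumerate(y):
                state = (1.0 - a) * sample + a * state
                out[n] = state
            y = out
        return y

    print(1.0 / (2 * np.pi * 0.133e-3))          # corner frequency, ~1197 Hz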


3. bits_wave:  The option 'bits_wave' has been removed since it has no
effect in the floating-point version of AIM. It remains a silent
option for anyone who still uses the integer version.

4. makefile: The makefile has been reorganised and comments have been
added to assist compilation at non-standard unix sites. There is also
a help option for the makefile ('make help').


=======================================================================
ADDITIONS TO THE SOFTWARE PACKAGE


1. Simulation of Meddis and Hewitt (1991) using the AIM software:

The functional version of AIM includes a gammatone auditory
filterbank. The physiological version includes a bank of Meddis
haircells and a bank of autocorrelators for constructing
autocorrelograms. You can cross-connect the modules and so produce a
Meddis and Hewitt (1991) model with a gammatone auditory filterbank, a
bank of Meddis haircells and a bank of autocorrelators. The option
settings required to enable the Meddis and Hewitt model are described
in docs/aimMeddisHewitt.

The file docs/aimR8demo contains a script that demonstrates
these new versions of AIM.
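
The cross-connection can be pictured with the schematic sketch below
(Python rather than AIM option settings; half-wave rectification
stands in for the full Meddis haircell, so it illustrates the
architecture of the model, not the AIM implementation):

    import numpy as np

    def gammatone_ir(fc, fs, dur=0.025, order=4):
        """Impulse response of a gammatone filter centred at fc (Hz)."""
        t = np.arange(int(dur * fs)) / fs
        b = 1.019 * (24.7 + 0.108 * fc)          # Glasberg & Moore ERB width
        env = t**(order - 1) * np.exp(-2 * np.pi * b * t)
        return env * np.cos(2 * np.pi * fc * t)

    def meddis_hewitt_sketch(wave, fs, centre_freqs, max_lag_s=0.02):
        """Schematic Meddis and Hewitt (1991) pitch model: gammatone
        filterbank -> haircell -> per-channel autocorrelation -> summary
        autocorrelogram.  (Sketch only, with a crude haircell stand-in.)"""
        max_lag = int(max_lag_s * fs)
        summary = np.zeros(max_lag)
        for fc in centre_freqs:
            bm = np.convolve(wave, gammatone_ir(fc, fs), mode="same")
            hc = np.maximum(bm, 0.0)             # crude haircell stand-in
            acf = np.array([np.dot(hc[:len(hc) - k], hc[k:])
                            for k in range(max_lag)])
            summary += acf / (acf[0] + 1e-12)    # normalise, sum over channels
        return summary                           # peak lag ~ 1/F0 gives pitch

    fs = 16000
    t = np.arange(int(0.1 * fs)) / fs
    wave = sum(np.sin(2 * np.pi * h * 200.0 * t) for h in (1, 2, 3, 4))
    summary = meddis_hewitt_sketch(wave, fs, [200, 400, 800, 1600, 3200])
    print(np.argmax(summary[20:]) + 20)          # peak lag ~80 samples (200 Hz)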


2. Matlab/AIM:

There is now a Matlab interface for AIM so that you can construct
sounds using Matlab functions, pass the waves to AIM for processing,
and import the multi-channel neural activity patterns, or auditory
images, back into Matlab for further processing. It is referred to as
AIMMAT, and it is particularly useful for developing decision
statistics to relate the complex patterns observed in the neural
activity patterns and auditory images to data from experiments with
human listeners. It is described in

Tsuzaki, M. and Patterson, R.D. (1997). "AIM and AIMMAT as simulators
   of auditory peripheral processing," Int. Symp. on Simulation,
   Visualization and Auralization, 2-4 April 1997, Tokyo, Japan.

The interface tools are found in directory 'matlab'. There is a set
of demonstration routines with the prefix 'amd_'. They are all Matlab
scripts (that is, ".m" files), and so the easiest way to get going in
Matlab/AIM is to find the amd_xxx.m demo that is closest to your own
interest and modify a copy of it for your own application.


3. Silent Options for PostScript printing:

There are now a large number of Silent Options associated with
printing AIM displays (see docs/aimSilentOptions).  They are
particularly useful when printing autocorrelograms, summary
autocorrelograms, and summary auditory images, where some of the
default axes and labels are incorrect.  Examples of how to use these
silent options are presented in docs/aimR8demo.


4. The Bibliography of AIM-related papers (docs/aimBibilography) has
been updated.

5. A new TARGET has been added to the makefile to assist compilation
on SGI machines.

6. A document has been drafted describing the strobed temporal
integration (STI) mechanism used to construct auditory images from
neural activity patterns, and the relationship between STI and
autocorrelation.  See docs/aimStrobeCriterion. It is still under
development, but it is the best current description of the issues as
we see them.
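
As a toy sketch of the general idea (in Python, and under assumptions
of our own: strobes are simply taken to be local maxima of the NAP,
and each strobe adds the NAP segment that follows it into a decaying
image buffer; the actual strobe criterion is the subject of
docs/aimStrobeCriterion):

    import numpy as np

    def strobed_temporal_integration(nap_channel, fs,
                                     image_width_s=0.035, decay_s=0.03):
        """Toy sketch of strobed temporal integration for one NAP channel.
        Like autocorrelation, repeating intervals in the NAP accumulate
        at fixed positions in the image, but STI preserves the temporal
        asymmetry of the NAP.  (Assumptions as described above; not the
        AIM strobe criterion.)"""
        nap = np.asarray(nap_channel, dtype=float)
        width = int(image_width_s * fs)
        image = np.zeros(width)
        last_strobe = None
        for n in range(1, len(nap) - 1):
            if nap[n] > nap[n - 1] and nap[n] >= nap[n + 1] and nap[n] > 0.0:
                if last_strobe is not None:
                    image *= np.exp(-(n - last_strobe) / (decay_s * fs))
                segment = nap[n:n + width]
                image[:len(segment)] += segment  # add NAP aligned at strobe
                last_strobe = n
        return image

    # Example: a 100 Hz click train gives image peaks at multiples of 10 ms.
    clicks = np.zeros(16000)
    clicks[::160] = 1.0
    image = strobed_temporal_integration(clicks, fs=16000)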


=======================================================================

Regards from the AIM team,

Roy Patterson, Chris Giguere, Minoru Tsuzaki, Michael Akeroyd,
        Jay Datta and Mike Allerhand


May 1997

=======================================================================