ASA 124th Meeting New Orleans 1992 October

1aPP11. An artificial neural network model of human sound localization.

T. R. Anderson

Armstrong Lab., AL/CFBA, Wright-Patterson AFB, Dayton, OH 45433-6573

J. A. Janko

Wright State Univ., Dayton, OH 45435

R. H. Gilkey

AL/CFBA, Wright-Patterson AFB, Dayton, OH 45433-6573

Wright State Univ., Dayton, OH 45435

An artificial neural network was trained to identify the location of sound sources using the head-related transfer functions (HRTFs) of Wightman and Kistler [J. Acoust. Soc. Am. 85, 858--867 (1989)]. The simulated signals were either filtered clicks or pure tones, with speaker placements separated in steps of 15 deg in azimuth or 18 deg in elevation. After the signals were passed through the HRTFs, the inputs to the nets were computed as the difference of the left-ear and right-ear phase spectra or the difference of the power at the output of left- and right-ear third-octave or twelfth-octave filter banks. Back propagation was used to train the nets. Separate nets were trained for each signal type and for each type of input data. Better than 90% correct identification of the source speaker's location can be achieved in either the horizontal or median plane. The results for the horizontal plane are compared to the predictions of the duplex theory of sound localization. [Work supported by AFOSR-91-0289 and AFOSR-Task 2313V3.]
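The approach described above can be illustrated with a minimal sketch: a small network trained by back propagation to classify azimuth from interaural difference spectra. All quantities here are synthetic stand-ins (the prototype "interaural difference" vectors, band count, and network size are assumptions for illustration); the original study derived its inputs from HRTF-filtered clicks and tones passed through third-octave or twelfth-octave filter banks.

```python
# Hedged sketch: a one-hidden-layer network trained by back propagation
# to identify source azimuth from interaural-difference feature vectors.
# The data are synthetic stand-ins, not HRTF-derived spectra.
import numpy as np

rng = np.random.default_rng(0)

N_BANDS = 24     # number of filter-bank bands (assumption)
N_AZIMUTHS = 24  # 360 deg / 15-deg steps in the horizontal plane

# One prototype "interaural difference" spectrum per azimuth;
# training/test examples are noisy copies of these prototypes.
prototypes = rng.normal(size=(N_AZIMUTHS, N_BANDS))

def make_batch(n_per_class=20, noise=0.3):
    X, y = [], []
    for k in range(N_AZIMUTHS):
        X.append(prototypes[k] + noise * rng.normal(size=(n_per_class, N_BANDS)))
        y.extend([k] * n_per_class)
    return np.vstack(X), np.array(y)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Network weights: input -> hidden (tanh) -> output (softmax).
H = 40
W1 = 0.1 * rng.normal(size=(N_BANDS, H)); b1 = np.zeros(H)
W2 = 0.1 * rng.normal(size=(H, N_AZIMUTHS)); b2 = np.zeros(N_AZIMUTHS)

X, y = make_batch()
Y = np.eye(N_AZIMUTHS)[y]  # one-hot targets
lr = 1.0
for epoch in range(500):
    h = np.tanh(X @ W1 + b1)
    p = softmax(h @ W2 + b2)
    # Back-propagate the cross-entropy gradient through both layers.
    d2 = (p - Y) / len(X)
    dW2 = h.T @ d2; db2 = d2.sum(axis=0)
    d1 = (d2 @ W2.T) * (1 - h**2)
    dW1 = X.T @ d1; db1 = d1.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Evaluate on fresh noisy examples of the same azimuth classes.
Xt, yt = make_batch(n_per_class=5)
pred = softmax(np.tanh(Xt @ W1 + b1) @ W2 + b2).argmax(axis=1)
accuracy = (pred == yt).mean()
print(f"test accuracy: {accuracy:.2f}")
```

On this easily separable synthetic task the sketch reaches high accuracy; the abstract's reported better-than-90% performance, by contrast, was obtained on the much harder HRTF-derived inputs.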