1pSP6. Interpolating HRTF for auditory virtual reality.

Session: Monday Afternoon, December 2

Time: 3:15


Author: Takanori Nishino
Location: Dept. of Info. Elec., Nagoya Univ., Furo-cho 1, Chikusa-ku Nagoya-shi, 464-01 Japan
Author: Sumie Mase
Location: Dept. of Info. Elec., Nagoya Univ., Furo-cho 1, Chikusa-ku Nagoya-shi, 464-01 Japan
Author: Shoji Kajita
Location: Dept. of Info. Elec., Nagoya Univ., Furo-cho 1, Chikusa-ku Nagoya-shi, 464-01 Japan
Author: Kazuya Takeda
Location: Dept. of Info. Elec., Nagoya Univ., Furo-cho 1, Chikusa-ku Nagoya-shi, 464-01 Japan
Author: Fumitada Itakura
Location: Dept. of Info. Elec., Nagoya Univ., Furo-cho 1, Chikusa-ku Nagoya-shi, 464-01 Japan

Abstract:

Two interpolation methods, one linear and one nonlinear, for the head-related transfer function (HRTF) are examined in order to realize virtual auditory localization. In both methods, the HRTFs of the left and right ears are represented by a delay time and a common impulse response, where the delay time is determined so that the cross-correlation of the two HRTFs takes its maximum value. A three-layer neural network is trained for the nonlinear method, whereas basic linear interpolation is used for the linear method. Evaluation tests are performed using HRTF prototypes published on the Web by the MIT Media Lab. For objective evaluation, the signal-to-deviation ratio (SDR) between the measured and interpolated HRTFs is calculated. The SDR of the nonlinear method (50 to 70 dB) is much better than that of the linear method (5 to 30 dB). On the other hand, there is no significant difference in the subjective evaluation of localizing earphone-presented sounds generated with the two interpolated HRTFs. Furthermore, the results of these subjective tests are nearly identical to those obtained with the measured HRTFs.
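
The delay/common-response decomposition, the linear interpolation, and the SDR measure described above can be sketched briefly in NumPy. This is a minimal illustration under simplifying assumptions, not the authors' implementation: the function names are hypothetical, the circular shift used for time alignment and the handling of the interpolated delay are simplifications, and the neural-network (nonlinear) method is not shown.

```python
import numpy as np

def relative_delay(h_a, h_b):
    """Delay (in samples) of h_b relative to h_a, taken at the maximum
    of their cross-correlation, as in the abstract's delay / common
    impulse-response decomposition."""
    xcorr = np.correlate(h_b, h_a, mode="full")
    return int(np.argmax(xcorr)) - (len(h_a) - 1)

def interpolate_hrir_linear(h1, h2, w):
    """Linear interpolation between two measured impulse responses
    h1 and h2 at neighboring directions, with weight w in [0, 1]
    toward h2.  The relative delay is removed before averaging and
    interpolated separately (assumed procedure)."""
    d = relative_delay(h1, h2)
    h2_aligned = np.roll(h2, -d)            # crude time alignment (circular shift)
    h_common = (1.0 - w) * h1 + w * h2_aligned
    d_interp = w * d                        # interpolate the delay itself
    return h_common, d_interp

def sdr_db(h_measured, h_interp):
    """Signal-to-deviation ratio in dB between a measured and an
    interpolated impulse response of the same length.  Any interpolated
    delay should be reinserted before this comparison."""
    err = h_measured - h_interp
    return 10.0 * np.log10(np.sum(h_measured ** 2) / np.sum(err ** 2))
```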


ASA 132nd meeting - Hawaii, December 1996