Abstract:
Auditory and visual localization of a real sound source was examined under
both natural and artificial listening conditions. Five subjects performed a
visual search task paradigm that included aurally aided conditions, in which
white noise was emitted at 72 dB(A) from a single speaker at the subjects'
horizon. The paradigm also contained
conditions in which a fully enclosed helmet occluded the pinnae, creating
artificial listening conditions that transform normal head-related transfer
functions (HRTFs). Reaction time and accuracy were measured concurrently in
each condition to quantify the extent to which normal
HRTFs were transformed by the artificial listening conditions. The data were
analyzed, and a significant difference was found between the natural and
artificial conditions.