Dear all,
Thank you all very much for your responses.
It seems that there is plenty of literature on the effect of visual stimuli on auditory localisation. If anyone is interested, relevant keywords for this topic include: 'visual capture', 'visual dominance', 'visual bias' and 'cross-modal bias'. Relevant papers may also be found under: 'multimodal integration', 'multisensory integration' and 'cross-modal plasticity'.
I have found that a common practice is to present only one visual cue and one auditory cue at a time. If the two stimuli are close to being spatially congruent, the subject will probably bind them together unconsciously, causing the 'visual capture' effect in which the visual stimulus dominates the auditory one. This may not happen if the two stimuli are noticeably incongruent in space [1, 2].
However, in the scenario that I originally proposed there are two auditory stimuli: one is explicitly associated with the visual cue and would act as an 'anchor', while the other has to be located. Intuitively, one might think that if the two auditory cues are perceived as distinct sources, the risk of visual dominance should be small.
As has been pointed out, another part of the question concerns 'relative localisation' and comparative judgements, particularly in multimodal scenarios. How good are we at estimating the locations of two sound sources relative to each other? And what happens if we introduce visual cues?
All suggestions are welcome! Thank you all again for your contributions.
Kind regards,
Isaac Engel
References:
[1] Bosen, Adam K., et al. 2016. "Comparison of Congruence Judgment and Auditory Localization Tasks for Assessing the Spatial Limits of Visual Capture." Biological Cybernetics 110(6): 455–71.
[2] Berger, Christopher C., et al. 2018. "Generic HRTFs May Be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity." Frontiers in Neuroscience 12: 21.
From: Engel Alonso-Martinez, Isaac
Sent: 24 February 2018 19:08
To: auditory@xxxxxxxxxxxxxxx
Subject: Visual references in sound localisation

Dear all,
I am interested in the impact of audible visual references on sound localisation tasks.
For instance, let's say that you are presented with two different continuous sounds (e.g., speech) coming from sources A and B, which are in different locations. While source A is clearly visible to you, B is invisible and you are asked to estimate its location.
Will source A act as a spatial reference, helping you make a more accurate estimate, or will it be distracting and make the task more difficult?
If anyone can point to some literature on this, it would be greatly appreciated.
Kind regards,
Isaac Engel