1aSC1. The role of within-category structure in the integration of auditory and visual speech.

Session: Monday Morning, December 2

Time: 11:30


Author: Michael D. Hall
Location: Dept. of Speech and Hearing Sci., CHDD, Box 357920, Univ. of Washington, Seattle, WA 98195-7920
Author: Paula M. T. Smeele
Location: Dept. of Speech and Hearing Sci., CHDD, Box 357920, Univ. of Washington, Seattle, WA 98195-7920
Author: Patricia K. Kuhl
Location: Dept. of Speech and Hearing Sci., CHDD, Box 357920, Univ. of Washington, Seattle, WA 98195-7920

Abstract:

The influence of visual information on auditory speech perception can be observed under conditions where the two sources of information are discrepant. One demonstration involves viewing a face producing /b/ while listening to a dubbed /g/ token, with participants reporting that they heard /bg/ (McGurk and MacDonald, 1976). This "combination" response reflects the contribution of both modalities. Two experiments evaluated whether differences in the perceived quality of auditory stimuli within the /g/ category influence the incidence of combination responses. Synthetic VCV stimuli ranging from "good" to "poor" /aga/ tokens were generated by factorially combining six levels of F2 onset frequency with four levels of F3 onset frequency. In experiment 1, participants identified these auditory stimuli and rated their goodness as instances of /g/. Goodness was found to be correlated with, but not completely predicted by, consonant identification. In experiment 2, these stimuli were separately dubbed onto a visual /aga/ ("matched") and a visual /aba/ ("mismatched," which should evoke combination responses). Results will be discussed in terms of the sufficiency of consonant identification and category goodness in predicting the probability of combination responses. These data will be used to address models of auditory-visual speech integration. [Work supported by NICHD.]


ASA 132nd meeting - Hawaii, December 1996