Helena M. Saldana
Lawrence D. Rosenblum
Theresa Osinga
Dept. of Psychol., Univ. of California, Riverside, CA 92521
Visual information about a speaker's articulations can influence heard speech syllables [H. McGurk and J. MacDonald, Nature 264, 746--748 (1976)]. The strength of this so-called McGurk effect was tested using a highly reduced visual image. A point-light technique was adopted whereby an actor's face was darkened and reflective dots were arranged on various parts of the actor's lips, teeth, tongue, and jaw. The actor was videotaped producing syllables in the dark. These reduced visual stimuli were dubbed onto discrepant auditory syllables in order to test their visual influence. Although subjects could not identify a frozen frame of these stimuli as a face, dynamic presentations resulted in a significant visual influence on syllable identifications. These results suggest that ``pictorial'' facial features are not necessary for audiovisual integration in speech perception. The results will be discussed in terms of the ecological approach, the fuzzy logical model, and the motor theory of speech perception.