Abstract:
There is evidence for a left-visual-field/right-hemisphere (LVF/RH) advantage for speechreading static faces [R. Campbell, Brain & Cognit. 5, 1--21 (1986)] and a right-visual-field/left-hemisphere (RVF/LH) advantage for speechreading dynamic faces [P. M. Smeele, NATO ASI Workshop (1995)]. However, there is also evidence for an LVF/RH advantage when integrating dynamic visual speech with auditory speech [e.g., E. Diesch, Q. J. Exp. Psychol.: Human Exp. Psychol. 48, 320--333 (1995)]. To test relative hemispheric differences and the role of dynamic information, static, dynamic, and point-light visual speech stimuli were used in both speechreading and audio--visual integration tasks. Point-light stimuli are thought to retain only dynamic visual speech information [L. D. Rosenblum and H. M. Saldana, J. Exp. Psychol.: Human Percept. Perform. 22, 318--331 (1996)]. For both the speechreading and audio--visual integration tasks, an LVF/RH advantage was observed for the static stimuli, and an RVF/LH advantage was found for the dynamic and point-light stimuli. In addition, the relative RVF/LH advantage was greater with the point-light stimuli, implicating greater relative LH involvement in processing dynamic visual speech information.