4pSCa4. Muscle-based modeling of facial dynamics during speech.

Session: Thursday Afternoon, June 19


Author: Jorge C. Lucero
Location: Dept. of Psych., Queen's Univ., Kingston, ON, Canada
Author: Kevin G. Munhall
Location: Dept. of Psych., Queen's Univ., Kingston, ON, Canada
Author: Eric Vatikiotis-Bateson
Location: ATR Human Information Processing Res. Labs., Kyoto, Japan
Author: Vincent L. Gracco
Location: Haskins Labs., New Haven, CT 06511-6695
Author: Demetri Terzopoulos
Location: Univ. of Toronto, Toronto, Canada

Abstract:

A dynamical, muscle-based model of the face is being developed to extend the facial model of Terzopoulos et al. [e.g., D. Terzopoulos and K. Waters, IEEE Trans. Pattern Anal. Machine Intell. 15, 569--579 (1993)]. The purpose of this work is to characterize facial dynamics during speech with a physiologically realistic model. The model consists of a multilayered deformable mesh of lumped masses connected by springs and viscous elements, which represents the layered structure of facial tissue. The spring and viscous constants approximate the stress/strain and viscous characteristics of facial skin. The mesh is deformed by forces generated by a set of modeled muscles of facial expression whose physical characteristics are being determined empirically. The shape of the facial model is individualized to each subject's facial morphology with data from a laser range finder (Cyberware scanner). In this report, work on driving the model with intramuscular, perioral EMG signals is presented. Recordings of the three-dimensional position of a subject's facial surface, sampled with OPTOTRAK, are compared to the patterns of deformation of the epidermal mesh in the model. Results will be discussed in terms of the strengths and weaknesses of this modeling approach. [Work supported by NIH-NIDCD Grant No. DC-00594 and NSERC.]
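To illustrate the lumped-mass, spring-and-viscous-element idea described above, the following minimal Python sketch advances a single mesh node under an external "muscle" force. It is not the authors' implementation: the node is isolated rather than coupled to neighbors in a multilayered mesh, and the mass, stiffness, damping, and force values are illustrative assumptions rather than empirically determined constants.

import numpy as np

# Illustrative (assumed) parameters for one lumped tissue node.
mass = 0.005          # kg
stiffness = 40.0      # N/m, linearized skin stiffness (assumed)
damping = 0.3         # N*s/m, viscous coefficient (assumed)
dt = 1e-3             # s, integration time step

def step(pos, vel, rest_pos, muscle_force):
    """Advance one node by a single semi-implicit Euler step.

    pos, vel, rest_pos, muscle_force are 3-vectors (numpy arrays).
    The spring force pulls the node toward its rest position, the
    viscous term opposes velocity, and muscle_force is the external
    drive (in the full model, derived from EMG).
    """
    spring_force = -stiffness * (pos - rest_pos)
    viscous_force = -damping * vel
    accel = (spring_force + viscous_force + muscle_force) / mass
    vel = vel + dt * accel
    pos = pos + dt * vel
    return pos, vel

# Example: a brief constant muscle pull along x, then release.
pos = np.zeros(3)
vel = np.zeros(3)
rest = np.zeros(3)
for i in range(200):
    f = np.array([0.02, 0.0, 0.0]) if i < 100 else np.zeros(3)  # N
    pos, vel = step(pos, vel, rest, f)
print("final displacement (m):", pos)

In the full model, each epidermal node would be coupled by such spring-damper elements to its neighbors and to deeper tissue layers, and the muscle forces would be time-varying functions of the recorded perioral EMG rather than a constant pull.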


ASA 133rd meeting - Penn State, June 1997