ASA 125th Meeting Ottawa 1993 May

2aSP8. Inversion and control of an articulatory model of the vocal tract: Recovering articulatory gestures from sounds.

Rafael Laboissière

Inst. de la Commun. Parlée, 46 Ave. Félix Viallet, 38031 Grenoble Cedex, France

To what extent are the basic concepts of robotics and control theory useful for understanding speech communication processes? This paper addresses that question and proposes a quantitative modeling framework that has some bearing on the classical concepts of feedforward and feedback control systems. The main feature of the model is its capability to solve the ill-posed inversion problem of determining articulatory commands from specifications of speech goals, whatever their nature. A theoretical analysis shows that earlier models of motor control, namely the task-dynamics approach [Saltzman and Munhall (1989), Ecolog. Psychol. 1(4)] and forward-inverse modeling [Jordan and Rumelhart (1992), Cognitive Sci.], are special cases of the present model. Well-known speech phenomena, such as compensation for bite-block perturbations and coarticulation in simple VCV sequences, are successfully simulated within this framework. This modeling effort can inform the theoretical debate on invariance versus variability in speech, because it provides a natural framework for addressing the following issues: (1) the nature of the control space used in programming speech gestures; (2) suitable principles of control, defined as the choice of a control architecture and of the constraints imposed on the control law to solve the inversion problem; and (3) the learning of an inverse model capable of generating suitable articulatory commands from a specification of the goals to be reached.
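The ill-posedness of the inversion problem — many articulator configurations producing the same acoustic goal — and its resolution by constraints on the control law can be sketched numerically. The Python toy below is a hypothetical linear forward model chosen for illustration, not the paper's articulatory model: inversion proceeds by gradient descent through the forward model, in the spirit of Jordan and Rumelhart's forward-inverse scheme, with an effort penalty acting as the constraint that selects one command among infinitely many.

```python
import numpy as np

# Hypothetical toy setup (all numbers assumed, not from the paper):
# four articulatory parameters map to two acoustic features, so
# infinitely many articulator configurations produce the same sound.
A = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.]])   # assumed linear forward model

def forward(x):
    """Forward model: articulatory commands -> acoustic features."""
    return A @ x

def invert(goal, x0, lr=0.05, reg=2e-3, steps=3000):
    """Invert the forward model by gradient descent on the commands x:
    the acoustic error is propagated back through the forward model,
    while an effort penalty reg * ||x - x0||^2 is the extra constraint
    that picks one solution out of the infinite set."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        err = forward(x) - goal                  # acoustic error
        x -= lr * (A.T @ err + reg * (x - x0))   # backprop + constraint
    return x

goal = np.array([0.3, -0.7])                     # target acoustic features
x_a = invert(goal, np.zeros(4))                  # from a neutral posture
x_b = invert(goal, np.array([1., 0., -1., 0.]))  # from a perturbed posture
# Both commands produce (almost) the same sound, yet remain distinct
# gestures: the constrained inverse compensates for the perturbation.
print(forward(x_a), forward(x_b))
```

Starting from two different postures, the inverse yields two different articulatory commands that reach the same acoustic goal, a minimal analogue of the compensation behavior (e.g., under bite-block perturbation) that the abstract describes.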