5pSC1. Vowel articulation training aid for the hearing impaired: An update.

Session: Friday Afternoon, June 20


Author: A. Matthew Zimmer
Location: Dept. of Elec. and Comput. Eng., Old Dominion Univ., Norfolk, VA 23529
Author: Stephen A. Zahorian
Location: Dept. of Elec. and Comput. Eng., Old Dominion Univ., Norfolk, VA 23529, zahorian@ece.odu.edu
Author: Stefan Auberg
Location: Syracuse Lang. Systems, Syracuse, NY 13214-2845

Abstract:

A computer-based system that provides real-time feedback for vowel articulation training for the hearing impaired is described. This system is a revised version of the training aid described in previous papers [Zahorian and Correal, J. Acoust. Soc. Am. 95, 3014(A) (1994); Beck and Zahorian, ICASSP-92, II, 241-244]. Revised feature extraction and classification algorithms improve accuracy and allow processing of vowels spoken in CVC contexts. Cost has been reduced by a new Windows 95/NT-based implementation that requires no specialized DSP or audio hardware. Steady-state vowel and CVC tokens have been collected from 24 adult male, 28 adult female, and 46 child speakers to provide training data for the semi-speaker-independent neural network classifiers used in the system. Displays include a two-dimensional F1/F2-style display, which provides continuous feedback, and a vowel bar graph display. Each accommodates four speaker groups: adult female, adult male, child, and general. Tests with ten steady-state monophthong vowels produced by speakers outside the training set indicate that typically over 80% of vowels are correctly depicted by the bar graph display and over 75% by the 2-D display. A third display for vowels spoken in CVC contexts is under development. [Work funded by NSF, Grant No. NSF-BES-9411607.]
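The abstract does not detail the classifiers or the feature-to-display mapping, but the idea behind an F1/F2-style display can be illustrated with a minimal sketch. The code below is an assumption for illustration only, not the authors' implementation: it classifies a vowel by finding the nearest centroid in (F1, F2) formant space, using rough, approximate adult-male formant averages for three vowels.

```python
# Illustrative sketch only -- NOT the system described in the abstract.
# Maps an (F1, F2) formant measurement to the vowel with the nearest
# centroid, the kind of mapping a 2-D F1/F2 display visualizes.
import math

# Approximate adult-male formant centroids in Hz (illustrative values).
VOWEL_CENTROIDS = {
    "iy": (270, 2290),   # as in "heed"
    "aa": (730, 1090),   # as in "hod"
    "uw": (300, 870),    # as in "who'd"
}

def classify_vowel(f1, f2):
    """Return the vowel label whose (F1, F2) centroid is closest."""
    return min(VOWEL_CENTROIDS,
               key=lambda v: math.dist((f1, f2), VOWEL_CENTROIDS[v]))
```

A real system like the one described would instead use trained neural network classifiers per speaker group (adult female, adult male, child, general), but the nearest-centroid view conveys why separate speaker-group models help: formant centroids shift with vocal-tract length.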


ASA 133rd meeting - Penn State, June 1997