3aSCb23. Computational approaches to relating consonant and sentence recognition test scores.

Session: Wednesday Morning, December 3


Author: Philip F. Seitz
Author: Ken W. Grant
Location: Army Audiol. & Speech Ctr., Walter Reed Army Med. Ctr., Washington, DC 20307-5001, seitz@wrair-emh1.army.mil

Abstract:

Several recent studies have investigated the predictive relations among phoneme, word, and sentence recognition tests [e.g., Olsen et al., Ear Hear. 18, 175–188 (1997)]. The present study addressed this issue computationally, using data from 29 older hearing-impaired subjects who performed audio-visual recognition of 18 consonants in VCV syllables and of key words in IEEE/Harvard sentences. It was hypothesized that the predictive relation between consonant and sentence test scores could be strengthened by explicitly modeling and adjusting for differences between the test materials. Two approaches were tested: (1) weighting the consonant scores by each consonant's frequency of occurrence in the sentence materials, and (2) using a large computerized English lexicon to estimate possible lexical confusions for the IEEE/Harvard key words based on the consonant confusion matrices. The coefficient of determination (r²) for the overall consonant and key word scores was 0.56 [Grant et al., J. Acoust. Soc. Am. 97, 3308 (1995)]. Neither the frequency weighting nor the lexical weighting of the consonant scores increased the r². These results suggest that consonant and sentence recognition performance are related by a very general property of speech recognition that is not subject to lexical mediation. [Work supported by NIH-NIDCD.]
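To make approach (1) concrete, below is a minimal Python sketch of frequency weighting and of the r² computation. All function names, variable names, and data values are illustrative assumptions for this sketch, not the study's actual materials or scores; approach (2) would replace the simple frequency weights with confusion-derived estimates of lexical neighborhoods, which requires a lexicon and is not shown here.

# Sketch of approach (1): frequency-weighting consonant scores.
# Hypothetical names and toy data throughout; not the study's dataset.
import numpy as np

def weighted_consonant_score(consonant_scores, sentence_frequencies):
    """Weight each consonant's recognition score by its relative
    frequency of occurrence in the sentence materials."""
    freqs = np.asarray(sentence_frequencies, dtype=float)
    weights = freqs / freqs.sum()  # normalize counts to proportions
    return float(np.dot(weights, consonant_scores))

def coefficient_of_determination(x, y):
    """r² between two score vectors (squared Pearson correlation)."""
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

# Illustrative use: 29 subjects x 18 consonants, as in the study,
# but with random toy numbers standing in for real scores.
rng = np.random.default_rng(0)
consonant_scores = rng.uniform(0.3, 0.9, size=(29, 18))  # proportion correct
sentence_freqs = rng.integers(5, 50, size=18)            # counts in sentences
keyword_scores = consonant_scores.mean(axis=1) + rng.normal(0, 0.05, 29)

weighted = [weighted_consonant_score(s, sentence_freqs)
            for s in consonant_scores]
print(coefficient_of_determination(weighted, keyword_scores))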


ASA 134th Meeting, San Diego, CA, December 1997