4pSC4. A comparison of perceptual word similarity metrics.

Session: Thursday Afternoon, December 4


Author: Paul Iverson
Location: Spoken Lang. Processes Lab., House Ear Inst., 2100 W. 3rd St., Los Angeles, CA 90057
Author: Edward T. Auer, Jr.
Location: Spoken Lang. Processes Lab., House Ear Inst., 2100 W. 3rd St., Los Angeles, CA 90057
Author: Lynne E. Bernstein
Location: Spoken Lang. Processes Lab., House Ear Inst., 2100 W. 3rd St., Los Angeles, CA 90057

Abstract:

Contemporary theories of spoken-word recognition rely on the notion that the stimulus word is mapped against, or selected from, words in long-term memory in terms of its phonetic (form-based) attributes. A few metrics have been proposed to model the form-based similarity of words, including an abstract phonemic metric computed directly on the lexicon (i.e., Coltheart's N), and perceptual metrics derived from the results of phoneme identification experiments. The results of applying several different metrics to phoneme and word identification data (open-set and forced-choice tasks) will be discussed, and these metrics will be compared across stimulus conditions with a range of intelligibility levels and similarity structures (visual-only lipreading, audio-only conditions processed by a vocoder, and audiovisual conditions pairing vocoded audio with lipreading). Our results suggest that graded perceptual metrics may be most useful for understanding the results of word identification experiments across a wide range of stimulus conditions. [Work supported by NIH DC00695.]
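As an illustration of the lexicon-based metric mentioned above, the following sketch computes Coltheart's N in its standard form: the number of lexicon entries that differ from a target word by exactly one segment substitution. This is not the authors' implementation; the toy phonemic lexicon and the function name are hypothetical, and real applications would use a full phonemically transcribed lexicon.

```python
def coltheart_n(word, lexicon):
    """Count the one-substitution neighbors of `word` in `lexicon`.

    `word` and each lexicon entry are tuples of phoneme symbols;
    neighbors must have the same length and differ in exactly one position.
    """
    count = 0
    for entry in lexicon:
        if entry == word or len(entry) != len(word):
            continue
        # Number of positions where the two transcriptions differ
        mismatches = sum(a != b for a, b in zip(word, entry))
        if mismatches == 1:
            count += 1
    return count

# Toy lexicon in a broad phonemic transcription (hypothetical entries)
lexicon = [
    ("b", "ae", "t"),   # bat
    ("k", "ae", "p"),   # cap
    ("k", "ow", "t"),   # coat
    ("d", "aw", "g"),   # dog
]

print(coltheart_n(("k", "ae", "t"), lexicon))  # cat's neighbors: bat, cap, coat -> 3
```

Graded perceptual metrics, by contrast, would weight neighbors by empirically measured phoneme confusability rather than counting all single-segment substitutions equally.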


ASA 134th Meeting - San Diego CA, December 1997