Faye Erickson
Lexington Ctr., 30th Ave. and 75th St., Jackson Heights, NY 11347
Eddy Yeung
Arthur Boothroyd
City Univ. of New York, New York, NY 10036
Interactions occur between the temporal characteristics of nonlinear processing schemes and the temporal and spectral properties of speech. Uncertainties in the nature and extent of these interactions make it difficult to predict hearing aid performance with speech input. One option is empirical measurement, but the collection of spectral data on a sample of phonemic segments can be prohibitively time consuming. This paper describes progress toward the development of an automated process. Input is derived from digitized samples of connected speech in which the temporal locations of segments of interest are already known. Recordings of speech output are, themselves, digitized. Using a known onset marker, the segments of interest are automatically extracted, subjected to an FFT, and integrated over a moving 1/3-oct window. The intensities and frequencies of key spectral points from each spectrum are displayed on a graph of intensity versus frequency for comparison with the input data. [Work supported by NIDRR Grant No. H133E80019.]
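The analysis steps described above — extracting a segment at a known onset, applying an FFT, and integrating the resulting power spectrum over 1/3-octave bands — might be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function names, band edges, windowing, and sampling parameters are assumptions.

```python
import numpy as np

def extract_segment(recording, onset_sample, duration_s, fs):
    """Pull a phonemic segment from a digitized recording at a
    known temporal location (onset marker), as in the abstract.
    All parameter names here are illustrative assumptions."""
    n = int(duration_s * fs)
    return recording[onset_sample:onset_sample + n]

def third_octave_levels(segment, fs, fmin=100.0, fmax=8000.0):
    """FFT a segment, then integrate the power spectrum over a
    moving 1/3-octave window; returns band centers (Hz) and
    band levels (dB, arbitrary reference)."""
    n = len(segment)
    # Hann window before the FFT (an assumption; the abstract
    # does not specify a window).
    spectrum = np.fft.rfft(segment * np.hanning(n))
    power = np.abs(spectrum) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    centers, levels = [], []
    fc = fmin
    while fc <= fmax:
        # 1/3-octave band edges around the center frequency fc
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)
        band = power[(freqs >= lo) & (freqs < hi)]
        if band.size:
            centers.append(fc)
            levels.append(10.0 * np.log10(band.sum() + 1e-20))
        fc *= 2 ** (1 / 3)  # step to the next 1/3-octave center
    return np.array(centers), np.array(levels)
```

The resulting `(centers, levels)` pairs correspond to the key spectral points that the abstract describes plotting as intensity versus frequency for comparison with the input data.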