
Data sets available

Dear List,

After receiving several requests to share our speech recognition data sets, I have decided to make some of these available via the web (http://www.wramc.amedd.army.mil/departments/aasc/avlab/). The data sets are in the form of confusion matrices for consonant recognition in VCV context with the vowel /a/. Eighteen English consonants [b, p, g, k, d, t, m, n, v, f, tx (as in the word "that"), th, z, s, zh (as in the word "beige"), sh, ch, and j] were tested. The talker was a female speaker of American English. Ten unique productions of each consonant were recorded audiovisually and selected randomly during testing. The tests include auditory, visual (speechreading), and auditory-visual presentations. We will be adding further data sets (e.g., filtered speech recognition) in the future. Sentence and word recognition scores for many of these same subjects will also be made available.

Data Set 1 includes individual results from 40 hearing-impaired subjects, with speech presented at 0 dB S/N in a continuous speech-shaped noise. Each consonant was presented 40 times in each receiving condition (720 responses per matrix). There are three matrices per subject, corresponding to auditory, visual, and auditory-visual recognition conditions.

Data Set 2 includes pooled results from 8 normal-hearing subjects, with speech presented at a variety of S/N ratios (using a continuous speech-shaped noise). Each consonant was presented 40 times per subject (5760 responses per matrix). There are two matrices per condition (auditory and auditory-visual). At the end of the file is a pooled visual-only matrix.
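For those planning to analyze the matrices, the structure described above (18 stimulus consonants by 18 response categories, with fixed row sums) lends itself to simple sanity checks and scoring. The sketch below is only illustrative: the actual file layout on the web site is not described in this note, so it assumes a matrix has already been read into a list of lists of counts (rows = stimulus, columns = response); the function names are my own.

```python
# Illustrative sketch (not from the data files themselves): scoring an
# 18x18 consonant confusion matrix, rows = stimulus, columns = response.
# The loading step is omitted because the file format is not specified here.

CONSONANTS = ["b", "p", "g", "k", "d", "t", "m", "n", "v", "f",
              "tx", "th", "z", "s", "zh", "sh", "ch", "j"]

def percent_correct(matrix):
    """Overall recognition score: diagonal counts over total responses."""
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    return 100.0 * correct / total

def check_presentations(matrix, per_consonant):
    """Each row should sum to the presentations per consonant
    (e.g., 40 per consonant = 720 responses in a Data Set 1 matrix)."""
    return all(sum(row) == per_consonant for row in matrix)

# Toy example: an error-free listener presented 40 tokens per consonant.
perfect = [[40 if i == j else 0 for j in range(18)] for i in range(18)]
assert check_presentations(perfect, 40)
print(percent_correct(perfect))  # 100.0
```

The same row-sum check applies to the pooled Data Set 2 matrices, with 320 responses per consonant row (40 presentations times 8 subjects).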

Additional information regarding these data sets can be found in:

Grant, K.W., Walden, B.E., and Seitz, P.F. (1998). "Auditory-visual speech recognition by hearing-impaired subjects: Consonant recognition, sentence recognition, and auditory-visual integration," J. Acoust. Soc. Am. 103, 2677-2690.

Grant, K.W., and Seitz, P.F. (1998). "Measures of auditory-visual integration in nonsense syllables and sentences," J. Acoust. Soc. Am. 104, 2438-2450.

Grant, K.W., and Walden, B.E. (1996). "Evaluating the articulation index for auditory-visual consonant recognition," J. Acoust. Soc. Am. 100, 2415-2424.

A great deal of time and effort went into collecting these data, so I would respectfully request that, before you publish any reanalyses of these data, you give me a heads-up as to results and interpretation. I hope you find these useful in your work.

Ken W. Grant

Walter Reed Army Medical Center
Army Audiology and Speech Center
Washington, DC 20307-5001

PHONE: (202) 782-8596
FAX: (202) 782-9228

EMAIL: grant@tidalwave.net