Data sets available (Ken Grant)


Subject: Data sets available
From:    Ken Grant  <grant(at)TIDALWAVE.NET>
Date:    Wed, 2 Aug 2000 11:17:51 -0400

Dear List,

After receiving several requests to share our speech recognition data sets, I have decided to make some of them available via the web (http://www.wramc.amedd.army.mil/departments/aasc/avlab/). The data sets are in the form of confusion matrices for consonant recognition in vCv context with the vowel /a/. Eighteen English consonants [b, p, g, k, d, t, m, n, v, f, tx (as in the word "that"), th, z, s, zh (as in the word "beige"), sh, ch, and j] were tested. The talker was a female speaker of American English. Ten unique productions of each consonant were recorded audiovisually and selected randomly during testing. The tests include auditory, visual (speechreading), and auditory-visual presentations. We will be adding more data sets (e.g., filtered speech recognition) in the future. Sentence and word recognition scores for many of these same subjects will also be made available.

Data Set 1 includes individual results from 40 hearing-impaired subjects, with speech presented at 0 dB S/N in a continuous speech-shaped noise. Each consonant was presented 40 times in each receiving condition (720 responses per matrix). There are three matrices per subject, corresponding to the auditory, visual, and auditory-visual recognition conditions.

Data Set 2 includes pooled results from 8 normal-hearing subjects, with speech presented at a variety of S/N ratios (using a continuous speech-shaped noise). Each consonant was presented 40 times per subject (5760 responses per matrix). There are two matrices per condition (auditory and auditory-visual). At the end of the file is a pooled visual-only matrix.
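For those planning to work with the matrices programmatically, a minimal Python sketch is given below. It assumes a matrix has been saved as 18 whitespace-separated rows of 18 response counts (stimulus rows, response columns); the actual file layout on the web site may differ, and the file name used here is purely illustrative.

# Minimal sketch: load one 18x18 consonant confusion matrix and
# summarize it. Assumes stimulus rows, response columns, and raw
# response counts in each cell; the file name and label order are
# illustrative, not taken from the actual data files.

CONSONANTS = ["b", "p", "g", "k", "d", "t", "m", "n", "v", "f",
              "tx", "th", "z", "s", "zh", "sh", "ch", "j"]

def load_matrix(path):
    """Read an 18 x 18 whitespace-separated count matrix."""
    with open(path) as f:
        rows = [[int(tok) for tok in line.split()]
                for line in f if line.strip()]
    assert len(rows) == 18 and all(len(r) == 18 for r in rows)
    return rows

def percent_correct(matrix):
    """Overall score: diagonal (correct) counts over all responses."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return 100.0 * correct / total

m = load_matrix("subject01_auditory.txt")   # hypothetical file name
print("Overall percent correct: %.1f%%" % percent_correct(m))
for i, label in enumerate(CONSONANTS):      # per-consonant scores
    print("%3s: %.1f%%" % (label, 100.0 * m[i][i] / sum(m[i])))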
Additional information regarding these data sets can be found in:

Grant, K.W., Walden, B.E., and Seitz, P.F. (1998). "Auditory-visual speech recognition by hearing-impaired subjects: Consonant recognition, sentence recognition, and auditory-visual integration," J. Acoust. Soc. Am. 103, 2677-2690.

Grant, K.W., and Seitz, P.F. (1998). "Measures of auditory-visual integration in nonsense syllables and sentences," J. Acoust. Soc. Am. 104, 2438-2450.

Grant, K.W., and Walden, B.E. (1996). "Evaluating the articulation index for auditory-visual consonant recognition," J. Acoust. Soc. Am. 100, 2415-2424.

A great deal of time and effort went into collecting these data, and I would respectfully request that before you publish any reanalyses of these data, you give me a heads-up as to results and interpretation. I hope you find them useful in your work.

--
Ken W. Grant

Walter Reed Army Medical Center
Army Audiology and Speech Center
Washington, DC 20307-5001

PHONE: (202) 782-8596
FAX: (202) 782-9228

EMAIL: grant(at)tidalwave.net


This message came from the mail archive
http://www.auditory.org/postings/2000/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University