Subject: Audibility of fire alarms
From: Brad Ingrao <info(at)bradingrao.com>
Date: Sun, 19 Sep 2004 13:47:00 -0400

Does anyone know of any research that identifies the acoustic characteristics needed to arouse people from sleep? I am particularly looking at making fire alarms more accessible to people with hearing loss, but I need to go back to the initial psychoacoustic work, if it exists.

Thank you in advance.

______________________________________

Best Regards,

Brad Ingrao, MSEd, CCC-A, FAAA
Editor
EDEN - The Electronic Deaf Education Network
info(at)bradingrao.com

_____

From: AUDITORY Research in Auditory Perception [mailto:AUDITORY(at)LISTS.MCGILL.CA] On Behalf Of Brungart Douglas S Civ AFRL/HECB
Sent: Wednesday, September 15, 2004 3:21 PM
To: AUDITORY(at)LISTS.MCGILL.CA
Subject: Possible National Academy of Science Post-Doc Opportunity at AFRL

I wanted to alert everyone to this possible research opportunity at our laboratory at Wright-Patterson AFB in Dayton, Ohio. We are looking for post-doc candidates interested in auditory localization and multitalker speech perception. The program is administered by the National Academy of Sciences, and the stipend is relatively generous. There is no guarantee that funding will be available, but for the right candidate this could be a great opportunity. The deadline to apply for the fall cycle is November 1st. Note that the position is open only to US citizens and legal permanent residents (Green Card holders).

Thanks,
Doug Brungart

Increasing Information Transfer in Audio Display Systems

Human audition is an amazingly complex modality capable of extracting spatial, spectral, and temporal information from multiple simultaneous sound sources, even in adverse listening environments.
However, most real-world audio display systems rely on relatively simple stimuli that fail to take full advantage of the inherent capabilities of human listeners. The goal of this research is to find ways to increase the amount of information transferred to listeners through audio display systems. The effort involves two broad areas of research. The first area focuses on the generation of robust and intuitive azimuth, elevation, and distance cues that maximize the transfer of spatial information in audio displays, especially in noisy environments that involve more than one virtual sound source. The second area focuses on improving the segregation of competing sound sources in complex listening environments, especially those that involve more than one simultaneous speech signal. A major component of this research is a study of the role that non-energetic "informational" masking plays in the perception of multiple speech signals.

More info about the program:
http://www4.nationalacademies.org/pga/rap.nsf/ByTitle/13.15.07.B5700?OpenDocument

More info about our laboratory and its facilities:
http://www.hec.afrl.af.mil/HECB/index.shtml