[AUDITORY] CFP Interspeech Special Session on Speech Intelligibility Prediction for Hearing-Impaired Listeners (Jon Barker)


Subject: [AUDITORY] CFP Interspeech Special Session on Speech Intelligibility Prediction for Hearing-Impaired Listeners
From:    Jon Barker  <j.p.barker@xxxxxxxx>
Date:    Fri, 21 Jan 2022 14:18:22 +0000

Special Session on Speech Intelligibility Prediction for Hearing-Impaired Listeners
https://claritychallenge.github.io/interspeech2022_siphil/
Interspeech, September 18-22, Incheon, Korea
Submissions due - March 21st
-----------------

One of the greatest challenges for hearing-impaired listeners is understanding speech in the presence of background noise. Noise levels encountered in everyday social situations can have a devastating impact on speech intelligibility, and thus communication effectiveness, potentially leading to social withdrawal and isolation. Disabling hearing impairment affects 360 million people worldwide, with that number increasing because of the ageing population. Unfortunately, current hearing aid technology is often ineffective at restoring speech intelligibility in noisy situations.

*To allow the development of better hearing aids, we need better ways to evaluate the speech intelligibility of audio signals*. We need prediction models that can take audio signals and use knowledge of the listener's characteristics (e.g., an audiogram) to estimate the signal's intelligibility. Further, we need models that can estimate the intelligibility not just of natural signals, but also of signals that have been processed using hearing aid algorithms - whether current or under development.

*The Clarity Prediction Challenge*
As a focus for the session, we have launched the 'Clarity Prediction Challenge'. The challenge provides you with noisy speech signals that have been processed by a number of hearing aid signal processing systems, together with corresponding intelligibility scores produced by a panel of hearing-impaired listeners. You are tasked with producing a model that can predict the intelligibility scores given just the signals, their clean references and a characterisation of each listener's specific hearing impairment. The challenge will remain open until the Interspeech submission deadline and all entrants are welcome. (Note: the Clarity Prediction Challenge is part of a 5-year programme, with further prediction and enhancement challenges planned for the future.)
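For orientation only, the sketch below (Python with NumPy/SciPy) illustrates the shape of the prediction task described above: a function that takes a hearing-aid-processed signal, its clean reference and a listener audiogram, and returns a scalar intelligibility estimate. The audiogram-to-weight mapping and the envelope-correlation measure are illustrative assumptions, not the official challenge baseline or evaluation metric.

    # Hypothetical intrusive intelligibility predictor: given a processed
    # signal, its clean reference and a listener audiogram, return a score
    # in [0, 1]. The audiogram weighting below is an assumed mapping,
    # shown only to illustrate the task's inputs and output.
    import numpy as np
    from scipy.signal import stft

    def predict_intelligibility(processed, reference, audiogram_db, fs=16000):
        # Short-time magnitude spectra of both signals
        _, _, P = stft(processed, fs=fs, nperseg=512)
        _, _, R = stft(reference, fs=fs, nperseg=512)
        P, R = np.abs(P), np.abs(R)

        # Crude audibility weights: 0 dB HL -> weight 1, >= 80 dB HL -> weight 0
        # (assumed, purely for illustration). audiogram_db holds thresholds at
        # the six standard audiometric frequencies below.
        freqs = np.linspace(0, fs / 2, P.shape[0])
        aud_freqs = np.array([250, 500, 1000, 2000, 4000, 8000])
        thresholds = np.interp(freqs, aud_freqs, audiogram_db)
        weights = np.clip(1.0 - thresholds / 80.0, 0.0, 1.0)

        # Per-frequency correlation between clean and processed envelopes,
        # combined with the audibility weights into a single score
        corrs = []
        for k in range(P.shape[0]):
            r, p = R[k], P[k]
            if r.std() > 0 and p.std() > 0:
                corrs.append(np.corrcoef(r, p)[0, 1])
            else:
                corrs.append(0.0)
        corrs = np.clip(np.array(corrs), 0.0, 1.0)
        return float(np.sum(weights * corrs) / (np.sum(weights) + 1e-9))

A real entry would of course replace this with a model fitted to the challenge data and the listener panel's scores; the point here is only the input/output contract: (processed signal, clean reference, listener characterisation) -> predicted intelligibility.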
*Relevant Topics*
The session welcomes submissions from entrants to the Clarity Prediction Challenge but is also *inviting papers related to topics in hearing impairment and speech intelligibility*, including, but not limited to:
- Statistical speech modelling for intelligibility prediction
- Modelling energetic and informational noise masking
- Individualising intelligibility models using audiometric data
- Intelligibility prediction in online and low-latency settings
- Model-driven speech intelligibility enhancement
- New methodologies for intelligibility model evaluation
- Speech resources for intelligibility model evaluation
- Applications of intelligibility modelling in acoustic engineering
- Modelling interactions between hearing impairment and speaking style
- Papers using the data supplied with the Clarity Prediction Challenge

*Organisers*
- Trevor Cox - University of Salford, UK
- Fei Chen - Southern University of Science and Technology, China
- Jon Barker - University of Sheffield, UK
- Daniel Korzekwa - Amazon TTS
- Michael Akeroyd - University of Nottingham, UK
- John Culling - University of Cardiff, UK
- Graham Naylor - University of Nottingham, UK

--
Professor Jon Barker,
Department of Computer Science,
University of Sheffield
+44 (0) 114 222 1824

