Subject: [AUDITORY] Pre-announcing the 2nd Clarity Enhancement Challenge for Hearing Aid Signal Processing - Launching 30th March
From: Jon Barker <00000196bd06e182-dmarc-request@xxxxxxxx>
Date: Fri, 4 Mar 2022 14:56:17 +0000

*Pre-announcing the 2nd Clarity Enhancement Challenge for Hearing Aid Signal Processing.*

Preliminary details appear on the Clarity website <http://claritychallenge.org/> and below. Full details will appear on a dedicated website on the challenge launch date, 30th March. If you have questions, please contact us directly at claritychallengecontact@xxxxxxxx

*Important Dates 2022*

*30th March* - Challenge launch, including release of the train/dev data sets, mixing tools, full rules and documentation
*end April* - Release of the full toolset + baseline system
*25th July* - Evaluation data released
*1st Sept* - Submission deadline
*Sept-Nov* - Listening test evaluation period
*Early Dec* - Results announced at a Clarity Challenge Workshop; prizes awarded

*Background*

We are organising a series of machine learning challenges to advance hearing aid speech signal processing. Even if you've not worked on hearing aids before, we'll provide you with the tools you need to apply your machine learning and speech processing algorithms to help those with hearing loss.

Although age-related hearing loss affects 40% of 55- to 74-year-olds, the majority of adults who would benefit from hearing aids don't use them. A key reason is simply that hearing aids don't provide enough benefit. In particular, speech in noise remains a critical problem, even for the most sophisticated devices. The purpose of the "Clarity" challenges is to catalyse new work that radically improves the speech intelligibility provided by hearing aids.

*This is the second in a series of enhancement challenges* considering increasingly complex listening scenarios. The first round (CEC1) focused on speech in indoor environments in the presence of a single interferer. The new challenge extends CEC1 in several important respects: modelling *listener head motion*, including scenes with *multiple interferers*, and covering an *extended range of interferer types*.

*The Task*

You will work with simulated scenes, each containing a target speaker and one or more interfering noises. For each scene, there will be signals simulating those captured by a behind-the-ear hearing aid with three microphones at each ear, and signals captured at the eardrum with no hearing aid present. The target speech will be a short sentence, and the interfering noises will be speech, domestic appliance noise or music samples.

The task is to deliver a hearing aid signal processing algorithm that improves the intelligibility of the target speaker for a specified hearing-impaired listener. Initially, entries will be evaluated using an objective speech intelligibility measure. Subsequently, up to twenty of the most promising systems will be evaluated by a panel of hearing-impaired listeners.

Prizes will be awarded for the systems achieving the best objective measure scores and for the best listening test outcomes.

We will provide a baseline system so that teams can choose to focus on individual components or to develop their own complete pipelines.
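To make the task concrete, here is a minimal sketch of the kind of processing function an entry might implement. Everything in it is an assumption for illustration: the function name, channel layout, sample rate and fitting rule are hypothetical, and the official signal formats and baseline API will only be defined by the toolset released on 30th March.

    # Hypothetical sketch of an entry's core step: six behind-the-ear microphone
    # channels in, one stereo hearing aid output out, shaped by the listener's
    # audiogram. Illustrative only - not the official challenge API.
    import numpy as np
    from scipy.signal import firwin2, lfilter

    FS = 44100  # assumed sample rate; the real rate is fixed by the challenge data

    def process_scene(mics, audiogram_freqs, levels_left, levels_right):
        """mics: (n_samples, 6) array; channels 0-2 left ear, 3-5 right ear.
        audiogram_freqs: audiogram frequencies in Hz.
        levels_*: hearing threshold levels in dB HL at those frequencies.
        Returns an (n_samples, 2) stereo signal for the left/right ears."""
        out = np.zeros((mics.shape[0], 2))
        ears = [(slice(0, 3), levels_left), (slice(3, 6), levels_right)]
        for ear, (chans, losses) in enumerate(ears):
            # Stand-in for real multi-microphone enhancement: average the
            # ear's three channels (a trivial beamformer).
            x = mics[:, chans].mean(axis=1)
            # Crude linear amplification using the classic half-gain rule:
            # gain in dB is half the hearing loss at each audiogram frequency.
            losses = np.asarray(losses, dtype=float)
            freqs = np.concatenate(([0.0], np.asarray(audiogram_freqs, float), [FS / 2]))
            gains_db = 0.5 * np.concatenate(([losses[0]], losses, [losses[-1]]))
            fir = firwin2(255, freqs / (FS / 2), 10.0 ** (gains_db / 20.0))
            out[:, ear] = lfilter(fir, [1.0], x)
        return out

A real entry would replace the channel average with genuine beamforming or a learned enhancement front end, and the half-gain rule with a proper fitting prescription, but the input/output contract is the point: six microphone signals plus a listener characterisation in, one binaural signal out.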
*What will be provided*

- Evaluation of the best entries by a panel of hearing-impaired listeners.
- Premixed speech + interferer scenes for training and evaluation.
- A database of 10,000 spoken target sentences, and speech, noise and music interferers.
- Listener characterisations, including audiograms and speech-in-noise testing.
- Software including tools for generating additional training data, a baseline hearing aid algorithm, a baseline model of hearing impairment, and a binaural objective intelligibility measure (the idea behind such a measure is sketched after this list).
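The official intelligibility measure will ship with the toolset, so the following is only a rough, hypothetical illustration of what a binaural objective measure does: compare the processed signal at each ear against a clean reference, score how well their short-time envelopes agree, and let the better ear dominate. This envelope-correlation proxy is a toy, not the challenge metric.

    # Toy "better-ear" envelope-correlation proxy for intelligibility.
    # Illustrative only; the real measure in the toolset works differently.
    import numpy as np

    def envelope(x, frame=256):
        """RMS envelope of a 1-D signal, one value per non-overlapping frame."""
        n = len(x) // frame
        return np.sqrt((x[: n * frame].reshape(n, frame) ** 2).mean(axis=1))

    def toy_better_ear_score(processed, reference):
        """processed, reference: time-aligned (n_samples, 2) stereo arrays.
        Returns the higher of the two per-ear envelope correlations."""
        scores = []
        for ear in (0, 1):
            e_proc = envelope(processed[:, ear])
            e_ref = envelope(reference[:, ear])
            scores.append(np.corrcoef(e_proc, e_ref)[0, 1])
        return max(scores)

Established measures operate on sub-band rather than broadband envelopes and handle level and alignment much more carefully, but the overall shape of the computation - reference versus processed, per ear, reduced to a single score per scene - is the same.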
Challenge participants will be invited to present their work at a dedicated workshop to be held in early December (details TBC). There will be prizes for the best-performing systems. We will also be organising a special issue of the journal Speech Communication, to which participants will be invited to contribute.

*For further information*

Full details will be released on a dedicated website on the challenge launch date, 30th March. If you have questions, please contact us directly at claritychallengecontact@xxxxxxxx

*Organisers*

Michael A. Akeroyd, Hearing Sciences, School of Medicine, University of Nottingham
Jon Barker, Department of Computer Science, University of Sheffield
Will Bailey, Department of Computer Science, University of Sheffield
Trevor J. Cox, Acoustics Research Centre, University of Salford
John F. Culling, School of Psychology, Cardiff University
Lara Harris, Acoustics Research Centre, University of Salford
Graham Naylor, Hearing Sciences, School of Medicine, University of Nottingham
Zuzanna Podwinska, Acoustics Research Centre, University of Salford
Zehai Tu, Department of Computer Science, University of Sheffield

*Funded by* the Engineering and Physical Sciences Research Council (EPSRC), UK

*Supported by* RNID (formerly Action on Hearing Loss), Hearing Industry Research Consortium, Amazon TTS Research