

Subject: [AUDITORY] [CfP] Announcing launch of 3rd Clarity Enhancement Challenge for Hearing Aid Signal Processing (CEC3)
From:    Jon Barker  <00000196bd06e182-dmarc-request@xxxxxxxx>
Date:    Wed, 10 Apr 2024 09:48:35 +0100

Dear List,

We are pleased to announce the launch of the third Clarity Enhancement Challenge for Hearing Aid Signal Processing (CEC3). The full challenge data and core development tools are now available. For full details please see the challenge website (https://claritychallenge.org/).

**Important Dates**

- 2nd April 2024: Launch of Task 1 and Task 2 with training and development data; initial tools.
- 1st May 2024: Launch of Task 3.
- 25th July 2024: Evaluation data released.
- 2nd Sept 2024: 1st round submission deadline for evaluation by objective measure.
- 15th Sept 2024: 2nd round submission deadline for listening tests (Tasks 2 and 3).
- Sept-Nov 2024: Listening test evaluation period.
- Dec 2024: Results announced at a Clarity Challenge Workshop (details TBD); prizes awarded.

**Background**

We are organising a series of machine learning challenges to advance hearing aid speech signal processing. Even if you've not worked on hearing aids before, we'll provide you with the tools to enable you to apply your machine learning and speech processing algorithms to help those with a hearing loss.

Although age-related hearing loss affects 40% of 55 to 74 year-olds, the majority of adults who would benefit from hearing aids don't use them. A key reason is simply that hearing aids don't provide enough benefit. In particular, speech in noise is still a critical problem, even for the most sophisticated devices. The purpose of the "Clarity" challenges is to catalyse new work to radically improve the speech intelligibility provided by hearing aids.

**The tasks**

You will be provided with hearing aid input signals for scenes containing a target speaker and interfering noise sources. Your task will be to develop a hearing aid signal processing algorithm that can improve the intelligibility of the target speaker for a specified hearing-impaired listener under these conditions.

The challenge follows on from the success of the 2nd Clarity Enhancement Challenge (CEC2), which used fully simulated data. We have extended CEC2 in three different directions, presented as three separate tasks:

- Task 1: Real ambisonic room impulse responses
- Task 2: Real hearing aid signals
- Task 3: Real dynamic backgrounds (launching 1st May)

Initially, entries will be evaluated using an objective speech intelligibility measure. Subsequently, up to twenty of the most promising systems will be evaluated by a panel of hearing-impaired listeners.

We will provide a baseline system so that teams can choose to focus on individual components or to develop their own complete pipelines.

**What will be provided**

- Evaluation of the best entries by a panel of hearing-impaired listeners.
- Speech + interferer scenes for training and evaluation.
- Entirely new datasets, including ambisonic impulse responses and acoustic scene recordings, and recordings of speech-in-noise scenes over 6-channel hearing aid devices.
- Listener characteristics, including audiograms.
- Software, including tools for generating training data, a baseline hearing aid algorithm, a baseline model of hearing impairment, and a binaural objective intelligibility measure.

Challenge and workshop participants will be invited to contribute to a journal Special Issue on the topic of Machine Learning for Hearing Aid Processing (TBC).

**For further information**

If you are interested in participating and wish to receive further information, please sign up to the Clarity Challenge Google Group at https://groups.google.com/g/clarity-challenge

If you have questions, contact us directly at claritychallengecontact@xxxxxxxx

**Organisers (alphabetical)**

- Michael A. Akeroyd, Hearing Sciences, School of Medicine, University of Nottingham
- Jon Barker, Department of Computer Science, University of Sheffield
- Trevor J. Cox, Acoustics Research Centre, University of Salford
- John F. Culling, School of Psychology, Cardiff University
- Jennifer Firth, Hearing Sciences, School of Medicine, University of Nottingham
- Simone Graetzer, Acoustics Research Centre, University of Salford
- Graham Naylor, Hearing Sciences, School of Medicine, University of Nottingham
- Jianyuan Sun, Department of Computer Science, University of Sheffield

Funded by the Engineering and Physical Sciences Research Council (EPSRC), UK

Supported by RNID (formerly Action on Hearing Loss), Hearing Industry Research Consortium, Amazon TTS Research

--
Professor Jon Barker,
Department of Computer Science,
University of Sheffield
+44 (0) 114 222 1824
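As an aside for readers new to this kind of task: the scenes described above combine a target speaker with interfering noise at a controlled signal-to-noise ratio. The following is a minimal illustrative sketch of that mixing step only, not the official challenge tooling; the function name `mix_at_snr` and the white-noise stand-in signals are assumptions, whereas the real challenge data uses recorded speech, room impulse responses, and multi-channel hearing aid microphone signals.

```python
import numpy as np

def mix_at_snr(target: np.ndarray, interferer: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the interferer so the mixture has the requested SNR, then sum.

    Illustrative only: real scene generation also involves room acoustics
    and per-channel hearing aid microphone responses.
    """
    n = min(len(target), len(interferer))
    target, interferer = target[:n], interferer[:n]
    p_target = np.mean(target ** 2)
    p_interferer = np.mean(interferer ** 2)
    # Choose a gain so that p_target / (gain^2 * p_interferer) = 10^(snr_db/10).
    gain = np.sqrt(p_target / (p_interferer * 10 ** (snr_db / 10)))
    return target + gain * interferer

# Stand-in signals (1 s of white noise at 16 kHz) in place of real recordings.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
noise = rng.standard_normal(16000)
mixture = mix_at_snr(speech, noise, snr_db=5.0)
```

A hearing aid algorithm entered in the challenge would take a multi-channel version of such a mixture as input and attempt to recover an intelligible target for a specified hearing-impaired listener.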


This message came from the mail archive
postings/2024/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University