[AUDITORY] [CFP] Announcing the 2nd Clarity Enhancement Challenge for Hearing Aid Signal Processing (CEC2) (Jon Barker)


Subject: [AUDITORY] [CFP] Announcing the 2nd Clarity Enhancement Challenge for Hearing Aid Signal Processing (CEC2)
From:    Jon Barker  <00000196bd06e182-dmarc-request@xxxxxxxx>
Date:    Thu, 14 Apr 2022 14:51:44 +0100

We are pleased to announce the launch of the second Clarity Enhancement Challenge for Hearing Aid Signal Processing (CEC2). The full challenge data and core development tools are now available. For details see the challenge website <https://claritychallenge.github.io/clarity_CC_doc/> and our github repository <https://github.com/claritychallenge/clarity>.

Important Dates

- 30th March - Challenge website launch
- 14th April - Release of training dataset and initial tools
- 30th April - Release of baseline system
- 1st May - Registration of challenge entrants opens
- 25th July - Evaluation data released
- 1st Sept - 1st round submission deadline for evaluation by objective measure
- 15th Sept - 2nd round submission deadline for listening tests <https://claritychallenge.github.io/clarity_CC_doc/docs/cec2/cec2_submission>
- Sept-Nov - Listening test evaluation period
- 2nd Dec - Results announced at a Clarity Challenge Workshop and prizes awarded

Background

We are organising a series of machine learning challenges to advance hearing aid speech signal processing. Even if you’ve not worked on hearing aids before, we’ll provide you with the tools you need to apply your machine learning and speech processing algorithms to help those with a hearing loss.

Although age-related hearing loss affects 40% of 55 to 74 year-olds, the majority of adults who would benefit from hearing aids don’t use them. A key reason is simply that hearing aids don’t provide enough benefit. In particular, speech in noise is still a critical problem, even for the most sophisticated devices. The purpose of the “Clarity” challenges is to catalyse new work to radically improve the speech intelligibility provided by hearing aids.

The series of challenges will consider increasingly complex listening scenarios. The first round focuses on speech in indoor environments in the presence of a single interferer, and begins with a challenge to improve hearing aid processing. Future challenges on modelling speech-in-noise perception will be launched at a later date.

The task

You will be provided with simulated scenes, each containing a target speaker and interfering noise. For each scene there will be signals that simulate those captured by a behind-the-ear hearing aid with three channels at each ear, and signals that simulate those captured at the eardrum without a hearing aid present. The target speech will be a short sentence, and the interfering noise will be speech, music, domestic appliance noise, or a combination of up to three sources of those types. The scenes will be dynamic: the simulated listener starts each scene facing away from the target and then turns towards, but not exactly at, the target.
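To make the signal layout concrete, here is a minimal Python sketch of loading one such scene. The file names, channel arrangement, and use of the soundfile library are illustrative assumptions only; the actual data layout is documented with the challenge tools in the github repository.

    # Minimal sketch of loading one simulated scene. File names and
    # channel layout below are hypothetical, not the actual format.
    import numpy as np
    import soundfile as sf  # pip install soundfile

    scene_id = "S00001"  # hypothetical scene identifier

    # Assumed: one stereo WAV per behind-the-ear microphone pair,
    # giving three channels at each ear (left/right in each file).
    ha_channels = []
    for ch in (1, 2, 3):
        signal, fs = sf.read(f"{scene_id}_mix_CH{ch}.wav")  # (n_samples, 2)
        ha_channels.append(signal)
    ha_input = np.stack(ha_channels, axis=2)  # (n_samples, 2 ears, 3 channels)

    # Assumed: a stereo reference captured at the eardrums, no hearing aid.
    eardrum_ref, fs = sf.read(f"{scene_id}_reference.wav")  # (n_samples, 2)

    print(ha_input.shape, eardrum_ref.shape, fs)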
The task will be to deliver a hearing aid signal processing algorithm that can improve the intelligibility of the target speaker for a specified hearing-impaired listener under these conditions. Initially, entries will be evaluated using an objective speech intelligibility measure. Subsequently, up to twenty of the most promising systems will be evaluated by a panel of listeners.

We will provide a baseline system so that teams can choose to focus on individual components or to develop their own complete pipelines.
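As a rough illustration of the first-round evaluation loop, the sketch below scores a system's output against the clean target using plain STOI (via the pystoi package) as a generic stand-in metric. The challenge itself uses a binaural, hearing-loss-aware intelligibility measure supplied with the baseline tools, and the file names here are hypothetical.

    # Rough sketch of objective evaluation; plain STOI stands in for
    # the challenge's own binaural, hearing-loss-aware measure.
    import soundfile as sf
    from pystoi import stoi  # pip install pystoi

    clean, fs = sf.read("S00001_target.wav")      # hypothetical clean target
    enhanced, _ = sf.read("S00001_enhanced.wav")  # hypothetical system output

    # Score each ear separately and keep the better ear - a crude proxy
    # for binaural listening, used only to keep the sketch self-contained.
    n = min(len(clean), len(enhanced))  # align lengths before scoring
    scores = [stoi(clean[:n, ear], enhanced[:n, ear], fs) for ear in (0, 1)]
    print(f"better-ear STOI: {max(scores):.3f}")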
What will be provided

- Evaluation of the best entries by a panel of hearing-impaired listeners
- Speech + interferer scenes for training and evaluation
- An entirely new database of 10,000 spoken sentences, including head rotation data
- Listener characterisations, including audiograms and speech-in-noise testing
- Software, including tools for generating training data, a baseline hearing aid algorithm, a baseline model of hearing impairment, and a binaural objective intelligibility measure

Challenge and workshop participants will be invited to contribute to a journal Special Issue on the topic of Machine Learning for Hearing Aid Processing, to be announced next year.

For further information

If you are interested in participating and wish to receive further information, please sign up to the Clarity Challenge Google Group at https://groups.google.com/g/clarity-challenge (see also http://claritychallenge.org/sign-up-to-the-challenges).

If you have questions, contact us directly at claritychallengecontact@xxxxxxxx

Organisers (alphabetical)

- Michael A. Akeroyd, Hearing Sciences, School of Medicine, University of Nottingham
- Will Bailey, Department of Computer Science, University of Sheffield
- Jon Barker, Department of Computer Science, University of Sheffield
- Trevor J. Cox, Acoustics Research Centre, University of Salford
- John F. Culling, School of Psychology, Cardiff University
- Lara Harris, Acoustics Research Centre, University of Salford
- Graham Naylor, Hearing Sciences, School of Medicine, University of Nottingham
- Zuzanna Podwińska, Acoustics Research Centre, University of Salford
- Zehai Tu, Department of Computer Science, University of Sheffield

Funded by the Engineering and Physical Sciences Research Council (EPSRC), UK

Supported by RNID (formerly Action on Hearing Loss), Hearing Industry Research Consortium, Amazon TTS Research

--
Professor Jon Barker,
Department of Computer Science,
University of Sheffield
+44 (0) 114 222 1824

