[AUDITORY] Cadenza Challenge pre-announcement: Signal processing challenge for music and listeners with a hearing loss (Trevor Cox)


Subject: [AUDITORY] Cadenza Challenge pre-announcement: Signal processing challenge for music and listeners with a hearing loss
From:    Trevor Cox  <0000017832c3f089-dmarc-request@xxxxxxxx>
Date:    Tue, 20 Dec 2022 13:07:59 +0000

Background

The Cadenza project (http://cadenzachallenge.org/) is organising signal processing challenges for music and listeners with a hearing loss. Interested? Please join our Google Group: https://groups.google.com/g/cadenza-challenge

Although hearing loss is an almost inevitable part of aging, the majority of adults who would benefit from hearing aids don't use them. The purpose of the Cadenza challenges is to catalyse new work to radically improve the processing of music for those with a hearing loss. Even if you've not worked on hearing loss before, we'll provide you with the tools to apply your machine learning, music processing and demixing/mixing algorithms and get going.

The first round, with a full launch in March 2023, will focus on two scenarios:

* Track 1: listening over headphones.
* Track 2: listening in a car in the presence of noise.

Your task is to improve the perceived audio quality of the reproduction, taking the listener's hearing loss into account. This means intelligently demixing/remixing the tracks, or processing the mixed music, to compensate for the hearing loss of the listener. You might do this to make the lyrics clearer, correct the frequency balance, ensure the music has the intended emotional impact, and so on. (An illustrative sketch of this kind of remixing appears after the key dates below.)

The 2023 Challenge

Track 1: Headphones and demixing
You will be tasked with improving the audio quality of music samples for listeners with defined hearing losses. The listeners are not using their normal hearing aids; they listen to the signals you provide over headphones. One machine learning challenge here is to demix stereo music using an evaluation metric that allows for hearing loss, so that an intelligent remix can then be made for the listener.

Track 2: Car
You need to enhance music samples played over the car stereo in the presence of noise from the engine and road. This will be for listeners with a defined hearing loss, who use a fixed hearing aid that we will provide.

Evaluation

Entries will first be evaluated objectively using the Hearing Aid Audio Quality Index (HAAQI). The best systems will go forward to be scored by our listening panel of people with a hearing loss.

What you will get

* Databases of music and scenes for training and evaluation.
* Listener characteristics, including audiograms.
* A complete end-to-end software baseline to build upon, from sample selection to evaluation.
* Tutorials on hearing loss, hearing aids, and our software.

Draft key dates

* 1 Feb 2023: Beta launch of the challenge with software and datasets.
* 1 March 2023: Full launch of the challenge with software and datasets.
* June 2023: Release of evaluation data.
* July 2023: Competition closes. All entrants submit (i) audio for evaluation and (ii) a draft of their technical report.
* Aug 2023: Entrants informed which systems are going forward to the listening-test evaluation stage.
* Sept 2023: Two-page technical reports submitted to the Cadenza-2023 workshop.
* Autumn 2023: Cadenza-2023 workshop.
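[Editorial aside] To make the remixing idea above concrete, here is a minimal, purely illustrative Python sketch. It is not the Cadenza baseline and does not implement HAAQI; the stem names, per-stem gains, "half-gain" rule and audiogram values are all invented for illustration. It simply mixes two hypothetical stems with per-stem gains, then applies a crude audiogram-based spectral shaping.

import numpy as np

FS = 44100  # sample rate in Hz (illustrative)

def audiogram_gain_db(freqs_hz, audiogram_freqs, audiogram_thresholds_db):
    # Interpolate the hearing thresholds onto the FFT bin frequencies and
    # apply a crude "half-gain" rule (boost by half the measured loss).
    thresholds = np.interp(freqs_hz, audiogram_freqs, audiogram_thresholds_db)
    return 0.5 * thresholds

def remix(stems, stem_gains_db, audiogram_freqs, audiogram_thresholds_db, fs=FS):
    # Apply per-stem gains (e.g. lift the vocals), then shape the spectrum for the listener.
    mix = sum(10.0 ** (g / 20.0) * s for s, g in zip(stems, stem_gains_db))
    spectrum = np.fft.rfft(mix)
    freqs = np.fft.rfftfreq(len(mix), d=1.0 / fs)
    gains_db = audiogram_gain_db(freqs, audiogram_freqs, audiogram_thresholds_db)
    spectrum *= 10.0 ** (gains_db / 20.0)
    out = np.fft.irfft(spectrum, n=len(mix))
    return out / max(1e-9, float(np.max(np.abs(out))))  # normalise to avoid clipping

if __name__ == "__main__":
    t = np.arange(FS) / FS
    vocals = np.sin(2 * np.pi * 440.0 * t)               # stand-in "vocals" stem
    accompaniment = 0.5 * np.sin(2 * np.pi * 110.0 * t)  # stand-in "accompaniment" stem
    # Invented audiogram: thresholds (dB HL) at standard audiometric frequencies.
    ag_freqs = [250.0, 500.0, 1000.0, 2000.0, 4000.0, 8000.0]
    ag_thresholds_db = [15.0, 20.0, 30.0, 40.0, 55.0, 65.0]
    enhanced = remix([vocals, accompaniment], stem_gains_db=[3.0, 0.0],
                     audiogram_freqs=ag_freqs, audiogram_thresholds_db=ag_thresholds_db)
    print(enhanced.shape, enhanced.dtype)

In the actual challenge, the baseline code and the HAAQI evaluator will be provided with the software release; this toy example only shows where listener audiograms and demixed stems might enter a processing chain.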
The Team

* Trevor Cox (https://www.salford.ac.uk/our-staff/trevor-cox), Professor of Acoustic Engineering, University of Salford
* Alinka Greasley (https://ahc.leeds.ac.uk/music/staff/286/dr-alinka-greasley), Associate Professor in Music Psychology, Leeds University
* Michael Akeroyd (https://www.nottingham.ac.uk/medicine/people/michael.akeroyd), Professor of Hearing Sciences, University of Nottingham
* Jon Barker (https://www.sheffield.ac.uk/dcs/people/academic/jon-barker), Professor in Computer Science, University of Sheffield
* William Whitmer (https://www.nottingham.ac.uk/research/groups/hearingsciences/people/bill.whitmer), Senior Investigator Scientist, University of Nottingham
* Bruno Fazenda (https://www.salford.ac.uk/our-staff/bruno-fazenda), Reader in Acoustics, University of Salford
* Simone Graetzer (https://www.salford.ac.uk/our-staff/simone-graetzer), Research Fellow, University of Salford
* Rebecca Vos (https://www.salford.ac.uk/our-staff/rebecca-vos), Research Fellow, University of Salford
* Jennifer Firth, Research Assistant in Hearing Sciences, University of Nottingham

Funders

* Cadenza is funded by EPSRC. Project partners are RNID; BBC R&D; Carl von Ossietzky University Oldenburg; Google; Logitech UK Ltd; and Sonova AG.

Prof @xxxxxxxx
Acoustical Engineering, University of Salford
+44 161 295 5474; +44 7986 557419

