[AUDITORY] Signal processing challenge launch: improving music for listeners with a hearing loss (Trevor Cox)


Subject: [AUDITORY] Signal processing challenge launch: improving music for listeners with a hearing loss
From:    Trevor Cox  <0000017832c3f089-dmarc-request@xxxxxxxx>
Date:    Sun, 5 Mar 2023 18:08:50 +0000

First Cadenza Challenge launches

http://cadenzachallenge.org/ is organizing signal processing challenges for music and listeners with hearing loss. The purpose of the challenges is to catalyse new work to radically improve the processing of music for those with a hearing loss.

Even if you've not worked on hearing loss before, we are providing you with the tools to enable you to apply your machine learning and music processing skills and get going.

The first round is focused on two scenarios:

* Task 1, listening over headphones.
* Task 2, listening in a car in the presence of noise.

Your task is to improve the perceived audio quality of the reproduction, taking the listener's hearing loss into account. You might be doing this to make the lyrics clearer, correct the frequency balance, ensure the music has the intended emotional impact, etc.

Task 1 - Live Now

In Task 1, listeners hear music over headphones, and you are asked to improve the perceived audio quality of the reproduction considering the listener's hearing loss. The listeners are not using their normal hearing aids; they are just listening to the signals you provide via the headphones. The task is divided into two steps: demixing and then remixing a stereo song.

First, as in traditional music source separation challenges, you'll need to employ machine learning approaches to demix a piece of stereo music into eight stems corresponding to the left and right parts of the vocal, bass, drum and other stems. However, unlike traditional challenges, the demixing needs to be personalized to the listener's characteristics, and the metric used to objectively evaluate the eight stems is the HAAQI (Hearing Aid Audio Quality Index) score instead of the SDR (Signal-to-Distortion Ratio).
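As a rough illustration of the shapes involved in the demixing step (this is not the challenge baseline; the stem naming and the equal-split placeholder are assumptions for illustration only), a stereo track is separated into eight mono stems, the left and right parts of vocals, drums, bass and other:

```python
import numpy as np

def demix_stub(stereo: np.ndarray) -> dict[str, np.ndarray]:
    """Placeholder demixer: returns eight mono stems for a (2, n_samples)
    stereo input. A real entry would use a trained source-separation model;
    here each stem is just its channel scaled so the four stems of a
    channel sum back to that channel."""
    assert stereo.shape[0] == 2, "expected (2, n_samples) stereo audio"
    sources = ["vocals", "drums", "bass", "other"]
    stems = {}
    for ch, side in enumerate(["left", "right"]):
        for src in sources:
            # Equal split: a real model assigns each source its own content.
            stems[f"{side}_{src}"] = stereo[ch] / len(sources)
    return stems

# One second of white-noise "music" at 44.1 kHz stands in for a real song.
rng = np.random.default_rng(0)
song = rng.standard_normal((2, 44100))
stems = demix_stub(song)
print(len(stems))                  # 8
print(stems["left_vocals"].shape)  # (44100,)
```

In the challenge itself the separated stems are scored with HAAQI against the listener's characteristics rather than with SDR.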
Then, you'll need to remix the signal in a personalized manner. This is the signal that the listener will receive in their headphones.

Task 2 - Available next week

In Task 2, you need to enhance music samples played by the car stereo in the presence of noise from the engine and road. You will have access to the "clean" music (i.e., the music without the noise), the listener characteristics, and metadata about the noise. Note that you won't have access to the noise signal itself, only the metadata.

In the evaluation stage, car noise and room impulse responses will be added to your signal, and listeners will be using a fixed hearing aid. In this case, HAAQI is computed on the signal that results after adding the car noise and applying the hearing aid.

Learning Resources

At http://cadenzachallenge.org/docs/learning_resources/learning_intro, we provide a range of material designed to fill any gaps in your knowledge and enable you to enter the challenges. These materials cover hearing impairment, hearing aids for music, and guidelines for understanding audiograms.

Evaluation

Tasks 1 and 2 use HAAQI to evaluate the enhanced signals. Additionally, the best systems will go forward to be scored by our listening panel of people with hearing loss.

Software and Datasets

* The software is shared in the https://github.com/claritychallenge/clarity GitHub repository.
* The baseline is stored in the recipes/cad1 directory.
* You will find instructions on how to get access to the datasets both on the website and in the baseline recipe in the repository.
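To make the Task 1 remixing step concrete, here is a minimal sketch of recombining eight stems into a personalized stereo mix. The per-stem gains and the stem naming are hypothetical, standing in for whatever personalization a real system derives from the listener's audiogram; this is not the baseline's method:

```python
import numpy as np

def remix(stems: dict[str, np.ndarray], gains: dict[str, float]) -> np.ndarray:
    """Recombine eight mono stems into a (2, n_samples) stereo signal,
    applying a per-stem gain. The gains stand in for listener
    personalization, e.g. boosting vocals for someone who struggles
    to follow the lyrics."""
    n = len(next(iter(stems.values())))
    out = np.zeros((2, n))
    for name, stem in stems.items():
        channel = 0 if name.startswith("left") else 1
        out[channel] += gains.get(name, 1.0) * stem
    # Simple peak normalization so the boosted mix cannot clip.
    peak = np.max(np.abs(out))
    return out / peak if peak > 1.0 else out

stems = {f"{side}_{src}": 0.1 * np.ones(4)
         for side in ("left", "right")
         for src in ("vocals", "drums", "bass", "other")}
mix = remix(stems, {"left_vocals": 2.0, "right_vocals": 2.0})
print(mix.shape)  # (2, 4)
```

The resulting stereo signal is what would be delivered to the listener's headphones and scored with HAAQI.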
The Team

* Trevor Cox <https://www.salford.ac.uk/our-staff/trevor-cox>, Professor of Acoustic Engineering, University of Salford
* Alinka Greasley <https://ahc.leeds.ac.uk/music/staff/286/dr-alinka-greasley>, Professor of Music Psychology, University of Leeds
* Michael Akeroyd <https://www.nottingham.ac.uk/medicine/people/michael.akeroyd>, Professor of Hearing Sciences, University of Nottingham
* Jon Barker <https://www.sheffield.ac.uk/dcs/people/academic/jon-barker>, Professor in Computer Science, University of Sheffield
* William Whitmer <https://www.nottingham.ac.uk/research/groups/hearingsciences/people/bill.whitmer>, Senior Investigator Scientist, University of Nottingham
* Bruno Fazenda <https://www.salford.ac.uk/our-staff/bruno-fazenda>, Reader in Acoustics, University of Salford
* Scott Bannister <https://ahc.leeds.ac.uk/music/staff/3358/dr-scott-bannister>, Research Fellow, University of Leeds
* Simone Graetzer <https://www.salford.ac.uk/our-staff/simone-graetzer>, Research Fellow, University of Salford
* Rebecca Vos <https://www.salford.ac.uk/our-staff/rebecca-vos>, Research Fellow, University of Salford
* Gerardo Roa, Research Fellow, University of Salford
* Jennifer Firth, Research Assistant in Hearing Sciences, University of Nottingham

Funders

Cadenza is funded by EPSRC. Project partners are RNID; BBC R&D; Carl von Ossietzky University Oldenburg; Google; Logitech UK Ltd; and Sonova AG.

Trevor Cox
Professor of Acoustic Engineering
Newton Building, University of Salford, Salford M5 4WT, UK.
Tel 0161 295 5474
Mobile: 07986 557419
www.acoustics.salford.ac.uk
@xxxxxxxx


This message came from the mail archive
src/postings/2023/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University