[AUDITORY] [JOBS] PhD @xxxxxxxx University of Nottingham, UK (Chris Scholes)


Subject: [AUDITORY] [JOBS] PhD @xxxxxxxx University of Nottingham, UK
From:    Chris Scholes  <scholesy@xxxxxxxx>
Date:    Wed, 26 Mar 2025 13:24:47 +0000

PhD Program – Artificial Intelligence Doctoral Training Centre (AI DTC) – University of Nottingham, UK

* The University of Nottingham is inviting applications for the Artificial Intelligence Doctoral Training Centre and I would be happy to support an applicant for the project detailed below.
* The deadline is 5th May 2025, but we would need to interview applicants soon to be ready for this application deadline.
* If you would like to apply, or have any questions, please email chris.scholes@xxxxxxxx
* Please note, this scheme is only open to Home (UK) students.

General details can be found here: https://www.nottingham.ac.uk/computerscience/studywithus/postgraduateresearch/nottinghamdtcinai.aspx

Our project:

Using facial and vocal tract dynamics to improve speech comprehension in noise

Around 18 million individuals in the UK are estimated to have hearing loss, including over half of the population aged 55 or over. Hearing aids are the most common intervention for hearing loss; however, one in five people who should wear hearing aids do not, and a failure to comprehend speech in noisy situations is one of the most common complaints of hearing aid users. Difficulties with communication negatively impact quality of life and can lead to social isolation, depression and problems with maintaining employment.

Clearly, there is a growing need to make hearing aids more attractive, and one route to achieve this is to enable users to better understand speech in noisy situations. Facial speech offers a previously untapped source of information which is immune from auditory noise. Importantly, auditory noise includes sounds that are spectrally similar to the target voice, such as competing voices, which are particularly challenging for the noise reduction algorithms currently employed in hearing aids. With multimodal hearing aids, which capture a speaker's face using a video camera, already in development, it is now vital that we establish how to use facial speech information to augment hearing aid function.

What you will do

This PhD project offers the opportunity to explore how the face and voice are linked to a common source (the vocal tract), with the aim of predicting speech from the face alone or combined with noisy audio speech. You will work with a state-of-the-art multimodal speech dataset, recorded in Nottingham, in which facial video and voice audio have been recorded simultaneously with real-time magnetic resonance imaging of the vocal tract (for examples, see https://doi.org/10.1073/pnas.2006192117). You will use a variety of analytical methods, including principal components analysis and machine learning, to model the covariation of the face, voice and vocal tract during speech.

Who would be suitable for this project?

This project would equally suit a computational student with an interest in applied research or a psychology/neuroscience student with an interest in developing skills in programming, sophisticated analysis and AI. You should have a degree in Psychology, Neuroscience, Computer Science, Maths, Physics or a related area. You should have experience of programming and a strong interest in speech and machine learning.

Supervisors: Dr Chris Scholes <https://www.nottingham.ac.uk/psychology/people/chris.scholes> (School of Psychology), Dr Joy Egede <https://www.nottingham.ac.uk/computerscience/people/joy.egede> (School of Computer Science), Prof Alan Johnston <https://www.nottingham.ac.uk/psychology/people/alan.johnston> (School of Psychology).
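To give a flavour of the kind of covariation analysis mentioned above, here is a minimal sketch using entirely synthetic stand-in data (the feature sets, dimensions and shared latent source below are illustrative assumptions, not the project's actual dataset or pipeline): PCA on concatenated face and voice features reveals shared structure, and a simple least-squares map predicts voice features from the face alone.

```python
# Illustrative sketch only: all data here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Toy per-frame features for two modalities, driven by a shared
# low-dimensional latent source (a crude analogue of the vocal tract).
n_frames, latent_dim = 500, 3
latent = rng.standard_normal((n_frames, latent_dim))
face = latent @ rng.standard_normal((latent_dim, 20)) \
       + 0.1 * rng.standard_normal((n_frames, 20))
voice = latent @ rng.standard_normal((latent_dim, 12)) \
        + 0.1 * rng.standard_normal((n_frames, 12))

# PCA via SVD on the centred, concatenated face+voice matrix: the
# leading components capture covariation across the two modalities.
X = np.hstack([face, voice])
X = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first 5 PCs:", np.round(explained[:5], 3))

# Predicting voice features from face features alone (least squares),
# a crude analogue of "predicting speech from the face".
W, *_ = np.linalg.lstsq(face, voice, rcond=None)
pred = face @ W
r = np.corrcoef(pred.ravel(), voice.ravel())[0, 1]
print(f"face -> voice prediction correlation: {r:.2f}")
```

Because the toy data share a three-dimensional source, the first three components dominate and the cross-modal prediction is strong; on real multimodal recordings the shared structure would of course be weaker and methods such as partial least squares or learned models might be preferred.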
For further details and to arrange an interview please contact Dr Chris Scholes <https://www.nottingham.ac.uk/psychology/people/chris.scholes> (School of Psychology).


This message came from the mail archive
postings/2025/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University