Subject: Re: [AUDITORY] Six funded PhD positions in aural diversity at Salford, Goldsmiths
From: Bill Davies <00000193a08a7512-dmarc-request@xxxxxxxx>
Date: Thu, 14 Mar 2024 14:53:45 +0000

Hi again List,

Apologies for the follow-on, but I forgot to mention below that we are holding a webinar for prospective applicants for our Leverhulme-funded PhDs. This will be on Thu 21 March at 1300 GMT. Several PhD supervisors will introduce themselves and their topics, and we'll have a Q&A. More details and the link can be found at Webinar - Funded PhD Opportunities in Aural Diversity 2024<https://hub.salford.ac.uk/sirc-acoustics/2024/03/13/funded-phd-opportunities-in-aural-diversity-march-2024/>

Best,

Bill (he/him)

From: AUDITORY - Research in Auditory Perception <AUDITORY@xxxxxxxx> On Behalf Of Bill Davies
Sent: Wednesday, March 13, 2024 3:40 PM
To: AUDITORY@xxxxxxxx
Subject: [AUDITORY] Six funded PhD positions in aural diversity at Salford, Goldsmiths

Dear List,

Closing date 21 April 2024

John Drever and I have won £2.2M from the Leverhulme Trust to fund PhDs in aural diversity. We're now advertising for our first cohort of six fully funded PhDs to start in Sept/Oct 2024. Please forward this message to anyone you think might be interested.

As well as the standard UK stipend of £18,622 per year (+£2k London weighting if at Goldsmiths), we also offer a generous sum of up to £10k per student for research expenses. Typical research expenses include participant recruitment and conference trips.

Aural diversity is the fairly new idea that hearing/listening differences between individuals and groups might be better represented as a spectrum instead of a binary normal/impaired division. The PhD topics will build on the early work of the https://auraldiversity.org/ research network to apply the aural diversity concept in many disciplines concerned with sound. Hence, the PhDs are mostly rather interdisciplinary and a bit different from mainstream hearing science.

We have a long list of potential PhD topics and supervisors at LAURA: The Leverhulme Trust Aural Diversity Doctoral Research Hub<https://www.salford.ac.uk/school-of-science-engineering-and-environment/laura-the-leverhulme-trust-aural-diversity-doctoral-research-hub>. We aim to recruit six PhDs this year, and more in the following years, building to a total of 25 researchers in aural diversity.

I'm afraid that we can't accept international applicants this year. I realise this is irritating for a global mailing list. Sorry. The restriction is because the short recruitment cycle in the initial year will not allow enough time for the UK visa processes. We will have some fully funded international places available next year (PhDs starting Sept/Oct 2025).

Interested applicants should probably look at all the potential topics on our web page, but some that are perhaps a bit closer to the mainstream interests of this list include:

1. Global vs. local features in music listening and sub-clinical autistic traits

It has been hypothesised that certain musical skills and abilities, such as absolute pitch, benefit from a local cognitive processing style that is commonly found in autistic individuals (e.g. Mottron et al., 2000; Wenhart & Altenmüller, 2019).
However, there is as yet very little empirical evidence on the relationship between local vs. global processing styles for music in general and sub-clinical autistic traits. Hence, as a first step, this project aims to answer whether a local vs. global processing style for music is a stable individual trait or simply two different listening approaches that can be shifted between at will, and whether the ability to shift depends on the degree of musical expertise. As a second step, the project aims to clarify whether the degree to which a local processing style for music can be employed to solve a listening task is related to the sub-clinical autistic traits (i.e. systematising/empathising traits) of individual listeners.

Supervisor: Daniel Müllensiefen<https://www.gold.ac.uk/psychology/staff/mullensiefen/> (Goldsmiths)

2. Aural diversity in machine listening models of auditory attention

Machine learning algorithms are increasingly being used to model human listening. In most cases, machine listening algorithms are developed using ground-truth data generated with psychoacoustic experiments on people with 'normal' hearing. This results in machine learning algorithms that are biased against people with diverse hearing. This matters because these algorithms are then used by other machine learners, e.g. to develop better hearing-aid processors. In this project, you will develop machine learning models of attention, with the aim of exploring how best to incorporate more diverse hearing. You will need to be able to program in Python within machine learning frameworks and have an interest in psychoacoustics.

Supervisors: Trevor Cox<https://salford-repository.worktribe.com/person/1154144/trevor-cox> and Sunil Vadera<https://salford-repository.worktribe.com/person/1161017/sunil-vadera> (Salford)

3. Individualised remixing of the sonic environment

Some aurally divergent individuals find many environmental sounds disturbing, confusing or even painful to hear. Recent work in AI has improved the capability of sound source separation, sound enhancement (e.g., of speech) and sound reduction (e.g., of background noise). This project would investigate deep learning techniques for sound identification and separation, with the aim of facilitating real-time rebalancing of environmental sounds based on individual requirements or needs (a minimal sketch of the rebalancing idea follows the topic list).

Supervisors: Ben Shirley<https://salford-repository.worktribe.com/person/1152493/ben-shirley> and Chris Hughes<https://salford-repository.worktribe.com/person/1167587/christopher-hughes> (Salford)

4. Environmental noise assessment accounting for aural and personal diversity

Current methods for quantifying the impact of noise on exposed communities are based on meta-analyses of exposure-response relationships between noise exposure and annoyance. These assume an average community responding in a consistent way to environmental noise exposure. We know that everybody 'hears differently' and will therefore respond differently to noise. Because of this, new evidence and novel methods are needed for best practice and regulation to consider aural and personal diversity in decision-making. This PhD will explore different approaches to incorporating aural and personal (e.g., noise sensitivity) diversity in the assessment and management of environmental noise exposure.

Supervisors: Antonio Torija Martinez<https://salford-repository.worktribe.com/person/1174635/antonio-torija-martinez> and Robert Bendall<https://salford-repository.worktribe.com/person/1166330/robert-bendall> (Salford)
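To make topic 3 a little more concrete, here is a minimal sketch of what the per-listener rebalancing step might look like, assuming a hypothetical upstream separation model has already split a scene into labelled stems. This is only an illustration, not the project's method; the function name, stem labels and gain profile below are all invented.

# Illustrative sketch: individualised remixing of an already-separated
# scene. An upstream separation model is assumed to have produced one
# mono array per source class; only the rebalancing step is shown.
import numpy as np

def remix(stems, gains_db):
    """Apply a listener-specific gain (in dB) to each stem and sum."""
    out = np.zeros_like(next(iter(stems.values())))
    for label, audio in stems.items():
        gain = 10.0 ** (gains_db.get(label, 0.0) / 20.0)  # dB -> linear
        out += gain * audio
    return out

# Hypothetical listener profile: attenuate traffic noise, leave speech alone.
fs = 16000  # 1 s of audio per stem at 16 kHz
stems = {"speech": np.random.randn(fs), "traffic": np.random.randn(fs)}
mix = remix(stems, {"traffic": -12.0})

A real system would replace the random arrays with the outputs of a separation network and apply the gains frame by frame for real-time use.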
For further information, including how to apply, please refer to our webpage at LAURA: The Leverhulme Trust Aural Diversity Doctoral Research Hub<https://www.salford.ac.uk/school-of-science-engineering-and-environment/laura-the-leverhulme-trust-aural-diversity-doctoral-research-hub>. We encourage interested applicants to contact potential supervisors for an informal discussion. For questions which are not about a specific PhD topic, you can email see-laura@xxxxxxxx<mailto:see-laura@xxxxxxxx>

Cheers,

Bill

Professor Bill Davies (he/him)
Acoustics Research Centre | School of Science, Engineering & Environment
w.davies@xxxxxxxx<mailto:w.davies@xxxxxxxx> | Google Scholar<https://scholar.google.co.uk/citations?user=RPVF0EsAAAAJ&hl=en> | ResearchGate<https://www.researchgate.net/profile/William_Davies8> | www.salford.ac.uk<http://www.salford.ac.uk/>
Room 108, Newton Building, University of Salford, Salford M5 4WT