[AUDITORY] Fully funded PhD studentships in acoustics at Salford – closing date 31 March 2017 (Davies Bill)


Subject: [AUDITORY] Fully funded PhD studentships in acoustics at Salford – closing date 31 March 2017
From:    Davies Bill  <W.Davies@xxxxxxxx>
Date:    Thu, 9 Mar 2017 18:14:42 +0000
List-Archive: <http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Dear All,

Please forward the following PhD studentship information to anyone who may be interested in applying. The Acoustics Research Centre at Salford has several areas of interest which overlap with those on Auditory.

Applications are invited from UK and EU candidates for fully-funded PhD studentships at the University of Salford.

TOPICS

The Acoustics Research Centre within the School of Computing, Science and Engineering is keen to encourage high-quality applications on the following topics:

1. Modelling expectation in soundscapes.

Supervisor: Professor Bill Davies (w.davies@xxxxxxxx)

Human response to complex acoustic scenes is a major topic of research at Salford. We have previously shown (Bruce and Davies, 2014) that listener expectation has a significant effect on evaluation of outdoor urban soundscapes. We have since extended our soundscape work to show that similar strategies are used by listeners with complex spatial audio scenes (Woodcock et al., 2016; 2017). We are now applying our models and methods to inform machine listening systems for processing the torrent of everyday audio in networked computer systems (Bones et al., 2016).

There is a gap in our work (and that of other researchers) when it comes to expectation. We know it's important, but we don't have a model for it. The goal of this PhD will be to develop a model for how listener expectation influences evaluation of (non-speech, non-music) soundscapes. Simple models exist for music expectation and these will be a likely starting point for this PhD.

References

Bones, O., W. J. Davies and T. J. Cox (2016). An evidence-based taxonomy of everyday sounds. Acoustical Society of America, Hawaii.
Bruce, N. S. and W. J. Davies (2014). "The effects of expectation on the perception of soundscapes." Applied Acoustics 85: 1-11.
Woodcock, J., W. J. Davies, T. J. Cox and F. Melchior (2016). "Categorization of broadcast audio objects in complex auditory scenes." J. Audio Eng. Soc. 64: 380-394.
Woodcock, J., W. J. Davies and T. J. Cox (2017). "A cognitive framework for the categorisation of auditory objects in urban soundscapes." Applied Acoustics 121: 56-64.
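By way of illustration, simple music-expectation models are often framed as first-order Markov models scored by surprisal. Here is a minimal sketch in Python (the event labels, toy corpus and smoothing choice are invented for illustration, not project materials):

import math
from collections import defaultdict

def train_markov(sequences):
    """Count first-order transitions between sound-event labels."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def surprisal(counts, prev, event, vocab_size, alpha=1.0):
    """Add-alpha smoothed surprisal, in bits, of `event` following `prev`."""
    total = sum(counts[prev].values())
    p = (counts[prev][event] + alpha) / (total + alpha * vocab_size)
    return -math.log2(p)

# Toy corpus of urban sound-event sequences.
corpus = [["traffic", "traffic", "birdsong", "traffic"],
          ["traffic", "siren", "traffic", "birdsong"]]
model = train_markov(corpus)
vocab = {e for seq in corpus for e in seq}
print(surprisal(model, "traffic", "siren", len(vocab)))     # rarer continuation: more bits
print(surprisal(model, "traffic", "birdsong", len(vocab)))  # common continuation: fewer bits

Whether surprisal over discrete event sequences predicts listeners' evaluations of real soundscapes is, of course, part of what the PhD would need to establish.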
2. Structure and scale in soundscape cognition.

Supervisor: Professor Bill Davies (w.davies@xxxxxxxx)

Human response to soundscapes is a major topic of research at Salford. Soundscape research has many results on the characteristics of whole soundscapes (e.g. Davies et al., 2013), some on individual sounds (e.g. Bones et al., 2016), and a few on the characteristics of one sound. But there is not enough evidence of how these mental representations are tied together in an overall cognitive structure. The goal of this project is to explore the interactions between auditory attention, physical scale and cognitive scale in complex acoustic scenes. We have previously suggested (Davies, 2015) that the scale of the cognitive structure is a fundamental feature underlying many of the important attributes in the perception of soundscapes, spatial audio and music.

The goal of this PhD will be to develop a model of cognitive scale for soundscape perception.

References

Bones, O., W. J. Davies and T. J. Cox (2016). An evidence-based taxonomy of everyday sounds. Acoustical Society of America, Hawaii.
Davies, W. J., M. D. Adams, N. S. Bruce, M. Marselle, R. Cain, P. Jennings, J. Poxon, A. Carlyle, P. Cusack, D. A. Hall, A. Irwin, K. I. Hume and C. J. Plack (2013). "Perception of soundscapes: An interdisciplinary approach." Applied Acoustics 74(2): 224-231.
Davies, W. J. (2015). Cognition of soundscapes and other complex acoustic scenes. Internoise 2015, San Francisco.

3. Unintelligible radio dramas

Supervisor: Professor Trevor Cox (t.j.cox@xxxxxxxx)

Unintelligible speech in TV drama has become a common source of complaints, with recent examples including Jamaica Inn, Poldark and SS-GB. Speech intelligibility research has traditionally focussed on transmission problems, but these recent examples demonstrate that some problems are caused by mumbling and whispering by actors. In this project you will apply psychoacoustic testing to better understand the requirements of listeners. You will apply statistics and machine learning (e.g. deep nets) to model the effects of accents and poor elocution. From there, you will produce meters that can be used by sound engineers to monitor the intelligibility of dialogue and so improve TV sound.
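To give a flavour of what such a meter might compute, here is a minimal sketch in Python with NumPy (the signals are synthetic and the envelope-correlation score is a crude stand-in for a model trained on listening-test data):

import numpy as np

def envelope(x, frame=1024, hop=512):
    """Short-time RMS energy envelope."""
    n = 1 + (len(x) - frame) // hop
    return np.array([np.sqrt(np.mean(x[i * hop:i * hop + frame] ** 2))
                     for i in range(n)])

def dialogue_score(dialogue, mix):
    """Pearson correlation of the two envelopes, mapped to [0, 1]:
    high when the dialogue's dynamics survive the final mix."""
    e_d, e_m = envelope(dialogue), envelope(mix)
    m = min(len(e_d), len(e_m))
    r = np.corrcoef(e_d[:m], e_m[:m])[0, 1]
    return (r + 1) / 2

# Synthetic demo: amplitude-modulated noise stands in for dialogue.
rng = np.random.default_rng(0)
dialogue = rng.standard_normal(48000) * np.sin(np.linspace(0, 30, 48000)) ** 2
print(dialogue_score(dialogue, dialogue + 0.1 * rng.standard_normal(48000)))  # near 1
print(dialogue_score(dialogue, dialogue + 2.0 * rng.standard_normal(48000)))  # lower

A usable production meter would replace this proxy with features and models validated against real listeners, which is the substance of the project.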
4. Computational model of situational awareness for users of smartphones in the vicinity of traffic

Supervisor: Dr Bruno Fazenda (b.m.fazenda@xxxxxxxx)

Advances in technology (Bluetooth headsets, 'iPods', quieter cars, helmets) are leading to situations where an individual's perception of the surrounding environment is hindered, making them more vulnerable to accidents and/or intentional dangers. Examples include a driver unaware of a fast-moving emergency vehicle; a motorcyclist or cyclist wearing a protective helmet; and a civil-protection foraging robot or vehicle on patrol. This project aims to investigate the problem from both human-factors and technological points of view. A candidate reading for a PhD in this area will design experiments in our fully immersive 3D audiovisual environment to collect behavioural data that can help us understand the impairments caused by the use of portable infotainment technology. The goal of the project is to develop detection and warning systems that use sensor and usage data from devices, allowing constant monitoring of behaviour and environment, and subsequently to model drops in attention and awareness. This PhD may draw on all or some of the following multidisciplinary skills: sensor engineering (with particular emphasis on acoustic detection), digital signal processing, and cognitive behaviour. There is an opportunity to be involved in a funded international collaboration through the Royal Society.

The goal of this PhD will be to develop prediction models of awareness in users.

5. Auditory Perception in Virtual and Augmented Reality Spaces

Supervisor: Dr Bruno Fazenda (b.m.fazenda@xxxxxxxx)

Headphones are ubiquitous and a very convenient way of reproducing sound. More recently, attention has turned to full surround-sound reproduction over headphones, and a great deal of research effort is being devoted to this, particularly in virtual and augmented reality. However, convincing rendition of acoustic spaces over headphones remains elusive, and problems persist with internalisation, individualisation, acoustic modelling of the spaces, and so on. This PhD project takes a look at the issue from a more holistic perspective. The multimodal aspects of audio and visual interactions will be considered, as well as important human factors such as personality and cognitive style, and how these affect perception of virtual and augmented spaces. The project will involve the design of subjective experiments to be undertaken in virtual or augmented reality spaces, and will include aspects of signal processing and room acoustics modelling as well as applied psychology methods.

6. Personalised Music Creation and Distribution using Object-Based Audio

Supervisor: Dr Bruno Fazenda (b.m.fazenda@xxxxxxxx)

Music creation and consumption have followed the same paradigm since the advent of recording: hours of composition, recording and mixing result in a linear piece which the listener consumes repeatedly. Music consumption has progressed from physical formats to internet streaming and subscription services which 'learn' consumer tastes. This has given rise to playlists and algorithms providing content delivery that can be mediated by listener goals such as 'music to go to sleep' or 'drive music'. However, these services simply aid the delivery of existing music items, rather than facilitating the creation of optimised, personalised content.

This project will apply a new paradigm for music creation and consumption to deliver new music content that is personalised to listener context, different every time it is consumed, and directly aligned to listener wellbeing goals. It will deliver a framework of music composition and consumption that senses listener state and context using wearables and smartphone sensors, through which the composer will be able to dynamically 'perform' their music. It will enable novel creative, performative possibilities for the artist and a new form of music experience and service delivery for the listener. For the artist, the proposed framework will facilitate the capture of ideas and associated rights and provide a 'fair ecosystem' when content is reused or redistributed. For listeners it will form a novel experience delivery acknowledging principles of embodied cognition: that our thought processes and our resulting wellbeing states are tied to, and influenced by, our immediate environment and our interactions with it.

The project will involve research into aspects of object-based audio, music and wellbeing, generative music composition paradigms, and automatic mixing methods. It will involve perceptual testing in both lab and field conditions through applications deployed on smartphones. You will collaborate with musicians and audio technologists.
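As a toy illustration of the sensing-to-rendering loop, here is a minimal Python sketch (the sensor readings, goals and rule shapes are all invented; in the project, hand-written rules like these would give way to the composer's own performative mappings and to generative composition methods):

from dataclasses import dataclass

@dataclass
class ListenerState:
    heart_rate_bpm: float  # e.g. from a wearable
    walking: bool          # e.g. from the phone's accelerometer

def render_params(state, goal):
    """Map sensed listener state and a wellbeing goal to decisions
    over the piece's stems (a toy two-goal rule set)."""
    if goal == "sleep":
        tempo = 60
        gains = {"pads": 1.0, "percussion": 0.0, "lead": 0.3}
    else:  # "energise"
        tempo = min(140, max(90, state.heart_rate_bpm + 10))
        gains = {"pads": 0.4,
                 "percussion": 1.0 if state.walking else 0.6,
                 "lead": 0.8}
    return {"tempo_bpm": tempo, "stem_gains": gains}

print(render_params(ListenerState(72, walking=True), "energise"))
print(render_params(ListenerState(58, walking=False), "sleep"))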
7. Automatic Detection of Audio Quality in Commercial Music Productions

Supervisor: Dr Bruno Fazenda (b.m.fazenda@xxxxxxxx)

Some music productions sound great and some don't: the sound quality of audio programme material is very variable. Expert and naïve listeners are quite good at picking up these differences in sound quality. However, so far there are no metrics that can quantify whether a given music track is of good quality or not. This project aims to define and extract quality features from audio signals that enable an automated rating of the acoustic quality therein.

With recent advances in deep learning networks it is possible to predict whether a given musical piece has elements of high quality, but the technical rules that afford that quality remain hidden. This project will use recent advances in signal processing and data mining to support a substantial study of the human factors that determine perceived quality in sound and audio production. The foreseen outcomes are: 1) a framework that sets the relative importance of various objective acoustic measures of signal content in the context of human listening; 2) a digital tool that automatically rates and improves audio quality in a given stream. Applications of the knowledge and technology span from automated adjustment to different reproduction scenarios (e.g. radio speech in a car vs. live sound) to archive recovery.
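As a hint of the feature-extraction stage, here is a minimal sketch in Python with NumPy (the two features and the placeholder linear rater are illustrative; real weights would be regressed against listening-panel scores gathered in the project):

import numpy as np

def features(x, sr):
    """Two illustrative objective measures: crest factor (a proxy for
    over-compression) and spectral centroid (brightness)."""
    rms = np.sqrt(np.mean(x ** 2))
    crest_db = 20 * np.log10(np.max(np.abs(x)) / (rms + 1e-12))
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    centroid_hz = float(np.sum(freqs * spec) / (np.sum(spec) + 1e-12))
    return np.array([crest_db, centroid_hz])

def rating(feat, weights=np.array([0.1, 1e-4]), bias=0.0):
    """Placeholder linear rater over the feature vector."""
    return float(feat @ weights + bias)

sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
gentle = 0.3 * np.sin(2 * np.pi * 220 * t)
squashed = np.clip(5 * gentle, -0.3, 0.3)  # heavily limited version
print(features(gentle, sr), rating(features(gentle, sr)))      # higher crest factor
print(features(squashed, sr), rating(features(squashed, sr)))  # lower crest factor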
QUALIFICATIONS

Applicants should have a good undergraduate honours degree (1st or 2:1) and/or a good MSc degree in acoustics, psychology, electronic engineering or a related subject. Desirable experience includes design of listening tests, statistical analysis, programming (e.g. MATLAB), and scientific publication.

ENVIRONMENT

You can expect to take advantage of our world-class experimental facilities, including anechoic and semi-anechoic chambers, a listening room, object-based spatial audio systems and a head-tracked binaural system, as appropriate. You'll join a thriving Acoustics Research Centre and will work alongside PhD students, post-doctoral fellows and senior researchers who are researching related topics (http://www.salford.ac.uk/computing-science-engineering/research/acoustics). The topic and methods of each project might be varied to suit the strengths of the applicant.

FUNDING

Successful candidates will receive a tax-free bursary of £14,553 for up to three years and will also have their tuition fees paid.

TO APPLY

You are strongly encouraged to contact the likely supervisor indicated above for an informal discussion before you apply. Competition for these fully-funded places is expected to be intense and you will benefit from our advice on your application. Further details and the application form can be found at http://www.salford.ac.uk/study/postgraduate/fees-and-funding/funded-phd-studentship

Best,

Bill Davies

Professor Bill Davies
Associate Dean Academic | School of Computing, Science and Engineering
Room 108, Newton Building, University of Salford, Salford M5 4WT
t: +44 (0) 161 295 5986
w.davies@xxxxxxxx | www.salford.ac.uk

