[AUDITORY] JOB: Research Fellow in Generative Audio AI, University of Surrey, UK (Deadline: 13 May 2025) (Mark Plumbley)


Subject: [AUDITORY] JOB: Research Fellow in Generative Audio AI, University of Surrey, UK (Deadline: 13 May 2025)
From:    Mark Plumbley  <0000005fa4625f04-dmarc-request@xxxxxxxx>
Date:    Wed, 16 Apr 2025 08:07:17 +0000

Dear List, please forward to anyone who may be interested in this post.

I would particularly like to encourage applications from under-represented groups in our area, including women, people from Black, Asian and minority ethnic groups, and people with disabilities.

Many thanks,
Mark

---

Research Fellow in Generative Audio AI (Deadline: 13 May 2025)

Applications are invited for a Research Fellow (RF) position within the Centre for Vision, Speech and Signal Processing (CVSSP) and the Surrey Institute for People-Centred AI at the University of Surrey, UK, to work in the area of generative AI for audio generation.

This post is funded by the AI Hub in Generative Models (www.genai.ac.uk). The Gen AI Hub brings together experts in generative AI from industry and academia to make generative AI models more customisable, reliable and trustworthy, and to help realise the benefits of these technologies for society, science and the economy.

The postholder will be responsible for undertaking research in generative AI and machine learning methods for audio generation and audio-related multimodal content generation, including audio generation for environmental sounds, using approaches such as diffusion models and/or flow matching.

About you

The post holder is expected to have a PhD degree (or equivalent) in electronic engineering, computer science, applied mathematics, statistics, artificial intelligence, audio engineering, or a related subject, and research experience in audio signal processing, audio-related multimodal processing (audio with text and/or video), audio deep learning, or a related topic. The post holder should also have experience in developing new research algorithms or methods, using languages such as Python, C++ and/or MATLAB, with relevant signal processing, machine learning, and/or deep learning tools.

CVSSP is an International Centre of Excellence for research in Audio-Visual Machine Perception and AI, with over 180 researchers. The Centre has state-of-the-art audio and video capture and analysis facilities supporting research in real-time video and audio processing and visualisation. CVSSP has a compute facility with 200 GPUs and >2PB of high-speed secure storage.

The Surrey Institute for People-Centred AI is the founding pan-university institute at the University of Surrey, bringing together core AI-related expertise in audio-visual and signal processing, computer science, and mathematics with domain expertise across engineering and physical sciences, human and animal health, law and regulation, business, finance, and the arts and social sciences. Our multi-disciplinary approach puts people at the heart of AI.

How to Apply

For further details and information on how to apply, visit https://jobs.surrey.ac.uk/021025

For informal inquiries, please contact Prof Mark Plumbley at m.plumbley@xxxxxxxx

--
Prof Mark D Plumbley
EPSRC Fellow in AI for Sound
Professor of Signal Processing
Centre for Vision, Speech and Signal Processing
University of Surrey, Guildford, Surrey, GU2 7XH, UK
Email: m.plumbley@xxxxxxxx


This message came from the mail archive
postings/2025/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University