Subject: [AUDITORY] PhD and Postdoc position on audio and multimedia feature learning
From: "Volk, A. (Anja)" <00000160534d63c6-dmarc-request@xxxxxxxx>
Date: Wed, 16 Oct 2024 10:01:08 +0000

Dear colleagues,

We have job openings for a PhD and a Postdoc position at Utrecht University in the context of the large-scale HAICu project <https://www.haicu.science/>, which aims to help unlock the potential of cultural digital archives through multimodal use.

The position that targets audio and multimodal feature learning (e.g. for speech and music) is of interest to this research community, so please help spread the word. The application deadline is November 13, 2024.

Apply for the PhD position:
https://www.uu.nl/en/organisation/working-at-utrecht-university/jobs/phd-position-on-multimedia-analysis-in-the-haicu-project

Apply for the Postdoc position:
https://www.uu.nl/en/organisation/working-at-utrecht-university/jobs/postdoc-position-on-multimedia-analysis-in-the-haicu-project

If you have any questions, please contact me. Some more information follows below:

Context: Are you passionate about developing cutting-edge AI techniques to enhance interaction and communication across multiple modalities, such as text, pictures, audio, and video? Join the large-scale HAICu project <https://www.haicu.science/> to help unlock the potential of cultural digital archives through multimodal use, providing richer context and a more comprehensive analysis of current complex issues in society. If this fits your expertise and interests, the Interaction Division of Utrecht University is looking for you!

Research topic: This topic targets audio and multimodal feature learning beyond words, such as intonation, tone, stress, and rhythm, in relation to conveying emotion or messages, in order to support data-driven journalism. We will research audio features (e.g. for speech and music) and their relation to conveying messages effectively in news collections containing audio and video, and we will innovate multimodal search through integrated feature learning on visual and audio data at the same time.

Kind regards,
Anja Volk

Prof. Anja Volk | Chair of Music Information Computing | MA, MSc, PhD | Utrecht University | Department of Information and Computing Sciences | Leuvenlaan 4, 3584 CS Utrecht | Minnaertgebouw, room 4.05 | www.uu.nl/staff/AVolk | Editor-in-Chief of Transactions of ISMIR, http://tismir.ismir.net |