[AUDITORY] Call for PhD in AI for Cyber-Human Musicianship (Vincent Lostanlen)


Subject: [AUDITORY] Call for PhD in AI for Cyber-Human Musicianship
From:    Vincent Lostanlen  <vincent.lostanlen@xxxxxxxx>
Date:    Sun, 29 Jan 2023 13:36:45 +0100

Dear colleagues,

I am looking for a PhD student on a project named COLLAGE (Collective Learning and Listening Agents for Generative Improvisation), co-advised with Gérard Assayag from Ircam and myself, scientist at CNRS.

Keywords: artificial social intelligence, computational co-creativity, machine listening, multi-view learning, neural audio synthesis.

The PhD will be co-funded by SOUND.AI (Sorbonne Université) and ERC REACH: http://repmus.ircam.fr/reach
Start date: Fall 2023.
Here is the registration form: https://soundai.sorbonne-universite.fr/dl/about
Deadline is Jan. 31st, 2023. (This coming Tuesday!)

More details below my signature. Do not hesitate to write to me directly with questions.

Please share. Many thanks!

Vincent.


The COLLAGE research project proposes a new direction in cyber-human co-creativity, with applications to musical improvisation. Its main originality is to model not only the evolution of each improvised stream, but also their interactions. COLLAGE will estimate, at any point in time, who in the band is attempting to lead versus who is attempting to follow. In free improvisation, these social roles evolve quickly and have strong acoustical correlates: hence, the use of AI in COLLAGE will serve to decipher the “co-generative scheme” underlying musical performance without composer or conductor.

Specifically, the PhD candidate will reuse and extend state-of-the-art methods in multi-view representation learning, sequence modeling, and neural audio synthesis. In the short term, COLLAGE will be evaluated on a dataset of improvised duets in terms of its ability to identify modes of interaction: reactivity, imitation, confrontation, and indifference, to name a few. In the longer term, COLLAGE will be integrated as a machine listening front end for cyber-human musicianship as part of the ERC REACH project.

In summary, creative co-improvisation is omnipresent in human social interactions, yet remains largely under-discussed in AI research, for lack of a methodological framework. The vision behind COLLAGE is to develop this framework from the standpoint of cyber-human musicianship. As such, it constitutes a challenging yet foundational case study towards the understanding of multi-human-multi-robot interactions in their fullest generality.
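As a rough, purely illustrative sketch of the kind of machine listening described above (not the COLLAGE method itself), the Python snippet below estimates a lead/follow relationship in an improvised duet: it extracts onset-strength envelopes from two audio stems with librosa and checks, via lagged cross-correlation, whose activity tends to precede the other's. The stem file names, the lag window, and the leader-by-precedence heuristic are all assumptions made for the example.

# Illustrative sketch only: a naive lead/follow estimate for an improvised duet.
# Assumes two hypothetical mono stems, "player_a.wav" and "player_b.wav", and the
# heuristic that the "leader" is the player whose onset activity tends to precede
# the other's. This is not the project's method, just a toy baseline.
import numpy as np
import librosa
from scipy.signal import correlate

def onset_envelope(path, sr=22050):
    # Load a stem and return its onset-strength envelope plus the envelope frame rate.
    y, sr = librosa.load(path, sr=sr, mono=True)
    env = librosa.onset.onset_strength(y=y, sr=sr)
    frame_rate = sr / 512  # onset_strength uses hop_length=512 by default
    return env, frame_rate

def estimate_lead_lag(env_a, env_b, frame_rate, max_lag_s=2.0):
    # Cross-correlate the two envelopes within +/- max_lag_s seconds.
    # Positive returned lag: player A's activity tends to precede player B's.
    n = min(len(env_a), len(env_b))
    a = env_a[:n] - env_a[:n].mean()
    b = env_b[:n] - env_b[:n].mean()
    xcorr = correlate(b, a, mode="full")
    lags = np.arange(-n + 1, n)
    max_lag = int(max_lag_s * frame_rate)
    window = (lags >= -max_lag) & (lags <= max_lag)
    best_lag = lags[window][np.argmax(xcorr[window])]
    return best_lag / frame_rate

if __name__ == "__main__":
    env_a, fr = onset_envelope("player_a.wav")  # hypothetical stem for player A
    env_b, _ = onset_envelope("player_b.wav")   # hypothetical stem for player B
    lag = estimate_lead_lag(env_a, env_b, fr)
    leader = "player A" if lag > 0 else "player B"
    print(f"estimated lag {lag:+.2f} s: {leader} tends to lead")

A real system along the lines sketched in the call would replace this fixed precedence heuristic with learned multi-view representations and sequence models over both streams, so that roles such as imitation or confrontation can be tracked as they change over time.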


This message came from the mail archive
src/postings/2023/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University