I am looking for a PhD student for a project named COLLAGE (Collective Learning and Listening Agents for Generative Improvisation),
co-advised by Gérard Assayag (Ircam) and myself, a scientist at CNRS.
Keywords: artificial social intelligence, computational co-creativity, machine listening, multi-view learning, neural audio synthesis.
The PhD will be co-funded by SOUND.AI (Sorbonne Université) and ERC REACH: http://repmus.ircam.fr/reach
Start date: Fall 2023.
Here is the registration form: https://soundai.sorbonne-universite.fr/dl/about
Deadline is Jan. 31st, 2023. (This coming Tuesday!)
More details below my signature. Do not hesitate to write to me directly with any questions.
Please share. Many thanks!
The COLLAGE research project proposes a new direction in cyber-human co-creativity, with applications to musical improvisation. Its main originality is to model not only the evolution of each improvised stream, but also the interactions between streams. COLLAGE will estimate, at any point in time, who in the band is attempting to lead and who is attempting to follow. In free improvisation, these social roles evolve quickly and have strong acoustical correlates: hence, the use of AI in COLLAGE will serve to decipher the “co-generative scheme” underlying musical performance with neither composer nor conductor.
Specifically, the PhD candidate will reuse and extend state-of-the-art methods in multi-view representation learning, sequence modeling, and neural audio synthesis. In the short term, COLLAGE will be evaluated on a dataset of improvised duets in terms of its ability to identify modes of interaction: reactivity, imitation, confrontation, and indifference, to name a few. In the longer term, COLLAGE will be integrated as a machine listening front end for cyber-human musicianship as part of the ERC REACH project. In summary, creative co-improvisation is omnipresent in human social interactions, yet remains largely under-discussed in AI research, for lack of a methodological framework. The vision behind COLLAGE is to develop this framework from the standpoint of cyber-human musicianship. As such, it constitutes a challenging yet foundational case study towards understanding multi-human, multi-robot interactions in their fullest generality.