
[AUDITORY] Seeking Ph.D. and Master's students, as well as female researchers, for research in Multimodal AI in Music



Dear Auditory list,


If you are interested in working with Multimodal AI in Music, especially in generating melodies, lyrics, and dances, you can contact Yi Yu (yiyu@xxxxxxxxxxxxxxxxx) to apply for the Ph.D. Program (deadline: October 25, 2024, https://www.hiroshima-u.ac.jp/en/adse/admission/d_admission), the Master's Program (deadline: October 25, 2024, https://www.hiroshima-u.ac.jp/en/adse/admission/m_admission), or the Female Researcher Position (deadline: October 28, 2024, https://womenres.hiroshima-u.ac.jp/en_post/call-for-applications-fy2025-career-advancement-project-cap-researcher-full-time/) in Informatics and Data Science at Hiroshima University.

 

Representative works of multimodal music generation from our group:

[1] Wenjie Yin, Xuejiao Zhao, Yi Yu, Hang Yin, Danica Kragic, and Mårten Björkman, “LM2D: Lyrics- and Music-Driven Dance Synthesis,” https://arxiv.org/pdf/2403.09407

[2] Zhe Zhang, Yi Yu, and Atsuhiro Takasu, “Controllable syllable-level lyrics generation from melody with prior attention”, IEEE Transactions on Multimedia, DOI: 10.1109/TMM.2024.3443664, https://ieeexplore.ieee.org/document/10637751, 2024.

[3] Zhe Zhang, Karol Lasocki, Yi Yu, and Atsuhiro Takasu, “Syllable-level lyrics generation from melody exploiting character-level language model,” Findings of the European Chapter of the Association for Computational Linguistics (EACL), 2024, pp. 1336–1346.

[4] Wenjie Yin, Yi Yu, Hang Yin, Danica Kragic, and Mårten Björkman, “Scalable motion style transfer with constrained diffusion generation,” The Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence (AAAI), 2024, vol. 38, no. 9, pp. 10234–10242.

[5] Zhe Zhang, Yi Yu, and Atsuhiro Takasu, “Controllable lyrics-to-melody generation,” Neural Computing and Applications, vol. 35, pp. 19805–19819, 2023, https://rdcu.be/dgozv

[6] Wei Duan, Yi Yu, Xulong Zhang, Suhua Tang, Wei Li, and Keizo Oyama, “Melody generation from lyrics with local interpretability,” ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), vol. 19, issue 3, article 124, pp. 1–21, 2022.

[7] Wei Duan, Yi Yu*, and Keizo Oyama, “Semantic dependency network for lyrics generation from melody,” Neural Computing and Applications, vol. 36, issue 8, pp. 4059–4069, 2024. IF: 5.102

[8] Yi Yu, Zhe Zhang, Wei Duan, Abhishek Srivastava, Rajiv Shah, and Yi Ren, “Conditional hybrid GAN for melody generation from lyrics,” Neural Computing and Applications, vol. 35, issue 4, pp. 3191–3202, 2022.

[9] Yi Yu, Abhishek Srivastava, and Simon Canales, “Conditional LSTM-GAN for melody generation from lyrics,” ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), vol. 17, no. 1, article 35, pp. 1–20, 2021. IF: 3.275

 

Best regards,

 

Yi Yu

https://home.hiroshima-u.ac.jp/yiyu/index.html