Subject: [AUDITORY] Seeking Ph.D. and Master's students, as well as female researchers, for research in Multimodal AI in Music
From: Yi Yu <yi.yu.yy@xxxxxxxx>
Date: Sat, 5 Oct 2024 14:14:49 +0900

Dear Auditory list,

If you are interested in working on Multimodal AI in Music, especially in generating melodies, lyrics, and dances, you can contact Yi Yu (yiyu@xxxxxxxx) to apply for the Ph.D. Program (deadline: October 25, 2024, https://www.hiroshima-u.ac.jp/en/adse/admission/d_admission), the Master's Program (deadline: October 25, 2024, https://www.hiroshima-u.ac.jp/en/adse/admission/m_admission), or the Female Researcher Position (deadline: October 28, 2024, https://womenres.hiroshima-u.ac.jp/en_post/call-for-applications-fy2025-career-advancement-project-cap-researcher-full-time/) in Informatics and Data Science at Hiroshima University.

Representative works on multimodal music generation from our group:

[1] Wenjie Yin, Xuejiao Zhao, Yi Yu, Hang Yin, Danica Kragic, and Mårten Björkman, "LM2D: Lyrics- and Music-Driven Dance Synthesis," https://arxiv.org/pdf/2403.09407

[2] Zhe Zhang, Yi Yu, and Atsuhiro Takasu, "Controllable syllable-level lyrics generation from melody with prior attention," IEEE Transactions on Multimedia, DOI: 10.1109/TMM.2024.3443664, https://ieeexplore.ieee.org/document/10637751, 2024.

[3] Zhe Zhang, Karol Lasocki, Yi Yu, and Atsuhiro Takasu, "Syllable-level lyrics generation from melody exploiting character-level language model," Findings of the European Chapter of the Association for Computational Linguistics (EACL), 2024, pp. 1336-1346.
[4] Wenjie Yin, Yi Yu, Hang Yin, Danica Kragic, and Mårten Björkman, "Scalable motion style transfer with constrained diffusion generation," AAAI Conference on Artificial Intelligence (AAAI), 2024, vol. 38, no. 9, pp. 10234–10242.

[5] Zhe Zhang, Yi Yu, and Atsuhiro Takasu, "Controllable lyrics-to-melody generation," Neural Computing and Applications, vol. 35, pp. 19805–19819, 2023, https://rdcu.be/dgozv.

[6] Wei Duan, Yi Yu, Xulong Zhang, Suhua Tang, Wei Li, and Keizo Oyama, "Melody generation from lyrics with local interpretability," ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), vol. 19, issue 3, article 124, pp. 1-21, 2022.

[7] Wei Duan, Yi Yu, and Keizo Oyama, "Semantic dependency network for lyrics generation from melody," Neural Computing and Applications, vol. 36, issue 8, pp. 4059–4069, 2024. IF: 5.102

[8] Yi Yu, Zhe Zhang, Wei Duan, Abhishek Srivastava, Rajiv Shah, and Yi Ren, "Conditional hybrid GAN for melody generation from lyrics," Neural Computing and Applications, vol. 35, issue 4, pp. 3191-3202, 2022.

[9] Yi Yu, Abhishek Srivastava, and Simon Canales, "Conditional LSTM-GAN for melody generation from lyrics," ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), vol. 17, no. 1, article 35, pp. 1-20, 2021.
IF: 3.275

Best regards,

Yi Yu
https://home.hiroshima-u.ac.jp/yiyu/index.html