[AUDITORY] Call for Papers: TISMIR Special Collection on Multi-Modal Music Information Retrieval (Igor Vatolkin)


Subject: [AUDITORY] Call for Papers: TISMIR Special Collection on Multi-Modal Music Information Retrieval
From:    Igor Vatolkin  <igor.vatolkin@xxxxxxxx>
Date:    Sat, 3 Feb 2024 16:16:27 +0100

Dear list,

we are happy to announce the Call for Papers for the TISMIR Special Collection on Multi-Modal Music Information Retrieval.

*Deadline for Submissions*
01.08.2024

*Scope of the Special Collection*
Data related to and associated with music can be retrieved from a variety of sources or modalities: audio tracks; digital scores; lyrics; video clips and concert recordings; artist photos and album covers; expert annotations and reviews; listener social tags from the Internet; and so on. Essentially, the ways humans deal with music are very diverse: we listen to it, read reviews, ask friends for recommendations, enjoy visual performances during concerts, dance and perform rituals, play musical instruments, or rearrange scores.

As such, it is hardly surprising that multi-modal data have proven so effective in a range of technical tasks that model human experience and expertise. Previous studies have already confirmed that music classification scenarios may benefit significantly when several modalities are taken into account. Other works have focused on cross-modal analysis, e.g., generating a missing modality from existing ones or aligning information between different modalities.

The current upswing of disruptive artificial intelligence technologies, deep learning, and big data analytics is quickly changing the world we live in, and inevitably impacts MIR research as well. Learning from very diverse data sources by means of these powerful approaches may not only bring solutions to related applications to new levels of quality, robustness, and efficiency, but will also help to demonstrate and enhance the breadth and interconnected nature of music science research and the understanding of relationships between different kinds of musical data.

In this special collection, we invite papers on multi-modal systems in all their diversity. We particularly encourage under-explored repertoire, new connections between fields, and novel research areas. Contributions consisting of pure algorithmic improvements, empirical studies, theoretical discussions, surveys, guidelines for future research, and introductions of new data sets are all welcome, as the special collection will not only address multi-modal MIR, but also cover multi-perspective ideas, developments, and opinions from diverse scientific communities.
*Sample Possible Topics*
● State-of-the-art music classification or regression systems which are based on several modalities
● Deeper analysis of correlations between distinct modalities and features derived from them
● Presentation of new multi-modal data sets, including the possibility of formal analysis and theoretical discussion of practices for constructing better data sets in the future
● Cross-modal analysis, e.g., with the goal of predicting one modality from another
● Creative and generative AI systems which produce multiple modalities
● Explicit analysis of the individual drawbacks and advantages of modalities for specific MIR tasks
● Approaches to training set selection and augmentation techniques for multi-modal classifier systems
● Applying transfer learning, large language models, and neural architecture search to multi-modal contexts
● Multi-modal perception, cognition, or neuroscience research
● Multi-objective evaluation of multi-modal MIR systems, e.g., focusing not only on quality, but also on robustness, interpretability, or reduction of the environmental impact during the training of deep neural networks

*Guest Editors*
● Igor Vatolkin (lead) - Akademischer Rat (Assistant Professor) at the Department of Computer Science, RWTH Aachen University, Germany
● Mark Gotham - Assistant Professor at the Department of Computer Science, Durham University, UK
● Xiao Hu - Associate Professor at the University of Hong Kong
● Cory McKay - Professor of Music and Humanities at Marianopolis College, Canada
● Rui Pedro Paiva - Professor at the Department of Informatics Engineering of the University of Coimbra, Portugal

*Submission Guidelines*
Please submit through https://transactions.ismir.net, and note in your cover letter that your paper is intended to be part of this Special Collection on Multi-Modal MIR. Submissions should adhere to the formatting guidelines of the TISMIR journal: https://transactions.ismir.net/about/submissions/. Specifically, articles must not exceed 8,000 words, including referencing, citations, and notes.

Please also note that if the paper extends or combines the authors' previously published research, a significant novel contribution is expected in the submission (as a rule of thumb, we would expect at least 50% of the underlying work - the ideas, concepts, methods, results, analysis, and discussion - to be new).
In case you are considering submitting to this special collection, it would greatly help our planning if you let us know by replying to igor.vatolkin@xxxxxxxx.

Kind regards,
Igor Vatolkin
on behalf of the TISMIR editorial board and the guest editors

