Subject: [AUDITORY] Call for Papers for a Special Collection of Music & Science on “Explaining music with AI: Advancing the scientific understanding of music through computation”
From: David Meredith <dave@xxxxxxxx>
Date: Mon, 24 Jan 2022 18:20:06 +0100

Dear Auditory list members,

We would like to bring to your attention the following call for contributions to a special collection of the journal Music & Science (https://journals.sagepub.com/home/mns) on the topic of “Explaining music with AI: Advancing the scientific understanding of music through computation”. We aim to have the collection published (open access) by the end of May 2023. The deadline for submission of full papers is 31 August 2022.

Kind regards,

David Meredith, Anja Volk and Tom Collins (guest editors)

===================================================================

Call for Papers for a Special Collection of Music & Science on
“Explaining music with AI: Advancing the scientific
understanding of music through computation”

Guest edited by David Meredith, Anja Volk & Tom Collins

DEADLINE FOR SUBMISSION: Wednesday 31 August 2022

===================================================================

In recent years, a huge number of publications, particularly in the areas of music information retrieval and music generation, have reported on projects in which deep neural network models have been used successfully to carry out a wide variety of generation, regression and classification tasks on musical data. This work has contributed significantly to the arsenal of computational tools at our disposal for exploring, organising, creating or simply enjoying digital music resources.

However, the majority of such models are “black box” models that have thousands or even millions of free parameters, whose values are determined through training, typically on large amounts of data. The computing pioneer John von Neumann allegedly joked, “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk” (Freeman Dyson, 2004, “A meeting with Enrico Fermi”, Nature, 427:297). Such considerations prompt us to question whether black-box deep learning models make a significant contribution to our scientific understanding of music, musical processes and musical behaviour.

For this special collection, we seek high-quality contributions that report on recent research in which any computational method has been used to advance our understanding of how and why music is created, communicated and received. We are particularly interested in shining a light on computational methods that have perhaps not received the attention they deserve because of the dominance of deep learning in recent years. At the same time, contributions in which deep learning and other neural network models have been shown to advance the scientific understanding of music are also very welcome.

Submissions may address any aspect of musical behaviour, including, but not limited to, composition, improvisation, performance, listening, musical gestures and dance.
Contributions may also focus on any aspect of music, e.g., rhythm, harmony, melody, counterpoint, instrumentation or timbre. We likewise set no constraints on the style, genre, period or place of origin of the music considered. However, the reported work must have adopted a computational approach that has led to an advance in our scientific understanding of music.

We are also keen to cover a variety of application areas where music is put to use, not only for pure entertainment or artistic purposes, but also, for example, in healthcare, in rituals and ceremonies, in meditation, in film soundtracks or video games, or even in politics or advertising. We particularly welcome contributions in which a computational approach has been employed in conjunction with methodologies and knowledge from other fields, such as psychology, musicology, sociology, biology, physics, anthropology or ethnomusicology.

If you are interested in submitting a manuscript for this special collection, please send an expression of interest to the guest editors (at <dave@xxxxxxxxte.aau.dk>), containing the following information:

- Draft title
- Names, full contact details and affiliations of the authors
- A 200-word overview of the expected content of the paper
- References to any recent related publications by the authors
  (including any recent conference papers on a related topic)

This expression of interest should be sent as soon as possible, but preferably by Wednesday 16 March 2022.

The deadline for submission of full manuscripts is Wednesday 31 August 2022. Full manuscripts must be submitted using the online submission system at https://mc.manuscriptcentral.com/mns and should follow the Submission Guidelines, which can be accessed at https://journals.sagepub.com/author-instructions/MNS.

Accepted papers in this collection will be published with open access by the end of May 2023.

IMPORTANT DATES
===============

- Initial statement of interest:
  As soon as possible, but preferably by Wednesday 16 March 2022
- Submission of first version of full manuscript:
  By Wednesday 31 August 2022
- First decision on manuscript sent to authors:
  By Wednesday 30 November 2022
- Submission of revised manuscripts:
  By Tuesday 28 February 2023
- Results of reviews of revised manuscripts sent to authors:
  By Tuesday 2 May 2023
- Publication of accepted papers online:
  By Wednesday 31 May 2023

===================================================================

David Meredith
Department of Architecture, Design, and Media Technology, Aalborg University, Denmark

Anja Volk
Department of Information and Computing Sciences, Utrecht University, The Netherlands

Tom Collins
Department of Music, University of York, United Kingdom