
[AUDITORY] Call for Papers: HCI International 2023 Workshop on Interactive Technologies for Analysing and Visualizing Musical Structure

Dear Auditory List members,


We would like to bring your attention to the following call for papers for a workshop at HCI International in Copenhagen on 23 July 2023, focusing on


Interactive Technologies for Analysing and Visualizing Musical Structure


The full call is available here: https://2023.hci.international/W1.html


Aim of the workshop


We are seeking high-quality submissions reporting on original, previously unpublished research within the area of interactive technologies for analysing and visualizing musical structure. Analysing and visualizing the structure of musical works can play a fundamental role in facilitating and enhancing the understanding and appreciation of those works among listeners, performers, composers and musicologists. For example, a performer who is learning to play a piece can benefit from having an effective visual representation of a coherent, satisfying and meaningful way of understanding the piece, perhaps in the context of other music in the same style or genre. Similarly, a concert-goer who is about to hear a piece for the first time might benefit from being presented first with an introduction to the piece, enhanced with visualizations of the work’s structure and the way the work relates to other works from the same period, in the same genre or by the same composer.


If listeners, performers, composers and musicologists have access to usable and available software that automatically generates insightful analyses from digital encodings of musical works, along with high-quality encodings of the works in which they are interested, then insight-giving visualizations of the music’s structure might be readily generated as required. Moreover, users might want to interact in various ways with such a generated analysis through a visualization that serves as a graphical user interface to the analysis. For example, a user might wish to hear certain themes or chords identified by the analysis, or compare part of one piece with part of a different piece that the automatically generated analysis has identified as being related. Users may also want to customize and modify a generated analysis so that it more accurately reflects how they personally interpret the piece.


Such use cases present challenging problems for software engineers and user-interface designers. The software that generates the analyses must carry out computationally expensive processes (e.g., pattern discovery) in practical running times if the software is to be usably responsive. Different types of users may require different types of user interfaces, affording different types of interactions and visualizations that match, for example, their level of musical expertise, the aspects of the music in which they are interested, and the specific tasks for which the software is being used.


Analysis and visualization of musical structure are especially important when attempting to gain knowledge about musical traditions, genres and repertoires with which one is unfamiliar. Effective visualizations (supported by appropriate representations, encodings and data-structures) are also crucial when communicating the knowledge that has been gained to a target audience. Such a situation occurs when music from one culture is performed to an audience consisting largely of people from a different culture—for example, when Persian or Indian music is played to a European audience that is primarily familiar with classical music. However, the problem of effectively communicating the meaning of a piece of music to an audience that is unfamiliar with the cultural or stylistic context within which the piece was created can even arise when the music and the audience share the same culture—for example, an audience that listens almost exclusively to Western popular music may find it hard to appreciate a concert of Western classical music.


Expected workshop outcome


We expect that the workshop will bring together musicologists, ethnomusicologists, software engineers and computer scientists, HCI experts, composers, musicians, and librarians and archivists interested in digitizing musical sources. By bringing together such a wide variety of experts from different domains, we expect that the event will initiate exciting inter-disciplinary collaborations that could lead to future large-scale collaborative projects. We expect that the workshop will serve to strengthen the identity of an emergent multi-disciplinary research community whose common goal is to develop technologies that can help a variety of types of users to better understand the music in which they are interested, interact more effectively and enjoyably with that music, and more effectively communicate knowledge about that music to a broader audience. We expect that the workshop will sharpen our understanding of what the main challenges and problems are in this domain and lay down the foundations for a roadmap for future research in the area. We also expect that this will be the first of a series of annual workshops on this topic.


Workshop topics


This workshop will focus on the technical and interaction-design challenges involved in building effective, usable technologies for generating, visualizing and interacting with analyses of musical works. It will also welcome contributions that illustrate how such technologies can deepen our understanding of works and make them accessible to broader audiences. We also welcome contributions that address the challenging issues inherent in creating, curating and disseminating collections of high-quality encodings of (possibly very large) musical works, since the availability of such collections is necessary if a wide variety of users are to be able to study and interact with the music in which they are interested.


Submission for the Workshop


Prospective authors should submit their proposed papers in PDF format through the HCII Conference Management System (CMS). Authors may submit either a short paper (4-8 pages in the CCIS template) or a long paper (10-20 pages in the LNCS template). All authors of submissions accepted as posters will be required to create a digital poster to be presented during a 1-minute, 1-slide presentation in the “poster craze” session. Authors of papers selected for oral presentation will be required to prepare a 15-minute presentation to be given during the workshop.


Submission for the Conference Proceedings


Contributions presented in the context of the Workshops will not automatically be included in the Conference proceedings.


However, after consultation with the Workshop Organizer, authors of accepted workshop contributions who are registered for the conference are welcome to submit, through the HCII Conference Management System (CMS), a (possibly extended) version of their workshop contribution to be considered, following peer review, for presentation at the Conference and inclusion in the “Late Breaking Work” conference proceedings, either in the LNCS as a long paper (typically 12 pages, but no fewer than 10 and no more than 20) or in the CCIS as a short paper (typically 6 pages, but no fewer than 4 and no more than 8).


The submission deadline for the camera-ready papers (long or short) for the “Late Breaking Work” Volumes of the Proceedings is the 15th of June 2023.


Workshop deadlines



Submission of workshop long and short papers: 1 April 2023

Authors notified of decisions on acceptance: 25 April 2023

Finalization of workshop organization and registration of participants: 30 April 2023

Submission deadline for camera-ready copy (CRC) of papers for proceedings: 15 June 2023



David Meredith

M.A. (Cantab.), D.Phil. (Oxon.), Associate Professor

Department of Architecture, Design and Media Technology


Editor-in-Chief, Journal of New Music Research


Tel: +45 99408092 | Mob: +45 60143868







Aalborg University

Rendsburggade 14

9000 Aalborg, Denmark


Employee No.: 119171 | Vat No.: DK29102384

