[apologies for cross-postings, please distribute]
**************************************************************************************************************************************************
CALL FOR PAPERS
1st International Workshop on Music Information Retrieval with User-Centered and Multimodal Strategies (MIRUM)
held in conjunction with ACM Multimedia 2011, November 28 – December 1, 2011, Scottsdale, AZ
**************************************************************************************************************************************************
Music is an outstanding example of a content type with many different representations. The symbolic notation written by the composer (e.g., in a score or a lead sheet) only reaches its full manifestation when performed and presented to listeners in the form of music audio. Besides the symbolic and aural modalities, many other modalities, such as visual, textual and social information, hold useful information that contributes to the way in which the music is conveyed and experienced. The existence of complementary representations and information sources across multiple modalities makes music content multimedia by definition.
The consumption of music is strongly guided by affective and subjective responses: aspects that are personal and context-dependent, that occur at different levels of conceptual specificity, and for which no universal, undisputed ground truth exists. For music retrieval systems to yield satisfying results, insight into the information needs and demands of the actual users of these systems therefore becomes very important.
To allow comprehensive and flexible exploitation of the multifaceted aspects of music, both the availability of complementary music-related information in multiple modalities and the role of the human user should be considered. At the same time, challenges such as identifying and optimally combining useful information from different modalities, and devising algorithmic approaches to user-dependent, subjective assessments of music retrieval results, remain largely unsolved. These challenges are by no means unique to music content; they are topical and prevalent throughout the broader multimedia community.
The MIRUM workshop, held in conjunction with ACM Multimedia 2011, November 28 – December 1, 2011 in Scottsdale, AZ, provides a platform at a premier multimedia venue for discussing open challenges and presenting state-of-the-art work on music information retrieval that applies user-centered and/or multimodal strategies. The workshop explicitly aims to initiate a cross-disciplinary exchange of ideas between experts in music and multimedia information retrieval (and related fields) on topics including, but not limited to:
- Music multimedia content analysis
- Visual and sensory information for music processing
- Multimodal music search, retrieval and recommendation
- Social networks and indexing for music applications
- Music similarity measures at different specificity levels
- Fusion of multimodal music information sources
- Music knowledge representation and reasoning
- Interactive music systems and retrieval
- (Adaptive) user interaction and interfaces
- User (context) models and personalization
- Real-world issues (unstructured and noisy data, scalability, formats, …)
- Evaluation methods and data understanding
- Cross-domain methodology transfer
MIRUM welcomes technical papers and a limited number of position papers (both max. 6 pages) presenting novel, thought-provoking work and ideas related to the workshop topics. To stimulate the cross-disciplinary dialogue, authors from neighboring fields working on similar challenges, who can demonstrate the relevance and transferability of their work to the music domain, are also encouraged to contribute to the workshop. All submissions must be formatted according to the ACM Proceedings style, following the guidelines at http://www.acmmm11.org/content-workshop-papers-formatting-guidelines.html, and must contain original work that is neither published nor under review elsewhere. Each submission will undergo a double-blind review by at least 3 PC members. All accepted papers will be published together with the ACM MM 2011 main conference proceedings and made available through the ACM Digital Library. More information can be found on the workshop website at http://mirum11.tudelft.nl.
**************************************************************************************************************************************************
Important dates
Paper submission: June 19, 2011
Notification of acceptance: July 30, 2011
Camera-ready paper submission: September 5, 2011
ACM Multimedia 2011: November 28 – December 1, 2011
**************************************************************************************************************************************************
Workshop organization
Cynthia Liem, Delft University of Technology
Douglas Eck, Google Inc.
Meinard Müller, Saarland University & MPI Informatik
George Tzanetakis, University of Victoria
**************************************************************************************************************************************************
Program committee
Elaine Chew, University of Southern California
Ching-Hua Chuan, University of North Florida
Simon Dixon, Queen Mary, University of London
Sebastian Ewert, University of Bonn
Rebecca Fiebrink, Princeton University
Takuya Fujishima, Yamaha Corporation
Masataka Goto, National Institute of Advanced Industrial Science and Technology
Fabien Gouyon, INESC Porto
Jason Hockman, McGill University
Paul Lamere, The Echo Nest
Gert Lanckriet, University of California, San Diego
Jin Ha Lee, University of Washington
Lie Lu, Dolby Sound Technology Research
Matthias Mauch, last.fm
Markus Schedl, Johannes Kepler University Linz
Björn Schuller, TU München
Joan Serrà, Universitat Pompeu Fabra, Barcelona
Arjen de Vries, Delft University of Technology/Centre for Mathematics and Computer Science
Ye Wang, National University of Singapore
**************************************************************************************************************************************************