
[AUDITORY] INTERSPEECH 2019 ComParE



Dear Colleagues,

All Sub-Challenges of this year's Interspeech Computational Paralinguistics ChallengE (ComParE) have opened:

Call for Participation
INTERSPEECH 2019 Computational Paralinguistics Challenge (ComParE)

Tasks: Styrian Dialects, Continuous Sleepiness, Baby Sounds & Orca Activity

http://www.compare.openaudio.eu/compare2019/

Organisers:
Björn Schuller (University of Augsburg, Germany & Imperial College London, UK)
Anton Batliner (University of Augsburg, Germany)
Christian Bergler (FAU Erlangen-Nuremberg Erlangen, Germany)
Florian Pokorny (Medical University of Graz, Austria)  
Jarek Krajewski (University of Wuppertal / Rhenish University of Applied Science Cologne, Germany)
Margaret Cychosz (University of California Berkeley, USA)

Sponsored by:
audEERING GmbH (https://www.audeering.com/)

Dates:
Paper Abstract Submission: 29 March 2019
Paper Final Submission: 5 April 2019
Final Result Upload: 24 June 2019
Camera-ready Paper: 1 July 2019

The Challenge:

The Interspeech 2019 Computational Paralinguistics ChallengE (ComParE) is an open Challenge dealing with states and traits of speakers as manifested in the acoustic properties of their speech signal. There have so far been ten consecutive Challenges at INTERSPEECH since 2009 (cf. the Challenge series' repository at http://www.compare.openaudio.eu), but many highly relevant paralinguistic phenomena remain uncovered. We therefore introduce four new tasks in this year's edition. The following Sub-Challenges are addressed:

- In the Styrian Dialects Sub-Challenge, the sub-type of Styrian Dialect has to be recognised. Styria is an Austrian state whose capital, Graz, hosts Interspeech 2019.
- In the Continuous Sleepiness Sub-Challenge, the level of sleepiness of subjects has to be recognised, according to the 9-point Karolinska Sleepiness Scale.
- In the Baby Sounds Sub-Challenge, five types of baby vocalisations have to be classified.
- In the Orca Activity Sub-Challenge, the probability that an orca is present in an underwater audio clip has to be determined.

All Sub-Challenges allow contributors to find their own features and use their own machine learning algorithms; however, a standard feature set will be provided that may be used. Participants must adhere to the definition of the training, development, and test sets as given. They may report results obtained on the development sets, but are limited to five uploads of test-set results per Sub-Challenge; the test-set labels are unknown to participants. Each participation has to be accompanied by a paper presenting the results, which undergoes the normal Interspeech peer review and must be accepted for the conference in order for the contribution to count in the Challenge. The organisers reserve the right to re-evaluate the findings, but will not participate in the Challenge themselves.
In these respects, the INTERSPEECH 2019 COMPUTATIONAL PARALINGUISTICS CHALLENGE (COMPARE) shall help bridge the gap between excellent research on paralinguistic information in spoken language and the low comparability of results. We encourage both contributions aiming at the highest performance with respect to the baselines provided by the organisers and contributions aiming at new and interesting insights into these data. Overall, contributions using the provided or equivalent data are sought, including (but not limited to):

- Participation in a Sub-Challenge
- Contributions focussing on Computational Paralinguistics centred around the Challenge topics

The results of the Challenge will be presented at Interspeech 2019 in Graz, Austria.
Prizes will be awarded to the Sub-Challenge winners. If you are interested and planning to participate in INTERSPEECH 2019 COMPARE, or if you want to be kept informed about the Challenge, please send the organisers an e-mail to indicate your interest and visit the homepage:

http://www.compare.openaudio.eu/compare2019/


On behalf of the organisers,

Björn Schuller






___________________________________________

Univ.-Prof. mult. Dr. habil. Björn W. Schuller, FIEEE

Professor and ZD.B Chair of Embedded Intelligence for Health Care and Wellbeing
University of Augsburg / Germany

Professor of Artificial Intelligence
Head GLAM - Group on Language, Audio & Music
Imperial College London / UK

CSO/MD audEERING GmbH
Germany

schuller@xxxxxxxx
www.schuller.one