
INTERSPEECH Emotion Challenge 2009



Dear List, 

We would like to announce the "Interspeech 2009 Emotion Challenge", which may be of particular interest to those of you working in the fields of speech processing and affective computing. It features prizes in three disciplines, and accepted papers will be presented in a special session at INTERSPEECH 2009.

Detailed information can be found at:
http://emotion-research.net/sigs/speech-sig/emotion-challenge

http://www.interspeech2009.org/conference/specialsessions.php

Please do not hesitate to contact us with any additional questions.

Thank you and best regards,

Bjoern Schuller, Stefan Steidl, Anton Batliner

We apologize if you receive this call more than once.


Call for Papers
--------------------------------------------------------------
INTERSPEECH 2009 Emotion Challenge

Feature, Classifier, and Open Performance Comparison for
Non-Prototypical Spontaneous Emotion Recognition

Organisers:
Bjoern Schuller (Technische Universitaet Muenchen, Germany)
Stefan Steidl (FAU Erlangen-Nuremberg, Germany)
Anton Batliner (FAU Erlangen-Nuremberg, Germany)

Sponsored by:
HUMAINE Association
Deutsche Telekom Laboratories

The Challenge
-------------
The young field of emotion recognition from voice has recently gained considerable interest in Human-Machine Communication, Human-Robot Communication, and Multimedia Retrieval. Numerous studies in the last decade have sought to improve features and classifiers. However, in contrast to related speech processing tasks such as Automatic Speech and Speaker Recognition, practically no standardised corpora and test conditions exist that would allow performances to be compared under exactly the same conditions. Instead, a multiplicity of evaluation strategies, such as cross-validation or percentage splits without proper instance definition, prevents exact reproducibility. Furthermore, to face more realistic use cases, the community is in desperate need of more spontaneous and less prototypical data.

In these respects, the INTERSPEECH 2009 Emotion Challenge shall help bridge the gap between excellent research on human emotion recognition from speech and the low comparability of results: the organisers will provide the FAU Aibo Emotion Corpus of spontaneous, emotionally coloured speech, together with benchmark results for the two most popular approaches. Nine hours of speech (51 children) were recorded at two different schools. This allows for a clear-cut definition of test and training partitions that incorporates speaker independence, as needed in most real-life settings. The corpus further provides a uniquely detailed transcription of the spoken content, with word boundaries, non-linguistic vocalisations, emotion labels, units of analysis, etc.
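
To make the notion of speaker independence concrete, here is a minimal, purely illustrative Python sketch of such a partition. The per-utterance metadata fields used here ("speaker_id", "school") are hypothetical placeholders, not the actual corpus format:

# Illustrative sketch only: a speaker-independent partition keeps all
# utterances of a given speaker in exactly one partition, so no speaker
# occurs in both training and test data. Splitting along the two
# recording schools is one simple way to achieve this.

def speaker_independent_split(utterances, test_school):
    """Split utterances into train/test sets with disjoint speaker sets.

    utterances: iterable of dicts with 'speaker_id' and 'school' keys.
    test_school: all speakers recorded at this school form the test set.
    """
    train, test = [], []
    for utt in utterances:
        (test if utt["school"] == test_school else train).append(utt)
    # Sanity check: the two speaker sets must not overlap.
    train_speakers = {u["speaker_id"] for u in train}
    test_speakers = {u["speaker_id"] for u in test}
    assert train_speakers.isdisjoint(test_speakers), "speaker overlap"
    return train, test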

Three sub-challenges are addressed, at two different degrees of difficulty, using non-prototypical data with either five or two emotion classes (including a garbage model):

* The Open Performance Sub-Challenge allows contributors to use their own features together with their own classification algorithm. However, they must adhere to the given definition of test and training sets.

* In the Feature Sub-Challenge, participants are encouraged to upload their individual best features per unit of analysis, with a maximum of 100 per contribution. These features will then be tested by the organisers in a single classification task with equivalent settings, and pooled together in a feature selection process.

* In the Classifier Sub-Challenge, participants may use a large set of standard acoustic features provided by the organisers for classifier tuning. The labels of the test set will be unknown, but each participant may upload instance predictions up to 25 times and receive the resulting confusion matrix and results. As the classes are unbalanced, the measure to optimise will be mean recall (see the illustrative sketch below).

The organisers will not take part in the sub-challenges themselves but will provide baselines.
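
For illustration only: mean recall (often called unweighted average recall) averages the per-class recalls of the confusion matrix, so every class counts equally regardless of how many instances it has. A minimal Python sketch, with made-up example numbers:

# Mean (unweighted average) recall from a confusion matrix.
# Rows are true classes, columns are predicted classes; values are counts.

def mean_recall(confusion):
    """Average of per-class recalls; each class weighs equally."""
    recalls = []
    for i, row in enumerate(confusion):
        total = sum(row)              # number of instances of true class i
        if total > 0:
            recalls.append(row[i] / total)
    return sum(recalls) / len(recalls)

# Example with two unbalanced classes: accuracy would be
# (90 + 5) / 110 ~ 0.86, but mean recall is (90/100 + 5/10) / 2 = 0.70,
# exposing the weak performance on the minority class.
cm = [[90, 10],
      [5, 5]]
print(mean_recall(cm))  # 0.7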

Overall, contributions using the provided or an equivalent database are sought in (but not limited to) the following areas:
* Participation in any of the sub-challenges
* Speaker adaptation for emotion recognition
* Noise/coding/transmission robust emotion recognition
* Effects of prototyping on performance
* Confidences in emotion recognition
* Contextual knowledge exploitation

The results of the Challenge will be presented in a Special Session at INTERSPEECH 2009 in Brighton, UK.
Prizes will be awarded to the sub-challenge winners and for the best paper.
If you are interested in participating in the Emotion Challenge, or if you want to be kept informed about the Challenge, please send the organisers an e-mail to indicate your interest and visit the homepage:

http://emotion-research.net/sigs/speech-sig/emotion-challenge

___________________________________________

Dr. Björn Schuller
Lecturer

Technische Universität München
Institute for Human-Machine Communication

Theresienstraße 90
Building N1, ground level
Room N0135
D-80333 München
Germany

Fax: +49 (0)89 289-28535
Phone: +49 (0)89 289-28548

schuller@xxxxxx
www.mmk.ei.tum.de/~sch
___________________________________________



Attachment: emotion-challenge-IS09-cfp.pdf