Subject: CFP: Quality of Experiencing Speech Services (Special Session at INTERSPEECH 2010)
From:    Marcel Waeltermann  <Marcel.Waeltermann@xxxxxxxx>
Date:    Mon, 22 Mar 2010 14:46:58 +0100
List-Archive: <http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

[Our apologies if you receive multiple copies of this message.]

Dear list,

for those who are interested, we would like to inform you about the
following Special Session and encourage you to participate:

--------------------------------------------------------
"Quality of Experiencing Speech Services"

Special Session at INTERSPEECH 2010 in Makuhari, Japan
September 26-30, 2010

Paper submission deadline: April 30, 2010
See http://www.interspeech2010.org for details regarding the paper
submission process.
--------------------------------------------------------

Organizers:
Sebastian Möller, Alexander Raake, and Marcel Wältermann
Quality and Usability Lab, Deutsche Telekom Laboratories, TU Berlin, Germany

Outline:
Speech services - for communication between humans or between humans and
machines - have mainly been evaluated following two paradigms: on the one
hand, metrics of system performance have been developed to quantify the
characteristics of the system and its underlying components; on the other
hand, subjective evaluation has been carried out to analyze the quality
perceived by actual users (Quality of Experience, QoE). Automatic
evaluation of speech technology, however, is more convenient, as it saves
costly and time-consuming subjective tests, and it will ultimately lead to
better speech services.

The primary purpose of this special session is to discuss technological and
perceptual metrics related to the quality of experiencing speech services.
We ask: What conceptions of quality are currently in use, and how do they
relate to each other? What information can be extracted from speech
signals? What types of speech transmission degradations are covered by
standardized prediction models such as PESQ and the E-model? Which
approaches can be taken to monitor speech quality? What parameters can be
used to describe spoken dialogue system performance and user behavior in
spoken-dialogue interactions? How can these parameters be related to system
quality? Is it possible to simulate user behavior for this purpose?

Best regards,
Sebastian Möller, Alexander Raake, Marcel Wältermann

--
Quality and Usability Lab
Deutsche Telekom Laboratories
TU Berlin
Ernst-Reuter-Platz 7
D-10587 Berlin, Germany

phone: +49 30 8353 58471
fax  : +49 30 8353 58409
email: marcel.waeltermann@xxxxxxxx
web  : http://www.qu.t-labs.tu-berlin.de

