**Apologies for cross-posting**
Evaluation of speech and speech synthesis
Submission deadline: 30 June 2026
Synthetic speech has advanced to a point where its quality and diversity challenge the boundaries of existing evaluation methods. Frameworks such as MOS and MUSHRA were designed to measure transmission quality rather than to assess speech as such; they were
never intended to capture the communicative or functional properties of speech when transmission is no longer the limiting factor. In contemporary systems, performance ought instead to be defined by how well the speech fulfils its intended task, role, or utility.
The Special Issue therefore asks how evaluation can be made more responsive to this new landscape: one in which human and synthetic speech can, and should, be assessed by comparable principles tied to task and situation.
Much of today’s evaluation practice still relies on comparing synthetic speech to static recordings of human voices. Such tests can be useful for measuring surface similarity, but they ignore the dynamic and situational aspects that determine whether speech
actually fulfils its purpose. Human speakers continuously adapt timing, prosody, and style to the communicative setting and to the role or persona they embody. A synthetic voice should be expected to do the same: to use a speaking style suited to the situation or task, whether audiobook narration, dialogue interaction, public announcement, or a personalised replacement voice, and to align that style with the intended persona, whether a robot, a disembodied assistant, a child, or an adult. This Special Issue particularly
seeks evaluations that capture such situational and functional adequacy, rather than limiting comparison to perceived “human-likeness.”
Guest editors:
Prof. Jens Edlund (Executive Guest Editor), KTH Royal Institute of Technology, Stockholm, Sweden; Email: edlund@xxxxxxxxxxxxx
Dr. Sébastien Le Maguer, University of Helsinki, Helsinki, Finland; Email: sebastien.lemaguer@xxxxxxxxxxx
Christina Tånnander, MTM, Swedish Agency for Accessible Media and KTH Royal Institute of Technology, Stockholm, Sweden; Email: christina.tannander@xxxxxx
Prof. Petra Wagner, Bielefeld University, Bielefeld, Germany; Email: petra.wagner@xxxxxxxxxxxxxxxx
Special issue information:
We invite contributions that reinvent, extend or refine evaluation practice in these directions, including but not limited to studies that:
• propose concrete alternatives to established evaluation paradigms, demonstrating that more informative and diagnostically useful practices are both possible and practicable;
• investigate the generalisability of established evaluation schemes across different applications or tasks, or compare various evaluation schemes within a single application domain;
• align measurement with real-world use, broadening evaluation perspectives through situated examples from accessibility, education, healthcare, entertainment, and other fields;
• provide guidance for future research, consolidating lessons into good practices and identifying the conceptual and methodological challenges that remain; or
• transfer or adapt evaluation practices from neighbouring fields such as speech therapy, HCI, or psychology.
Manuscript submission information:
Important Dates: