[AUDITORY] Computer Speech & Language Special Issue "Evaluation of speech and speech synthesis" - Call for Papers - Submission Deadline 30/06/2026 (Le Maguer, Sébastien)


Subject: [AUDITORY] Computer Speech & Language Special Issue "Evaluation of speech and speech synthesis" - Call for Papers - Submission Deadline 30/06/2026
From:    Le Maguer, Sébastien
Date:    Tue, 16 Dec 2025 15:56:13 +0000

**Apologies for cross posting**

Evaluation of speech and speech synthesis
Submission deadline: 30 June 2026

Synthetic speech has advanced to a point where its quality and diversity challenge the boundaries of existing evaluation methods. Frameworks such as MOS and MUSHRA were designed to measure transmission quality rather than to assess speech as such; they were never intended to capture the communicative or functional properties of speech when transmission is no longer the limiting factor. In contemporary systems, performance ought instead to be defined by how well the speech fulfils its intended task, role, or utility.

The Special Issue therefore asks how evaluation can be made more responsive to this new landscape: one in which human and synthetic speech can, and should, be assessed by comparable principles tied to task and situation.

Much of today's evaluation practice still relies on comparing synthetic speech to static recordings of human voices. Such tests can be useful for measuring surface similarity, but they ignore the dynamic and situational aspects that determine whether speech actually fulfils its purpose. Human speakers continuously adapt timing, prosody, and style to the communicative setting and to the role or persona they embody. A synthetic voice should be expected to perform similarly: it should use a speaking style suited to the situation or task, be it audiobook narration, dialogue interaction, public announcement, or personalised replacement voice, and align it with the intended persona, be that a robot, a disembodied assistant, a child, or an adult. This Special Issue particularly seeks evaluations that capture such situational and functional adequacy, rather than limiting comparison to perceived "human-likeness."

Guest editors:

Prof. Jens Edlund (Executive Guest Editor), KTH Royal Institute of Technology, Stockholm, Sweden; Email: edlund@xxxxxxxx
Dr. Sébastien Le Maguer, University of Helsinki, Helsinki, Finland; Email: sebastien.lemaguer@xxxxxxxx
Christina Tånnander, MTM, Swedish Agency for Accessible Media and KTH Royal Institute of Technology, Stockholm, Sweden; Email: christina.tannander@xxxxxxxx
Prof. Petra Wagner, Bielefeld University, Bielefeld, Germany; Email: petra.wagner@xxxxxxxx

Special issue information:

We invite contributions that reinvent, extend or refine evaluation practice in these directions, including but not limited to studies that:

• propose concrete alternatives to established evaluation paradigms, demonstrating that more informative and diagnostically useful practices are both possible and practicable;
• investigate the generalisability of established evaluation schemes across different applications or tasks, or compare various evaluation schemes within a single application domain;
• align measurement with real-world use, broadening evaluation perspectives through situated examples from accessibility, education, healthcare, entertainment, and other fields;
• provide guidance for future research, consolidating lessons into good practices and identifying the conceptual and methodological challenges that remain; or
• transfer or adapt evaluation practices from neighbouring fields such as speech therapy, HCI, or psychology.
Manuscript submission information:

Important Dates:
* Submission Open Date: December 1, 2025
* Submission Deadline: June 30, 2026
* Editorial Acceptance Deadline: March 31, 2027


This message came from the mail archive
postings/2025/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University