Final CFP: Special issue on Speech Separation and Recognition in Multisource Environments (Jon Barker)


Subject: Final CFP:  Special issue on Speech Separation and Recognition in Multisource Environments
From:    jon  <j.barker@xxxxxxxx>
Date:    Thu, 15 Dec 2011 17:12:02 +0000
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

+++++++++++++++++++++++++++++++++++++++++++

     COMPUTER SPEECH AND LANGUAGE
     http://www.elsevier.com/locate/csl

     Special issue on
     SPEECH SEPARATION AND RECOGNITION IN MULTISOURCE ENVIRONMENTS

     Submission Deadline: DECEMBER 31, 2011

+++++++++++++++++++++++++++++++++++++++++++

One of the chief difficulties of building distant-microphone speech recognition systems for everyday applications is that the noise background is typically "multisource". A speech recognition system designed to operate in a family home, for example, must contend with competing noise from televisions and radios, children playing, vacuum cleaners, and outdoor noise from open windows. Despite their complexity, such environments contain structure that can be learnt and exploited using advanced source separation, machine learning and speech recognition techniques, such as those presented at the 1st International Workshop on Machine Listening in Multisource Environments (CHiME 2011):
http://spandh.dcs.shef.ac.uk/projects/chime/workshop/

This special issue solicits papers describing advances in speech separation and recognition in multisource noise environments, including theoretical developments, algorithms and systems.

Examples of topics relevant to the special issue include:
• multiple-speaker localization, beamforming and source separation,
• hearing-inspired approaches to multisource processing,
• background noise tracking and modelling,
• noise-robust speech decoding,
• model combination approaches to robust speech recognition,
• datasets, toolboxes and other resources for multisource speech separation and recognition.


SUBMISSION INSTRUCTIONS:
Manuscripts should be submitted through the Elsevier Editorial System (EES) at
http://ees.elsevier.com/csl/
Once logged in, click on "Submit New Manuscript", then select "Special Issue: Multisource Environments" in the "Choose Article Type" drop-down menu.


IMPORTANT DATES:
December 31, 2011: Paper submission
March 30, 2012: First review
May 30, 2012: Revised submission
July 30, 2012: Second review
August 30, 2012: Camera-ready submission


We are looking forward to your submission!


Jon Barker, University of Sheffield, UK
Emmanuel Vincent, INRIA, France

