
Deadline extension: HSCMA 2014 (4th Joint Workshop on Hands-Free Speech Communication and Microphone Arrays)

[Apologies for cross-posting]

  4th Joint Workshop on Hands-Free Speech Communication
            and Microphone Arrays (HSCMA 2014)

              May 12-14, 2014, Nancy, France


*Deadline extension*
The submission deadline has been extended to February 2. An additional one-week grace period, until February 9, will be available for updating submitted PDFs.

HSCMA is proud to announce technical co-sponsorship by the IEEE Signal Processing Society and support by ISCA through its Robust Speech Processing SIG.

The workshop will feature two special sessions:
- Advances in sparse modeling and low-rank modeling for speech processing, proposed by Hervé Bourlard and Afsaneh Asaei (Idiap)
- Speech detection and speaker localization in domestic environments, proposed by Maurizio Omologo (FBK) and the DIRHA consortium

The best paper and the best student paper will each receive a $500 award.

*Call for Papers*
HSCMA 2014 will bring together researchers and practitioners from academia and industry in an intimate and collegial setting to discuss problems of interest in the capture, enhancement, and recognition of far-field speech signals. Relevant topics include, but are not limited to, speech or speaker recognition in noisy or reverberant environments, single or multi-channel speech enhancement, dereverberation, microphone array processing, source separation, and multiple input/multiple-output (MIMO) acoustic signal processing. Interdisciplinary work that crosses multiple technical areas is especially encouraged. Demonstrations of experimental systems and prototypes are also welcome.

HSCMA 2014 is being held in conjunction with ICASSP 2014 (http://icassp2014.org/) and the REVERB challenge (http://reverb2014.dereverberation.org/).

*Workshop Topics*
Papers in all areas of distant-talking human/human and human/machine interaction are encouraged, including:
- Multi-channel and single-channel approaches for speech acquisition, noise suppression, source localization and separation, dereverberation, echo cancellation, and acoustic event detection
- Speech and speaker recognition technology for hands-free scenarios, including robust features, feature-domain enhancement and dereverberation, and model adaptation
- Microphone array technology and architectures, especially for distant-talking speech recognition and acoustic scene analysis
- Speech corpora for training and evaluation of distant-talking speech systems
- Applications based on microphone arrays and hands-free speech systems

*Paper & Demo Submission*
The workshop technical program will consist of oral presentations, poster sessions, and demonstrations. Prospective authors are invited to submit full-length papers up to four pages, with a fifth page permitted for references only. Submissions for proposed demonstrations may be up to two pages in length.

The workshop will feature three keynotes:
- Marc Moonen (KU Leuven, Belgium): Distributed adaptive node-specific signal estimation in wireless acoustic sensor networks
- Steve Renals (University of Edinburgh, United Kingdom): Neural networks for distant speech recognition
- Volker Hohmann (University of Oldenburg, Germany): Modeling auditory processing of complex sounds

*Important Dates*
Submission of papers & demos: February 2, 2014
Grace period: February 9, 2014
Paper & demo decisions announced: March 12, 2014
Submission of camera-ready papers & demos: April 4, 2014
Workshop: May 12-14, 2014

*Organizing Committee*
Emmanuel Vincent (Inria, France)
Dietrich Klakow (Saarland University, Germany)
Hiroshi Saruwatari (Nara Institute of Science and Technology, Japan)
Mike Seltzer (Microsoft Research, USA)
Bhiksha Raj (Carnegie Mellon University, USA)

*Supported by*
Microsoft Research
Honda Research Institute - Japan
Central Research Laboratory, Hitachi
MH Acoustics