Subject: Re: [AUDITORY] CFP: Computer Speech and Language Special Issue on Separation, Recognition, and Diarization of Conversational Speech
From: Michael Mandel <mim@xxxxxxxx>
Date: Tue, 24 Nov 2020 10:14:31 -0500

Dear all,

Submissions are now open for the special issue of Computer Speech and Language on Separation, Recognition, and Diarization of Conversational Speech. Please see the CfP below.

Sincerely,
the guest editors

On Mon, Sep 21, 2020 at 10:16 PM Michael Mandel <mim@xxxxxxxx> wrote:

> *Call for papers*
>
> Computer Speech and Language
> https://www.journals.elsevier.com/computer-speech-and-language
>
> Special Issue on Separation, Recognition, and Diarization of Conversational Speech
> https://www.journals.elsevier.com/computer-speech-and-language/call-for-papers/call-for-papers-computer-speech-and-language-special-issue
>
> *Submission deadline: December 15, 2020*
>
> While great advances have been made in conversational automatic speech recognition in recent years, several fundamental problems remain before the goal of a richly annotated transcript of speech and speakers can be realized. This special issue invites papers addressing the robustness of speech processing in everyday environments, i.e., real-world conditions with acoustic clutter, where the number and nature of the sound sources are unknown and change over time.
>
> Relevant research topics include (but are not limited to):
>
> - Speaker identification and diarization
> - Speaker localization and beamforming
> - Single- or multi-microphone enhancement and separation
> - Robust features and feature transforms
> - Robust acoustic and language modeling
> - Traditional or end-to-end robust speech recognition
> - Training schemes: data simulation and augmentation, semi-supervised training
> - Robust speaker and language recognition
> - Robust paralinguistics
> - Cross-environment or cross-dataset performance analysis
> - Environmental background noise modelling
>
> In addition to traditional research papers, the special issue also hopes to include descriptions of successful conversational speech recognition systems whose contribution lies more in the implementation than in the techniques themselves, as well as successful applications of conversational speech recognition systems.
>
> The recently concluded sixth CHiME challenge serves as a focus for discussion in this special issue. The challenge considered the problem of conversational speech recognition and diarization in everyday home environments from multiple distant microphone arrays. It used a resynchronized version of the Dinner Party speech data featured in CHiME-5 and added a new joint diarization and ASR task. Papers reporting evaluation results on the CHiME-6 dataset or on other datasets are equally welcome.
>
> *Submission instructions*
>
> Manuscript submissions shall be made through: https://www.editorialmanager.com/YCSLA/.
>
> The submission system will open in November. When submitting your manuscript, please select the article type "VSI:SeparateRecognizeDiarize". Please submit your manuscript before the submission deadline.
>
> All submissions deemed suitable to be sent for peer review will be reviewed by at least two independent reviewers.
> Once your manuscript is accepted, it will go into production and will be simultaneously published in the current regular issue and pulled into the online Special Issue. Articles from this Special Issue will appear in different regular issues of the journal, though they will be clearly marked and branded as Special Issue articles. Please see an example here: https://www.sciencedirect.com/journal/science-of-the-total-environment/special-issue/10SWS2W7VVV
>
> Please ensure you read the Guide for Authors before writing your manuscript. The Guide for Authors and the link to submit your manuscript are available on the Journal's homepage https://www.elsevier.com/locate/csl.
>
> Important dates:
>
> - Submission opens: November 16, 2020
> - Submission deadline: December 15, 2020
> - Acceptance deadline: September 1, 2021
> - Expected publication date: November 1, 2021
>
> Guest editors
>
> - Michael Mandel, Brooklyn College, CUNY
> - Jon Barker, University of Sheffield
> - Jun Du, University of Science and Technology of China
> - Leibny Paola Garcia, Johns Hopkins University
> - Emmanuel Vincent, Inria
> - Shinji Watanabe, Johns Hopkins University
>
> --
> Michael I Mandel
> Associate Professor
> Department of Computer and Information Science, Brooklyn College
> Computer Science PhD Program, CUNY Graduate Center
> Linguistics PhD Program, CUNY Graduate Center
>
> http://mr-pc.org
> 2232 Ingersoll Hall
> 718-951-5000 x2053 (Office)
> 347-881-6165 (Cell)