Re: [AUDITORY] Why is it that joint speech-enhancement with ASR is not a popular research topic? (Phil Green)


Subject: Re: [AUDITORY] Why is it that joint speech-enhancement with ASR is not a popular research topic?
From:    Phil Green  <p.green@xxxxxxxx>
Date:    Mon, 25 Jun 2018 18:25:35 +0100
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

On 25/06/2018 17:00, Samer Hijazi wrote:
> Thanks Laszlo and Phil,
> I am not speaking about doing ASR in two steps, I am speaking about
> doing the ASR and speech enhancement jointly in a multi-objective
> learning process.

Ah, you mean multitask learning. That didn't come over at all in your
first mail.

> There are many papers showing that if you use related objective
> functions to train your network, you will get better results on both
> objectives than you would get if you train for each one separately.

An early paper on this, probably the first application to ASR, was

   Parveen & Green, Multitask Learning in Connectionist Robust ASR
   using Recurrent Neural Networks, Eurospeech 2003.
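Schematically, the setup is one shared encoder feeding two output heads,
with the two losses summed for training. A minimal sketch of the idea
(assuming PyTorch; the names, dimensions, and the choice of MSE + CTC
losses are illustrative, not taken from the paper):

import torch
import torch.nn as nn

class JointEnhanceASR(nn.Module):
    """Shared encoder with an enhancement head and an ASR head."""
    def __init__(self, n_feats=80, hidden=256, n_tokens=30):
        super().__init__()
        self.encoder = nn.LSTM(n_feats, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        self.enhance_head = nn.Linear(2 * hidden, n_feats)  # clean-feature target
        self.asr_head = nn.Linear(2 * hidden, n_tokens)     # token logits

    def forward(self, noisy_feats):
        shared, _ = self.encoder(noisy_feats)   # (batch, frames, 2*hidden)
        return self.enhance_head(shared), self.asr_head(shared)

model = JointEnhanceASR()
mse, ctc = nn.MSELoss(), nn.CTCLoss(blank=0)

# Toy batch: 4 utterances, 100 frames of 80-dim features, 20 target tokens.
noisy = torch.randn(4, 100, 80)
clean = torch.randn(4, 100, 80)
labels = torch.randint(1, 30, (4, 20))
frame_lens = torch.full((4,), 100, dtype=torch.long)
label_lens = torch.full((4,), 20, dtype=torch.long)

enhanced, logits = model(noisy)
log_probs = logits.log_softmax(-1).transpose(0, 1)  # (frames, batch, tokens) for CTC
loss = mse(enhanced, clean) + ctc(log_probs, labels, frame_lens, label_lens)
loss.backward()  # both objectives push gradients into the shared encoder

The shared encoder gets gradient from both objectives, which is where
the mutual benefit is supposed to come from.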
> And it seems obvious that if we used speech content (i.e. text) and
> the perfect speech waveform as two independent but correlated
> targets, we would end up with better text recognition and better
> speech enhancement; am I missing something?

It would be wrong to start with clean speech, add noise, use that as
input, and use clean speech + text as training targets, because in real
life speech & other sound sources don't combine like that. That's why
the spectacular results in the Parveen/Green paper are misleading.

HTH

-- 
*** note email is now p.green@xxxxxxxx ***
Professor Phil Green
SPandH
Dept of Computer Science
University of Sheffield
*** note email is now p.green@xxxxxxxx ***
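P.S. One way to make the "don't combine like that" point concrete: the
naive recipe treats the mixture as a straight sum, whereas in a real
room each source reaches the microphone through its own acoustic path,
so the "clean" signal never actually appears at the mic. A toy numpy
sketch (the impulse responses here are made up purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
fs = 16000
speech = rng.standard_normal(fs)  # stand-in for 1 s of speech
noise = rng.standard_normal(fs)   # stand-in for 1 s of noise

# The naive training recipe: sources simply add.
naive_mix = speech + noise

# Closer to reality: each source is convolved with its own (made-up)
# room impulse response before the signals sum at the microphone.
h_speech = np.zeros(800); h_speech[[0, 320, 640]] = [1.0, 0.5, 0.25]
h_noise = np.zeros(800); h_noise[[0, 160, 480]] = [1.0, 0.6, 0.3]
room_mix = (np.convolve(speech, h_speech) + np.convolve(noise, h_noise))[:fs]

Training on naive_mix with speech as the enhancement target teaches the
network a mapping that real recording conditions never present.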


This message came from the mail archive
src/postings/2018/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University