Research Fellow vacancy on Blind Source Separation at CVSSP/Surrey/UK (Closing date: March 17th, 2013, i.e. this weekend) ("Wenwu W. Wang")


Subject: Research Fellow vacancy on Blind Source Separation at CVSSP/Surrey/UK (Closing date: March 17th, 2013, i.e. this weekend)
From:    "Wenwu W. Wang"  <W.Wang@xxxxxxxx>
Date:    Wed, 13 Mar 2013 19:34:43 +0000
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Dear list,

Please feel free to circulate the job advert below to those who you think might be interested. Thank you.

Kind regards,

Wenwu

--------------------------------------------------------------------------

Research Fellow
Low-Complexity Source Separation Algorithms
Centre for Vision, Speech and Signal Processing (CVSSP)

Salary: £29,541-£30,424 per annum
(Subject to qualifications and experience)

Applications are invited for a three-year postdoctoral research fellow position available at CVSSP, starting on Monday, April 1, 2013, to work on a project entitled "Signal Processing Solutions for a Networked Battlespace", funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Defence Science and Technology Laboratory (Dstl) as part of the Ministry of Defence (MoD) University Defence Research Centre (UDRC) Scheme in signal processing. The project will be undertaken by a unique consortium of academic experts from Loughborough, Surrey, Strathclyde and Cardiff (LSSC) Universities, together with six industrial project partners: QinetiQ, Selex-Galileo, Thales, Texas Instruments, PrismTech and Steepest Ascent. The overall aim of the project is to provide fundamental signal processing solutions that enable intelligent and robust processing of the very large amount of multi-sensor data acquired from various networked communications and weapons platforms, in order to retain military advantage and mitigate smart adversaries who present multiple threats within an anarchic and extended operating area (battlespace). The research fellow will be expected to work in close collaboration with our academic and industrial partners, together with members of the lead consortium based at Edinburgh and Heriot-Watt Universities.

The prospective research fellow will be expected to develop low-complexity, robust algorithms for underdetermined convolutive signal separation and broadband distributed beamforming. The work will be facilitated by low-rank and sparse representations, and directed toward fast implementations. He/she will develop robust source separation algorithms for highly dense signal environments, in the presence of uncertainties such as weak signals and an unknown number of targets.
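To give a concrete flavour of one standard technique in this area, the minimal Python sketch below performs underdetermined separation by sparse time-frequency masking (in the spirit of DUET-style methods): each STFT bin is assigned to the source whose mixing direction it best matches. It assumes an instantaneous (not convolutive) two-channel mixture and known mixing directions; a blind method would estimate the directions, e.g. by clustering interchannel cues. It illustrates the general idea only, not the project's algorithms.

    import numpy as np
    from scipy.signal import stft, istft

    def sparse_mask_separation(mix, directions, fs=16000, nperseg=1024):
        # mix        : (2, n_samples) two-channel mixture
        # directions : list of unit-norm length-2 vectors, one assumed
        #              mixing column per source (a blind method would
        #              estimate these, e.g. by clustering interchannel cues)
        f, t, X = stft(mix, fs=fs, nperseg=nperseg)   # X: (2, F, T)
        A = np.stack(directions, axis=1)              # (2, n_sources)
        # Score how well each time-frequency bin aligns with each direction.
        scores = np.abs(np.einsum('cs,cft->sft', A.conj(), X))
        labels = scores.argmax(axis=0)                # winner per bin: (F, T)
        estimates = []
        for s in range(A.shape[1]):
            # Keep only the bins won by source s (binary mask) and project
            # the two-channel spectrum onto that source's direction.
            S = np.einsum('c,cft->ft', A[:, s].conj(), X) * (labels == s)
            _, y = istft(S, fs=fs, nperseg=nperseg)
            estimates.append(y)
        return np.array(estimates)                    # (n_sources, n_samples)

Because more sources than channels can win bins, the sparsity of speech and audio in the time-frequency domain is what makes this recovery possible at all; extending it to convolutive mixtures and unknown source counts is exactly where the low-complexity challenge of the post arises.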
Successful applicants will join CVSSP, a leading research group in sensory (visual and auditory) data analysis and interpretation, and will work closely with Dr Wenwu Wang, Prof Josef Kittler and Dr Philip Jackson. CVSSP is one of the largest UK research groups in machine vision and audition, with more than 120 researchers and core expertise in Signal Processing, Image and Video Processing, Pattern Recognition, Computer Vision, Machine Learning and Artificial Intelligence, Computer Graphics and Human Computer Interaction. CVSSP forms part of the Department of Electronic Engineering, which received one of the highest ratings (joint second position across the UK) in the last research quality assessment, the 2008 RAE, with 70% of its research classified as either 4* ("world-leading") or 3* ("internationally excellent").

Applicants should have a PhD degree or equivalent in electrical and electronic engineering, computer science, mathematical science, statistics, physics, or a related discipline. Applicants should be able to demonstrate excellent mathematical, analytical and computer programming skills. Preference will be given to applicants with experience in sparse representations, blind source separation, low-rank linear algebra, and/or machine learning.

For informal inquiries about the position, please contact Dr Wenwu Wang (w.wang@xxxxxxxx).

For an application pack and to apply on-line, please go to our website: http://www.surrey.ac.uk/vacancies. If you are unable to apply on-line, please contact Mr Peter Li, HR Assistant, on Tel: +44 (0) 1483 683419 or email: k.li@xxxxxxxx.

The closing date for applications is March 17th, 2013.

For further information about the University of Surrey, please visit www.surrey.ac.uk.

We acknowledge, understand and embrace cultural diversity.

------------------------------------------------------------------

--
Dr Wenwu Wang
Centre for Vision, Speech and Signal Processing
Department of Electronic Engineering
University of Surrey
Guildford GU2 7XH
United Kingdom
Phone: +44 (0) 1483 686039
Fax: +44 (0) 1483 686031
Email: w.wang@xxxxxxxx
http://personal.ee.surrey.ac.uk/Personal/W.Wang/

________________________________
From: AUDITORY - Research in Auditory Perception [AUDITORY@xxxxxxxx] On Behalf Of Mark Cartwright [mcartwright@xxxxxxxx]
Sent: 12 March 2013 11:12
To: AUDITORY@xxxxxxxx
Subject: [AUDITORY] SocialEQ - Audio Descriptor Data Collection

------------------------------------------------------
Apologies for potential cross-postings
------------------------------------------------------

Dear community,

We are conducting a study on adjectives for sounds. We are specifically interested in adjectives that describe characteristics which can be modified by audio production tools, such as reverberation and equalization. We are hoping that you can help us collect such adjectives by using our new web-based tool, SocialEQ (http://socialeq.org).

SocialEQ (http://socialeq.org) is a tool that learns the meaning of sound adjectives related to equalization. We'd like you to teach the SocialEQ equalizer an adjective that you would use to describe a sound. Once the system thinks it understands, it will give you a slider to make the sound more or less like the adjective (for example, more or less "bright"). After that task is complete, we will present you with a survey. The whole thing should take about five minutes.
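To make the slider idea concrete, here is a small illustrative Python sketch: an adjective is assumed to map to a learned per-band gain curve in dB, and the slider scales that curve before it is applied to the audio. The 40-band log-spaced curve, the function name and the slider semantics are assumptions made for illustration, not SocialEQ's actual implementation.

    import numpy as np

    def apply_adjective_eq(audio, adjective_curve_db, slider, fs=44100):
        # audio              : 1-D float array (mono)
        # adjective_curve_db : hypothetical learned boost/cut in dB per band
        # slider             : 0 = unchanged, 1 = full adjective curve
        n_bands = len(adjective_curve_db)
        band_freqs = np.geomspace(20.0, fs / 2, n_bands)  # log-spaced centres
        spectrum = np.fft.rfft(audio)
        bin_freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
        # Scale the curve by the slider, interpolate it onto the FFT bins,
        # and apply it as a magnitude gain.
        gain_db = slider * np.interp(bin_freqs, band_freqs, adjective_curve_db)
        spectrum *= 10.0 ** (gain_db / 20.0)
        return np.fft.irfft(spectrum, n=len(audio))

    # Toy example: a "bright"-like curve boosting the upper bands.
    curve = np.linspace(-3.0, 6.0, 40)
    noise = np.random.randn(44100)        # 1 s of noise as a stand-in signal
    brighter = apply_adjective_eq(noise, curve, slider=0.7)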
To participate, go to http://socialeq.org.

Thank you for your time!

Best regards,

Mark Cartwright <mcartwright@xxxxxxxx>
Bryan Pardo <pardo@xxxxxxxx>

The Interactive Audio Lab (http://music.cs.northwestern.edu/)
Electrical Engineering and Computer Science
Northwestern University

