Offer for a post-doc: UNDERSTANDING AND MODELING OF THE INFLUENCE OF AUDIO ON VISUAL PERCEPTION (Huynh-Thu Quan)


Subject: Offer for a post-doc: UNDERSTANDING AND MODELING OF THE INFLUENCE OF AUDIO ON VISUAL PERCEPTION
From:    Huynh-Thu Quan  <Quan.Huynh-Thu@xxxxxxxx>
Date:    Mon, 11 Oct 2010 16:02:48 +0200
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

POST-DOC SUBJECT: UNDERSTANDING AND MODELING OF THE INFLUENCE OF AUDIO ON VISUAL PERCEPTION

DURATION: 12-18 months
START: as soon as possible

DESCRIPTION:

Technicolor (http://www.technicolor.com) provides technology, systems and services to its Media & Entertainment clients involved in the different components of the video chain (content creation, production, distribution and access). Technicolor Research & Innovation (R&I) is the research arm of the company. In particular, Technicolor R&I constantly investigates human perception in new application scenarios. The ambition is to understand how users perceive multimedia content, to derive innovative computational models from this understanding, and to propose new applications and services. Target applications include cinema, television, gaming and communication.

Technicolor R&I in Rennes, France, offers a post-doc position in the area of audio-visual perception. Technicolor has an existing visual attention model: a computational algorithm that predicts the main regions of interest in images and videos by modeling the human visual system. This model currently considers only visual information. However, video is rarely shown without an audio track, and the presence of audio may influence visual attention. The goal of this post-doc is to gain an understanding of the influence of audio on visual attention, to identify the relevant audio features/cues, and to develop an audio-visual computational attention model.

More specifically, the main goals/tasks of this research work include:
* A complete state-of-the-art review of the field.
* The definition and set-up of the audio equipment to be used in a subjective test environment, in order to conduct audio-video experiments with human participants.
* The implementation of a solution (algorithm) within the existing model: detection of audio cues and their fusion with visual cues.
* A user study (subjective experiment).

The successful candidate must hold (or be close to completing) a PhD and have strong knowledge of audio processing or audio perception. Additional knowledge of image processing and human perception would be valuable. The ideal candidate will also have experience in conducting subjective experiments (set-up/protocol). Since software development is expected in this research work, good programming skills (C/C++) are required.

The position is located in the Video Processing & Perception Lab in Rennes, France, and offers an excellent salary package.

Applicants should submit a CV, a recent list of publications, a statement of research interests and examples of research achievements. Applications should be submitted electronically to philippe.guillotel@xxxxxxxx

Dr. Quan HUYNH-THU
Senior Scientist, Video Processing & Perception Group
Technicolor Research & Innovation
email: quan.huynh-thu@xxxxxxxx
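[Archive note] For readers unfamiliar with the fusion task described in the posting, here is a minimal illustrative sketch of one common approach: linearly combining a normalized visual saliency map with a spatial map of audio-driven attention. This is an assumption-laden toy example, not Technicolor's actual model; the function names, the flat-list map representation and the linear-fusion scheme are all hypothetical.

```python
def normalize(saliency):
    """Scale a flat list of saliency values to the [0, 1] range."""
    lo, hi = min(saliency), max(saliency)
    if hi == lo:
        return [0.0] * len(saliency)
    return [(s - lo) / (hi - lo) for s in saliency]

def fuse_audio_visual(visual_map, audio_map, audio_weight=0.3):
    """Linearly combine per-location visual saliency with audio-driven
    saliency (e.g. higher values near a localized sound source).
    audio_weight sets how strongly audio cues bias the fused map."""
    v = normalize(visual_map)
    a = normalize(audio_map)
    return normalize([(1.0 - audio_weight) * vi + audio_weight * ai
                      for vi, ai in zip(v, a)])
```

A real audio-visual attention model would of course derive the audio map from actual cues (source localization, onsets, speech activity) and might learn the fusion weights rather than fix them by hand.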


This message came from the mail archive
/home/empire6/dpwe/public_html/postings/2010/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University