[AUDITORY] CFP: Workshop on Audio-Visual Scene Understanding for Immersive Multimedia @xxxxxxxx ACM Multimedia (Hansung Kim)


Subject: [AUDITORY] CFP: Workshop on Audio-Visual Scene Understanding for Immersive Multimedia @xxxxxxxx ACM Multimedia
From:    Hansung Kim  <0000007072a27375-dmarc-request@xxxxxxxx>
Date:    Mon, 4 Jun 2018 11:12:41 +0000
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

ACM MULTIMEDIA 2018 Workshop on Audio-Visual Scene Understanding for Immersive Multimedia
http://cvssp.org/data/s3a/mmav2018/

22-26 October 2018, Seoul, South Korea, in conjunction with ACM Multimedia 2018
http://www.acmmm.org/2018/

Dates:
Submission Due: July 8, 2018 (the submission system is now open!)
Acceptance Notification: August 5, 2018
Camera-Ready Submission: August 12, 2018
Workshop Date: TBA (Oct 22 or 26, 2018)

---------------------------------------------------------------------

Call for papers:

Audio-visual data is the most familiar format of multimedia information acquired in our daily life. In most cases, it is already paired as audio-video streams across many fields and platforms, such as media content, surveillance, communication, games, biomedicine, and education. However, audio and video processing have long been researched in separate communities, ignoring the synergy available when they work together.

The goals of this workshop are to (1) present and discuss the latest trends in the audio and computer vision fields towards common research goals, (2) understand the state-of-the-art techniques and bottlenecks in each other's discipline on common topics, and (3) investigate research opportunities for joint audio-visual scene understanding in multimedia content production.

This workshop will be a good opportunity to bring together leading experts in audio and vision, and to bridge the gap between the two research fields in multimedia content production and reproduction. We welcome research contributions related to (but not limited to) the following topics:

- 3D audio-visual capture systems
- Object segmentation and audio source separation
- Audio-visual tracking
- Speaker identification and speech recognition
- Scene understanding using audio-visual sensors
- Deep learning for audio-visual data analysis
- Geometry-aware auditory scene analysis
- Virtual/augmented reality content production
- 360 video and spatial audio
- Adaptive audio-visual content rendering

All submissions will be reviewed by the workshop's Steering Committee members.

Accepted papers will be presented during the workshop and included in the ACM Workshop proceedings.

---------------------------------------------------------------------

We look forward to your contributions.

Adrian Hilton (University of Surrey, UK)
Hong-Goo Kang (Yonsei University, Republic of Korea)
Hansung Kim (University of Surrey, UK)
Kwanghoon Sohn (Yonsei University, Republic of Korea)

