CFP - MAMCA2011 - Workshop on Multimodal Audio-based Multimedia Content Analysis (XAVIER ANGUERA MIRO)


Subject: CFP - MAMCA2011 - Workshop on Multimodal Audio-based Multimedia Content Analysis
From:    XAVIER ANGUERA MIRO  <xanguera@xxxxxxxx>
Date:    Fri, 10 Dec 2010 17:31:54 +0100
List-Archive: <http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

(Apologies if you receive multiple copies)

**************************************************************
         Workshop on Multimodal Audio-based Multimedia
                 Content Analysis (MAMCA-2011)
              website: http://www.mamca2011.com

     In Conjunction with the IEEE International Conference
                 on Multimedia and Expo (ICME)
              Barcelona, Spain, July 11-15, 2011

                       Call for Papers
**************************************************************

By definition, multimedia content is composed of multiple forms, including
audio, video, text/subtitles, and others. Traditionally, applications and
algorithms that work with such content have considered only a single
modality, allowing, for example, searching of textual tags while ignoring
any information available from other modalities. The limitations of this
approach are obvious, and there is a recent trend towards multimodal
processing, in which different content modalities complement each other or
are used to bootstrap the analysis of new modalities.

Audio is a prominent part of multimedia content and is backed by extensive
research in the speech and music communities, although this research has
usually been performed on audio-only systems. The utility of audio-only
systems is often limited by the quality of the acoustic environment or the
information contained therein, so such systems can benefit from a
multimodal analysis of multimedia data to enhance performance, robustness,
and efficiency.

The main goal of the workshop is to explore ways in which audio processing
can be enhanced, bootstrapped, or facilitated by other available
information modalities. We are interested not only in applications that
show successful combinations of audio and other sources of information,
but also in algorithms that effectively integrate them and leverage
complementary information from each modality to obtain an enhanced result,
in terms of degree of detail, coverage of the corpus, or other enabling
factors.
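As one minimal, purely hypothetical illustration of such integration, the
Python sketch below combines per-modality classifier posteriors by
weighted late fusion; the class set, scores, and weights are placeholders
rather than output of any real system:

    import numpy as np

    # Hypothetical per-modality posteriors for one clip over three classes
    # (say speech / music / noise); in practice these would come from
    # independently trained audio and visual classifiers.
    p_audio  = np.array([0.5, 0.3, 0.2])
    p_visual = np.array([0.2, 0.7, 0.1])

    # Weighted late fusion: a fixed convex combination of the posteriors.
    # The weights are illustrative; they would normally be tuned on
    # held-out data.
    w_audio, w_visual = 0.6, 0.4
    p_fused = w_audio * p_audio + w_visual * p_visual
    p_fused /= p_fused.sum()  # renormalize (a no-op here, since the
                              # weights sum to 1, but safe in general)

    print("fused posterior:", p_fused)
    print("decision:", int(p_fused.argmax()))

Here the audio model alone would pick class 0, but the more confident
visual model flips the fused decision to class 1: exactly the kind of
complementary behavior the workshop targets.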
The workshop will provide a forum for publication of high-quality, novel
research on multimedia applications and multimodal processing, with a
special focus on the audio modality.

Paper submission
-----------------

MAMCA 2011 solicits regular technical papers of up to 6 pages following
the ICME author guidelines. The proceedings of the workshop will be
published as part of the IEEE ICME 2011 main conference proceedings and
will be indexed by IEEE Xplore. Papers must be original and not submitted
to or accepted by any other conference or journal.

Papers submitted to the workshop will be peer-reviewed by members of the
community with extensive experience in audio processing as well as in the
other relevant modalities. The review will be semi-blind, and reviewer
assignment will be performed manually, with the aim of producing three
high-quality reviews for each submitted paper.

Papers can be submitted through the ICME submissions website at
http://www.icme2011.org/submission.php

Topics of interest
------------------

Topics include, but are not limited to:
- Effective fusion of audio with other modalities
- Multimodal input applications, where one input is audio
- Multimodal databases
- Bootstrapping of multimodal systems
- Co-training for labeling new data (see the sketch after this list)
- User-in-the-loop calculations to detect preferences
- Games with a purpose to label new data
- Improving robustness through multimodality
- Prediction of modality preference
- Applications that utilize multimodality
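To make the co-training topic concrete, here is a minimal, purely
hypothetical sketch (synthetic random data, with scikit-learn logistic
regression standing in for real audio and video classifiers): two
classifiers, one per modality view, take turns pseudo-labeling the
unlabeled clip they are most confident about, and each pseudo-label grows
the training set of both views.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic two-view data: X_a plays the role of audio features and
    # X_v of video features for the same clips; y holds binary labels,
    # with -1 marking "unlabeled".
    n_lab, n_unlab, d = 40, 200, 5
    X_a = rng.normal(size=(n_lab + n_unlab, d))
    X_v = rng.normal(size=(n_lab + n_unlab, d))
    y = np.full(n_lab + n_unlab, -1)
    y[:n_lab] = rng.integers(0, 2, size=n_lab)

    for _ in range(5):  # a few co-training rounds
        lab = y != -1
        clf_a = LogisticRegression().fit(X_a[lab], y[lab])
        clf_v = LogisticRegression().fit(X_v[lab], y[lab])

        # Each view's classifier pseudo-labels its single most confident
        # unlabeled clip; that label then trains BOTH views next round.
        for clf, X in ((clf_a, X_a), (clf_v, X_v)):
            unlab = np.flatnonzero(y == -1)
            if unlab.size == 0:
                break
            proba = clf.predict_proba(X[unlab])
            j = proba.max(axis=1).argmax()   # most confident clip
            y[unlab[j]] = proba[j].argmax()  # adopt its pseudo-label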
Important dates
---------------

- Paper submission deadline: February 20th 2011
- Paper acceptance notification: April 10th 2011
- Camera-ready paper: April 20th 2011
- Workshop day: tentative date July 11th or 15th 2011

Organizing committee
--------------------

Xavier Anguera (Telefonica Research)
Gerald Friedland (ICSI)
Florian Metze (CMU)

