2nd Call for Papers: Special Issue on Interactive Sonification, IEEE MultiMedia magazine (Norberto Degara )


Subject: 2nd Call for Papers: Special Issue on Interactive Sonification, IEEE MultiMedia magazine
From:    Norberto Degara  <norberto.degara@xxxxxxxx>
Date:    Wed, 18 Dec 2013 22:05:15 +0100
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

[Apologies for cross-posting. Please distribute]

Dear all,

We would like to announce the following Call for Papers for a special issue on Interactive Sonification, to be published in IEEE MultiMedia magazine. Authors are encouraged to contact the guest editors indicating their intention to participate.

============================================================

*IEEE MultiMedia magazine*
*Special Issue on Interactive Sonification*

Submission deadline: 15 February 2014
Publication issue: January–March 2015
http://www.computer.org/portal/web/computingnow/mmcfp-012015

============================================================

*Interactive Sonification*

This special issue will address computational models, techniques, methods, and systems for interactive sonification and their evaluation.

Sonification and auditory display research takes place in a community that builds on a range of disciplines, including physics, acoustics, psychoacoustics, signal processing, statistics, computer science, and musicology. Application examples range from auditory displays in assistive technology for visually impaired people to data exploration and industrial process monitoring.

Auditory displays are systems that transform data into sound and present this information to the human user through an interface that allows the user to interact with the sound-synthesis process. This transformation of data into sound is called sonification, which can be defined as the data-dependent generation of sound in a way that reflects objective properties of the input data.
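To make the definition concrete, here is a minimal, hypothetical sketch (not part of the call itself) of a parameter-mapping sonification: each data value is mapped linearly onto a pitch, and one short sine tone per value is rendered to a WAV file using only the Python standard library. The function name `sonify`, the pitch range, and the tone duration are illustrative choices, not anything prescribed by the special issue.

```python
import math
import struct
import wave

def sonify(data, f_min=220.0, f_max=880.0, dur=0.25, rate=8000):
    """Map each data value linearly onto a pitch in [f_min, f_max]
    and render one short sine tone per value.

    Returns (samples, rate), where samples are 16-bit signed ints.
    """
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0          # avoid division by zero for constant data
    n = int(dur * rate)              # samples per tone
    samples = []
    for x in data:
        # Higher data values sound as higher pitches.
        freq = f_min + (x - lo) / span * (f_max - f_min)
        samples.extend(
            int(32767 * 0.5 * math.sin(2 * math.pi * freq * t / rate))
            for t in range(n)
        )
    return samples, rate

if __name__ == "__main__":
    samples, rate = sonify([1.0, 4.0, 2.0, 5.0, 3.0])
    # Write a mono, 16-bit WAV file that can be auditioned directly.
    with wave.open("sonification.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

In an *interactive* sonification, the mapping parameters (pitch range, tempo, which variable drives the sound) would be adjusted by the user in real time rather than fixed in advance, which is precisely the interaction loop this special issue targets.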
The aim of auditory displays and sonification is to exploit, among other capabilities, the ability of our powerful auditory sense to interpret sounds using multiple layers of understanding, perceive multiple auditory objects within an auditory scene, turn our focus of attention to particular objects, and learn and improve the discrimination of auditory stimuli.

Auditory displays typically evolve over time, since sound is inherently a temporal phenomenon. Interaction thus becomes an integral part of the process, in order to select, manipulate, excite, or control the display, and this has implications for the interface between humans and computers. In recent years it has become clear that there is an important need for research that addresses interaction with auditory displays more explicitly.

*Call for Papers*

For this special issue, we invite submissions of research papers that deal with (but are not limited to) the following areas:

- Interfaces between humans and auditory displays
- New platforms for interactive sonification
- Reproducible research in interactive sonification
- Mapping strategies and models for creating coherency between action and reaction (e.g. acoustic feedback, possibly combined with haptic or visual feedback)
- Perceptual aspects of the display (how to relate actions and sound, e.g. cross-modal effects, importance of synchronisation)
- Applications of interactive sonification
- Evaluation of performance, usability, and multi-modal interactive systems including auditory feedback

*Important Dates*

- Manuscript submission: 15 February 2014
- Acceptance/revision notification: 15 June 2014
- Revised manuscript due: 20 July 2014
- Final acceptance notification: 5 October 2014
- Final manuscript due: 20 October 2014
- Tentative publication: January 2015

*Guest Editors*

- Norberto Degara, Fraunhofer IIS, Audio Department, Erlangen, Germany; norberto.degara@xxxxxxxx
- Andy Hunt, University of York, Electronics Department, York, UK; andy.hunt@xxxxxxxx
- Thomas Hermann, Bielefeld University, Ambient Intelligence Group, Bielefeld, Germany; thermann@xxxxxxxx

*Submission Procedures*

Authors are encouraged to send Norberto Degara (norberto.degara@xxxxxxxx) a brief email as soon as possible indicating their intention to participate, including their contact information and the topic they intend to address in their submission.

Submit your paper at https://mc.manuscriptcentral.com/cs-ieee. When uploading your paper, please select the appropriate special issue title under the category "Manuscript Type." (Contact mm-ma@xxxxxxxx with any questions regarding the submission system.)

All submissions will undergo a blind peer review by at least two expert reviewers to ensure a high standard of quality. All submissions must contain original, previously unpublished research or engineering work. Papers must stay within the following limits: 6,500 words maximum, 12 total combined figures and tables (each figure or table counts as 200 words toward the total word count), and 18 references.

*Questions?*

For more information about the special issue focus, please contact the guest editors.

- For general author guidelines, see www.computer.org/multimedia/author.
- For submission details, email multimedia@xxxxxxxx.
- To submit an article, visit https://mc.manuscriptcentral.com/mm-cs and click on "Special Issue on Interactive Sonification."

--
Norberto Degara, Ph.D.
Emerging Audio Research
Audio Department
Fraunhofer-Institut für Integrierte Schaltungen IIS
norberto.degara@xxxxxxxx
http://www.iis.fraunhofer.de


This message came from the mail archive
/var/www/postings/2013/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University