[AUDITORY] CfP: Special Issue on Interactive Sonification at the Journal on Multimodal User Interfaces (JMUI) (Roberto Bresin)


Subject: [AUDITORY] CfP: Special Issue on Interactive Sonification at Multimodal User Interfaces (JMUI)
From:    Roberto Bresin  <roberto@xxxxxxxx>
Date:    Thu, 26 Oct 2017 15:35:32 +0200
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

***Apologies for cross-posting***

Following the ISon 2016 Workshop on December 15-16, 2016 (proceedings can be found at http://interactive-sonification.org/ISon2016/), we would like to announce the Journal on Multimodal User Interfaces (JMUI) Special Issue on "Interactive Sonification".

This special issue will address computational models, techniques, methods, and systems for Interactive Sonification and their evaluation. It is the second special issue of JMUI dedicated to Interactive Sonification; the first one was published in 2012.

Sonification and Auditory Displays are increasingly becoming an established technology for exploring data, monitoring complex processes, and assisting the exploration and navigation of data spaces. Sonification addresses the auditory sense by transforming data into sound, allowing the human user to extract valuable information from data using their natural listening skills.

The main advantages of sound displays over visual displays are that sound can:
- represent frequency responses in an instant (as timbral characteristics)
- represent changes over time naturally
- allow microstructure to be perceived
- rapidly portray large amounts of data
- alert the listener to events outside the current visual focus
- holistically bring together many channels of information

Auditory displays typically evolve over time, since sound is inherently a temporal phenomenon. Interaction thus becomes an integral part of the process in order to select, manipulate, excite, or control the display, and this has implications for the interface between humans and computers. In recent years, it has become clear that there is an important need for research that addresses the interaction with auditory displays more explicitly. Interactive Sonification is the specialized research topic concerned with the use of sound to portray data, but where a human being is at the heart of an interactive control loop. Specifically, it deals with the following areas (among others), in which we invite submissions of research papers:
- interfaces between humans and auditory displays
- mapping strategies and models for creating coherency between action and reaction (e.g. acoustic feedback, also combined with haptic or visual feedback)
- perceptual aspects of the display (how to relate actions and sound, e.g. cross-modal effects, importance of synchronisation)
- applications of Interactive Sonification
- evaluation of performance, usability, and multimodal interactive systems including auditory feedback

For this special issue, we particularly invite works focussing on Adaptivity and Scaffolding in Interactive Sonification, i.e. how auditory feedback and Interactive Sonification provide a scaffold for familiarizing oneself with interaction and learning to interact, and how users adapt their activity patterns according to the feedback and their level of experience. For example, a sports movement sonification could initially focus the displayed information on the most basic pattern (e.g. the active arm) and, once the user progresses (i.e. feedback indicates that they understand and utilize this information), increasingly emphasize subtler cues (e.g. the knees) by making such auditory streams more salient.
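To make the adaptive-scaffolding idea concrete, here is a minimal sketch (not part of the call itself) of a parameter-mapping sonification in Python/NumPy; the "arm" and "knee" data streams, the frequency ranges, and the user-progress score are all hypothetical placeholders, and the only point illustrated is how the subtler stream could be made more salient as the assumed progress grows:

# Illustrative sketch only: two hypothetical data streams are mapped to the
# pitch of two sine-tone streams, and the gain (salience) of the subtler
# "knee" stream is increased as a made-up user-progress score rises.
import numpy as np

SR = 44100           # audio sample rate (Hz)
FRAME = 0.05         # duration of one sonified data frame (seconds)

def tone(freq, dur, gain, sr=SR):
    """Render one sine-tone segment for a single data frame."""
    t = np.arange(int(dur * sr)) / sr
    return gain * np.sin(2 * np.pi * freq * t)

def sonify(arm, knee, progress):
    """Map two normalized streams (values in [0, 1]) to pitch.

    'progress' in [0, 1] scales the salience of the 'knee' stream,
    echoing the beginner-to-expert scaffold described above.
    """
    out = []
    for a, k, p in zip(arm, knee, progress):
        f_arm = 220 + 440 * a          # arm value  -> 220..660 Hz
        f_knee = 660 + 660 * k         # knee value -> 660..1320 Hz
        g_knee = 0.1 + 0.6 * p         # knee stream grows more salient
        out.append(tone(f_arm, FRAME, 0.5) + tone(f_knee, FRAME, g_knee))
    return np.concatenate(out)

if __name__ == "__main__":
    n = 100                                            # 5 s of synthetic data
    arm = 0.5 + 0.5 * np.sin(np.linspace(0, 6 * np.pi, n))
    knee = 0.5 + 0.5 * np.sin(np.linspace(0, 10 * np.pi, n))
    progress = np.linspace(0, 1, n)                    # assume the user improves
    audio = sonify(arm, knee, progress)
    audio /= np.max(np.abs(audio))                     # normalize to avoid clipping
    print(audio.shape, audio.min(), audio.max())

In such a sketch the increasing gain is just one simple way to raise the salience of an auditory stream; actual submissions may of course use richer mappings and feedback loops.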
This feeds into the important question of how we can evaluate the complex and temporally developing interrelationship between the human user and an interactive system that is coupled to the user by means of Interactive Sonification. To make a sustainable contribution, we strongly encourage a reproducible research approach in Interactive Sonification, in order to:
- allow for the formal evaluation and comparison of Interactive Sonification systems,
- establish standards in Interactive Sonification.

Schedule (subject to change):
Submission deadline: December 15, 2017
Notification of acceptance: March 16, 2018
Final paper submission: April 20, 2018
Tentative publication: June/July 2018

Guest Editors:
Roberto Bresin, KTH School of Computer Science and Communication, Stockholm, Sweden; roberto@xxxxxxxx
Thomas Hermann, Bielefeld University, Ambient Intelligence Group, Bielefeld, Germany; thermann@xxxxxxxx
Jiajun Yang, Bielefeld University, Ambient Intelligence Group, Bielefeld, Germany; jyang@xxxxxxxx-bielefeld.de

Authors are requested to follow the instructions for manuscript submission to the Journal on Multimodal User Interfaces (http://www.springer.com/computer/hci/journal/12193) and to submit manuscripts at http://www.editorialmanager.com/jmui/. The article type to be selected is "Special Issue S.I. : Interactive Sonification - 2017".

Link to the PDF file with the call for papers:
http://static.springer.com/sgw/documents/1620044/application/pdf/CFP_JMUI_Special+Issue_InteractiveSonification_2017.pdf

Best regards,
Roberto Bresin, Thomas Hermann, Jiajun Yang


This message came from the mail archive
../postings/2017/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University