[AUDITORY] ICAD - ABBA (Auditory Brown Bag At home): July 9th, 10am and 10pm EST (Myounghoon Jeon)


Subject: [AUDITORY] ICAD - ABBA (Auditory Brown Bag At home): July 9th, 10am and 10pm EST
From:    Myounghoon Jeon  <philart@xxxxxxxx>
Date:    Tue, 30 Jun 2020 15:26:29 -0400

Hello Auditory Community,

ICAD (International Community for Auditory Display) is hosting the ABBA (Auditory Brown Bag At-home) virtual talks on the second Thursday, July 9th. We are very excited to announce the presenters and topics for the July sessions.

Session A at 10 AM Eastern Time (US)

Visda Goudarzi (Columbia College Chicago, USA)
*"Designing Participatory Musical Interaction and Performance"*

Jiajun Yang (Bielefeld University, Germany)
*"Introduction to pya - An audio library for Python"*

Session B at 10 PM Eastern Time (US)

Jiayuan Dong (Virginia Tech, USA)
*"Female Voice Agents in Fully Autonomous Vehicles Are Not Only More Likeable and Comfortable, But Also More Competent"*

Full abstracts appear at the bottom of this message.

The two sessions have different speakers, so all are welcome to drop in to either or both sessions.

Each talk will last 10-15 minutes, with breakout small-group discussion sessions following the plenary session.

*How to Attend*

To attend, you will need to register for each event via the links at:
https://icad.org/abba-july-2020/

Before the meeting, registered attendees will receive a Zoom link via email. The registration is just to help us plan for the discussion and also to prevent Zoom bombing by unauthorized non-ICADers.

If you want to present at the 10 pm session on July 9th, you can still do so. Register via the link above. However, all four presentation slots for August 13 (10 AM and 10 PM EST) have been booked. Thanks for your interest and contributions!

We want to hear about your work! We invite you to offer an informal/semi-formal presentation of around 10-15 minutes. All topics of interest to the ICAD community are welcome, including but not limited to: finished research, works-in-progress, student/thesis work, tutorials, requests for feedback on new or ongoing projects, practice talks, and updates on current events in your lab. The format of the talk is flexible, so long as it is amenable to virtual presentation via Zoom.

You will be prompted to provide a title and brief summary/abstract of your talk.

In general, sign-ups to give talks will be allocated first-come, first-served. You will be asked to indicate the days and times that work for you, and the session chairs will work with presenters to try to fit everyone into the schedule and/or to group similar talks when possible. There will not be a formal review of sign-ups for presenters, but the session chairs will perform a light check to ensure the proposed topic fits the broad interests of the ICAD community.

*Questions?*

If you have any questions about the July 9th sessions, you can contact Thomas Hermann: thermann@xxxxxxxx (chair of the July 9th, 10:00 am EST session), Myounghoon Jeon (Philart): philart@xxxxxxxx (chair of the July 9th, 10:00 pm EST session), or any member of the ICAD board (https://icad.org/board/).

We are looking forward to seeing you at the ABBA sessions!

******

Abstracts for July 9th talks:

*Designing Participatory Musical Interaction and Performance*
Visda Goudarzi (Columbia College Chicago, USA)

Participatory artworks aim at creating an experience that is open to interpretation. I argue that such interpretations should not be entirely predetermined by the creators' expectations; rather, they should vary among audience members. I also argue that audience members who have experienced the artwork could act as co-designers for successive iterations of the artwork and broaden the creative process. In this talk, I discuss investigating these arguments with an exploratory approach aimed at transforming creative practices, reflecting on case studies of interactive audio interfaces. Audience engagement, designers' feedback, and reflections of expert audience members are the focus.

*Introduction to pya - An audio library for Python*
Jiajun Yang (Bielefeld University, Germany)

We developed pya to simplify the process of editing, processing, playing, and analysing audio signals in Python. pya can be used for creating application audio, editing multichannel audio, signal processing and analysis, and feature extraction, and, with Python being a popular language for handling data, pya naturally lends itself to auditory display and sonification. In this talk, we will focus on the key concepts, core classes, and functionalities, and we will demonstrate a few examples of using pya in the context of sonification design.
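For readers unfamiliar with pya, a minimal parameter-mapping sonification might look roughly like the sketch below. The Asig constructor and the Aserver boot/default playback pattern are assumed from the pya documentation and may differ in detail; the data-to-pitch mapping is purely illustrative and not taken from the talk.

import numpy as np
from pya import Asig, Aserver

# A 1-D data series to sonify (illustrative only).
data = np.cumsum(np.random.randn(20))

# Map each data value linearly onto a pitch between 220 Hz and 880 Hz.
lo, hi = data.min(), data.max()
freqs = 220 + (data - lo) / (hi - lo) * (880 - 220)

# Render one short sine tone per data point and concatenate them.
sr = 44100
dur = 0.15  # seconds per data point
t = np.linspace(0, dur, int(sr * dur), endpoint=False)
samples = np.concatenate([0.3 * np.sin(2 * np.pi * f * t) for f in freqs])

# Wrap the samples in an Asig and play them through an Aserver
# (constructor and boot/default pattern assumed from the pya README).
sig = Asig(samples, sr=sr, label="data-sonification")
s = Aserver(sr=sr)
Aserver.default = s
s.boot()
sig.play()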
*Female Voice Agents in Fully Autonomous Vehicles Are Not Only More Likeable and Comfortable, But Also More Competent*
Jiayuan Dong (Virginia Tech, USA)

Driving agents can provide an effective solution to improve drivers' trust in and to manage interactions with autonomous vehicles. Research has focused on voice agents, while few studies have explored robot agents or compared the two. The present study tested two variables, voice gender and agent embodiment, using conversational scripts. Twenty participants experienced autonomous driving in a simulator under four agent conditions and filled out subjective questionnaires about their perception of each agent. Results showed that participants perceived the voice-only female agent as more likeable, more comfortable, and more competent than the other conditions. Their final preference ranking also favored this agent over the others. Interestingly, eye-tracking data showed that embodied agents did not add more visual distraction than the voice-only agents. The results are discussed in relation to the traditional gender stereotype, the uncanny valley, and participants' gender. This study can contribute to the design of in-vehicle agents in autonomous vehicles, and future studies are planned to further identify the underlying mechanisms of user perception of different agents.

************************************************
Myounghoon Jeon (Philart), Ph.D.
pronounced as /"me" + "young"/ /hoon/ /juhn/

Associate Professor
Mind Music Machine Lab <http://ise.vt.edu/philart>
Grado Department of Industrial and Systems Engineering
Department of Computer Science (by courtesy)
Virginia Tech
519D Whittemore Hall
1185 Perry St.
Blacksburg, VA 24061
myounghoonjeon@xxxxxxxx


This message came from the mail archive
src/postings/2020/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University