Subject: Re: AUDITORY Digest - 23 Oct 2011 to 24 Oct 2011 (#2011-246)
From: sandra quinn <s_quinn08@xxxxxxxx>
Date: Tue, 25 Oct 2011 11:49:14 +0000
List-Archive: <http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Could you please post the call for submissions to the following conference?

Thank you,

Sandra Quinn

________________________________________________________________

PREDICTING PERCEPTIONS: The 3rd International Conference on Appearance
Edinburgh, 17-19 April 2012

Following on from two highly successful cross-disciplinary conferences
in Ghent and Paris, we are very happy to invite submissions for the
above event.

IMPORTANT DATES
- 05 December 2011: Submission deadline
- 19 December 2011: Allocation of reviews to reviewers
- 09 January 2012: Review upload deadline
- 14 January 2012: Authors informed
- 17-19 April 2012: Conference

CONFERENCE WEBSITE
www.perceptions.macs.hw.ac.uk

INVITED SPEAKERS
- Larry Maloney, Dept. Psychology, New York University, USA
- Françoise Viénot, Muséum National d'Histoire Naturelle, Paris

CONFERENCE CHAIRS
Mike Chantler, Julie Harris, Mike Pointer

SCOPE
Originally focused on the perception of texture, translucency, colour
and particularly gloss, we wish to extend the conference to include
senses other than sight (e.g. how does sound affect our perception of
the qualities of a fabric?), emotive as well as objective qualities
(e.g. desirability and engagement), and digital as well as physical
media.

CALL FOR PAPERS
This conference addresses appearance in its broadest sense and seeks
to be truly cross-disciplinary. Papers related, but not restricted,
to the following are welcomed:

- Prediction and measurement of human perceptions formed by sensory
  input from the physical and digital worlds
- New methods for estimating psychometric transfer functions
- Methods for measuring perceived texture, translucency and form
- Effects of lighting and other environmental factors on perception
- Effects of binocular viewing, motion parallax, and depth from focus
- Methods for measuring engagement and emotions such as desirability
- Effects of other sensory input (e.g. audio, smell, touch)
- Effects of user control of media
- Colour fidelity, colour harmony, colour and emotion
- Methods for measuring inferred qualities including expensiveness,
  quality, wearability, etc.
- Techniques for encouraging and facilitating observer participation
  (design games, gamification of experiments, crowdsourcing, etc.)
- Saliency

_______________________________________________

Predicting Perceptions: the 3rd International Conference on Appearance
http://www.perceptions.macs.hw.ac.uk/

> Date: Tue, 25 Oct 2011 00:12:54 -0400
> From: LISTSERV@xxxxxxxx
> Subject: AUDITORY Digest - 23 Oct 2011 to 24 Oct 2011 (#2011-246)
> To: AUDITORY@xxxxxxxx
>
> There are 5 messages totalling 546 lines in this issue.
>
> Topics of the day:
>
>   1. Glitch-free presentations with Windows 7 and Matlab
>   2. question about streaming (3)
>   3. Workshop announcement: The Listening Talker
>
> ----------------------------------------------------------------------
>
> Date: Mon, 24 Oct 2011 12:16:04 +0200
> From: Martin Hansen <martin.hansen@xxxxxxxx>
> Subject: Re: Glitch-free presentations with Windows 7 and Matlab
>
> Hi all,
>
> Trevor has mentioned PortAudio as one solution (and so has Matlab
> themselves, in a recent email to a colleague of mine).
>
> Already some years before this Matlab-2011 problem popped up, we
> used PortAudio to create our "msound" tool, which is a wrapper for
> PortAudio for block-wise audio input and output of (in principle)
> unlimited duration. You can download it freely from here:
> http://www.hoertechnik-audiologie.de/web/file/Forschung/Software.php#msound
>
> It is written as a mex file and published under the free LGPL license.
> It contains the precompiled mex-files "msound" for Windows (dll, mexw32)
> and Linux, and also some example functions, e.g. one called
> "msound_play_record.m", which does simultaneous output and input to and
> from your soundcard for as long as your output lasts. This function
> also handles all initialization automatically for you. Another function,
> called "msound_play.m", does what it is named after.
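>
> For illustration, a minimal sketch of how these example functions
> might be called (the argument lists here are an assumption, not the
> documented interface; check the help text of the bundled m-files):
>
>   % Assumed usage of the msound example functions; argument order
>   % and names are illustrative only.
>   fs  = 44100;                         % sampling rate in Hz
>   t   = (0:fs-1)'/fs;                  % one second of time
>   out = 0.1*sin(2*pi*440*t);           % quiet 440-Hz test tone
>   msound_play(out, fs);                % playback only
>   rec = msound_play_record(out, fs);   % simultaneous play and record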
> We have had msound running for several years now, and a large number of
> our students have used it successfully for their assignments, projects
> and theses as well.
>
> Best regards,
> Martin
>
> --
> Prof. Dr. Martin Hansen
> Jade Hochschule Wilhelmshaven/Oldenburg/Elsfleth
> Studiendekan Hörtechnik und Audiologie
> Ofener Str. 16/19
> D-26121 Oldenburg
> Tel. (+49) 441 7708-3725  Fax -3777
> http://www.hoertechnik-audiologie.de/
>
> On 18.10.2011 19:29, David Magezi wrote:
> > Many thanks for that review, Trevor.
> >
> > I am not sure if the following has been mentioned: there appears to
> > be a Matlab-ASIO interface from the University of Birmingham (UK),
> > using ActiveX.
> >
> > http://www.eee.bham.ac.uk/collinst/asio.html
> >
> > I would also be keen to hear of other solutions found,
> >
> > D
> >
> > ***************************************************
> > David Magezi
> > ***************************************************
> >
> > ________________________________
> > From: Trevor Agus <Trevor.Agus@xxxxxxxx>
> > To: AUDITORY@xxxxxxxx
> > Sent: Tuesday, October 18, 2011 5:52 PM
> > Subject: [AUDITORY] Glitch-free presentations with Windows 7 and Matlab
> >
> > I've found it surprisingly difficult to present glitch-free sounds
> > with Windows 7.
> >
> > The short answer is that Padraig Kitterick's "asioWavPlay" seems to
> > be the simplest reliable method (remembering to buffer the waveforms
> > with 256 samples of silence to avoid truncation issues). For those
> > with more complex needs, perhaps soundmexpro or PsychToolbox would be
> > better. I'd value any second opinions and double-checking, so a
> > review of the options follows, with all the gory details.
> >
> > I've been using a relatively old version of Matlab (R2007b) with a
> > Fireface UC soundcard. If the problems are fixed in another version
> > or soundcard, I'd love to know about it.
> >
> > ===Matlab's native functions (sound, wavplay, audioplayer)
> > Large, unpredictable truncations were the least of our problems. We
> > also often got mid-sound glitches, ranging from sporadic (just a few
> > subtle glitches per minute) to frequent (making the sound barely
> > recognisable). The magic formula for eliminating the glitches seemed
> > to be to keep the soundcard turned off until the desktop was ready,
> > with all background programs loaded. (Restarting either the soundcard
> > or the computer alone guaranteed some glitches.) So this formula
> > seems to work, but it's a bit too Harry Potter for my liking, and the
> > spell might change with the next Windows update. I think I read that
> > Fireface were no longer supporting Microsoft's vagaries, and they
> > recommended using ASIO. I'm not sure if other high-end soundcard
> > manufacturers are any different. Since Matlab's native functions
> > don't support ASIO (unless the new versions do?), I think we're
> > forced to look at the ASIO options.
> >
> > ===playrec
> > This seems to be potentially the most flexible method of presenting
> > sounds, but I've hit a brick wall compiling it for Windows 7. I think
> > its author stopped providing support for it a few years ago. Has
> > anyone had more success than me?
> >
> > ===asioWavPlay
> > This simply presents a .wav file using ASIO. It's a little annoying
> > that you have to save your sound to disk before presenting it, but
> > as Joachim pointed out, it's not too difficult to automate this
> > process. While doing that, I add 256 samples of silence to the end
> > to work around the truncation problem.
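> >
> > For concreteness, a rough sketch of that workflow (the one-argument
> > asioWavPlay call is an assumption; adjust to however your copy
> > expects the filename):
> >
> >   % Pad with 256 samples of silence so the end isn't truncated,
> >   % write to a temporary wav file, then present it via ASIO.
> >   padded = [y(:); zeros(256,1)];          % y is the mono waveform
> >   wavwrite(padded, fs, 16, 'temp.wav');   % R2007b-era wav writer
> >   asioWavPlay('temp.wav');                % assumed call signature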
> >
> > ===pa_wavplay
> > This is nearly the perfect solution, except that (1) the number of
> > samples truncated from the end is slightly unpredictable and (2) it
> > prints a message on the screen after every sound ("Playing on device
> > 0"). For these two reasons, I prefer asioWavPlay.
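> >
> > For reference, a minimal call might look something like this (the
> > argument order here is a guess, so double-check "help pa_wavplay"
> > before relying on it):
> >
> >   % Play a buffer through ASIO device 0; expect the trailing
> >   % samples to be slightly truncated, as noted above.
> >   pa_wavplay(y, fs, 0, 'asio');   % assumed argument order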
> >
> > ===soundmexpro
> > This might be the best choice for the high-end user (I've just had a
> > quick look at the demo version today). It's easy to install and there
> > are good tutorials, but it involves initialising sound objects, etc.
> > -- it's not just a replacement for Matlab's "sound" command. Also it
> > looks like it's €500+.
> >
> > ===PsychToolbox
> > Originally designed for visual experiments, PsychToolbox has now got
> > quite extensive low-latency sound functions, including realtime
> > continuous playing/recording. It's also free. However, it's slightly
> > challenging to install. Like soundmexpro, it's object-oriented -- so
> > don't expect to play a sound with a simple one-liner.
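> >
> > As a rough sketch of what "not a one-liner" means in practice, basic
> > playback through PsychToolbox's PsychPortAudio goes roughly like this
> > (device choice and latency settings will vary with your setup):
> >
> >   InitializePsychSound(1);                      % request low latency
> >   pahandle = PsychPortAudio('Open', [], 1, 1, fs, 2);  % default device
> >   PsychPortAudio('FillBuffer', pahandle, [y(:)'; y(:)']); % stereo buffer
> >   PsychPortAudio('Start', pahandle, 1, 0, 1);   % play once, wait for start
> >   PsychPortAudio('Stop', pahandle, 1);          % wait until finished
> >   PsychPortAudio('Close', pahandle);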
> >
> > ===PortAudio
> > Most of the above programs are based on this C library. If you're an
> > experienced programmer, perhaps you'd prefer to go direct to the
> > source? And while you're there, perhaps you could write the perfect
> > Matlab-ASIO interface for the rest of us? (Please!)
> >
> > Has anyone found a simpler solution? I'd be glad to hear it.
> >
> > Trevor
>
> ------------------------------
>
> Date: Mon, 24 Oct 2011 14:06:26 +0100
> From: A Davidson <pspc1d@xxxxxxxx>
> Subject: question about streaming
>
> Hello everyone,
>
> I was wondering if anyone could point me in the direction of some
> clear and relatively simple tutorial information and/or good review
> papers about streaming and the problems of trying to discern between
> two auditory stimuli presented to two different ears concurrently.
>
> Many thanks,
>
> Alison
>
> ----------------------------------------------------------------
> This message was sent using IMP, the Internet Messaging Program.
>
> ------------------------------
>
> Date: Mon, 24 Oct 2011 17:37:39 +0100
> From: Etienne Gaudrain <egaudrain.cam@xxxxxxxx>
> Subject: Re: question about streaming
>
> Dear Alison,
>
> First, because it is so recent, a paper by Stainsby et al. on
> sequential streaming:
>
> Sequential streaming due to manipulation of interaural time differences.
> Stainsby TH, Fullgrabe C, Flanagan HJ, Waldman SK, Moore BC.
> J Acoust Soc Am. 2011 Aug;130(2):904-14.
> PMID: 21877805
>
> Otherwise, two papers that include a fairly comprehensive review of
> the literature:
>
> Spatial release from energetic and informational masking in a divided
> speech identification task.
> Ihlefeld A, Shinn-Cunningham B.
> J Acoust Soc Am. 2008 Jun;123(6):4380-92.
> PMID: 18537389
>
> Spatial release from energetic and informational masking in a selective
> speech identification task.
> Ihlefeld A, Shinn-Cunningham B.
> J Acoust Soc Am. 2008 Jun;123(6):4369-79.
> PMID: 18537388
>
> These are not review papers, but you might find what you're looking for.
>
> -Etienne
>
> On 24/10/2011 14:06, A Davidson wrote:
> > Hello everyone,
> >
> > I was wondering if anyone could point me in the direction of some
> > clear and relatively simple tutorial information and/or good review
> > papers about streaming and the problems of trying to discern between
> > two auditory stimuli presented to two different ears concurrently.
> >
> > Many thanks,
> >
> > Alison
>
> --
> Etienne Gaudrain, PhD
> MRC Cognition and Brain Sciences Unit
> 15 Chaucer Road
> Cambridge, CB2 7EF
> UK
> Phone: +44 1223 355 294, ext. 645
> Fax (unit): +44 1223 359 062
>
> ------------------------------
>
> Date: Mon, 24 Oct 2011 12:31:38 -0700
> From: Diana Deutsch <ddeutsch@xxxxxxxx>
> Subject: Re: question about streaming
>
> Hi Alison,
>
> You might want to read my review chapter:
>
> Deutsch, D. Grouping mechanisms in music. In D. Deutsch (Ed.), The
> psychology of music, 2nd Edition, 1999, 299-348, Academic Press.
> [PDF: http://philomel.com/pdf/PsychMus_Ch9.pdf]
>
> The book is going into a third edition, and the updated chapter should
> be available in a few months.
>
> Cheers,
>
> Diana Deutsch
>
> Professor Diana Deutsch
> Department of Psychology
> University of California, San Diego
> 9500 Gilman Dr. #0109
> La Jolla, CA 92093-0109, USA
>
> 858-453-1558 (tel)
> 858-453-4763 (fax)
>
> http://deutsch.ucsd.edu
> http://www.philomel.com
>
> On Oct 24, 2011, at 6:06 AM, A Davidson wrote:
> > Hello everyone,
> >
> > I was wondering if anyone could point me in the direction of some
> > clear and relatively simple tutorial information and/or good review
> > papers about streaming and the problems of trying to discern between
> > two auditory stimuli presented to two different ears concurrently.
> >
> > Many thanks,
> >
> > Alison
>
> ------------------------------
>
> Date: Tue, 25 Oct 2011 00:04:32 +0200
> From: Martin <m.cooke@xxxxxxxx>
> Subject: Workshop announcement: The Listening Talker
>
> The Listening Talker: an interdisciplinary workshop on natural and
> synthetic modification of speech in response to listening conditions
>
> Edinburgh, 2-3 May 2012
>
> http://listening-talker.org/workshop
>
> When talkers speak, they also listen. Talkers routinely adapt to their
> interlocutors and environment, maintaining intelligibility and dialogue
> fluidity in a way that promotes efficient exchange of information. In
> contrast, current speech output technology is largely deaf, incapable
> of adapting to the listener's context, inefficient in use and lacking
> the naturalness that comes from rapid appreciation of the
> speaker-listener environment. A key scientific challenge is to better
> understand how "talker-listeners" respond to context and to apply these
> findings to the modification of natural (live/recorded) and generated
> (synthetic) speech.
> The ISCA-supported Listening Talker (LISTA) workshop brings together
> linguists, psychologists, neuroscientists, engineers and others working
> on human and machine speech perception and production, to explore new
> approaches to context-sensitive speech generation.
>
> The workshop will be single-track, with invited talks and contributed
> oral and poster presentations. An open call for a special issue of
> Computer Speech and Language on the theme of the listening talker will
> follow the workshop.
>
> Contributions are invited on any aspect of the listening talker,
> including but not limited to:
>
> - theories and models of human communication involving the listening talker
> - human speech production modifications induced by noise
> - speech production changes with manipulated feedback
> - algorithms/vocoders for speech modification
> - transformations from casual to clear speech
> - characterisation of the listening context
> - intelligibility and quality metrics for modified speech
> - application to natural dialogues, PA, teleconferencing
>
> Invited speakers
>
>  Torsten Dau (Danish Technical University)
>  Valerie Hazan (University College, London)
>  Richard Heusdens (Technical University Delft)
>  Hideki Kawahara (Wakayama University)
>  Roger Moore (University of Sheffield)
>  Martin Pickering (University of Edinburgh)
>  Peter Vary (Aachen University)
>  Junichi Yamagishi (University of Edinburgh)
>
> Important dates
>
>  30th January 2012: Submission of 4-page papers
>  27th February 2012: Notification of acceptance/rejection
>
> Co-chairs
>
>  Martin Cooke (University of the Basque Country)
>  Simon King (University of Edinburgh)
>  Bastiaan Kleijn (Victoria University of Wellington)
>  Yannis Stylianou (University of Crete)
>
> ------------------------------
>
> End of AUDITORY Digest - 23 Oct 2011 to 24 Oct 2011 (#2011-246)
> ***************************************************************