Re: AUDITORY Digest - 23 Oct 2011 to 24 Oct 2011 (#2011-246) (sandra quinn)


Subject: Re: AUDITORY Digest - 23 Oct 2011 to 24 Oct 2011 (#2011-246)
From:    sandra quinn  <s_quinn08@xxxxxxxx>
Date:    Tue, 25 Oct 2011 11:49:14 +0000
List-Archive: <http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Could you please post the call for submissions to the following
conference?

thank you,

Sandra Quinn

________________________________________________________________
PREDICTING PERCEPTIONS: The 3rd International Conference on Appearance
Edinburgh, 17-19 April 2012

Following on from two highly successful cross-disciplinary conferences
in Ghent and Paris, we are very happy to invite submissions for the
above event.

IMPORTANT DATES
- 05 December 2011: Submission deadline
- 19 December 2011: Review allocation to reviewers
- 09 January 2012: Review upload deadline
- 14 January 2012: Authors informed
- 17-19 April 2012: Conference

CONFERENCE WEBSITE
www.perceptions.macs.hw.ac.uk

INVITED SPEAKERS
- Larry Maloney, Dept. Psychology, New York University, USA.
- Françoise Viénot, Muséum National d'Histoire Naturelle, Paris.

CONFERENCE CHAIRS
Mike Chantler, Julie Harris, Mike Pointer

SCOPE
Originally focused on the perception of texture and translucency, and
particularly gloss and colour, we wish to extend the conference to
include other senses, not just sight (e.g. how does sound affect our
perception of the qualities of a fabric?), to emotive as well as
objective qualities (e.g. desirability and engagement), and to digital
as well as physical media.

CALL FOR PAPERS
This conference addresses appearance in its broadest sense and seeks
to be truly cross-disciplinary. Papers related, but not restricted, to
the following are welcomed:

- Prediction and measurement of human perceptions formed by sensory
  input of the physical and digital worlds
- New methods for estimating psychometric transfer functions
- Methods for measuring perceived texture, translucency and form
- Effects of lighting and other environmental factors on perception
- Effects of binocular viewing, motion parallax, and depth from focus
- Methods for measuring engagement and emotions such as desirability
- Effects of other sensory input (e.g. audio, smell, touch)
- Effects of user control of media
- Colour fidelity, colour harmony, colour and emotion
- Methods for measuring inferred qualities including expensiveness,
  quality, wearability etc.
- Techniques for encouraging and facilitating observer participation
  (design games, gamification of experiments, crowd sourcing etc.)
- Saliency

_______________________________________________
Predicting Perceptions: the 3rd International Conference on Appearance
http://www.perceptions.macs.hw.ac.uk/

> Date: Tue, 25 Oct 2011 00:12:54 -0400
> From: LISTSERV@xxxxxxxx
> Subject: AUDITORY Digest - 23 Oct 2011 to 24 Oct 2011 (#2011-246)
> To: AUDITORY@xxxxxxxx
>
> There are 5 messages totalling 546 lines in this issue.
>
> Topics of the day:
>
>   1. Glitch-free presentations with Windows 7 and Matlab
>   2. question about streaming (3)
>   3. Workshop announcement: The Listening Talker
>
> ----------------------------------------------------------------------
>
> Date: Mon, 24 Oct 2011 12:16:04 +0200
> From: Martin Hansen <martin.hansen@xxxxxxxx>
> Subject: Re: Glitch-free presentations with Windows 7 and Matlab
>
> Hi all,
>
> Trevor has mentioned PortAudio as one solution (and so did Matlab
> themselves, in a recent email to a colleague of mine).
>
> Already some years before this Matlab-2011 problem popped up, we had
> used PortAudio to create our "msound" tool, which is a wrapper for
> PortAudio for block-wise audio input and output, for unlimited
> duration (in principle). You can download it freely from here:
> http://www.hoertechnik-audiologie.de/web/file/Forschung/Software.php#msound
>
> It is written as a mex file and published under the free LGPL license.
> It contains the precompiled mex-files "msound" for Windows (dll,
> mexw32) and Linux, and also some example functions, e.g. one called
> "msound_play_record.m", which does simultaneous output and input to
> and from your soundcard for as long as your output lasts. This
> function also handles all initialization automatically for you.
> Another function, called "msound_play.m", does what it is named after.
> We have had msound running for several years now, and a large number
> of our students have used it successfully for their assignments,
> projects and theses as well.
>
> Best regards,
> Martin
>
> --
> Prof. Dr. Martin Hansen
> Jade Hochschule Wilhelmshaven/Oldenburg/Elsfleth
> Studiendekan Hörtechnik und Audiologie
> Ofener Str. 16/19
> D-26121 Oldenburg
> Tel. (+49) 441 7708-3725  Fax -3777
> http://www.hoertechnik-audiologie.de/
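
A minimal sketch of how the msound example functions described above
might be called; the argument order is assumed from the description
rather than taken from the msound documentation, so check the bundled
help before relying on it:

    fs = 44100;                      % sampling rate in Hz
    t  = (0:fs-1).'/fs;              % one second of time samples
    x  = 0.1 * sin(2*pi*440*t);      % quiet 440 Hz test tone
    msound_play(x, fs);              % playback only (argument order assumed)
    y  = msound_play_record(x, fs);  % play x while recording y (argument order assumed)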
>
> On 18.10.2011 19:29, David Magezi wrote:
> > Many thanks for that review Trevor.
> >
> > I am not sure if the following has been mentioned: there appears to
> > be a Matlab-ASIO interface from the University of Birmingham (UK),
> > using ActiveX.
> >
> > http://www.eee.bham.ac.uk/collinst/asio.html
> >
> > I would also be keen to hear of other solutions found,
> >
> > D
> >
> > ***************************************************
> > David Magezi
> > ***************************************************
> >
> > ________________________________
> > From: Trevor Agus <Trevor.Agus@xxxxxxxx>
> > To: AUDITORY@xxxxxxxx
> > Sent: Tuesday, October 18, 2011 5:52 PM
> > Subject: [AUDITORY] Glitch-free presentations with Windows 7 and Matlab
> >
> > I've found it surprisingly difficult to present glitch-free sounds
> > with Windows 7.
> >
> > The short answer is that Padraig Kitterick's "asioWavPlay" seems to
> > be the simplest reliable method (remembering to buffer the waveforms
> > with 256 samples of silence to avoid truncation issues). For those
> > with more complex needs, perhaps soundmexpro or PsychToolbox would
> > be better. I'd value any second opinions and double-checking, so a
> > review of the options follows, with all the gory details.
> >
> > I've been using a relatively old version of Matlab (R2007b) with a
> > Fireface UC soundcard. If the problems are fixed in another version
> > or soundcard, I'd love to know about it.
> >
> > ===Matlab's native functions (sound, wavplay, audioplayer)
> > Large, unpredictable truncations were the least of our problems. We
> > also often got mid-sound glitches, ranging from sporadic (just a few
> > subtle glitches per minute) to frequent (making the sound barely
> > recognisable). The magic formula for eliminating the glitches seemed
> > to be to keep the soundcard turned off until the desktop was ready,
> > with all background programs loaded. (Restarting either the
> > soundcard or the computer alone guaranteed some glitches.) So this
> > formula seems to work, but it's a bit too Harry Potter for my
> > liking, and the spell might change with the next Windows update. I
> > think I read that Fireface were no longer supporting Microsoft's
> > vagaries, and they recommended using ASIO. I'm not sure if other
> > high-end soundcard manufacturers are any different. Since Matlab's
> > native functions don't support ASIO (unless the new versions do?), I
> > think we're forced to look at the ASIO options.
> >
> > ===playrec
> > This seems to be potentially the most flexible method of presenting
> > sounds, but I've hit a brick wall compiling it for Windows 7. I
> > think its author stopped providing support for it a few years ago.
> > Has anyone had more success than me?
> >
> > ===asioWavPlay
> > This simply presents a .wav file using ASIO. It's a little annoying
> > that you have to save your sound to disk before presenting it, but
> > as Joachim pointed out, it's not too difficult to automate this
> > process. While doing that, I add 256 samples of silence to the end
> > to work around the truncation problem.
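
A minimal sketch of that save-pad-play workflow, with x and fs as in
the earlier sketch; asioWavPlay is assumed here to take a filename,
and wavwrite is the R2007b-era writer (later Matlab releases replace
it with audiowrite):

    pad = zeros(256, size(x, 2));            % 256 samples of silence per channel
    wavwrite([x; pad], fs, 16, 'stim.wav');  % write the padded stimulus to disk
    asioWavPlay('stim.wav');                 % present the file via ASIO (call assumed)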
> >
> > ===pa_wavplay
> > This is nearly the perfect solution, except that (1) the number of
> > samples truncated from the end is slightly unpredictable and (2) it
> > prints a message on the screen after every sound ("Playing on
> > device 0"). For these two reasons, I prefer asioWavPlay.
> >
> > ===soundmexpro
> > This might be the best choice for the high-end user (I've just had a
> > quick look at the demo version today). It's easy to install and
> > there are good tutorials, but it involves initialising sound
> > objects, etc. -- it's not just a replacement for Matlab's "sound"
> > command. Also it looks like it's €500+.
> >
> > ===PsychToolbox
> > Originally designed for visual experiments, PsychToolbox has now got
> > quite extensive low-latency sound functions, including realtime
> > continuous playing/recording. It's also free. However, it's slightly
> > challenging to install. Like soundmexpro, it's object-oriented -- so
> > don't expect to play a sound with a simple one-liner.
> >
> > ===PortAudio
> > Most of the above programs are based on this C library. If you're an
> > experienced programmer, perhaps you'd prefer to go direct to the
> > source? And while you're there, perhaps you could write the perfect
> > Matlab-ASIO interface for the rest of us? (Please!)
> >
> > Has anyone found a simpler solution? I'd be glad to hear it.
> >
> > Trevor
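
To illustrate the "not a one-liner" point, a minimal PsychToolbox
playback sketch using its PsychPortAudio driver, again with x and fs
as above; the device, mode and latency-class arguments are
illustrative defaults rather than a recommended configuration:

    InitializePsychSound(1);                             % request low-latency mode
    pahandle = PsychPortAudio('Open', [], 1, 1, fs, 2);  % default device, playback only, stereo
    PsychPortAudio('FillBuffer', pahandle, [x x].');     % buffer rows are channels
    PsychPortAudio('Start', pahandle, 1, 0, 1);          % play once, wait until started
    PsychPortAudio('Stop', pahandle, 1);                 % wait for playback to end
    PsychPortAudio('Close', pahandle);                   % release the device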
>
> ------------------------------
>
> Date: Mon, 24 Oct 2011 14:06:26 +0100
> From: A Davidson <pspc1d@xxxxxxxx>
> Subject: question about streaming
>
> Hello everyone,
>
> I was wondering if anyone could point me in the direction of some
> clear and relatively simple tutorial information and/or good review
> papers about streaming and the problems of trying to discern between
> two auditory stimuli presented to two different ears concurrently.
>
> Many thanks,
>
> Alison
>
> ----------------------------------------------------------------
> This message was sent using IMP, the Internet Messaging Program.
>
> ------------------------------
>
> Date: Mon, 24 Oct 2011 17:37:39 +0100
> From: Etienne Gaudrain <egaudrain.cam@xxxxxxxx>
> Subject: Re: question about streaming
>
> Dear Alison,
>
> First, because it is so recent, a paper by Stainsby et al. on
> sequential streaming:
>
> Sequential streaming due to manipulation of interaural time differences.
> Stainsby TH, Fullgrabe C, Flanagan HJ, Waldman SK, Moore BC.
> J Acoust Soc Am. 2011 Aug;130(2):904-14.
> PMID: 21877805
>
> Otherwise, two papers that include a fairly comprehensive review of
> the literature:
>
> Spatial release from energetic and informational masking in a divided
> speech identification task.
> Ihlefeld A, Shinn-Cunningham B.
> J Acoust Soc Am. 2008 Jun;123(6):4380-92.
> PMID: 18537389
>
> Spatial release from energetic and informational masking in a
> selective speech identification task.
> Ihlefeld A, Shinn-Cunningham B.
> J Acoust Soc Am. 2008 Jun;123(6):4369-79.
> PMID: 18537388
>
> These are not review papers, but you might find what you're looking for.
>
> -Etienne
>
> On 24/10/2011 14:06, A Davidson wrote:
> > Hello everyone,
> >
> > I was wondering if anyone could point me in the direction of some
> > clear and relatively simple tutorial information and/or good review
> > papers about streaming and the problems of trying to discern between
> > two auditory stimuli presented to two different ears concurrently.
> >
> > Many thanks,
> >
> > Alison
>
> --
> Etienne Gaudrain, PhD
> MRC Cognition and Brain Sciences Unit
> 15 Chaucer Road
> Cambridge, CB2 7EF
> UK
> Phone: +44 1223 355 294, ext. 645
> Fax (unit): +44 1223 359 062
>
> ------------------------------
>
> Date: Mon, 24 Oct 2011 12:31:38 -0700
> From: Diana Deutsch <ddeutsch@xxxxxxxx>
> Subject: Re: question about streaming
>
> Hi Alison,
>
> You might want to read my review chapter:
>
> Deutsch, D. Grouping mechanisms in music. In D. Deutsch (Ed.), The
> psychology of music, 2nd Edition, 1999, 299-348, Academic Press.
> [PDF: http://philomel.com/pdf/PsychMus_Ch9.pdf]
>
> The book is going into a third edition, and the updated chapter
> should be available in a few months.
>
> Cheers,
>
> Diana Deutsch
>
> Professor Diana Deutsch
> Department of Psychology
> University of California, San Diego
> 9500 Gilman Dr. #0109
> La Jolla, CA 92093-0109, USA
>
> 858-453-1558 (tel)
> 858-453-4763 (fax)
>
> http://deutsch.ucsd.edu
> http://www.philomel.com
>
> On Oct 24, 2011, at 6:06 AM, A Davidson wrote:
>
> > Hello everyone,
> >
> > I was wondering if anyone could point me in the direction of some
> > clear and relatively simple tutorial information and/or good review
> > papers about streaming and the problems of trying to discern between
> > two auditory stimuli presented to two different ears concurrently.
> >
> > Many thanks,
> >
> > Alison
> >
> > ----------------------------------------------------------------
> > This message was sent using IMP, the Internet Messaging Program.
>
> ------------------------------
>
> Date: Tue, 25 Oct 2011 00:04:32 +0200
> From: Martin <m.cooke@xxxxxxxx>
> Subject: Workshop announcement: The Listening Talker
>
> The Listening Talker: an interdisciplinary workshop on natural and
> synthetic modification of speech in response to listening conditions
>
> Edinburgh, 2-3 May 2012
>
> http://listening-talker.org/workshop
>
> When talkers speak, they also listen. Talkers routinely adapt to
> their interlocutors and environment, maintaining intelligibility and
> dialogue fluidity in a way that promotes efficient exchange of
> information. In contrast, current speech output technology is largely
> deaf, incapable of adapting to the listener's context, inefficient in
> use and lacking the naturalness that comes from rapid appreciation of
> the speaker-listener environment. A key scientific challenge is to
> better understand how "talker-listeners" respond to context and to
> apply these findings to the modification of natural (live/recorded)
> and generated (synthetic) speech.
> The ISCA-supported Listening Talker (LISTA) workshop brings together
> linguists, psychologists, neuroscientists, engineers and others
> working on human and machine speech perception and production, to
> explore new approaches to context-sensitive speech generation.
>
> The workshop will be single-track, with invited talks and contributed
> oral and poster presentations. An open call for a special issue of
> Computer Speech and Language on the theme of the listening talker
> will follow the workshop.
>
> Contributions are invited on any aspect of the listening talker,
> including but not limited to:
>
> - theories and models of human communication involving the listening talker
> - human speech production modifications induced by noise
> - speech production changes with manipulated feedback
> - algorithms/vocoders for speech modification
> - transformations from casual to clear speech
> - characterisation of the listening context
> - intelligibility and quality metrics for modified speech
> - application to natural dialogues, PA, teleconferencing
>
> Invited speakers
>
>   Torsten Dau (Danish Technical University)
>   Valerie Hazan (University College, London)
>   Richard Heusdens (Technical University Delft)
>   Hideki Kawahara (Wakayama University)
>   Roger Moore (University of Sheffield)
>   Martin Pickering (University of Edinburgh)
>   Peter Vary (Aachen University)
>   Junichi Yamagishi (University of Edinburgh)
>
> Important dates
>
>   30th January 2012: Submission of 4-page papers
>   27th February 2012: Notification of acceptance/rejection
>
> Co-chairs
>
>   Martin Cooke (University of the Basque Country)
>   Simon King (University of Edinburgh)
>   Bastiaan Kleijn (Victoria University of Wellington)
>   Yannis Stylianou (University of Crete)
>
> ------------------------------
>
> End of AUDITORY Digest - 23 Oct 2011 to 24 Oct 2011 (#2011-246)
> ***************************************************************


This message came from the mail archive
/var/www/postings/2011/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University