[AUDITORY] Auditory Model Framework (and request for Klemm 1920 paper) (Mathias Dietz )


Subject: [AUDITORY] Auditory Model Framework  (and request for Klemm 1920 paper)
From:    Mathias Dietz  <mdietz@xxxxxxxx>
Date:    Wed, 6 Dec 2017 14:59:52 +0000
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Dear List,

At ARO 2016, about 50 auditory model developers and model users met for a discussion on how to improve the applicability and comparability of auditory models. A team of 8 modelers from 7 labs moved on to develop an interface connecting models with experiments, and a software framework for this purpose, in Matlab/Octave and in Python. In brief, it operates through file I/O: the experiment sends a *.wav file to the model, and the model sends a response file back to the experiment in the same format you would obtain from a real experiment (e.g. a spike train or the selection of a target interval). The framework is compatible with existing toolboxes such as AFC or the Auditory Model Toolbox.

Throughout the process we had a certain focus on binaural hearing, but the final product should be equally useful for other models and experiments.

We now have a first version online and hope someone finds it useful:

https://github.com/model-initiative

It is ideal if you want to compare several models on the same experiment, or if you have models and experiments in different programming languages. It is particularly useful for modeling AFC-type experiments.

More information can be found in the readme.pdf on GitHub and in the Hearing Research paper we just published on the subject. Again there is a binaural focus, especially in Sec. 2 (a review of binaural models), but the framework described in Sec. 6 is fairly universal:

http://www.sciencedirect.com/science/article/pii/S0378595517302605

If you do not have access, you can find a pre-print here:

http://neural-reckoning.org/pub_framework_comparing_binaural_models.html

If you want to give the framework a try, I recommend the quick_example.txt instructions.

We welcome feedback, beta testers, and anyone who wants to use or improve the framework.
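To make the file-I/O contract concrete, here is a minimal toy sketch in Python. All file names and the trivial decision rule are hypothetical illustrations, not the framework's actual conventions (those are documented in the readme.pdf on GitHub); the point is only the round trip: experiment writes a *.wav stimulus, model reads it and writes back a response file of the kind a real experiment would produce.

```python
import math
import os
import struct
import tempfile
import wave

def write_tone(path, freq_hz=500.0, dur_s=0.1, rate=16000):
    """'Experiment' side: write a mono 16-bit sine tone as the stimulus."""
    n = int(dur_s * rate)
    frames = b"".join(
        struct.pack("<h", int(10000 * math.sin(2 * math.pi * freq_hz * i / rate)))
        for i in range(n)
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(frames)

def toy_model(stimulus_wav, response_txt):
    """'Model' side: read the stimulus and emit a response file in the
    format an AFC experiment expects (here: a chosen interval index)."""
    with wave.open(stimulus_wav, "rb") as w:
        n_frames = w.getnframes()
    # Placeholder decision rule, for illustration only.
    chosen_interval = 1 if n_frames > 0 else 0
    with open(response_txt, "w") as f:
        f.write(str(chosen_interval))

tmp = tempfile.mkdtemp()
stim = os.path.join(tmp, "stimulus.wav")
resp = os.path.join(tmp, "response.txt")
write_tone(stim)       # experiment -> model
toy_model(stim, resp)  # model -> experiment
with open(resp) as f:
    print(f.read())    # prints "1"
```

Because the coupling is only through files, the experiment and the model can live in different languages or processes, which is what makes the framework compatible with existing toolboxes such as AFC.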
On a different topic, I am looking for a historic paper:

Klemm, O. (1920). Über den Einfluss des binauralen Zeitunterschiedes auf die Lokalisation. Arch. ges. Psychol., 40, 117-146.

Best,
Mathias

************************************************
Mathias Dietz
Canada Research Chair in Binaural Hearing
Associate Professor
National Centre for Audiology
School of Communication Sciences & Disorders
Faculty of Health Sciences
1201 Western Road, Elborn College Room 2262F
Western University
London, Ontario CANADA N6G 1H1
T 519-661-2111 Ext 88258
e-mail mdietz@xxxxxxxx
http://www.uwo.ca/fhs/csd/people/faculty/dietz_m.html


This message came from the mail archive
../postings/2017/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University