[AUDITORY] Fwd: Re: AUDITORY Digest - 13 Nov 2020 to 14 Nov 2020 (#2020-301) (Emily Coffey)


Subject: [AUDITORY] Fwd: Re: AUDITORY Digest - 13 Nov 2020 to 14 Nov 2020 (#2020-301)
From:    Emily Coffey  <emily@xxxxxxxx>
Date:    Tue, 17 Nov 2020 15:10:04 -0500

Hi Patricia,

We've developed a freely available Music-in-Noise Task (MINT), which correlates quite well with speech-in-noise performance and also allows you to separately look at the benefits of additional spatial, visual, and predictive information. We published results on a largish sample last year, with relationships to HINT, auditory working memory, fine pitch discrimination, musicianship, and language, here:

https://www.frontiersin.org/articles/10.3389/fnins.2019.00199/full

Unfortunately, our online version died. You can find the pre-prepared stimuli as sound and video files on our website if you want to run them separately:

https://www.coffeylab.ca/open-science/

We also have a PsychoPy implementation for general use, which you can run locally, though PsychoPy had a problem with presenting videos that meant it was not fully working on some systems as of a year ago (it is on the website too if you'd like to try). We also have a MATLAB implementation that some collaborators created for another project, which should be available for use if that serves your purposes.

Let me know if you'd like to use one of these.

Best regards,

Emily Coffey

------------------------------------------------------------------------------------

On 12/11/2020 15.27, Patricia Bestelmeyer wrote:

Dear List,

Has anyone set up, or does anyone know of, a speech-in-noise test that runs online and gives the participant a score? Preferably calibrated?

It is for a pedagogical exercise only (undergraduate project students). Perhaps someone has set up an online SIN experiment that we could help with by collecting data? (We would need the data and some methodological detail so the students can write a methods section.)

I know there are some commercial ones out there that I am not crazy about.

Patricia

-------------------------------------------
Patricia E. G. Bestelmeyer, PhD
School of Psychology
Bangor University
Brigantia Building
LL57 2AS, Bangor, Gwynedd, UK
Phone: (+44) 01248 383488


This message came from the mail archive
src/postings/2020/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University