[AUDITORY] Interspeech - Special session on Speech Neuroscience (Giovanni Di Liberto)


Subject: [AUDITORY] Interspeech - Special session on Speech Neuroscience
From:    Giovanni Di Liberto
Date:    Mon, 30 Jan 2023 18:30:15 +0000

Dear colleagues,

The Interspeech conference has accepted our proposal for a special session on speech neuroscience. Here is the full title: "Neural Processing of Speech and Language: Encoding and Decoding the Diverse Auditory Brain". I hope many of you will come to Dublin (Ireland) for this event on 20th-24th August 2023.

- Paper submission deadline: 1st March (paper update deadline: 8th March);
- Please find the summary below;
- Link to the special session: https://www.interspeech2023.org/special-sessions-challenges/#toggle-id-6

Thank you!
Kind regards,
Giovanni, on behalf of the organisers (Alejandro, Mick, and Mounya)

Summary:

This special session aims to serve as a central hub for researchers investigating how the human brain processes speech under various acoustic and linguistic conditions and in various populations. Understanding speech requires our brain to rapidly process a variety of acoustic and linguistic properties, with variability due to age, language proficiency, attention, and neurocognitive ability, among other factors. Until recently, neurophysiology research was limited to studying the encoding of individual linguistic units in isolation (e.g., syllables) using tightly controlled, uniform experiments that were far from realistic scenarios. Recent advances in modelling techniques have made it possible to study the neural processing of speech with more ecologically valid stimuli involving natural, conversational speech, enabling researchers to examine the contribution of factors such as native language, language proficiency, speaker sex, and age to speech perception.

One approach, known as forward modelling, involves modelling how the brain encodes speech information as a function of certain parameters (e.g., time, frequency, brain region), contributing to our understanding of what happens to the speech signal as it passes along the auditory pathway. This framework has been used to study both young and ageing populations, as well as neurocognitive deficits. Another approach, known as backward modelling, involves decoding speech features or other relevant parameters from the neural response recorded during natural listening tasks. A noteworthy contribution of this approach was the discovery that auditory attention can be reliably decoded from several seconds of non-invasive brain recordings (EEG/MEG) in multi-speaker environments, leading to a new subfield of auditory neuroscience focused on neuro-enabled hearing technology applications.
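To make the two frameworks concrete, here is a minimal, self-contained sketch of both approaches using time-lagged ridge regression, the kind of linear model commonly used in this literature (e.g., temporal response function analyses). The data, variable names, sampling rate, and lag window below are illustrative assumptions, not part of the session materials: random placeholders stand in for a speech envelope and a 64-channel EEG recording.

# Illustrative sketch (Python/NumPy) of forward (encoding) and backward
# (decoding) linear modelling with time-lagged ridge regression.
# All data here are random placeholders, not real recordings.
import numpy as np

rng = np.random.default_rng(0)
fs = 64                                    # sampling rate in Hz (hypothetical)
n_samples, n_channels = fs * 60, 64        # one minute of 64-channel "EEG"
envelope = rng.standard_normal(n_samples)            # speech feature (stimulus)
eeg = rng.standard_normal((n_samples, n_channels))   # neural response

def lagged(x, lags):
    """Stack time-lagged copies of x (n_samples,) into (n_samples, n_lags)."""
    X = np.zeros((len(x), len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = x[:len(x) - lag]
        else:
            X[:lag, j] = x[-lag:]
    return X

def ridge_fit(X, Y, lam=1.0):
    """Solve (X'X + lam*I) W = X'Y for the ridge regression weights W."""
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ Y)

# Forward model: predict each EEG channel from the stimulus at lags 0-250 ms.
enc_lags = list(range(0, int(0.25 * fs) + 1))
X_enc = lagged(envelope, enc_lags)
W_enc = ridge_fit(X_enc, eeg)              # (n_lags, n_channels) encoding weights

# Backward model: reconstruct the stimulus from all EEG channels at lags
# -250-0 ms (the decoder looks at neural activity *after* the stimulus).
dec_lags = [-l for l in enc_lags]
X_dec = np.hstack([lagged(eeg[:, c], dec_lags) for c in range(n_channels)])

# Fit on the first half and evaluate on the held-out second half, since an
# in-sample fit with this many predictors would be dominated by overfitting.
half = n_samples // 2
w_dec = ridge_fit(X_dec[:half], envelope[:half, None])
reconstruction = X_dec[half:] @ w_dec
r = np.corrcoef(reconstruction.ravel(), envelope[half:])[0, 1]
print(f"held-out reconstruction correlation (random data, so ~0): {r:.3f}")

With real recordings, W_enc would be read as impulse-response-like weights describing how the envelope maps onto each channel over the lag window, and the decoder's held-out correlation, computed against each talker's envelope over a window of a few seconds, is the quantity behind the attention-decoding results mentioned above.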
Other details:

Eight submissions will be selected for podium presentations (12 min talk + 3 min questions). The other accepted submissions will be assigned to a poster session.

--
Giovanni Di Liberto, PhD
Assistant Professor in Intelligent Systems
School of Computer Science and Statistics
Discipline of Artificial Intelligence
Trinity College Dublin, the University of Dublin,
Dublin 2, Ireland
diliberg@xxxxxxxx
diliberg.net/ <https://www.diliberg.net/index.html>
------------------------
Coláiste na Tríonóide, Baile Átha Cliath, Ollscoil Átha Cliath, Baile Átha Cliath 2, Éire.
Show your support at: tcd.ie/campaign

