The Interspeech conference has accepted our proposal for a special session on speech neuroscience. Here is the full title: "Neural Processing of Speech and Language: Encoding and Decoding the Diverse Auditory Brain".
I hope many of you will come to Dublin, Ireland, for this event on 20–24 August 2023.
- Paper submission deadline: 1st March (paper update deadline: 8th March);
- Please find the summary below;
- Link to the special session:
Giovanni, on behalf of the organisers (Alejandro, Mick, and Mounya)
This special session aims to serve as a central hub for researchers investigating how the human brain processes speech under various acoustic and linguistic conditions and across diverse populations. Understanding speech requires the brain to rapidly process a variety of acoustic and linguistic properties, with variability due to age, language proficiency, attention, and neurocognitive ability, among other factors. Until recently, neurophysiology research was limited to studying the encoding of individual linguistic units in isolation (e.g., syllables) using tightly controlled, uniform experiments far removed from realistic scenarios. Recent advances in modelling techniques have made it possible to study the neural processing of more ecologically valid stimuli, such as natural, conversational speech, enabling researchers to examine how factors such as native language, language proficiency, speaker sex, and age contribute to speech perception.
One approach, known as forward modelling, models how the brain encodes speech information as a function of certain parameters (e.g., time, frequency, brain region), contributing to our understanding of how the speech signal is transformed as it passes along the auditory pathway. This framework has been used to study both young and ageing populations, as well as neurocognitive deficits. Another approach, known as backward modelling, decodes speech features or other relevant parameters from the neural response recorded during natural listening tasks. A noteworthy contribution of this approach was the discovery that auditory attention can be reliably decoded from several seconds of non-invasive brain recordings (EEG/MEG) in multi-speaker environments, giving rise to a new subfield of auditory neuroscience focused on neuro-enabled hearing technology applications.
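To make the two frameworks concrete for newcomers, here is a minimal, hypothetical sketch (not the session's prescribed methodology): ridge regression fit in both directions on simulated data. A forward (encoding) model predicts each neural channel from a stimulus feature, and a backward (decoding) model reconstructs that feature from all channels. Real analyses typically include a range of time lags (temporal response functions); this toy example omits lags for brevity, and all variable names and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed, not real recordings): a 1-D "speech envelope" and a
# simulated multi-channel neural response that linearly encodes it plus noise.
n_samples, n_channels = 1000, 8
envelope = rng.standard_normal(n_samples)
true_weights = rng.standard_normal(n_channels)
neural = np.outer(envelope, true_weights) \
    + 0.1 * rng.standard_normal((n_samples, n_channels))

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Forward (encoding) model: predict every neural channel from the stimulus.
w_fwd = ridge_fit(envelope[:, None], neural)   # shape (1, n_channels)

# Backward (decoding) model: reconstruct the stimulus from all channels.
w_bwd = ridge_fit(neural, envelope)            # shape (n_channels,)

reconstructed = neural @ w_bwd
r = np.corrcoef(reconstructed, envelope)[0, 1]
print(f"decoding correlation: {r:.2f}")
```

The same fit-and-correlate logic underlies attention decoding: a backward model trained on responses to an attended talker reconstructs that talker's envelope better than a competing one.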
Eight submissions will be selected for podium presentations (12 min talk + 3 min questions). The other accepted submissions will be assigned to a poster session.
Giovanni Di Liberto, PhD
Assistant Professor in Intelligent Systems
School of Computer Science and Statistics
Discipline of Artificial Intelligence
Trinity College Dublin, the University of Dublin,
Dublin 2, Ireland
Coláiste na Tríonóide, Baile Átha Cliath, Ollscoil Átha Cliath, Baile Átha Cliath 2, Éire.