Dear fellow neuroscientists,
We would like to invite you to join us on Tuesday, May 10 at 1:00 pm EDT (UTC-4) for the next edition of E.A.R.S. (Electronic Auditory Research Seminars), a monthly auditory seminar series focusing on central auditory processing and circuits. Please pre-register (for free) and tune in via Crowdcast (enter your email to receive the link for the talk): https://www.crowdcast.io/e/ears/18
(Note: for optimal performance, we recommend using Google Chrome as your browser).
Speakers:
- Diego Elgueda (University of Chile): “Sound and behavioral meaning encoding in the auditory cortex”
- Animals adapt to their environment by analyzing sensory information and integrating it with internal representations (such as behavioral goals, memories of past stimulus-event associations and
expectations) and linking perception with appropriate adaptive responses. The mechanisms by which the brain integrates acoustic feature information with these internal representations are not yet clear. We are interested in understanding how auditory representations
are transformed in the areas of the auditory cortex and how these areas interact with higher-order association areas of the cerebral cortex. We have shown that neurons in non-primary areas in the auditory cortex of the ferret, while responsive to auditory
stimuli, can greatly enhance their responses to sounds when these become behaviorally relevant to the animal. Interestingly, tertiary area VPr can display responses similar to those previously shown in ferret frontal cortex, in which attended sounds are selectively enhanced during performance of auditory tasks, along with sustained short-term memory activity after stimulus offset that correlates with the timing of the task response. To expand on these findings, we are currently training rats on a 2AFC task so that we can record from primary and non-primary areas of the auditory cortex, as well as from medial prefrontal cortex, and explore how these areas represent sounds and interact during selective attention and decision-making.
- Narayan Sankaran (University of California San Francisco): “Intracranial recordings reveal the encoding of melody in the human superior temporal gyrus”
- With cultural exposure across our lives, humans experience sequences of pitches as melodies that convey emotion and meaning. The perception of melody operates along three fundamental dimensions:
(1) the pitch of each note, (2) the intervals in pitch between adjacent notes, and (3) how expected each note is within its musical context. To date, it is unclear how these dimensions are collectively represented in the brain and whether their encoding is
specialized for music. I’ll present recent work in which we used high-density electrocorticography to record local population activity directly from the human brain while participants listened to continuous Western melodies. Across the superior temporal gyrus
(STG), separate populations selectively encoded pitch, intervals, and expectations, demonstrating a spatial code for independently representing each melodic dimension. The same participants also listened to naturally spoken English sentences. Whereas previous
work suggests cortical selectivity for broad sound categories like ‘music’, here we demonstrate that music selectivity is systematically driven by the encoding of expectations, suggesting neural specialization for representing a specific sequence property
of music. In contrast, the pitch and interval dimensions of melody were represented by neural populations that also responded to speech and encoded similar acoustic content across the two domains. Melodic perception thus arises from the extraction of multiple
streams of statistical and acoustic information via specialized and domain-general mechanisms, respectively, within distinct sub-populations of higher-order auditory cortex.
With kind wishes,
Maria Geffen
Yale Cohen
Steve Eliades
Stephen David
Alexandria Lesicko
Nathan Vogler
Jean-Hugues Lestang
Huaizhen Cai