[AUDITORY] Seminar Announcement - May 10 - E.A.R.S. (Electronic Auditory Research Seminars) (Vogler, Nathan)


Subject: [AUDITORY] Seminar Announcement - May 10 - E.A.R.S. (Electronic Auditory Research Seminars)
From:    "Vogler, Nathan"  <Nathan.Vogler@xxxxxxxx>
Date:    Tue, 3 May 2022 13:58:41 +0000

Dear fellow neuroscientists,

We would like to invite you to join us on Tuesday, May 10 at 1:00 pm EDT (UTC-4) for the next edition of E.A.R.S. (Electronic Auditory Research Seminars), a monthly auditory seminar series focused on central auditory processing and circuits. Please pre-register (for free) and tune in via Crowdcast (enter your email to receive the link for the talk): https://www.crowdcast.io/e/ears/18

(Note: for optimal performance, we recommend using Google Chrome as your browser.)

Speakers:

* Diego Elgueda (University of Chile): "Sound and behavioral meaning encoding in the auditory cortex"

  Animals adapt to their environment by analyzing sensory information, integrating it with internal representations (such as behavioral goals, memories of past stimulus-event associations, and expectations), and linking perception with appropriate adaptive responses. The mechanisms by which the brain integrates acoustic feature information with these internal representations are not yet clear. We are interested in understanding how auditory representations are transformed across the areas of the auditory cortex and how these areas interact with higher-order association areas of the cerebral cortex. We have shown that neurons in non-primary areas of the ferret auditory cortex, while responsive to auditory stimuli, can greatly enhance their responses to sounds when these become behaviorally relevant to the animal. Interestingly, tertiary area VPr can display responses that share similarities with those previously shown in ferret frontal cortex, in which attended sounds are selectively enhanced during performance of auditory tasks, and can also show long, sustained short-term memory activity after stimulus offset, which correlates with the task response timing. To expand on these findings, we are currently training rats in a 2AFC task so that we can record from primary and non-primary areas of the auditory cortex, as well as from medial prefrontal cortex, and explore how these areas represent sounds and interact during selective attention and decision-making.

* Narayan Sankaran (University of California, San Francisco): "Intracranial recordings reveal the encoding of melody in the human superior temporal gyrus"

  With cultural exposure across our lives, humans experience sequences of pitches as melodies that convey emotion and meaning. The perception of melody operates along three fundamental dimensions: (1) the pitch of each note, (2) the intervals in pitch between adjacent notes, and (3) how expected each note is within its musical context. To date, it is unclear how these dimensions are collectively represented in the brain and whether their encoding is specialized for music. I'll present recent work in which we used high-density electrocorticography to record local population activity directly from the human brain while participants listened to continuous Western melodies. Across the superior temporal gyrus (STG), separate populations selectively encoded pitch, intervals, and expectations, demonstrating a spatial code for independently representing each melodic dimension. The same participants also listened to naturally spoken English sentences. Whereas previous work suggests cortical selectivity for broad sound categories like 'music', here we demonstrate that music-selectivity is systematically driven by the encoding of expectations, suggesting neural specialization for representing a specific sequence property of music. In contrast, the pitch and interval dimensions of melody were represented by neural populations that also responded to speech and encoded similar acoustic content across the two domains. Melodic perception thus arises from the extraction of multiple streams of statistical and acoustic information via specialized and domain-general mechanisms, respectively, within distinct sub-populations of higher-order auditory cortex.

With kind wishes,

Maria Geffen
Yale Cohen
Steve Eliades
Stephen David
Alexandria Lesicko
Nathan Vogler
Jean-Hugues Lestang
Huaizhen Cai

