[AUDITORY] Workshop announcement: Hierarchical Multisensory Integration: Theory and Experiments, Barcelona, Spain, June 18-19 (Paweł Kuśmierek)


Subject: [AUDITORY] Workshop announcement: Hierarchical Multisensory Integration: Theory and Experiments, Barcelona, Spain, June 18-19
From:    Paweł Kuśmierek <pawel.kusmierek@xxxxxxxx>
Date:    Wed, 15 Mar 2017 18:14:57 -0400
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

The ability to map sensory inputs to meaningful semantic labels, i.e., to recognize objects, is foundational to cognition, and the human brain excels at object recognition tasks across sensory domains. Examples include perceiving speech, reading written words, and even recognizing tactile Braille patterns. In each sensory modality, processing appears to be realized by multi-stage hierarchies in which tuning complexity grows gradually, from simple features in primary sensory areas to complex representations in higher-level areas that ultimately interface with task-related circuits in prefrontal/premotor cortices.

Crucially, real-world stimuli usually do not have sensory signatures in just one modality but activate representations in different sensory domains, and successfully integrating these different hierarchical representations appears to be of key importance for cognition. Prior theoretical work has mostly focused on multisensory integration at isolated processing stages, and the computational functions and benefits of *hierarchical* multisensory interactions are still unclear. For instance, what characteristics of the input determine at which levels of two linked sensory processing hierarchies cross-sensory integration occurs? Can these connections form through unsupervised learning, based solely on temporal coincidence? Which stages are connected? For instance, is there selective audio-visual integration only at a low level of the hierarchy, e.g., to enable letter-by-letter reading, or even earlier, at the level of primary sensory cortices, with multisensory selectivity at higher hierarchical levels then resulting from feedforward processing within each hierarchy? Or are there selective connections at multiple hierarchical levels? What are the computational advantages of different cross-sensory connection schemes?
What are the roles of "top-down" vs. "lateral" inputs in learning cross-hierarchical connections? What are computationally efficient ways to leverage prior learning from one modality in learning hierarchical representations in a new modality?

The workshop will gather a small group of experts to informally exchange the latest ideas and findings, both experimental and theoretical, in the field of multisensory integration. It will consist of two days packed with talks by invited speakers (see below) as well as discussions. There will also be a poster session. Researchers, postdocs, and graduate students interested in multisensory integration and hierarchical processing are all invited to apply.

For more information on the event, see http://eventum.upf.edu/event_detail/8963/sections/6797/event-details.html ; for registration information, see http://eventum.upf.edu/event_detail/8963/sections/6798/registration-information.html .

This event is jointly organized by the Center for Brain and Cognition at the Universitat Pompeu Fabra in Barcelona and Georgetown University, with funding from the U.S. National Science Foundation and the Spanish Ministry of Economy, Industry and Competitiveness.


This message came from the mail archive
../postings/2017/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University