[AUDITORY] CFP – Special Issue in Neurorobotics - Active Vision and Perception in Human-Robot Collaboration (Letizia Marchegiani)


Subject: [AUDITORY] CFP – Special Issue in Neurorobotics - Active Vision and Perception in Human-Robot Collaboration
From:    Letizia Marchegiani  <m.letizia.marchegiani@xxxxxxxx>
Date:    Tue, 9 Jun 2020 19:00:34 +0200

CFP – Special Issue in Neurorobotics - Active Vision and Perception in
Human-Robot Collaboration

Link: https://www.frontiersin.org/research-topics/13958/active-vision-and-perception-in-human-robot-collaboration

---------------------------------------------------------------------------

Scope: Humans naturally interact and collaborate in unstructured social
environments, which produce an overwhelming amount of information and may
nevertheless hide behaviourally relevant variables. Uncovering the
underlying design principles that allow humans to adaptively find and
select relevant information (e.g. effectors, affordances) is important for
Robotics, but also for other fields such as Computational Neuroscience and
Interaction Design.

Current solutions cover specific tasks, e.g. autonomous cars, and usually
employ over-redundant, expensive, and computationally demanding sensory
systems that attempt to cover the wide range of sensing conditions the
systems may have to deal with. Adaptive control of the sensors and of the
perception process is a key solution found by nature to cope with such
problems, as shown by the foveal anatomy of the eye and its high mobility.

Alongside this interest in “active” vision, collaborative robotics has
recently progressed to human-robot interaction in real manufacturing
settings. Measuring and modelling task-specific gaze behaviours appears to
be essential for smooth human-robot interaction. Indeed, anticipatory
control for human-in-the-loop architectures, which can enable robots to
proactively collaborate with humans, relies heavily on observing the gaze
and action patterns of the human partner.

We are interested in manuscripts that present novel computational and
robotic models, theories, and experimental results, as well as reviews,
relevant to understanding how humans actively control their perception
during social interaction, under which conditions they fail, and how these
insights may enable natural interaction between humans and artificial
systems in non-trivial conditions.
Topics of interest include (but are not limited to):

    Active perception for intention and action prediction
    Activity and action recognition in the wild
    Active perception for social interaction
    Human-robot collaboration in unstructured environments
    Human-robot collaboration in the presence of sensory limits
    Joint human-robot search and exploration
    Testing setups for social perception in real or virtual environments
    Setups for transferring active perception skills from humans to robots
    Machine learning methods for active social perception
    Benchmarking and quantitative evaluation with human subject experiments
    Gaze-based factors for intuitive human-robot collaboration
    Active perception modelling for social interaction and collaboration
    Head-mounted eye tracking and gaze estimation during social interaction
    Estimation and guidance of partner situation awareness and attentional
    state in human-robot collaboration
    Multimodal social perception
    Adaptive social perception
    Egocentric vision in social interaction
    Explicit and implicit sensorimotor communication
    Social attention
    Natural human-robot (machine) interaction
    Collaborative exploration
    Joint attention
    Multimodal social attention
    Attentive activity recognition
    Belief and mental state attribution in robots

Keywords: Active Vision, Social Perception, Intention Prediction,
Egocentric Vision, Natural Human-Robot Interaction

---------------------------------------------------------------------------

** Important Dates **:

Abstract Submission Deadline: 18/07/2020
Full Paper Submission Deadline: 31/10/2020

Following the general publication policy of the journal, papers are
published shortly after acceptance and become available online
independently of the submission deadlines.

---------------------------------------------------------------------------

** Topic Editors **:

Dimitri Ognibene, University of Essex & Milano-Bicocca University: dimitri.ognibene@xxxxxxxx
Tom Foulsham, University of Essex: foulsham@xxxxxxxx
Giovanni Maria Farinella, University of Catania: gfarinella@xxxxxxxx
Fiora Pirri, Sapienza University of Rome: fiora.pirri@xxxxxxxx
Letizia Marchegiani, Aalborg University: lm@xxxxxxxx

---------------------------------------------------------------------------

** Joint Events **:

AVHRC 2020 - Active Vision and Perception in Human(-Robot) Collaboration,
a workshop within the 29th IEEE International Conference on Robot and
Human Interactive Communication (RO-MAN), Naples, Italy, August 31 to
September 4, 2020.

https://www.essex.ac.uk/departments/computer-science-and-electronic-engineering/events/avhrc-2020

Selected papers from the workshop will be published with a discounted fee.
A best paper award will be announced, carrying a full publication fee
waiver.

