Re: [AUDITORY] Deadline extension Workshop on Socially Interactive Human-like Virtual Agents (SIVA'23) (Nicolas Obin)


Subject: Re: [AUDITORY] Deadline extension Workshop on Socially Interactive Human-like Virtual Agents (SIVA'23)
From:    Nicolas Obin  <Nicolas.Obin@xxxxxxxx>
Date:    Wed, 14 Sep 2022 14:33:56 +0200

[Apologies for cross-posting]

We are pleased to announce that the deadline for submission to the SIVA'23 workshop has been extended to: September 20, 2022

CALL FOR PAPERS: SIVA'23
Workshop on Socially Interactive Human-like Virtual Agents
From expressive and context-aware multimodal generation of digital humans to understanding the social cognition of real humans

Submission: https://cmt3.research.microsoft.com/SIVA2023
SIVA'23 workshop: January 4, 2023, Waikoloa, Hawaii, https://www.stms-lab.fr/agenda/siva/detail/
FG 2023 conference: January 4-8, 2023, Waikoloa, Hawaii, https://fg2023.ieee-biometrics.org/

IMPORTANT DATES

Submission deadline: September 20, 2022 (extended from September 12, 2022)
Notification of acceptance: October 15, 2022
Camera-ready deadline: October 31, 2022
Workshop: January 4, 2023

OVERVIEW

Due to the rapid growth of virtual, augmented, and hybrid reality, together with spectacular advances in artificial intelligence, the ultra-realistic generation and animation of digital humans with human-like behaviors is becoming a major topic of interest. This complex endeavor requires modeling several elements of human behavior: the natural coordination of multimodal behaviors across text, speech, face, and body, and the contextualization of behavior in response to interlocutors of different cultures and motivations. The challenges are thus twofold: generating and animating coherent multimodal behaviors, and modeling the expressivity and contextualization of the virtual agent with respect to human behavior, including understanding and modeling how a virtual agent adapts its behavior to increase human engagement.
The aim of this workshop is to connect traditionally distinct communities (e.g., speech, vision, cognitive neuroscience, social psychology) to elaborate on and discuss the future of human interaction with human-like virtual agents. We expect contributions from the fields of signal processing, speech and vision, machine learning and artificial intelligence, perceptual studies, and cognitive science and neuroscience. Topics will range from multimodal generative modeling of virtual agent behaviors, and speech-to-face and posture 2D and 3D animation, to original research topics including style, expressivity, and context-aware animation of virtual agents. Moreover, controllable real-time virtual agent models can serve as state-of-the-art experimental stimuli and confederates for designing novel, groundbreaking experiments that advance the understanding of social cognition in humans. Finally, these virtual humans can be used to create virtual environments for medical purposes, including rehabilitation and training.
SCOPE

Topics of interest include but are not limited to:

+ Analysis of Multimodal Human-like Behavior
- Analysis and understanding of human multimodal behavior (speech, gesture, face)
- Creating datasets for the study and modeling of human multimodal behavior
- Coordination and synchronization of human multimodal behavior
- Analysis of style and expressivity in human multimodal behavior
- Cultural variability of social multimodal behavior

+ Modeling and Generation of Multimodal Human-like Behavior
- Multimodal generation of human-like behavior (speech, gesture, face)
- Face and gesture generation driven by text and speech
- Context-aware generation of multimodal human-like behavior
- Modeling of style and expressivity for the generation of multimodal behavior
- Modeling paralinguistic cues for multimodal behavior generation
- Few-shot or zero-shot transfer of style and expressivity
- Weakly supervised adaptation of multimodal behavior to context

+ Psychology and Cognition of Multimodal Human-like Behavior
- Cognition of deepfakes and ultra-realistic digital manipulation of human-like behavior
- Social agents/robots as tools for capturing, measuring, and understanding multimodal behavior (speech, gesture, face)
- Neuroscience and social cognition of real humans using virtual agents and physical robots

VENUE

The SIVA workshop is organized as a satellite workshop of the IEEE International Conference on Automatic Face and Gesture Recognition (FG) 2023. The workshop will be co-located with the FG 2023 and WACV 2023 conferences at the Waikoloa Beach Marriott Resort, Hawaii, USA.

ADDITIONAL INFORMATION AND SUBMISSION DETAILS

Submissions must be original and not published or submitted elsewhere. Short papers of 3 pages excluding references are encouraged for early research in original emerging fields. Long papers of 6 to 8 pages excluding references are intended for strongly original contributions, position papers, or surveys.
Manuscripts should be formatted according to the Word or LaTeX template provided on the workshop website. All submissions will be reviewed by 3 reviewers. The reviewing process will be single-blind. Authors will be asked to disclose possible conflicts of interest, such as collaboration within the previous two years; in addition, care will be taken to avoid assigning reviewers from the same institution as the authors. Authors should submit their articles as a single PDF file on the submission website no later than September 20, 2022 (extended deadline). Notification of acceptance will be sent by October 15, 2022, and the camera-ready version of the papers, revised according to the reviewers' comments, should be submitted by October 31, 2022. Accepted papers will be published in the proceedings of the FG 2023 conference. More information can be found on the SIVA website.

DIVERSITY, EQUALITY, AND INCLUSION

The workshop will be held in a hybrid format, online and onsite. This format accommodates travel restrictions and COVID sanitary precautions, promotes inclusion in the research community (travel costs are high, and online presentations will encourage research contributions from geographical regions that would otherwise be excluded), and takes ecological issues (e.g., CO2 footprint) into account. The organizing committee is committed to equality, diversity, and inclusivity in its selection of invited speakers. This effort extends from the organizing committee and the invited speakers to the program committee.

ORGANIZING COMMITTEE
🌸 Nicolas Obin, STMS Lab (Ircam, CNRS, Sorbonne Université, ministère de la Culture)
🌸 Ryo Ishii, NTT Human Informatics Laboratories
🌸 Rachael E. Jack, University of Glasgow
🌸 Louis-Philippe Morency, Carnegie Mellon University
🌸 Catherine Pelachaud, CNRS - ISIR, Sorbonne Université


This message came from the mail archive
src/postings/2022/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University