DOI: 10.1145/2522848.2532194
poster

Controllable models of gaze behavior for virtual agents and humanlike robots

Published: 09 December 2013

Abstract

Embodied social agents, through their ability to afford embodied interaction using nonverbal human communicative cues, hold great promise in application areas such as education, training, rehabilitation, and collaborative work. Gaze cues are particularly important for achieving significant social and communicative goals. In this research, I explore how agents (both virtual agents and humanlike robots) might achieve such goals through the use of various gaze mechanisms. To this end, I am developing computational control models of gaze behavior that treat gaze as the output of a system with a number of multimodal inputs. These inputs can be characterized at different levels of interaction, from non-interactive (e.g., physical characteristics of the agent itself) to fully interactive (e.g., speech and gaze behavior of a human interlocutor). This research will result in a number of control models that each focus on a different gaze mechanism, combined into an open-source library of gaze behaviors that will be usable by both human-robot and human-virtual agent interaction designers. System-level evaluations in naturalistic settings will validate this gaze library for its ability to evoke positive social and cognitive responses in human users.
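The abstract's framing of gaze as the output of a system with non-interactive and fully interactive multimodal inputs can be sketched as a minimal interface. This is a hypothetical illustration, not the paper's actual library: all class names (`AgentTraits`, `InterlocutorState`, `GazeController`) and the toy target-selection policy are assumptions made for the sake of the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class AgentTraits:
    """Non-interactive inputs: physical characteristics of the agent."""
    eye_mobility: float = 1.0   # fraction of full eye rotation range, 0..1
    head_latency: float = 0.1   # head-movement onset delay in seconds

@dataclass
class InterlocutorState:
    """Fully interactive inputs: speech and gaze of the human partner."""
    is_speaking: bool = False
    is_gazing_at_agent: bool = False

@dataclass
class GazeController:
    """Maps multimodal inputs to a symbolic gaze target."""
    traits: AgentTraits = field(default_factory=AgentTraits)

    def select_target(self, other: InterlocutorState) -> str:
        # Toy policy: avert gaze under sustained mutual attention while
        # the partner speaks (one role of conversational gaze aversion),
        # otherwise reciprocate or idle.
        if other.is_speaking and other.is_gazing_at_agent:
            return "avert"
        if other.is_speaking:
            return "face"
        return "mutual" if other.is_gazing_at_agent else "idle"

ctrl = GazeController()
state = InterlocutorState(is_speaking=True, is_gazing_at_agent=True)
print(ctrl.select_target(state))  # → avert
```

A real controller of this kind would emit continuous head and eye rotations rather than symbolic targets, but the separation of input levels shown here mirrors the system framing described above.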


Cited By

  • (2021) Examining the Use of Nonverbal Communication in Virtual Agents. International Journal of Human–Computer Interaction, pp. 1–26. Online publication date: 28 March 2021. DOI: 10.1080/10447318.2021.1898851


Published In

ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction
December 2013
630 pages
ISBN:9781450321297
DOI:10.1145/2522848
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. control models
  2. gaze behavior
  3. humanlike robots
  4. multimodal interfaces
  5. nonverbal communication
  6. virtual agents

Qualifiers

  • Poster

Conference

ICMI '13

Acceptance Rates

ICMI '13 paper acceptance rate: 49 of 133 submissions (37%)
Overall acceptance rate: 453 of 1,080 submissions (42%)


Bibliometrics

Article Metrics

  • Downloads (last 12 months): 7
  • Downloads (last 6 weeks): 1
Reflects downloads up to 24 November 2024

