DOI: 10.1007/978-3-642-34014-7_9
Article

Recognizing the visual focus of attention for human robot interaction

Published: 07 October 2012

Abstract

We address the recognition of people's visual focus of attention (VFOA), the discrete version of gaze that indicates who is looking at whom or what. As a good indicator of addressee-hood (who speaks to whom, and in particular whether a person is speaking to the robot) and of people's interest, the VFOA is an important cue for supporting dialog modelling in human-robot interactions involving multiple persons. In the absence of high-definition images, we rely on people's head pose to recognize the VFOA. Rather than assuming a fixed mapping between head pose directions and gaze target directions, we investigate models that perform a dynamic (temporal) mapping, implicitly accounting for the varying body/shoulder orientation of a person over time, as well as unsupervised adaptation. Evaluated on a public dataset and on data recorded with the humanoid robot Nao, the method exhibits better adaptivity and versatility, producing performance equal to or better than that of a state-of-the-art approach, although the proposed unsupervised adaptation does not improve the results.
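
To make the head-pose-to-VFOA mapping concrete, the sketch below implements a deliberately simple baseline of the kind the paper improves upon: each candidate target is given a nominal gaze direction, the head is assumed to cover a fixed fraction of the gaze shift, and per-frame decisions are smoothed with a first-order HMM decoded by Viterbi. The target list, the KAPPA head/gaze ratio, the noise level SIGMA, and the transition probability P_STAY are illustrative assumptions, not values from the paper; the paper's model instead learns a dynamic (temporal) mapping that follows a person's body/shoulder orientation over time.

# Illustrative sketch only: a minimal head-pose -> VFOA baseline with a fixed
# linear head/gaze mapping and HMM (Viterbi) temporal smoothing. All constants
# below are made-up assumptions, not taken from the paper.
import numpy as np

# Candidate VFOA targets with nominal gaze directions (pan, tilt) in degrees,
# relative to the camera/robot. These directions are hypothetical.
TARGETS = {
    "robot":        np.array([0.0, 0.0]),
    "left_person":  np.array([-40.0, 0.0]),
    "right_person": np.array([35.0, -5.0]),
    "table":        np.array([0.0, -30.0]),
}

KAPPA = 0.7    # assumed head/gaze ratio: the head covers ~70% of the gaze shift
SIGMA = 12.0   # assumed observation noise (degrees) on the measured head pose
P_STAY = 0.9   # assumed probability of keeping the same focus between frames


def vfoa_viterbi(head_poses):
    """Decode the most likely VFOA target sequence from head poses (T x 2 array)."""
    names = list(TARGETS)
    # Expected head pose when looking at each target, under the fixed mapping.
    means = np.stack([KAPPA * TARGETS[n] for n in names])          # (K, 2)
    K, T = len(names), len(head_poses)

    # Per-frame log-likelihood of each target (isotropic Gaussian on head pose).
    d2 = ((head_poses[:, None, :] - means[None]) ** 2).sum(-1)     # (T, K)
    log_obs = -d2 / (2 * SIGMA ** 2)

    log_trans = np.full((K, K), np.log((1 - P_STAY) / (K - 1)))
    np.fill_diagonal(log_trans, np.log(P_STAY))

    # Standard Viterbi recursion with backpointers.
    delta = log_obs[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans   # scores[i, j]: previous i -> current j
        back[t] = scores.argmax(0)
        delta = scores.max(0) + log_obs[t]

    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [names[k] for k in reversed(path)]


if __name__ == "__main__":
    # Synthetic head-pose track: three frames toward each of three targets.
    poses = np.array([[2, -1], [1, 0], [0, 1],
                      [-25, 2], [-27, 1], [-26, 0],
                      [22, -4], [23, -3]], dtype=float)
    print(vfoa_viterbi(poses))
    # -> ['robot', 'robot', 'robot', 'left_person', 'left_person',
    #     'left_person', 'right_person', 'right_person']

Replacing the fixed KAPPA mapping with a per-person, time-varying mapping tied to body/shoulder orientation, plus unsupervised adaptation of its parameters, is the direction the paper investigates.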



Information

Published In

HBU'12: Proceedings of the Third International Conference on Human Behavior Understanding
October 2012
173 pages
ISBN: 978-3-642-34013-0
  • Editors: Albert Ali Salah, Javier Ruiz-del-Solar, Çetin Meriçli, Pierre-Yves Oudeyer

Sponsors

  • EuCognition network
  • INRIA: Institut National de Recherche en Informatique et en Automatique
  • Boğaziçi University Foundation

Publisher

Springer-Verlag

Berlin, Heidelberg

Publication History

Published: 07 October 2012

Author Tags

  1. gaze
  2. head pose
  3. human robot interaction
  4. visual focus of attention

Qualifiers

  • Article


Bibliometrics & Citations


Article Metrics

  • Downloads (Last 12 months): 0
  • Downloads (Last 6 weeks): 0
Reflects downloads up to 24 Nov 2024


Cited By

  • (2022) Multiparty Interaction Between Humans and Socially Interactive Agents. The Handbook on Socially Interactive Agents, 10.1145/3563659.3563665, pp. 113-154. Online publication date: 27-Oct-2022
  • (2022) The Handbook on Socially Interactive Agents. Online publication date: 27-Oct-2022
  • (2020) Bodily Expression of Social Initiation Behaviors in ASC and non-ASC children: Mixed Reality vs. LEGO Game Play. Companion Publication of the 2020 International Conference on Multimodal Interaction, 10.1145/3395035.3425188, pp. 140-149. Online publication date: 25-Oct-2020
  • (2018) Multiple-Gaze Geometry: Inferring Novel 3D Locations from Gazes Observed in Monocular Video. Computer Vision – ECCV 2018, 10.1007/978-3-030-01225-0_38, pp. 641-659. Online publication date: 8-Sep-2018
  • (2017) How to Open an Interaction Between Robot and Museum Visitor? Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 10.1145/2909824.3020219, pp. 187-195. Online publication date: 6-Mar-2017
  • (2016) Using Random Forests for the Estimation of Multiple Users’ Visual Focus of Attention from Head Pose. AI*IA 2016 Advances in Artificial Intelligence, 10.1007/978-3-319-49130-1_8, pp. 89-102. Online publication date: 29-Nov-2016
  • (2012) Human behavior understanding for robotics. Proceedings of the Third International Conference on Human Behavior Understanding, 10.1007/978-3-642-34014-7_1, pp. 1-16. Online publication date: 7-Oct-2012
