A Multimodal Real-Time Platform for Studying Human-Avatar Interactions

  • Conference paper
Intelligent Virtual Agents (IVA 2010)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6356)

Abstract

A better understanding of the human user’s expectations of, and sensitivities to, the real-time behavior generated by virtual agents can provide insightful empirical data and yield useful principles to guide the design of intelligent virtual agents. In light of this, we propose and implement a research framework for systematically studying and evaluating important aspects of multimodal real-time interactions between humans and virtual agents. Our platform allows the virtual agent to track the user’s gaze and hand movements in real time and to adjust its own behaviors accordingly. Multimodal data streams are collected during human-avatar interactions, including speech, eye gaze, and hand and head movements from both the human user and the virtual agent; these streams are then used to discover fine-grained behavioral patterns in human-agent interactions. We present a pilot study based on the proposed framework as an example of the kinds of research questions that can be rigorously addressed. This first study, which investigates human-agent joint attention, reveals promising results about the role and functioning of joint attention in human-avatar interactions.
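The abstract describes the platform only at a high level; the paper does not publish its control loop here. As a loose illustration of the kind of gaze-contingent logic such a platform requires, the following minimal Python sketch maps raw eye-gaze samples to the nearest known object and, after a short dwell, has the agent orient to the same object, a crude form of the gaze-following joint attention the study investigates. All names, coordinates, and thresholds below are hypothetical assumptions for illustration, not taken from the paper.

```python
import time
from dataclasses import dataclass

# Hypothetical object locations in screen coordinates (not from the paper).
OBJECTS = {"red_block": (120, 340), "blue_cup": (420, 310), "green_toy": (700, 355)}
GAZE_RADIUS = 60   # px: how close a gaze sample must be to count as "on" an object
DWELL_TIME = 0.3   # s: sustained attention required before the agent reacts

@dataclass
class Sample:
    t: float       # timestamp in seconds
    x: float       # horizontal gaze position (px)
    y: float       # vertical gaze position (px)

def target_of(sample):
    """Map a raw gaze sample to the nearest object, or None if nothing is close."""
    best, best_d2 = None, GAZE_RADIUS ** 2
    for name, (ox, oy) in OBJECTS.items():
        d2 = (sample.x - ox) ** 2 + (sample.y - oy) ** 2
        if d2 <= best_d2:
            best, best_d2 = name, d2
    return best

def follow_gaze(samples, act):
    """Emit an agent gaze-following action once the user dwells on one object."""
    current, since = None, None
    for s in samples:
        tgt = target_of(s)
        if tgt != current:                # attention shifted; restart the dwell clock
            current, since = tgt, s.t
        elif tgt is not None and s.t - since >= DWELL_TIME:
            act(tgt)                      # joint attention: agent looks at the same object
            current, since = None, None   # reset until the next fixation

if __name__ == "__main__":
    # Simulated 30 Hz gaze stream fixating near the blue cup.
    stream = [Sample(t=i / 30.0, x=425.0, y=308.0) for i in range(20)]
    follow_gaze(stream, act=lambda obj: print(f"agent gazes at {obj}"))
```

In a live system the simulated stream would be replaced by eye-tracker and hand-tracker feeds, and the dwell threshold tuned empirically; the point of the sketch is only the shape of the sense-decide-act loop that makes real-time gaze-contingent behavior possible.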

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhang, H., Fricker, D., Yu, C. (2010). A Multimodal Real-Time Platform for Studying Human-Avatar Interactions. In: Allbeck, J., Badler, N., Bickmore, T., Pelachaud, C., Safonova, A. (eds) Intelligent Virtual Agents. IVA 2010. Lecture Notes in Computer Science (LNAI), vol 6356. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15892-6_6

  • DOI: https://doi.org/10.1007/978-3-642-15892-6_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15891-9

  • Online ISBN: 978-3-642-15892-6

  • eBook Packages: Computer Science (R0)
