Abstract
In pursuing the ultimate goal of enabling intelligent conversation with a virtual human, two key challenges are selecting which nonverbal behaviors to implement and realizing those behaviors practically and reliably. In this paper, we explore the signals interlocutors use to display uncertainty face to face. People's signals were identified and annotated through systematic coding and then implemented on our embodied conversational agent (ECA), RUTH. We investigated whether RUTH animations were as effective as videos of talking people in conveying an agent's level of uncertainty to human viewers. Our results show that viewers could pick up on different levels of uncertainty not only from another conversational partner but also from the RUTH simulations. In addition, we used animations containing different subsets of facial signals to understand in more detail how nonverbal behavior conveys uncertainty. The findings illustrate the promise of our methodology for creating specific inventories of fine-grained conversational behaviors from knowledge and observations of spontaneous human conversation.