Abstract
Human interlocutors continuously show behaviour indicative of their perception, understanding, acceptance of, and agreement with the other’s utterances [1,4]. Such evidence can be provided in the form of verbal-vocal feedback signals, head gestures, facial expressions or gaze, and it often interacts with the current dialogue context. Since feedback signals can express subtle differences in meaning, we hypothesise that they tend to reflect their producer’s mental state quite accurately.
To be cooperative and human-like dialogue partners, virtual conversational agents should be able to interpret their user’s evidence of understanding and to react appropriately by adapting to the user’s needs [2]. We present a Bayesian network model for context-sensitive interpretation of listener feedback for such an ‘attentive speaker agent’. The model takes the user’s multimodal behaviour (verbal-vocal feedback, head gestures, gaze) as well as the agent’s own utterance and knowledge of the dialogue domain into account to form a model of the user’s mental state.
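To illustrate the kind of inference such a model performs, the following minimal sketch shows a toy Bayesian network in which the difficulty of the agent’s utterance and the user’s observed verbal feedback and gaze are combined into a posterior belief about whether the user understood. It is not the authors’ actual network: the structure, node names, states and probabilities are assumptions chosen for demonstration, and the pgmpy library is assumed as the inference toolkit.

# Minimal, illustrative sketch (not the authors' model): a small Bayesian
# network fusing multimodal listener feedback into an estimate of the user's
# understanding. All node names, states and probabilities are assumptions.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Structure: utterance difficulty influences whether the user understood;
# the (hidden) understanding state generates observable feedback and gaze.
model = BayesianNetwork([
    ("UtteranceDifficulty", "Understood"),
    ("Understood", "VerbalFeedback"),
    ("Understood", "Gaze"),
])

# Prior over utterance difficulty (states: 0 = easy, 1 = hard).
cpd_difficulty = TabularCPD("UtteranceDifficulty", 2, [[0.7], [0.3]])

# P(Understood | UtteranceDifficulty); columns: easy, hard; rows: yes, no.
cpd_understood = TabularCPD(
    "Understood", 2, [[0.9, 0.5], [0.1, 0.5]],
    evidence=["UtteranceDifficulty"], evidence_card=[2],
)

# P(VerbalFeedback | Understood); columns: yes, no; rows: positive, negative.
cpd_feedback = TabularCPD(
    "VerbalFeedback", 2, [[0.8, 0.2], [0.2, 0.8]],
    evidence=["Understood"], evidence_card=[2],
)

# P(Gaze | Understood); columns: yes, no; rows: at_speaker, averted.
cpd_gaze = TabularCPD(
    "Gaze", 2, [[0.7, 0.4], [0.3, 0.6]],
    evidence=["Understood"], evidence_card=[2],
)

model.add_cpds(cpd_difficulty, cpd_understood, cpd_feedback, cpd_gaze)
assert model.check_model()

# Infer the user's understanding after observing positive verbal feedback
# (state 0) together with averted gaze (state 1) for a hard utterance.
posterior = VariableElimination(model).query(
    variables=["Understood"],
    evidence={"VerbalFeedback": 0, "Gaze": 1, "UtteranceDifficulty": 1},
)
print(posterior)

Richer models would follow the same pattern, e.g. by adding further evidence nodes for head gestures or by replacing the single understanding node with several mental-state variables (perception, understanding, acceptance, agreement).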
References
Allwood, J., Nivre, J., Ahlsén, E.: On the semantics and pragmatics of linguistic feedback. Journal of Semantics 9, 1–26 (1992)
Buschmeier, H., Kopp, S.: Towards Conversational Agents That Attend to and Adapt to Communicative User Feedback. In: Vilhjálmsson, H.H., Kopp, S., Marsella, S., Thórisson, K.R. (eds.) IVA 2011. LNCS, vol. 6895, pp. 169–182. Springer, Heidelberg (2011)
Buschmeier, H., Kopp, S.: Using a Bayesian model of the listener to unveil the dialogue information state. In: Proceedings of the 16th Workshop on the Semantics and Pragmatics of Dialogue, Paris, France (to appear)
Clark, H.H.: Using Language. Cambridge University Press, Cambridge (1996)
Kopp, S., Allwood, J., Grammer, K., Ahlsén, E., Stocksmeier, T.: Modeling Embodied Feedback with Virtual Humans. In: Wachsmuth, I., Knoblich, G. (eds.) Modeling Communication. LNCS (LNAI), vol. 4930, pp. 18–37. Springer, Heidelberg (2008)
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Buschmeier, H., Kopp, S. (2012). Understanding How Well You Understood – Context-Sensitive Interpretation of Multimodal User Feedback. In: Nakano, Y., Neff, M., Paiva, A., Walker, M. (eds.) Intelligent Virtual Agents. IVA 2012. Lecture Notes in Computer Science, vol. 7502. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33197-8_64
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-33196-1
Online ISBN: 978-3-642-33197-8