DOI: 10.1145/544862.544921
Article

Embodied contextual agent in information delivering application

Published: 15 July 2002

Abstract

We aim at building a new human-computer interface for information-delivering applications: the conversational agent we have developed is a multimodal believable agent able to converse with the user by exhibiting synchronized and coherent verbal and nonverbal behavior. The agent is provided with a personality and a social role, which allow her to show her emotions or to refrain from showing them, depending on the context in which the conversation takes place. The agent is provided with a face and a mind. The mind is designed according to a BDI structure that depends on the agent's personality; it evolves dynamically during the conversation, according to the user's dialog moves and to the emotions triggered as a consequence of the interlocutor's moves; these cognitive features are then translated into facial behaviors. In this paper, we describe the overall architecture of our system and its various components; in particular, we present our dynamic model of emotions. We illustrate our results with an example dialog that runs throughout the paper. We pay particular attention to the generation of verbal and nonverbal behaviors, to the way they are synchronized and combined with each other, and to how these acts are translated into facial expressions.
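The loop the abstract describes (a BDI-structured mind updated by the user's dialog moves, with triggered emotions gated by the agent's social role before being rendered as facial behavior) can be sketched in miniature. This is purely an illustrative reading of the abstract, not the paper's actual model: all names (`DialogMove`, `Agent`, the single valence value, the intent-to-delta table) are hypothetical stand-ins for a far richer architecture.

```python
from dataclasses import dataclass, field

@dataclass
class DialogMove:
    speaker: str
    intent: str          # e.g. "request-info", "complain", "thank"

@dataclass
class Agent:
    # Beliefs and goals in the BDI tradition (Rao & Georgeff style).
    beliefs: dict = field(default_factory=dict)
    goals: list = field(default_factory=list)
    # A single valence in [-1, 1] stands in for the dynamic emotion model.
    emotion: float = 0.0
    # Personality/role gate: a formal social role may suppress the display
    # of a felt emotion, as the abstract describes.
    may_display_emotion: bool = True

    def perceive(self, move: DialogMove) -> None:
        """Update the emotional state as a consequence of the user's move."""
        delta = {"thank": +0.3, "complain": -0.4}.get(move.intent, 0.0)
        self.emotion = max(-1.0, min(1.0, self.emotion + delta))

    def facial_expression(self) -> str:
        """Translate the cognitive/affective state into a facial behavior."""
        if not self.may_display_emotion:
            return "neutral"          # role forbids showing the felt emotion
        if self.emotion > 0.2:
            return "smile"
        if self.emotion < -0.2:
            return "frown"
        return "neutral"

agent = Agent()
agent.perceive(DialogMove("user", "complain"))
print(agent.facial_expression())   # frown
```

Note the separation the abstract insists on: the felt emotion evolves in `perceive` regardless of context, while `facial_expression` decides, from the social role, whether that emotion is actually displayed.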




Published In

AAMAS '02: Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 2
July 2002
508 pages
ISBN: 1581134800
DOI: 10.1145/544862
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. believable agent architectures
  2. cognitive models and models of mind
  3. human-like and believable qualities
  4. interface and conversational agents
  5. personality and emotion in agents

Qualifiers

  • Article

Conference

AAMAS02

Acceptance Rates

Overall acceptance rate: 1,155 of 5,036 submissions (23%)

Article Metrics

  • Downloads (last 12 months): 21
  • Downloads (last 6 weeks): 4
Reflects downloads up to 27 Nov 2024

Cited By

  • (2024) Surveying the evolution of virtual humans expressiveness toward real humans. Computers & Graphics, 123:104034. DOI: 10.1016/j.cag.2024.104034. Oct 2024
  • (2023) A Comprehensive Review of Data-Driven Co-Speech Gesture Generation. Computer Graphics Forum, 42(2):569-596. DOI: 10.1111/cgf.14776. 23 May 2023
  • (2023) Comparing Photorealistic and Animated Embodied Conversational Agents in Serious Games: An Empirical Study on User Experience. HCI International 2023 – Late Breaking Papers, pp. 317-335. DOI: 10.1007/978-3-031-48050-8_22. 1 Dec 2023
  • (2021) Examining the Use of Nonverbal Communication in Virtual Agents. International Journal of Human–Computer Interaction, 37(17):1648-1673. DOI: 10.1080/10447318.2021.1898851. 28 Mar 2021
  • (2017) Facial expressions and speech acts: experimental evidences on the role of the upper face as an illocutionary force indicating device in language comprehension. Cognitive Processing, 18(3):285-306. DOI: 10.1007/s10339-017-0809-6. 22 Apr 2017
  • (2017) Designing User Interfaces in Emotionally-Sensitive Applications. Human-Computer Interaction – INTERACT 2017, pp. 404-422. DOI: 10.1007/978-3-319-67687-6_27. 21 Sep 2017
  • (2016) The USC CreativeIT database of multimodal dyadic interactions. Language Resources and Evaluation, 50(3):497-521. DOI: 10.1007/s10579-015-9300-0. 1 Sep 2016
  • (2016) User-Oriented Requirements Engineering. Usability- and Accessibility-Focused Requirements Engineering, pp. 11-33. DOI: 10.1007/978-3-319-45916-5_2. 9 Sep 2016
  • (2015) Embodied Head Gesture and Distance Education. Procedia Manufacturing, 3:2034-2041. DOI: 10.1016/j.promfg.2015.07.251. 2015
  • (2014) Social Support Strategies for Embodied Conversational Agents. Emotion Modeling, pp. 134-147. DOI: 10.1007/978-3-319-12973-0_8. 12 Nov 2014
