
DOI: 10.1145/1291233.1291453
Article

Xface open source project and SMIL-Agent scripting language for creating and animating embodied conversational agents

Published: 29 September 2007

Abstract

Xface is a set of open source tools for the creation of embodied conversational agents (ECAs) using MPEG-4 and keyframe-based rendering, driven by the SMIL-Agent scripting language. The Xface toolkit, coupled with SMIL-Agent scripting, serves as a full 3D facial animation authoring package. The Xface project was initiated by the Cognitive and Communication Technologies (TCC) division of FBK-irst (formerly ITC-irst). The toolkit is written in ANSI C++ and is open source and platform independent.
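As a rough illustration of the kind of script the abstract describes, a SMIL-Agent-style document might synchronize synthesized speech with a facial expression. Note that the element and attribute names below are hypothetical, chosen to mirror SMIL's parallel-timing idiom; they are not taken from the actual SMIL-Agent specification.

```xml
<!-- Hypothetical sketch of a SMIL-Agent-style script; element and
     attribute names are illustrative, not the real specification. -->
<smil-agent>
  <par>
    <!-- Speech synthesized by a TTS engine; the player lip-syncs it -->
    <speech channel="face" id="greeting">
      Hello, and welcome to the demonstration.
    </speech>
    <!-- A facial expression played in parallel with the speech -->
    <expression channel="face" name="smile" intensity="0.7"
                begin="greeting.begin" end="greeting.end"/>
  </par>
</smil-agent>
```

The appeal of this SMIL-like design is that timing relationships (parallel vs. sequential, begin/end anchors) are declared in markup rather than computed in application code, which is what lets a player synchronize voice, lips, and gestures automatically.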




Reviews

María Jose Abásolo

A new trend continues to populate the planet, particularly through cyberspace: virtual humanoids, also found under such names as avatars, conversational agents, and talking heads. H-Anim and the MPEG-4 ISO standard arose as the main answers to the need for a standard representation of a virtual humanoid, both the body and the face. This short paper summarizes an open-source toolkit called Xface for the creation and animation of faces. The authors propose an Extensible Markup Language (XML) based language to configure the different parts of a face model: one or several three-dimensional (3D) models, textures, MPEG-4 facial definition parameters (FDPs), zones of influence for each FDP, and muscle models for the different zones of the face. They also use an XML-based scripting language, SMIL-Agent, to express a particular conversation that combines speech with facial expressions and gestures. The toolkit includes Xface and SMIL editors, and a player that animates the face with synchronized voice, lip movements, facial expressions, gestures, and head and eye movements. This work is particularly interesting to developers, as the toolkit includes a core library for embedding 3D faces in their applications. Works like this are particularly valuable because their open-source nature enables further research and development.

Online Computing Reviews Service
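The face-configuration language the review describes could look roughly like the following sketch. The element names here are invented for illustration and do not reflect Xface's actual file format; they only mirror the parts the review enumerates (models, textures, FDPs, zones of influence, muscle models).

```xml
<!-- Hypothetical sketch of an Xface-style face definition; tag names
     are invented for illustration, not Xface's actual format. -->
<face-definition>
  <model src="face_mesh.wrl"/>
  <texture src="face_skin.png"/>
  <!-- An MPEG-4 facial definition parameter mapped to a mesh vertex -->
  <fdp id="4.1" vertex="1032">
    <!-- Vertices deformed when this FDP moves, with a falloff weight -->
    <influence-zone radius="0.8" falloff="linear"/>
    <!-- Deformation model applied within the zone -->
    <muscle type="linear" direction="0 1 0"/>
  </fdp>
</face-definition>
```

Separating this static face description from the SMIL-Agent conversation script is the design choice that lets the same script drive different face models.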



Published In

MM '07: Proceedings of the 15th ACM international conference on Multimedia
September 2007
1115 pages
ISBN:9781595937025
DOI:10.1145/1291233


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. 3D talking heads
  2. MPEG-4 facial animation
  3. SMIL
  4. embodied conversational agents
  5. open source
  6. scripting


Acceptance Rates

Overall Acceptance Rate 2,145 of 8,556 submissions, 25%


Article Metrics

  • Downloads (last 12 months): 3
  • Downloads (last 6 weeks): 0
Reflects downloads up to 22 Nov 2024


Cited By

  • DigiHuman: A Conversational Digital Human with Facial Expressions. Turkish Journal of Science and Technology 19(1), 25-37, 28 Mar 2024. DOI: 10.55525/tjst.1301324
  • Character Animation Scripting Environment. Encyclopedia of Computer Graphics and Games, 274-285, 5 Jan 2024. DOI: 10.1007/978-3-031-23161-2_43
  • Virtual Human for Assisted Healthcare: Application and Technology. Encyclopedia of Computer Graphics and Games, 1993-2001, 5 Jan 2024. DOI: 10.1007/978-3-031-23161-2_363
  • ChASE. ACM Transactions on Mathematical Software 45(2), 1-34, 26 Apr 2019. DOI: 10.1145/3313828
  • PLASMA. ACM Transactions on Mathematical Software 45(2), 1-35, 3 May 2019. DOI: 10.1145/3264491
  • Synthesizing 3D Trump: Predicting and Visualizing the Relationship Between Text, Speech, and Articulatory Movements. IEEE/ACM Transactions on Audio, Speech, and Language Processing 27(12), 2223-2233, Dec 2019. DOI: 10.1109/TASLP.2019.2935843
  • Virtual Human for Assisted Healthcare: Application and Technology. Encyclopedia of Computer Graphics and Games, 1-8, 23 Apr 2019. DOI: 10.1007/978-3-319-08234-9_363-1
  • Real-Time 3-D Facial Animation: From Appearance to Internal Articulators. IEEE Transactions on Circuits and Systems for Video Technology 28(4), 920-932, Apr 2018. DOI: 10.1109/TCSVT.2016.2643504
  • Synthesizing Photo-Realistic 3D Talking Head: Learning Lip Synchronicity and Emotion from Audio and Video. 2018 25th IEEE International Conference on Image Processing (ICIP), 1448-1452, Oct 2018. DOI: 10.1109/ICIP.2018.8451618
  • Lip syncing method for realistic expressive 3D face model. Multimedia Tools and Applications 77(5), 5323-5366, 1 Mar 2018. DOI: 10.1007/s11042-017-4437-z
