Abstract
In this paper we present our recent results in automatic facial gesturing of graphically embodied animated agents. In one case, the conversational agent is driven by speech in an automatic lip-sync process: by analyzing the speech input, lip movements are determined from the speech signal. Another method provides a virtual speaker capable of reading plain English text and rendering it as speech accompanied by appropriate facial gestures. The proposed statistical model for generating the virtual speaker's facial gestures can also be applied as an addition to the lip-synchronization process in order to obtain speech-driven facial gesturing. In that case, the statistical model is triggered by the prosody of the input speech rather than by lexical analysis of the input text.
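The gesture-generation idea can be illustrated with a minimal sketch, not the authors' implementation: a statistical model that, given a coarse prosodic trigger extracted from the speech signal, samples a facial gesture from conditional probabilities. The trigger classes, gesture labels, thresholds, and probability values below are illustrative assumptions, and the helper functions classify_prosody and sample_gesture are hypothetical names.

import random

# Hypothetical gesture probabilities conditioned on a coarse prosodic trigger.
# In practice such a table would be estimated from annotated recordings of a
# real speaker; these numbers are made up for illustration only.
GESTURE_PROBS = {
    "pitch_accent": {"eyebrow_raise": 0.5, "head_nod": 0.3, "none": 0.2},
    "pause":        {"blink": 0.6, "gaze_away": 0.2, "none": 0.2},
    "neutral":      {"none": 0.8, "blink": 0.2},
}

def classify_prosody(pitch_hz, energy, is_pause):
    """Map raw prosodic features to a coarse trigger class (assumed thresholds)."""
    if is_pause:
        return "pause"
    if pitch_hz > 220.0 and energy > 0.6:
        return "pitch_accent"
    return "neutral"

def sample_gesture(trigger, rng=random):
    """Draw one facial gesture for the trigger from its probability table."""
    gestures, weights = zip(*GESTURE_PROBS[trigger].items())
    return rng.choices(gestures, weights=weights, k=1)[0]

if __name__ == "__main__":
    # One analysis frame of (made-up) prosodic features -> a gesture decision.
    trigger = classify_prosody(pitch_hz=250.0, energy=0.7, is_pause=False)
    print(trigger, "->", sample_gesture(trigger))

Driving the same kind of table from lexical triggers (for instance, cues found by analyzing the plain English text) instead of prosodic ones would correspond to the text-driven virtual speaker described above.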
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Zoric, G., Smid, K., Pandzic, I.S. (2006). Automated Gesturing for Embodied Agents. In: Washio, T., Sakurai, A., Nakajima, K., Takeda, H., Tojo, S., Yokoo, M. (eds) New Frontiers in Artificial Intelligence. JSAI 2005. Lecture Notes in Computer Science, vol 4012. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11780496_42
DOI: https://doi.org/10.1007/11780496_42
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-35470-3
Online ISBN: 978-3-540-35471-0