Vision-based control of 3D facial animation

Published: 26 July 2003

Abstract

Controlling and animating the facial expression of a computer-generated 3D character is a difficult problem because the face has many degrees of freedom while most available input devices have few. In this paper, we show that a rich set of lifelike facial actions can be created from a preprocessed motion capture database and that a user can control these actions by acting out the desired motions in front of a video camera. We develop a real-time facial tracking system to extract a small set of animation control parameters from video. Because of the nature of video data, these parameters may be noisy, low-resolution, and contain errors. The system uses the knowledge embedded in motion capture data to translate these low-quality 2D animation control signals into high-quality 3D facial expressions. To adapt the synthesized motion to a new character model, we introduce an efficient expression retargeting technique whose run-time computation is constant, independent of the complexity of the character model. We demonstrate the power of this approach through two users who control and animate a wide range of 3D facial expressions of different avatars.
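The core idea of the abstract — using a motion capture database to lift a few noisy 2D control parameters into a full 3D expression — can be illustrated with a much-simplified sketch. The snippet below is not the paper's actual algorithm (which is more sophisticated); it is a hypothetical k-nearest-neighbor blend, with all names and the toy data invented for illustration:

```python
import numpy as np

def synthesize_expression(query, db_controls, db_expressions, k=5, eps=1e-8):
    """Blend the 3D expressions of the k mocap frames whose control
    parameters lie closest to the noisy 2D query (inverse-distance weights)."""
    d = np.linalg.norm(db_controls - query, axis=1)  # distance to every frame
    idx = np.argsort(d)[:k]                          # k nearest frames
    w = 1.0 / (d[idx] + eps)                         # inverse-distance weights
    w /= w.sum()
    return w @ db_expressions[idx]                   # weighted blend of 3D poses

# Toy database: 100 mocap frames, 6 control parameters, 90 expression DOFs.
rng = np.random.default_rng(0)
controls = rng.standard_normal((100, 6))
expressions = rng.standard_normal((100, 90))
pose = synthesize_expression(controls[0], controls, expressions)
print(pose.shape)  # (90,)
```

Because the query here coincides with a database frame, the blend collapses onto that frame's expression; with a noisy query, the weighted average of nearby examples acts as a data-driven filter, which is the intuition behind using mocap knowledge to denoise low-quality control signals.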


Published In

SCA '03: Proceedings of the 2003 ACM SIGGRAPH/Eurographics symposium on Computer animation
July 2003
387 pages
ISBN: 1581136595

Publisher

Eurographics Association

Goslar, Germany

Conference

SCA '03: Symposium on Computer Animation 2003
July 26-27, 2003
San Diego, California

Acceptance Rates

SCA '03 paper acceptance rate: 38 of 100 submissions (38%)
Overall acceptance rate: 183 of 487 submissions (38%)

Cited By

  • (2019) From 2D to 3D real-time expression transfer for facial animation. Multimedia Tools and Applications 78:9, 12519-12535. DOI: 10.1007/s11042-018-6785-8. Online publication date: 1-May-2019.
  • (2018) Stabilized real-time face tracking via a learned dynamic rigidity prior. ACM Transactions on Graphics 37:6, 1-11. DOI: 10.1145/3272127.3275093. Online publication date: 4-Dec-2018.
  • (2018) Headon. ACM Transactions on Graphics 37:4, 1-13. DOI: 10.1145/3197517.3201350. Online publication date: 30-Jul-2018.
  • (2017) Makeup Lamps. Computer Graphics Forum 36:2, 311-323. DOI: 10.5555/3128975.3129004. Online publication date: 1-May-2017.
  • (2017) EarFieldSensing. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 1911-1922. DOI: 10.1145/3025453.3025692. Online publication date: 2-May-2017.
  • (2017) Interactive facial expression editing based on spatio-temporal coherency. The Visual Computer: International Journal of Computer Graphics 33:6-8, 981-991. DOI: 10.1007/s00371-017-1387-4. Online publication date: 1-Jun-2017.
  • (2016) Repurposing hand animation for interactive applications. Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 97-106. DOI: 10.5555/2982818.2982833. Online publication date: 11-Jul-2016.
  • (2016) High-fidelity facial and speech animation for VR HMDs. ACM Transactions on Graphics 35:6, 1-14. DOI: 10.1145/2980179.2980252. Online publication date: 5-Dec-2016.
  • (2016) Realtime 3D eye gaze animation using a single RGB camera. ACM Transactions on Graphics 35:4, 1-14. DOI: 10.1145/2897824.2925947. Online publication date: 11-Jul-2016.
  • (2016) Real-time facial animation with image-based dynamic avatars. ACM Transactions on Graphics 35:4, 1-12. DOI: 10.1145/2897824.2925873. Online publication date: 11-Jul-2016.
