DOI: 10.1145/2994258.2994270

Behavioural facial animation using motion graphs and mind maps

Published: 10 October 2016

Abstract

We present a new behavioural animation method that combines motion graphs for the synthesis of animation with mind maps as behaviour controllers for the choice of motions, significantly reducing the cost of animating secondary characters. Motion graphs are created for each facial region from the analysis of a motion database, and synthesis proceeds by minimizing the path distance that connects automatically chosen nodes. A mind map is a hierarchical graph built on top of the motion graphs, in which the user visually chooses how a stimulus affects the character's mood, which in turn triggers motion synthesis. Different personality traits add emotional complexity to the chosen reactions. Combining behaviour simulation and procedural animation leads to more empathic and autonomous characters that react differently in each interaction, shifting the task of animating a character to one of defining its behaviour.
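The synthesis step described above, connecting automatically chosen nodes by the minimum-distance path through a motion graph, is in essence a shortest-path search. A minimal sketch using Dijkstra's algorithm over a toy per-region motion graph follows; the node names, edge costs, and function name are illustrative assumptions, not the authors' implementation:

```python
import heapq

def shortest_motion_path(graph, start, goal):
    """Dijkstra's shortest path over a motion graph.

    graph: {clip: [(next_clip, transition_cost), ...]}
    Returns (total_cost, [clip, ...]), or (inf, []) if goal is unreachable.
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            # Reconstruct the clip sequence by walking predecessors back.
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return float("inf"), []

# Toy motion graph for one facial region: nodes are motion clips,
# edge weights stand in for transition distances between clip endpoints.
graph = {
    "neutral": [("smile_onset", 1.0), ("frown_onset", 2.5)],
    "smile_onset": [("smile_apex", 0.5)],
    "frown_onset": [("smile_apex", 3.0)],
    "smile_apex": [],
}
cost, path = shortest_motion_path(graph, "neutral", "smile_apex")
print(cost, path)  # 1.5 ['neutral', 'smile_onset', 'smile_apex']
```

In the paper's setting, a mind-map node reacting to a stimulus would pick the goal expression, and a search like this would select the cheapest clip sequence reaching it.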

Supplementary Material

ZIP File (p161-serra.zip)


Cited By

  • (2022) Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 20363--20373. DOI: 10.1109/CVPR52688.2022.01975
  • (2017) More than a Feeling. Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, 773--786. DOI: 10.1145/3126594.3126640
  • (2017) Easy Generation of Facial Animation Using Motion Graphs. Computer Graphics Forum 37, 1, 97--111. DOI: 10.1111/cgf.13218

      Published In

      MIG '16: Proceedings of the 9th International Conference on Motion in Games
October 2016, 202 pages
ISBN: 9781450345927
DOI: 10.1145/2994258

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. behavioural simulation
      2. facial animation
      3. procedural generation

      Qualifiers

      • Short-paper

      Conference

MiG '16: Motion In Games
October 10--12, 2016
Burlingame, California

      Acceptance Rates

Overall acceptance rate: 9 of 9 submissions, 100%

