Constraint-based motion optimization using a statistical dynamic model

Published: 29 July 2007

Abstract

In this paper, we present a technique for generating animation from a variety of user-defined constraints. We pose constraint-based motion synthesis as a maximum a posteriori (MAP) problem and develop an optimization framework that generates natural motion satisfying user constraints. The system automatically learns a statistical dynamic model from motion capture data and then enforces it as a motion prior. This motion prior, together with the user-defined constraints, defines a trajectory optimization problem. Solving this problem in the low-dimensional space yields optimal natural motion that achieves the goals specified by the user. We demonstrate the effectiveness of this approach by generating whole-body and facial motion from a variety of spatial-temporal constraints.

Supplementary Material

JPG File (pps008.jpg)
MP4 File (pps008.mp4)



Published In

ACM Transactions on Graphics, Volume 26, Issue 3
July 2007
976 pages
ISSN:0730-0301
EISSN:1557-7368
DOI:10.1145/1276377

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. constraint-based motion synthesis
  2. facial animation
  3. human body animation
  4. motion capture data
  5. motion control
  6. spatial-temporal constraints
  7. statistical dynamic models

Qualifiers

  • Article


Article Metrics

  • Downloads (Last 12 months)8
  • Downloads (Last 6 weeks)2
Reflects downloads up to 23 Feb 2025

Cited By

  • (2025) Dynamic Motion Transition: A Hybrid Data-Driven and Model-Driven Method for Human Pose Transitions. IEEE Transactions on Visualization and Computer Graphics 31(3):1848-1861. DOI: 10.1109/TVCG.2024.3372421. Online publication date: 1-Mar-2025.
  • (2024) DanceCraft: A Music-Reactive Real-time Dance Improv System. Proceedings of the 9th International Conference on Movement and Computing, 1-10. DOI: 10.1145/3658852.3659078. Online publication date: 30-May-2024.
  • (2024) Robust Diffusion-based Motion In-betweening. Computer Graphics Forum 43(7). DOI: 10.1111/cgf.15260. Online publication date: 7-Nov-2024.
  • (2024) Motion In-Betweening via Deep Δ-Interpolator. IEEE Transactions on Visualization and Computer Graphics 30(8):5693-5704. DOI: 10.1109/TVCG.2023.3309107. Online publication date: 1-Aug-2024.
  • (2024) Keyframe control for customizable choreography with style maintenance. Computers and Electrical Engineering 117. DOI: 10.1016/j.compeleceng.2024.109267. Online publication date: 1-Jul-2024.
  • (2024) A U-Shaped Spatio-Temporal Transformer as Solver for Motion Capture. Computational Visual Media, 274-294. DOI: 10.1007/978-981-97-2095-8_15. Online publication date: 30-Mar-2024.
  • (2023) Neural Motion Graph. SIGGRAPH Asia 2023 Conference Papers, 1-11. DOI: 10.1145/3610548.3618181. Online publication date: 10-Dec-2023.
  • (2023) RSMT: Real-time Stylized Motion Transition for Characters. ACM SIGGRAPH 2023 Conference Proceedings, 1-10. DOI: 10.1145/3588432.3591514. Online publication date: 23-Jul-2023.
  • (2022) Motion In-Betweening via Two-Stage Transformers. ACM Transactions on Graphics 41(6):1-16. DOI: 10.1145/3550454.3555454. Online publication date: 30-Nov-2022.
  • (2022) Real-time controllable motion transition for characters. ACM Transactions on Graphics 41(4):1-10. DOI: 10.1145/3528223.3530090. Online publication date: 22-Jul-2022.
