Research Article · Open Access

MoSh: motion and shape capture from sparse markers

Published: 19 November 2014

Abstract

Marker-based motion capture (mocap) is widely criticized as producing lifeless animations. We argue that important information about body surface motion is present in standard marker sets but is lost in extracting a skeleton. We demonstrate a new approach called MoSh (Motion and Shape capture) that automatically extracts this detail from mocap data. MoSh estimates body shape and pose together from sparse marker data by exploiting a parametric model of the human body. In contrast to previous work, MoSh solves for the marker locations relative to the body and estimates accurate body shape directly from the markers without the use of 3D scans; this effectively turns a mocap system into an approximate body scanner. MoSh captures soft-tissue motion directly from markers by allowing body shape to vary over time. We evaluate the effect of different marker sets on pose and shape accuracy and propose a new sparse marker set for capturing soft-tissue motion. We illustrate MoSh by recovering body shape, pose, and soft-tissue motion from archival mocap data and using this to produce animations with subtlety and realism. We also retarget soft-tissue motion to new characters and show how to magnify the 3D deformations of soft tissue to create animations with appealing exaggerations.
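To make the approach described above concrete, here is a minimal, self-contained sketch of the kind of objective MoSh optimizes: body shape, per-frame pose, and latent marker locations are solved jointly so that markers predicted on a parametric body surface match the observed sparse markers. This is not the authors' implementation. The toy linear blendshape "body model", the parameter counts, the single joint solve, and all names (posed_vertices, predicted_markers, fit_sequence, marker_vertex_ids) are hypothetical stand-ins for the SCAPE-style model and staged optimization used in the paper; only NumPy and SciPy are assumed.

```python
# Conceptual sketch only: jointly fit body shape, per-frame pose, and latent
# marker offsets so that markers predicted on a parametric body surface match
# observed sparse mocap markers. The linear "body model" below is a toy
# stand-in for the SCAPE-style model used in the paper.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Toy linear model: vertices(betas, thetas) = template + B_shape.betas + B_pose.thetas
N_VERTS, N_SHAPE, N_POSE, N_MARKERS = 200, 5, 10, 47
template = rng.normal(size=(N_VERTS, 3))
B_shape = rng.normal(scale=0.05, size=(N_VERTS, 3, N_SHAPE))
B_pose = rng.normal(scale=0.05, size=(N_VERTS, 3, N_POSE))
marker_vertex_ids = rng.choice(N_VERTS, size=N_MARKERS, replace=False)

def posed_vertices(betas, thetas):
    """Deform the template by linear shape and pose blendshapes (toy model)."""
    return template + B_shape @ betas + B_pose @ thetas

def predicted_markers(betas, thetas, offsets):
    """Virtual markers: body-surface points plus per-marker offsets that stay
    fixed in the body frame (the 'marker locations relative to the body')."""
    return posed_vertices(betas, thetas)[marker_vertex_ids] + offsets

def fit_sequence(observed_seq, lam=1e-2):
    """Minimize marker error over all frames for one shared shape, one shared
    set of marker offsets, and one pose per frame."""
    T = len(observed_seq)

    def unpack(x):
        betas = x[:N_SHAPE]
        thetas = x[N_SHAPE:N_SHAPE + T * N_POSE].reshape(T, N_POSE)
        offsets = x[N_SHAPE + T * N_POSE:].reshape(N_MARKERS, 3)
        return betas, thetas, offsets

    def residuals(x):
        betas, thetas, offsets = unpack(x)
        err = [(predicted_markers(betas, thetas[t], offsets)
                - observed_seq[t]).ravel() for t in range(T)]
        # Weak prior keeping the latent markers close to the body surface.
        err.append(lam * offsets.ravel())
        return np.concatenate(err)

    x0 = np.zeros(N_SHAPE + T * N_POSE + 3 * N_MARKERS)
    return unpack(least_squares(residuals, x0).x)

# Synthetic round trip: markers generated from known parameters, then re-fit.
true_betas = rng.normal(size=N_SHAPE)
true_thetas = rng.normal(size=(4, N_POSE))
true_offsets = rng.normal(scale=0.02, size=(N_MARKERS, 3))
observed = [predicted_markers(true_betas, th, true_offsets) for th in true_thetas]
betas, thetas, offsets = fit_sequence(observed)
print("mean marker error:",
      np.mean([np.linalg.norm(predicted_markers(betas, thetas[t], offsets)
                              - observed[t], axis=1).mean() for t in range(4)]))
```

In the paper's pipeline, shape and the latent marker locations are roughly estimated once per subject and then held fixed while pose (and, for soft-tissue capture, a time-varying shape component) is solved per frame; the single joint solve above is only meant to expose the structure of the objective. Under the same toy assumptions, the soft-tissue magnification mentioned in the abstract could be sketched as scaling the per-frame deviation of the reconstructed surface from a fixed-shape, pose-only reconstruction before re-animating.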

Supplementary Material

ZIP File (a220.zip)
Supplemental material.





    Published In

ACM Transactions on Graphics, Volume 33, Issue 6
November 2014, 704 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/2661229
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 19 November 2014
    Published in TOG Volume 33, Issue 6


    Author Tags

    1. human animation
    2. motion capture
    3. shape capture
    4. soft tissue motion

    Qualifiers

    • Research-article


