DOI: 10.1145/3424636.3426909
research-article
Open access

Adult2child: Motion Style Transfer using CycleGANs

Published: 22 November 2020

Abstract

Child characters are commonly seen in leading roles in top-selling video games. Previous studies have shown that child motions are perceptually and stylistically different from those of adults. Creating motion for these characters by motion capturing children is uniquely challenging because of confusion, lack of patience, and regulatory constraints. Retargeting adult motion, which is much easier to record, onto child skeletons does not capture the stylistic differences. In this paper, we propose style translation as an effective way to transform adult motion capture data into the style of child motion. Our method is based on CycleGAN, which allows training on a relatively small number of child and adult motion sequences that do not even need to be temporally aligned. Our adult2child network converts short sequences of motion, called motion words, from one domain to the other. The network was trained on a motion capture database collected by our team containing 23 locomotion and exercise motions. We conducted a perception study to evaluate the success of style translation algorithms, including our algorithm and recently presented style translation neural networks. Results show that the translated adult motions are recognized as child motions significantly more often than the original adult motions.
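The key enabler described above is CycleGAN's cycle-consistency constraint, which is what removes the need for temporally aligned adult/child pairs. The idea can be sketched with toy linear generators; everything below (the names G and F, the 8-dimensional motion-word vectors, the linear maps) is an illustrative assumption, not the paper's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two translation networks: G maps an adult
# "motion word" (a short window of poses, flattened to a vector) to
# the child domain, and F maps it back. Names, dimensions, and the
# linear form are hypothetical, chosen only to show the cycle term.
DIM = 8  # hypothetical feature dimension of one motion word
G = np.eye(DIM) + 0.1 * rng.normal(size=(DIM, DIM))
F = np.linalg.inv(G)  # a perfect inverse generator, for illustration

def cycle_consistency_loss(x_adult: np.ndarray) -> float:
    """L1 cycle loss: adult -> child -> adult should reconstruct the input."""
    x_child = x_adult @ G.T   # translate into the child domain
    x_back = x_child @ F.T    # translate back to the adult domain
    return float(np.mean(np.abs(x_back - x_adult)))

adult_word = rng.normal(size=DIM)
loss = cycle_consistency_loss(adult_word)
print(f"cycle loss: {loss:.2e}")  # near zero, since F exactly inverts G
```

In the real setting G and F would be neural networks trained jointly with adversarial losses; the cycle term above is what lets unpaired adult and child sequences supervise each other, since each sequence only needs to survive a round trip rather than match an aligned counterpart.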

Supplementary Material

MP4 File (a13-dong-video.mp4)




Information

Published In

MIG '20: Proceedings of the 13th ACM SIGGRAPH Conference on Motion, Interaction and Games
October 2020
190 pages
ISBN:9781450381710
DOI:10.1145/3424636
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. CycleGAN
  2. Motion Analysis
  3. Style transfer
  4. Unpaired data

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

MIG '20
MIG '20: Motion, Interaction and Games
October 16 - 18, 2020
SC, Virtual Event, USA


Article Metrics

  • Downloads (last 12 months): 485
  • Downloads (last 6 weeks): 57
Reflects downloads up to 20 Nov 2024

Cited By
  • (2024) Generative Motion Stylization of Cross-structure Characters within Canonical Motion Space. Proceedings of the 32nd ACM International Conference on Multimedia, 7018–7026. DOI: 10.1145/3664647.3680864. Online publication date: 28-Oct-2024.
  • (2024) ADAPT: AI-Driven Artefact Purging Technique for IMU Based Motion Capture. Computer Graphics Forum. DOI: 10.1111/cgf.15172. Online publication date: 17-Oct-2024.
  • (2024) Pose-to-Motion: Cross-Domain Motion Retargeting with Pose Prior. Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 1–10. DOI: 10.1111/cgf.15170. Online publication date: 21-Aug-2024.
  • (2024) Modification of Skeletal Character Animation Using Inverse Kinematics Controllers. 2024 International Russian Smart Industry Conference (SmartIndustryCon), 553–557. DOI: 10.1109/SmartIndustryCon61328.2024.10515984. Online publication date: 25-Mar-2024.
  • (2023) Deep Learning-Based Motion Style Transfer Tools, Techniques and Future Challenges. Sensors 23(5), 2597. DOI: 10.3390/s23052597. Online publication date: 26-Feb-2023.
  • (2023) Upper Body Pose Estimation Using Deep Learning for a Virtual Reality Avatar. Applied Sciences 13(4), 2460. DOI: 10.3390/app13042460. Online publication date: 14-Feb-2023.
  • (2023) MOCHA: Real-Time Motion Characterization via Context Matching. SIGGRAPH Asia 2023 Conference Papers, 1–11. DOI: 10.1145/3610548.3618252. Online publication date: 10-Dec-2023.
  • (2023) RSMT: Real-time Stylized Motion Transition for Characters. ACM SIGGRAPH 2023 Conference Proceedings, 1–10. DOI: 10.1145/3588432.3591514. Online publication date: 23-Jul-2023.
  • (2023) Pose Representations for Deep Skeletal Animation. Computer Graphics Forum 41(8), 155–167. DOI: 10.1111/cgf.14632. Online publication date: 20-Mar-2023.
  • (2023) Dance Style Transfer with Cross-modal Transformer. 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 5047–5056. DOI: 10.1109/WACV56688.2023.00503. Online publication date: Jan-2023.

