Neural Categorical Priors for Physics-Based Character Control

Published: 05 December 2023

Abstract

Recent advances in learning reusable motion priors have demonstrated their effectiveness in generating naturalistic behaviors. In this paper, we propose a new learning framework in this paradigm for controlling physics-based characters with improved motion quality and diversity over existing methods. The proposed method uses reinforcement learning (RL) to initially track and imitate life-like movements from unstructured motion clips using the discrete information bottleneck, as adopted in the Vector Quantized Variational AutoEncoder (VQ-VAE). This structure compresses the most relevant information from the motion clips into a compact yet informative latent space, i.e., a discrete space over vector-quantized codes. By sampling codes in this space from a trained categorical prior distribution, high-quality life-like behaviors can be generated, similar to the usage of VQ-VAE in computer vision. Although this prior distribution can be trained with the supervision of the encoder's output, it follows the original motion-clip distribution in the dataset and can lead to imbalanced behaviors in our setting. To address this issue, we further propose a technique named prior shifting to adjust the prior distribution using curiosity-driven RL. The resulting distribution is demonstrated to offer sufficient behavioral diversity and to significantly facilitate upper-level policy learning for downstream tasks. We conduct comprehensive experiments using humanoid characters on two challenging downstream tasks: sword-shield striking and a two-player boxing game. Our results demonstrate that the proposed framework is capable of controlling the character to perform movements of considerably high quality in terms of behavioral strategies, diversity, and realism. Videos, code, and data are available at https://tencent-roboticsx.github.io/NCP/.
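
The abstract rests on two operations: quantizing a continuous encoder output against a learned codebook, and sampling discrete codes from a categorical prior that is later shifted. The minimal NumPy sketch below illustrates only these two operations under assumed shapes and placeholder names (codebook, quantize, sample_code); it is not the authors' implementation, which learns the codebook jointly with a tracking policy and adjusts the prior with curiosity-driven RL.

```python
# Minimal sketch of the VQ-VAE-style bottleneck and categorical prior sampling.
# All sizes and names here are illustrative assumptions, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

num_codes, code_dim = 64, 32                       # assumed codebook size and embedding width
codebook = rng.normal(size=(num_codes, code_dim))  # in practice learned jointly with the encoder

def quantize(z_e):
    """Snap a continuous encoder output z_e to its nearest codebook entry (the discrete bottleneck)."""
    dists = np.sum((codebook - z_e) ** 2, axis=1)
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

def sample_code(prior_logits):
    """Sample a code index from a categorical prior over the codebook."""
    probs = np.exp(prior_logits - prior_logits.max())
    probs /= probs.sum()
    return int(rng.choice(num_codes, p=probs))

# Imitation phase: the encoder's output for the current motion frame is quantized before decoding.
z_e = rng.normal(size=code_dim)            # stand-in for an encoder output
code_idx, z_q = quantize(z_e)

# Generation phase: codes are drawn from the trained (and later shifted) categorical prior instead.
prior_logits = np.zeros(num_codes)         # uniform stand-in; the paper trains and then shifts this distribution
z_for_decoder = codebook[sample_code(prior_logits)]
```

In the full system described by the abstract, the decoder acts as the low-level controller and an upper-level task policy selects among the discrete codes, which is what makes the compact categorical space convenient for downstream learning.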

Supplementary Material

MP4 File (papers_841s4-file3.mp4): supplemental material

References

[1]
OpenAI: Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Jozefowicz, Bob McGrew, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, et al. 2020. Learning dexterous in-hand manipulation. The International Journal of Robotics Research 39, 1 (2020), 3--20.
[2]
Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. 2016. Unifying count-based exploration and intrinsic motivation. Advances in Neural Information Processing Systems 29 (2016).
[3]
Kevin Bergamin, Simon Clavet, Daniel Holden, and James Richard Forbes. 2019. DReCon: data-driven responsive control of physics-based characters. ACM Transactions On Graphics (TOG) 38, 6 (2019), 1--11.
[4]
Christopher M Bishop and Nasser M Nasrabadi. 2006. Pattern Recognition and Machine Learning. Vol. 4. Springer.
[5]
Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2015. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349 (2015).
[6]
David F Brown, Adriano Macchietto, KangKang Yin, and Victor Zordan. 2013. Control of rotational dynamics for ground behaviors. In Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation. 55--61.
[7]
Kyungmin Cho, Chaelin Kim, Jungjin Park, Joonkyu Park, and Junyong Noh. 2021. Motion recommendation for online character control. ACM Transactions on Graphics (TOG) 40, 6 (2021), 1--16.
[8]
Rémi Coulom. 2005. Bayesian Elo Rating. https://www.remi-coulom.fr/Bayesian-Elo/
[9]
Danilo Borges da Silva, Rubens Fernandes Nunes, Creto Augusto Vidal, Joaquim B Cavalcante-Neto, Paul G Kry, and Victor B Zordan. 2017. Tunable robustness: An artificial contact strategy with virtual actuator control for balance. In Computer Graphics Forum, Vol. 36. Wiley Online Library, 499--510.
[10]
Marco Da Silva, Yeuhi Abe, and Jovan Popović. 2008. Simulation of human motion data using short-horizon model-predictive control. In Computer Graphics Forum, Vol. 27. Wiley Online Library, 371--380.
[11]
Kai Ding, Libin Liu, Michiel Van de Panne, and KangKang Yin. 2015. Learning reduced-order feedback policies for motion skills. In Proceedings of the 14th ACM SIGGRAPH/Eurographics Symposium on Computer Animation. 83--92.
[12]
Levi Fussell, Kevin Bergamin, and Daniel Holden. 2021. Supertrack: Motion tracking for physically simulated characters using supervised learning. ACM Transactions on Graphics (TOG) 40, 6 (2021), 1--13.
[13]
Thomas Geijtenbeek, Michiel Van De Panne, and A Frank Van Der Stappen. 2013. Flexible muscle-based locomotion for bipedal creatures. ACM Transactions on Graphics (TOG) 32, 6 (2013), 1--11.
[14]
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2020. Generative adversarial networks. Commun. ACM 63, 11 (2020), 139--144.
[15]
Lei Han, Jiechao Xiong, Peng Sun, Xinghai Sun, Meng Fang, Qingwei Guo, Qiaobo Chen, Tengfei Shi, Hongsheng Yu, Xipeng Wu, and Zhengyou Zhang. 2020. Tstarbot-x: An open-sourced and comprehensive study for efficient league training in starcraft ii full game. arXiv preprint arXiv:2011.13729 (2020).
[16]
Mohamed Hassan, Yunrong Guo, Tingwu Wang, Michael Black, Sanja Fidler, and Xue Bin Peng. 2023. Synthesizing Physical Character-Scene Interactions. arXiv preprint arXiv:2302.00883 (2023).
[17]
Gustav Eje Henter, Simon Alexanderson, and Jonas Beskow. 2020. Moglow: Probabilistic and controllable motion synthesis using normalising flows. ACM Transactions on Graphics (TOG) 39, 6 (2020), 1--14.
[18]
Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-vae: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations (ICLR).
[19]
Edmond SL Ho, Taku Komura, and Chiew-Lan Tai. 2010. Spatial relationship preserving character motion adaptation. In ACM SIGGRAPH 2010 papers. 1--8.
[20]
Jonathan Ho and Stefano Ermon. 2016. Generative adversarial imitation learning. Advances in Neural Information Processing Systems (NeurIPS) 29 (2016).
[21]
Jessica K Hodgins, Wayne L Wooten, David C Brogan, and James F O'Brien. 1995. Animating human athletics. In Proceedings of the 22nd annual conference on Computer graphics and interactive techniques. 71--78.
[22]
Sumit Jain and C Karen Liu. 2011. Modal-space control for articulated characters. ACM Transactions on Graphics (TOG) 30, 5 (2011), 1--12.
[23]
Jordan Juravsky, Yunrong Guo, Sanja Fidler, and Xue Bin Peng. 2022. PADL: Language-Directed Physics-Based Character Control. In SIGGRAPH Asia 2022 Conference Papers. 1--9.
[24]
Manmyung Kim, Youngseok Hwang, Kyunglyul Hyun, and Jehee Lee. 2012. Tiling motion patches. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation. 117--126.
[25]
Manmyung Kim, Kyunglyul Hyun, Jongmin Kim, and Jehee Lee. 2009. Synchronized multi-character motion editing. ACM Transactions on Graphics (TOG) 28, 3 (2009), 1--9.
[26]
Diederik P Kingma and Max Welling. 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013).
[27]
Taesoo Kwon, Young-Sang Cho, Sang I Park, and Sung Yong Shin. 2008. Two-character motion analysis and synthesis. IEEE Transactions on Visualization and Computer Graphics 14, 3 (2008), 707--720.
[28]
Taesoo Kwon and Jessica K Hodgins. 2017. Momentum-mapped inverted pendulum models for controlling dynamic human motions. ACM Transactions on Graphics (TOG) 36, 1 (2017), 1--14.
[29]
Jehee Lee and Kang Hoon Lee. 2004. Precomputing avatar behavior from human motion data. In Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation. 79--87.
[30]
Kang Hoon Lee, Myung Geol Choi, and Jehee Lee. 2006. Motion patches: building blocks for virtual environments annotated with motion data. In ACM SIGGRAPH 2006 Papers. 898--906.
[31]
Seunghwan Lee, Phil Sik Chang, and Jehee Lee. 2022. Deep Compliant Control. In ACM SIGGRAPH 2022 Conference Proceedings. 1--9.
[32]
Seyoung Lee, Sunmin Lee, Yongwoo Lee, and Jehee Lee. 2021. Learning a family of motor skills from a single motion clip. ACM Transactions on Graphics (TOG) 40, 4 (2021), 1--13.
[33]
Yoonsang Lee, Sungeun Kim, and Jehee Lee. 2010. Data-driven biped control. In ACM SIGGRAPH 2010 papers. 1--8.
[34]
Peizhuo Li, Kfir Aberman, Zihan Zhang, Rana Hanocka, and Olga Sorkine-Hornung. 2022. Ganimator: Neural motion synthesis from a single sequence. ACM Transactions on Graphics (TOG) 41, 4 (2022), 1--12.
[35]
Hung Yu Ling, Fabio Zinno, George Cheng, and Michiel Van De Panne. 2020. Character controllers using motion vaes. ACM Transactions on Graphics (TOG) 39, 4 (2020), 40--1.
[36]
C Karen Liu, Aaron Hertzmann, and Zoran Popović. 2006. Composition of complex optimal multi-character motions. In Proceedings of the 2006 ACM SIGGRAPH/Eurographics symposium on Computer animation. 215--222.
[37]
Libin Liu and Jessica Hodgins. 2018. Learning basketball dribbling skills using trajectory optimization and deep reinforcement learning. ACM Transactions on Graphics (TOG) 37, 4 (2018), 1--14.
[38]
Libin Liu, Michiel Van De Panne, and KangKang Yin. 2016. Guided learning of control graphs for physics-based characters. ACM Transactions on Graphics (TOG) 35, 3 (2016), 1--14.
[39]
Libin Liu, KangKang Yin, Michiel Van de Panne, Tianjia Shao, and Weiwei Xu. 2010. Sampling-based contact-rich motion control. In ACM SIGGRAPH 2010 papers. 1--10.
[40]
Siqi Liu, Guy Lever, Zhe Wang, Josh Merel, SM Ali Eslami, Daniel Hennes, Wojciech M Czarnecki, Yuval Tassa, Shayegan Omidshafiei, Abbas Abdolmaleki, et al. 2022. From motor control to team play in simulated humanoid football. Science Robotics 7, 69 (2022), eabo0235.
[41]
Adriano Macchietto, Victor Zordan, and Christian R Shelton. 2009. Momentum control for balance. In ACM SIGGRAPH 2009 papers. 1--8.
[42]
Viktor Makoviychuk, Lukasz Wawrzyniak, Yunrong Guo, Michelle Lu, Kier Storey, Miles Macklin, David Hoeller, Nikita Rudin, Arthur Allshire, Ankur Handa, and Gavriel State. 2021. Isaac Gym: High Performance GPU-Based Physics Simulation For Robot Learning.
[43]
Josh Merel, Saran Tunyasuvunakool, Arun Ahuja, Yuval Tassa, Leonard Hasenclever, Vu Pham, Tom Erez, Greg Wayne, and Nicolas Heess. 2020. Catch & carry: reusable neural controllers for vision-guided whole-body tasks. ACM Transactions on Graphics (TOG) 39, 4 (2020), 39--1.
[44]
Igor Mordatch, Emanuel Todorov, and Zoran Popović. 2012. Discovery of complex behaviors through contact-invariant optimization. ACM Transactions on Graphics (TOG) 31, 4 (2012), 1--8.
[45]
Uldarico Muico, Yongjoon Lee, Jovan Popović, and Zoran Popović. 2009. Contact-aware nonlinear control of dynamic characters. In ACM SIGGRAPH 2009 papers. 1--9.
[46]
Uldarico Muico, Jovan Popović, and Zoran Popović. 2011. Composite control of physically simulated characters. ACM Transactions on Graphics (TOG) 30, 3 (2011), 1--11.
[47]
Georg Ostrovski, Marc G Bellemare, Aäron Oord, and Rémi Munos. 2017. Count-based exploration with neural density models. In International Conference on Machine Learning. PMLR, 2721--2730.
[48]
Soohwan Park, Hoseok Ryu, Seyoung Lee, Sunmin Lee, and Jehee Lee. 2019. Learning predict-and-simulate policies from unorganized human motion data. ACM Transactions on Graphics (TOG) 38, 6 (2019), 1--11.
[49]
Xue Bin Peng, Pieter Abbeel, Sergey Levine, and Michiel Van de Panne. 2018. Deepmimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Transactions on Graphics (TOG) 37, 4 (2018), 1--14.
[50]
Xue Bin Peng, Glen Berseth, and Michiel Van de Panne. 2016. Terrain-adaptive locomotion skills using deep reinforcement learning. ACM Transactions on Graphics (TOG) 35, 4 (2016), 1--12.
[51]
Xue Bin Peng, Michael Chang, Grace Zhang, Pieter Abbeel, and Sergey Levine. 2019. Mcp: Learning composable hierarchical control with multiplicative compositional policies. Advances in Neural Information Processing Systems 32 (2019).
[52]
Xue Bin Peng, Yunrong Guo, Lina Halper, Sergey Levine, and Sanja Fidler. 2022. Ase: Large-scale reusable adversarial skill embeddings for physically simulated characters. ACM Transactions on Graphics (TOG) 41, 4 (2022), 1--17.
[53]
Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine, and Angjoo Kanazawa. 2021. Amp: Adversarial motion priors for stylized physics-based character control. ACM Transactions on Graphics (TOG) 40, 4 (2021), 1--20.
[54]
Marc H Raibert and Jessica K Hodgins. 1991. Animation of dynamic legged locomotion. In Proceedings of the 18th annual conference on Computer graphics and interactive techniques. 349--358.
[55]
Ali Razavi, Aaron Van den Oord, and Oriol Vinyals. 2019. Generating diverse highfidelity images with vq-vae-2. Advances in Neural Information Processing Systems (NeurIPS) 32 (2019).
[56]
Aurko Roy, Ashish Vaswani, Arvind Neelakantan, and Niki Parmar. 2018. Theory and experiments on vector quantized autoencoders. arXiv preprint arXiv:1805.11063 (2018).
[57]
Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. 2015. Policy distillation. arXiv preprint arXiv:1511.06295 (2015).
[58]
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017).
[59]
Huajie Shao, Shuochao Yao, Dachun Sun, Aston Zhang, Shengzhong Liu, Dongxin Liu, Jun Wang, and Tarek Abdelzaher. 2020. Controlvae: Controllable variational autoencoder. In International Conference on Machine Learning (ICML). 8655--8664.
[60]
Hubert PH Shum, Taku Komura, Masashi Shiraishi, and Shuntaro Yamazaki. 2008b. Interaction patches for multi-character animation. ACM Transactions on Graphics (TOG) 27, 5 (2008), 1--8.
[61]
Hubert PH Shum, Taku Komura, and Shuntaro Yamazaki. 2007. Simulating competitive interactions using singly captured motions. In Proceedings of the 2007 ACM symposium on Virtual reality software and technology. 65--72.
[62]
Hubert PH Shum, Taku Komura, and Shuntaro Yamazaki. 2008a. Simulating interactions of avatars in high dimensional state space. In Proceedings of the 2008 Symposium on interactive 3D Graphics and Games. 131--138.
[63]
Hubert PH Shum, Taku Komura, and Shuntaro Yamazaki. 2010. Simulating multiple character interactions with collaborative and adversarial goals. IEEE Transactions on Visualization and Computer Graphics 18, 5 (2010), 741--752.
[64]
Kwang Won Sok, Manmyung Kim, and Jehee Lee. 2007. Simulating biped behaviors from human motion data. In ACM SIGGRAPH 2007 papers. 107--es.
[65]
Alexander L Strehl and Michael L Littman. 2008. An analysis of model-based interval estimation for Markov decision processes. J. Comput. System Sci. 74, 8 (2008), 1309--1331.
[66]
Peng Sun, Jiechao Xiong, Lei Han, Xinghai Sun, Shuxing Li, Jiawei Xu, Meng Fang, and Zhengyou Zhang. 2020. Tleague: A framework for competitive self-play based distributed multi-agent reinforcement learning. arXiv preprint arXiv:2011.12895 (2020).
[67]
Jie Tan, Yuting Gu, C Karen Liu, and Greg Turk. 2014. Learning bicycle stunts. ACM Transactions on Graphics (TOG) 33, 4 (2014), 1--12.
[68]
Yunhao Tang and Shipra Agrawal. 2020. Discretizing continuous action space for on-policy optimization. In Proceedings of the aaai conference on artificial intelligence, Vol. 34. 5981--5988.
[69]
Chen Tessler, Yoni Kasten, Yunrong Guo, Shie Mannor, Gal Chechik, and Xue Bin Peng. 2023. CALM: Conditional Adversarial Latent Models for Directable Virtual Characters. In ACM SIGGRAPH 2023 Conference Proceedings (Los Angeles, CA, USA) (SIGGRAPH '23). Association for Computing Machinery, New York, NY, USA.
[70]
Aaron Van Den Oord, Oriol Vinyals, et al. 2017. Neural discrete representation learning. Advances in Neural Information Processing Systems (NeurIPS) 30 (2017).
[71]
Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. 2019. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 575, 7782 (2019), 350--354.
[72]
Kevin Wampler, Erik Andersen, Evan Herbst, Yongjoon Lee, and Zoran Popović. 2010. Character animation in two-player adversarial games. ACM Transactions on Graphics (TOG) 29, 3 (2010), 1--13.
[73]
Jungdam Won, Deepak Gopinath, and Jessica Hodgins. 2020. A scalable approach to control diverse behaviors for physically simulated characters. ACM Transactions on Graphics (TOG) 39, 4 (2020), 33--1.
[74]
Jungdam Won, Deepak Gopinath, and Jessica Hodgins. 2021. Control strategies for physically simulated characters performing two-player competitive sports. ACM Transactions on Graphics (TOG) 40, 4 (2021), 1--11.
[75]
Jungdam Won, Deepak Gopinath, and Jessica Hodgins. 2022. Physics-based character controllers using conditional vaes. ACM Transactions on Graphics (TOG) 41, 4 (2022), 1--12.
[76]
Jungdam Won and Jehee Lee. 2019. Learning body shape variation in physics-based characters. ACM Transactions on Graphics (TOG) 38, 6 (2019), 1--12.
[77]
Jungdam Won, Kyungho Lee, Carol O'Sullivan, Jessica K Hodgins, and Jehee Lee. 2014. Generating and ranking diverse multi-character interactions. ACM Transactions on Graphics (TOG) 33, 6 (2014), 1--12.
[78]
Zhaoming Xie, Hung Yu Ling, Nam Hee Kim, and Michiel van de Panne. 2020. Allsteps: curriculum-driven learning of stepping stone skills. In Computer Graphics Forum, Vol. 39. Wiley Online Library, 213--224.
[79]
Zhaoming Xie, Sebastian Starke, Hung Yu Ling, and Michiel van de Panne. 2022. Learning soccer juggling skills with layer-wise mixture-of-experts. In ACM SIGGRAPH 2022 Conference Proceedings. 1--9.
[80]
Heyuan Yao, Zhenhua Song, Baoquan Chen, and Libin Liu. 2022. ControlVAE: ModelBased Learning of Generative Controllers for Physics-Based Characters. ACM Transactions on Graphics (TOG) 41, 6 (2022), 1--16.
[81]
KangKang Yin, Stelian Coros, Philippe Beaudoin, and Michiel Van de Panne. 2008. Continuation methods for adapting simulated skills. In ACM SIGGRAPH 2008 papers. 1--7.
[82]
KangKang Yin, Kevin Loken, and Michiel Van de Panne. 2007. Simbicon: Simple biped locomotion control. ACM Transactions on Graphics (TOG) 26, 3 (2007), 105--es.
[83]
Zhiqi Yin, Zeshi Yang, Michiel Van De Panne, and KangKang Yin. 2021. Discovering diverse athletic jumping strategies. ACM Transactions on Graphics (TOG) 40, 4 (2021), 1--17.
[84]
Wenhao Yu, Greg Turk, and C Karen Liu. 2018. Learning symmetric and low-energy locomotion. ACM Transactions on Graphics (TOG) 37, 4 (2018), 1--12.
[85]
Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, and Hao Li. 2019. On the continuity of rotation representations in neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 5745--5753.
[86]
Victor Brian Zordan and Jessica K Hodgins. 2002. Motion capture-driven simulations that hit and react. In Proceedings of the 2002 ACM SIGGRAPH/Eurographics symposium on Computer animation. 89--96.

      Published In

ACM Transactions on Graphics, Volume 42, Issue 6
December 2023
1565 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3632123
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 05 December 2023
      Published in TOG Volume 42, Issue 6


      Author Tags

      1. VQ-VAE
      2. character animation
      3. generative model
      4. multi-agent
      5. reinforcement learning

      Qualifiers

      • Research-article

      Bibliometrics & Citations

      Article Metrics

• Downloads (Last 12 months): 194
• Downloads (Last 6 weeks): 17
      Reflects downloads up to 03 Oct 2024

      Cited By

• (2024) Climbing Motion Synthesis using Reinforcement Learning. Journal of the Korea Computer Graphics Society 30, 2, 21-29. DOI: 10.15701/kcgs.2024.30.2.21. Online publication date: 1-Jun-2024.
• (2024) Categorical Codebook Matching for Embodied Character Controllers. ACM Transactions on Graphics 43, 4, 1-14. DOI: 10.1145/3658209. Online publication date: 19-Jul-2024.
• (2024) MoConVQ: Unified Physics-Based Motion Control via Scalable Discrete Representations. ACM Transactions on Graphics 43, 4, 1-21. DOI: 10.1145/3658137. Online publication date: 19-Jul-2024.
• (2024) Physics-based Scene Layout Generation from Human Motion. ACM SIGGRAPH 2024 Conference Papers, 1-10. DOI: 10.1145/3641519.3657517. Online publication date: 13-Jul-2024.
• (2024) Strategy and Skill Learning for Physics-based Table Tennis Animation. ACM SIGGRAPH 2024 Conference Papers, 1-11. DOI: 10.1145/3641519.3657437. Online publication date: 13-Jul-2024.
• (2024) VMP: Versatile Motion Priors for Robustly Tracking Motion on Physical Characters. Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 1-11. DOI: 10.1111/cgf.15175. Online publication date: 21-Aug-2024.
• (2024) An Efficient Model-Based Approach on Learning Agile Motor Skills without Reinforcement. 2024 IEEE International Conference on Robotics and Automation (ICRA), 5724-5730. DOI: 10.1109/ICRA57147.2024.10611560. Online publication date: 13-May-2024.
• (2024) Learning Highly Dynamic Behaviors for Quadrupedal Robots. 2024 IEEE International Conference on Robotics and Automation (ICRA), 9183-9189. DOI: 10.1109/ICRA57147.2024.10610440. Online publication date: 13-May-2024.
• (2024) Lightweight Physics-Based Character for Generating Sensible Postures in Dynamic Environments. IEEE Access 12, 89660-89678. DOI: 10.1109/ACCESS.2024.3417220. Online publication date: 2024.
• (2024) Lifelike agility and play in quadrupedal robots using reinforcement learning and generative pre-trained models. Nature Machine Intelligence 6, 7, 787-798. DOI: 10.1038/s42256-024-00861-3. Online publication date: 5-Jul-2024.
