DOI: 10.1145/3528233.3530719

Deep Compliant Control

Published: 24 July 2022

Abstract

In many physical interactions, such as opening doors and playing sports, humans act compliantly, moving in various ways to avoid large impacts or to manipulate objects. This paper builds a framework for the simulation and control of humanoids that creates physically compliant interactions with the surroundings. The framework generates a broad spectrum of movements, ranging from passive reactions to external physical perturbations to active manipulations with clear intentions. Technical challenges include defining compliance, reproducing physically reliable movements, and robustly controlling under-actuated dynamical systems. The key technical contribution is a two-level control architecture based on deep reinforcement learning that imitates human movements while adjusting the character's body to external perturbations. The controller minimizes both the interaction forces and the control torques required for imitation, and we demonstrate its effectiveness on various motor skills, including opening doors, balancing a ball, and running hand in hand.
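As a concrete illustration of the control recipe sketched in the abstract, below is a minimal single-joint example of compliant target adjustment: an admittance-style update shifts the imitation target in response to an external torque, and a low-level PD controller tracks the shifted target. The function names, gains, and toy dynamics are illustrative assumptions for exposition only; they are not the paper's two-level architecture, whose compliance behavior comes from a learned policy rather than fixed gains.

    # Minimal sketch (hypothetical names and hand-picked gains, not the paper's
    # controller): an admittance model shifts the imitation target under an
    # external torque, and a PD controller tracks the shifted target.

    def admittance_adjust(q_ref, q_adj, dq_adj, tau_ext, dt,
                          inertia=1.0, damping=8.0, stiffness=60.0):
        """Move the adjusted target q_adj with a mass-damper-spring response
        to the external torque tau_ext, anchored at the reference q_ref."""
        ddq = (tau_ext - damping * dq_adj - stiffness * (q_adj - q_ref)) / inertia
        dq_adj += ddq * dt
        q_adj += dq_adj * dt
        return q_adj, dq_adj

    def pd_torque(q, dq, q_target, kp=300.0, kd=20.0):
        """Low-level PD torque that tracks the compliantly adjusted target."""
        return kp * (q_target - q) - kd * dq

    # Usage: push with a constant torque on a joint holding a reference pose.
    q_ref = 0.3                      # imitation target (rad)
    q, dq = 0.3, 0.0                 # simulated joint angle and velocity
    q_adj, dq_adj = q_ref, 0.0       # compliantly adjusted target
    tau_ext, dt, joint_inertia = 2.0, 1.0 / 600.0, 0.5
    for _ in range(600):
        q_adj, dq_adj = admittance_adjust(q_ref, q_adj, dq_adj, tau_ext, dt)
        tau = pd_torque(q, dq, q_adj)
        dq += (tau + tau_ext) / joint_inertia * dt   # toy forward dynamics
        q += dq * dt
    print(f"adjusted target: {q_adj:.3f} rad, joint angle: {q:.3f} rad")

Instead of resisting the push at full stiffness, the joint yields by roughly tau_ext / stiffness, which loosely mirrors the compliant behavior described in the abstract, where interaction forces are reduced at the cost of exact imitation.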

Supplementary Material

Appendix (appendix.pdf)
MP4 File (video.mp4)
Supplemental video





Published In

SIGGRAPH '22: ACM SIGGRAPH 2022 Conference Proceedings
July 2022
553 pages
ISBN:9781450393379
DOI:10.1145/3528233
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 24 July 2022


Author Tags

  1. Admittance Control
  2. Character Animation
  3. Compliant Control
  4. Deep Reinforcement Learning
  5. Generative Adversarial Imitation Learning
  6. Impedance Control
  7. Stiffness

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

SIGGRAPH '22

Acceptance Rates

Overall Acceptance Rate: 1,822 of 8,601 submissions, 21%


Bibliometrics & Citations

Article Metrics

  • Downloads (Last 12 months): 98
  • Downloads (Last 6 weeks): 19

Reflects downloads up to 22 Nov 2024


Cited By

  • (2024) ReGAIL: Toward Agile Character Control From a Single Reference Motion. Proceedings of the 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games, 10.1145/3677388.3696330, 1-10. Online publication date: 21-Nov-2024.
  • (2024) Deep Compliant Control for Legged Robots. 2024 IEEE International Conference on Robotics and Automation (ICRA), 10.1109/ICRA57147.2024.10611209, 11421-11427. Online publication date: 13-May-2024.
  • (2023) Discovering Fatigued Movements for Virtual Character Animation. SIGGRAPH Asia 2023 Conference Papers, 10.1145/3610548.3618176, 1-12. Online publication date: 10-Dec-2023.
  • (2023) DROP: Dynamics Responses from Human Motion Prior and Projective Dynamics. SIGGRAPH Asia 2023 Conference Papers, 10.1145/3610548.3618175, 1-11. Online publication date: 10-Dec-2023.
  • (2023) Too Stiff, Too Strong, Too Smart. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 10.1145/3606935, 6(3), 1-17. Online publication date: 24-Aug-2023.
  • (2023) Locomotion-Action-Manipulation: Synthesizing Human-Scene Interactions in Complex 3D Environments. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 10.1109/ICCV51070.2023.00886, 9629-9640. Online publication date: 1-Oct-2023.
  • (2023) Character hit reaction animations using physics and inverse kinematics. Computer Animation and Virtual Worlds, 10.1002/cav.2170, 34(3-4). Online publication date: 16-May-2023.