State-Dependent Maximum Entropy Reinforcement Learning for Robot Long-Horizon Task Learning

  • Regular paper
  • Open access
  • Published: 24 January 2024
  • Volume 110, article number 19 (2024)
  • Deshuai Zheng1
  • Jin Yan1
  • Tao Xue1
  • Yong Liu1 (ORCID: 0000-0003-0138-3720)

Abstract

Task-oriented robot learning has shown significant potential with the development of Reinforcement Learning (RL) algorithms. However, learning long-horizon tasks remains a formidable challenge for robots because such tasks are inherently complex, typically comprising multiple diverse stages. General-purpose RL algorithms commonly suffer from slow convergence, or fail to converge at all, when applied to such tasks. These difficulties stem from local-optima traps and redundant exploration at the start of new stages or at the junction of two consecutive stages. To address them, we propose a novel state-dependent maximum entropy (SDME) reinforcement learning algorithm, which balances the trade-off between exploration and exploitation around three kinds of critical states arising from the structure of long-horizon tasks. We conducted experiments in an open-source simulation environment on two representative long-horizon tasks. The proposed SDME algorithm learns faster and more stably, requiring only one-third of the learning samples needed by baseline approaches. Furthermore, we assess the generalization ability of our method under randomly initialized conditions, and the results show that the success rate of the SDME algorithm is nearly twice that of the baselines. Our code will be available at https://github.com/Peter-zds/SDME.
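To make the idea of a state-dependent maximum-entropy objective concrete, the sketch below shows a SAC-style actor loss in which the entropy temperature is scaled by a per-state weight. This is a minimal, hypothetical reading based only on the abstract: the names (`GaussianPolicy`, `sdme_actor_loss`, `state_weight`) and the weighting rule are illustrative assumptions, not the authors' published formulation; see their repository for the actual implementation.

```python
# Illustrative sketch only: a SAC-style actor update where the entropy
# temperature alpha is modulated by a per-state weight w(s), one plausible
# reading of "state-dependent maximum entropy". Not the authors' code.
import torch
import torch.nn as nn


class GaussianPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs):
        h = self.body(obs)
        std = self.log_std(h).clamp(-5.0, 2.0).exp()
        dist = torch.distributions.Normal(self.mu(h), std)
        action = dist.rsample()                   # reparameterized sample
        log_prob = dist.log_prob(action).sum(-1)  # joint log-prob over action dims
        return action, log_prob


def sdme_actor_loss(policy, q_func, obs, alpha, state_weight):
    """SAC-style actor loss with a per-state entropy coefficient.

    `state_weight(obs)` is a hypothetical helper returning weights in [0, 1]
    that raise or lower the entropy bonus around critical states (e.g. stage
    junctions); its exact form is an assumption, not taken from the paper.
    """
    action, log_prob = policy(obs)
    w = state_weight(obs)  # shape: (batch,)
    # Plain SAC maximizes Q(s, a) + alpha * H(pi(.|s)); here alpha is scaled
    # per state, so exploration pressure varies with where the agent is.
    return (alpha * w * log_prob - q_func(obs, action)).mean()
```

In such a scheme, states flagged as critical (for instance near stage junctions) would receive a larger or smaller weight than ordinary states, shifting the policy between exploration and exploitation depending on where it is in the task.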




Acknowledgements

This work was funded by the National Natural Science Foundation of China (Grant No. 61473155), the Primary Research & Development Plan of Jiangsu Province (Grant No. BE2017301), and the Six Talent Peaks Project of Jiangsu Province (Grant No. GDZB-039).


Author information

Authors and Affiliations

  1. School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210000, China

    Deshuai Zheng, Jin Yan, Tao Xue & Yong Liu


Contributions

All authors contributed to the study conception and design. Material preparation was performed by Yong Liu. Deshuai Zheng proposed the method and verified its practicability through experiments. The first draft of the manuscript was written by Tao Xue and Jin Yan. All authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yong Liu.

Ethics declarations

Consent to participate

Informed consent was obtained from all individual participants included in the study.

Consent to Publish

The participant has consented to the submission of the case report to the journal.

Competing Interests

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Zheng, D., Yan, J., Xue, T. et al. State-Dependent Maximum Entropy Reinforcement Learning for Robot Long-Horizon Task Learning. J Intell Robot Syst 110, 19 (2024). https://doi.org/10.1007/s10846-024-02049-8


  • Received: 14 February 2023

  • Accepted: 31 December 2023

  • Published: 24 January 2024

  • DOI: https://doi.org/10.1007/s10846-024-02049-8


Keywords

  • Long-horizon task
  • Robot learning
  • Reinforcement learning