A Comparison of Various Approaches to Reinforcement Learning Algorithms for Multi-robot Box Pushing

  • Conference paper
  • First Online:
Advances in Engineering Research and Application (ICERA 2018)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 63)

Abstract

In this paper, a comparison of reinforcement learning algorithms and their performance on a robot box-pushing task is provided. The robot box-pushing problem is formulated both as a single-agent problem and as a multi-agent problem. A Q-learning algorithm is applied to the single-agent formulation, and three different Q-learning algorithms are applied to the multi-agent formulation. All four algorithms are evaluated in a dynamic environment comprising static obstacles, a static goal location, a dynamic box location, and dynamic agent positions. A simulation environment is developed to test the four algorithms, and their performance is compared through graphical analysis of the test results. The comparison shows that the newly applied reinforcement learning algorithm outperforms the previously applied algorithms on the robot box-pushing problem in a dynamic environment.
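The abstract describes tabular Q-learning applied to box pushing. As a rough illustration of the underlying method, the following is a minimal sketch of single-agent tabular Q-learning on a toy one-dimensional box-pushing track; the track size, reward values, and hyperparameters are illustrative assumptions, not the paper's actual environment or settings:

```python
import random

# Toy 1-D box-pushing task: the box sits on a track of cells 0..N-1 and the
# agent pushes it left or right; reaching the goal cell ends the episode.
# (An illustrative stand-in for the paper's 2-D multi-robot environment.)
N_CELLS = 10
GOAL = N_CELLS - 1
ACTIONS = (-1, +1)  # push left, push right

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}

def step(box, action):
    """Move the box one cell, clipped to the track; reward at the goal."""
    nxt = min(max(box + action, 0), N_CELLS - 1)
    reward = 1.0 if nxt == GOAL else -0.01  # small per-step cost
    return nxt, reward, nxt == GOAL

random.seed(0)
for episode in range(500):
    box, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(box, act)])
        nxt, r, done = step(box, a)
        # off-policy temporal-difference (Q-learning) update
        best_next = 0.0 if done else max(Q[(nxt, act)] for act in ACTIONS)
        Q[(box, a)] += ALPHA * (r + GAMMA * best_next - Q[(box, a)])
        box = nxt

# Greedy policy after training: push right from every non-goal cell.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_CELLS)]
print(policy[:-1])
```

The core of the method is the update rule Q(s,a) ← Q(s,a) + α[r + γ max_a' Q(s',a') − Q(s,a)]; the multi-agent variants compared in the paper differ mainly in how the state and action spaces are shared or decomposed across robots.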

Author information

Corresponding author

Correspondence to Hung Manh La.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Rahimi, M., Gibb, S., Shen, Y., La, H.M. (2019). A Comparison of Various Approaches to Reinforcement Learning Algorithms for Multi-robot Box Pushing. In: Fujita, H., Nguyen, D., Vu, N., Banh, T., Puta, H. (eds) Advances in Engineering Research and Application. ICERA 2018. Lecture Notes in Networks and Systems, vol 63. Springer, Cham. https://doi.org/10.1007/978-3-030-04792-4_6

  • DOI: https://doi.org/10.1007/978-3-030-04792-4_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-04791-7

  • Online ISBN: 978-3-030-04792-4

  • eBook Packages: Engineering, Engineering (R0)
