research-article · DOI: 10.1145/3570773.3570808

A Review of Path Planning Based on IQL and DQN

Published: 09 December 2022

Abstract

Path planning is a central problem in robot navigation: a well-planned path can greatly improve transport efficiency and keep the robot safe. Traditional path-planning methods solve the optimal-path problem to some extent, but they are far from sufficient. As machine learning has become a prominent research topic, path planning based on reinforcement learning and deep reinforcement learning has been widely studied. Q-learning, a foundational reinforcement learning algorithm, has long been applied to path planning and has been improved by combining it with other algorithms. Deep Q-network, a classical deep reinforcement learning algorithm, has been used to solve complex problems that traditional reinforcement learning cannot, particularly in path planning. This article presents current achievements in the improvement of Q-learning (IQL) and Deep Q-network (DQN). In the future, reinforcement learning and deep reinforcement learning are expected to yield more and better algorithms for problems of higher complexity and shorter required response times.
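To make the two algorithm families concrete, here is a minimal sketch of textbook tabular Q-learning applied to path planning on a hypothetical 5x5 grid world. The grid size, reward scheme, and hyperparameters are illustrative assumptions, not values drawn from the IQL papers surveyed in this review.

    import numpy as np

    # Hypothetical 5x5 grid world: start at (0, 0), goal at (4, 4).
    ROWS, COLS = 5, 5
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
    GOAL = (4, 4)
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1          # learning rate, discount, exploration

    Q = np.zeros((ROWS, COLS, len(ACTIONS)))       # one Q-value per (state, action)

    def step(state, action):
        """Move within the grid; -1 per move encourages short paths, +10 at the goal."""
        r = min(max(state[0] + ACTIONS[action][0], 0), ROWS - 1)
        c = min(max(state[1] + ACTIONS[action][1], 0), COLS - 1)
        return (r, c), (10.0 if (r, c) == GOAL else -1.0)

    for episode in range(500):
        state = (0, 0)
        while state != GOAL:
            # Epsilon-greedy exploration.
            if np.random.rand() < EPSILON:
                action = np.random.randint(len(ACTIONS))
            else:
                action = int(np.argmax(Q[state]))
            nxt, reward = step(state, action)
            # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
            Q[state][action] += ALPHA * (reward + GAMMA * np.max(Q[nxt]) - Q[state][action])
            state = nxt

When the state space is too large for a table, DQN approximates Q(s, a) with a neural network trained from an experience replay buffer against a slowly updated target network. The following sketch of one training step uses PyTorch; the network size, buffer size, and dummy transitions are again assumptions made for illustration.

    import random
    from collections import deque
    import torch
    import torch.nn as nn

    STATE_DIM, N_ACTIONS = 2, 4                    # (x, y) in, one Q-value per move out
    GAMMA = 0.99

    def make_net():
        # Small fully connected network approximating Q(s, a) for every action a.
        return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                             nn.Linear(64, N_ACTIONS))

    policy_net, target_net = make_net(), make_net()
    target_net.load_state_dict(policy_net.state_dict())   # start the two nets in sync
    optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)                         # experience replay buffer

    def train_step(batch_size=32):
        if len(replay) < batch_size:
            return
        batch = random.sample(list(replay), batch_size)
        s, a, r, s2, done = map(torch.stack, zip(*batch))
        # Q(s, a) for the actions actually taken, under the current policy network.
        q = policy_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        # Bootstrapped target from the frozen target network (no gradient flows here).
        with torch.no_grad():
            target = r + GAMMA * target_net(s2).max(1).values * (1 - done)
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Dummy random transitions so the sketch runs end to end; a real agent would
    # store (state, action, reward, next_state, done) tuples from the environment.
    for _ in range(64):
        replay.append((torch.randn(STATE_DIM),
                       torch.tensor(random.randrange(N_ACTIONS)),
                       torch.tensor(-1.0),
                       torch.randn(STATE_DIM),
                       torch.tensor(0.0)))
    train_step()

In a complete agent, an epsilon-greedy policy would generate the stored transitions, and target_net would be re-synchronized with policy_net every few hundred steps.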


Cited By

  • (2023) Maze Solving Using Deep Q-Network. Proceedings of the 2023 6th International Conference on Advances in Robotics, pp. 1-5. DOI: 10.1145/3610419.3610458. Online publication date: 5-Jul-2023.


        Published In

        ISAIMS '22: Proceedings of the 3rd International Symposium on Artificial Intelligence for Medicine Sciences
October 2022, 594 pages
ISBN: 9781450398442
DOI: 10.1145/3570773

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Conference

        ISAIMS 2022

        Acceptance Rates

        Overall Acceptance Rate 53 of 112 submissions, 47%

