Abstract
Safe driving policies are key to realizing adaptive cruise control for autonomous vehicles in highway environments. In this paper, reinforcement learning is applied to decision-making for autonomous driving. To address the difficulty that current reinforcement learning methods have in handling the randomness and uncertainty of driving environments, a model-free analysis of Lyapunov stability and H∞ performance is incorporated into the Actor-Critic algorithm, improving the stability and robustness of the learned policy. The safety of each candidate action is judged against a safety threshold, improving the safety of behavioral decisions. We also design a set of reward functions tailored to the safety and efficiency requirements of driving decisions in highway environments. The results show that the method provides safe driving strategies for driverless vehicles both under normal road conditions and in environments with unexpected situations, enabling the vehicles to drive safely.
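The safety-threshold mechanism described above can be sketched as a simple action filter: the agent prefers the policy's favored action, but only among actions whose estimated safety clears the threshold. This is a minimal illustration, not the paper's implementation; the function name `filter_safe_action`, the use of per-action safety scores, and the fallback rule are all assumptions for the sketch.

```python
import numpy as np

def filter_safe_action(policy_probs, safety_scores, threshold=0.5):
    """Pick the highest-probability action whose estimated safety
    clears the threshold; fall back to the safest action if none do."""
    policy_probs = np.asarray(policy_probs, dtype=float)
    safety_scores = np.asarray(safety_scores, dtype=float)
    safe_mask = safety_scores >= threshold
    if safe_mask.any():
        # Mask out unsafe actions, then follow the policy among the rest.
        masked = np.where(safe_mask, policy_probs, -np.inf)
        return int(np.argmax(masked))
    # No action is deemed safe: choose the least unsafe one.
    return int(np.argmax(safety_scores))

# Example: the policy prefers action 0, but its safety score (0.2) is
# below the 0.5 threshold, so the filter picks action 1 instead.
print(filter_safe_action([0.6, 0.3, 0.1], [0.2, 0.9, 0.7]))  # → 1
```

In practice the safety score for each action could come from a separate critic head or an uncertainty estimate; the fallback branch ensures the filter always returns some action rather than stalling when no option clears the threshold.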
© 2021 Springer Nature Switzerland AG
Cite this paper
Jiang, Z., Wang, Z., Cui, X., Zheng, C. (2021). Intelligent Safety Decision-Making for Autonomous Vehicle in Highway Environment. In: Liu, XJ., Nie, Z., Yu, J., Xie, F., Song, R. (eds) Intelligent Robotics and Applications. ICIRA 2021. Lecture Notes in Computer Science(), vol 13016. Springer, Cham. https://doi.org/10.1007/978-3-030-89092-6_64
Print ISBN: 978-3-030-89091-9
Online ISBN: 978-3-030-89092-6