Extended Abstract
DOI: 10.1145/3573900.3593631

Autonomous Agent for Beyond Visual Range Air Combat: A Deep Reinforcement Learning Approach

Published: 21 June 2023

Abstract

This work contributes to the development of an agent based on deep reinforcement learning that is capable of acting in a beyond-visual-range (BVR) air combat simulation environment. The paper presents an overview of how we build an agent representing a high-performance fighter aircraft that can learn and improve its role in BVR combat over time, based on rewards calculated from operational metrics. In addition, through self-play experiments, we expect the agent to generate new air combat tactics never seen before. Finally, we intend to use virtual simulation to examine a real pilot's ability to interact with the trained agent in the same environment and to compare their performances. This research will contribute to air combat training by developing agents that can interact with real pilots and help them improve their performance in air defense missions.
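The extended abstract does not include an implementation, but the two ideas it highlights, rewards built from operational metrics and self-play against earlier versions of the agent, can be illustrated with a short sketch. The environment state fields, the reward terms and weights, and the opponent-pool class below are hypothetical placeholders chosen for illustration; they are not the authors' code.

```python
# Illustrative sketch only. The state fields, reward weights, and self-play
# pool below are hypothetical; they are not taken from the paper.
import random
from dataclasses import dataclass


@dataclass
class BVRState:
    """Minimal tactical state for one simulation step (placeholder fields)."""
    distance_km: float     # slant range to the opponent
    closure_rate: float    # positive when closing on the opponent
    missiles_left: int     # remaining own missiles
    enemy_locked: bool     # True if the opponent has a radar lock on us


def operational_reward(state: BVRState, missile_hit: bool, was_hit: bool) -> float:
    """Reward shaped from operational metrics, as the abstract describes.

    The individual terms and weights are arbitrary examples of such metrics
    (kills, losses, exposure to enemy lock, engagement geometry).
    """
    reward = 0.0
    if missile_hit:
        reward += 10.0                      # successful engagement
    if was_hit:
        reward -= 10.0                      # own aircraft shot down
    if state.enemy_locked:
        reward -= 0.5                       # penalize exposure to enemy weapons
    reward += 0.01 * max(0.0, state.closure_rate)  # favor offensive geometry
    return reward


class SelfPlayPool:
    """Keeps frozen snapshots of past policies to serve as opponents."""

    def __init__(self) -> None:
        self.snapshots: list = []

    def add(self, policy) -> None:
        self.snapshots.append(policy)

    def sample_opponent(self):
        # Early in training there may be no snapshot yet; the caller can then
        # fall back to a scripted baseline opponent.
        return random.choice(self.snapshots) if self.snapshots else None
```

In the actual study the policy would be a deep neural network updated by a standard deep reinforcement learning algorithm, with the learning agent and its frozen snapshots flying against each other inside the BVR combat simulation; the sketch above only fixes the shape of the reward signal and the opponent pool that such a setup would need.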


Cited By

• (2024) Loyal Wingman Assessment: Social Navigation for Human-Autonomous Collaboration in Simulated Air Combat. In Proceedings of the 38th ACM SIGSIM Conference on Principles of Advanced Discrete Simulation, 61–62. DOI: 10.1145/3615979.3662149. Online publication date: 24 June 2024.
• (2024) AsaPy: A Python Library for Aerospace Simulation Analysis. In Proceedings of the 38th ACM SIGSIM Conference on Principles of Advanced Discrete Simulation, 15–24. DOI: 10.1145/3615979.3656063. Online publication date: 24 June 2024.


Information

Published In

SIGSIM-PADS '23: Proceedings of the 2023 ACM SIGSIM Conference on Principles of Advanced Discrete Simulation
June 2023, 173 pages
ISBN: 9798400700309
DOI: 10.1145/3573900

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 21 June 2023

Qualifiers

• Extended abstract
• Research
• Refereed limited

Conference

SIGSIM-PADS '23

Acceptance Rates

Overall Acceptance Rate: 398 of 779 submissions, 51%

Bibliometrics

Article Metrics

• Downloads (last 12 months): 54
• Downloads (last 6 weeks): 3

Reflects downloads up to 22 Nov 2024

