
Search Results (65)

Search Parameters:
Keywords = deep space flights

14 pages, 3069 KiB  
Article
An Initial Trajectory Design for the Multi-Target Exploration of the Electric Sail
by Zichen Fan, Fei Cheng, Wenlong Li, Guiqi Pan, Mingying Huo and Naiming Qi
Aerospace 2025, 12(3), 196; https://doi.org/10.3390/aerospace12030196 - 28 Feb 2025
Viewed by 80
Abstract
The electric sail (E-sail), as an emerging propulsion system with an infinite specific impulse, is particularly suitable for ultra-long-distance multi-target deep-space exploration missions. Considering multiple gravity assists during the exploration process can effectively improve the exploration efficiency of the E-sail. This paper proposes a fast optimization algorithm for deep-space multi-target exploration trajectories for the E-sail, which achieves the exploration of multiple celestial bodies and solar-system boundaries in one flight, and introduces a gravity assist to improve the flight speed of the E-sail during the exploration process. By comparing simulation examples under different conditions, the effectiveness of the proposed algorithm is demonstrated. This is of great significance for the initial rapid design of complex deep-space exploration missions such as E-sail multi-target exploration. Full article
Figures: (1) coordinate systems; (2) propulsive acceleration's characteristic angles; (3) gravity-assist process; (4–7) Mars gravity-assist and solar-system boundary exploration transfer trajectories, with time variation of the E-sail propulsive acceleration components and of the attitude angles α and σ; (8–11) the same for the Jupiter gravity-assist case; (12–15) the same for the combined Mars–Jupiter gravity-assist case.
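As a rough, hedged illustration of the dynamics such a trajectory optimizer has to propagate, the sketch below implements a generic planar heliocentric two-body model with a cone-angle-steered E-sail acceleration that decays as 1/r, as commonly assumed in the E-sail literature; the characteristic acceleration value, steering law, and planar geometry are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

AU = 1.495978707e11        # astronomical unit [m]
MU_SUN = 1.32712440018e20  # solar gravitational parameter [m^3/s^2]

def esail_acceleration(r_vec, a_c=1e-4, alpha=0.2):
    """Assumed planar E-sail thrust model: magnitude a_c*(AU/r) (1/r decay),
    tilted from the Sun-spacecraft line by the cone angle alpha [rad]."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    t_hat = np.array([-r_hat[1], r_hat[0]])        # in-plane transverse direction
    return a_c * (AU / r) * (np.cos(alpha) * r_hat + np.sin(alpha) * t_hat)

def dynamics(t, state, alpha=0.2):
    """Planar heliocentric two-body dynamics plus E-sail thrust; state = [x, y, vx, vy]."""
    r_vec, v_vec = state[:2], state[2:]
    r = np.linalg.norm(r_vec)
    a_grav = -MU_SUN * r_vec / r**3
    return np.concatenate([v_vec, a_grav + esail_acceleration(r_vec, alpha=alpha)])

# Usage (e.g., with scipy.integrate.solve_ivp): propagate from Earth's orbit and let
# an outer optimizer search over alpha profiles, flyby dates, and target sequences.
```
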
25 pages, 12528 KiB  
Article
Mission Re-Planning of Reusable Launch Vehicles Under Throttling Fault in the Recovery Flight Based on Controllable Set Analysis and a Deep Neural Network
by Keshu Li, Wanqing Zhang, Han Yuan, Jing Zhou and Ying Ma
Aerospace 2025, 12(3), 166; https://doi.org/10.3390/aerospace12030166 - 20 Feb 2025
Viewed by 177
Abstract
The frequent launches of reusable launch vehicles are currently the primary approach to support large-scale space transportation, necessitating high reliability in recovery flights. This paper proposes a mission re-planning scheme to address throttling faults, which significantly affect the feasibility of powered landing. To quantify the influence of throttling capability, the concept of “controllable set (CS)” is introduced. The CS is defined as the collection of all feasible initial states that can achieve a successful powered landing and is computed using polyhedron approximation and convex optimization. Based on the CS, the physical feasibility of a powered landing problem under deviations from the nominal conditions can be evaluated probabilistically. In addition, a deep neural network (DNN) is constructed to enhance the computational efficiency of the CS analysis, thereby meeting the requirements for online applications. Finally, an effective re-planning scheme is proposed to deal with throttling faults in the recovery flight. This is achieved by adjusting the designed angle of attack during the endo-atmospheric unpowered descent phase and selecting the associated optimal handover conditions to initiate the powered landing. The optimal re-planning parameters are determined through a comprehensive investigation of the design space, leveraging probability-based CS analysis and computationally efficient DNN predictions. Simulations verify the accuracy of the CS computation algorithm and the effectiveness of the re-planning scheme under different fault conditions. The results indicate high feasibility probabilities of 99.97%, 98.12%, and 78.52% for maximum throttling capabilities at 65%, 75%, and 85% of nominal thrust magnitude, respectively. Full article
(This article belongs to the Section Astronautics & Space Science)
Figures: (1) LP coordinate system; (2) position and velocity controllable subsets; (3) polyhedron approximation method; (4) typical initial polyhedron of the longitudinal position controllable subset; (5) determination of the physical feasibility of the powered landing problem; (6) longitudinal position controllable subset under uncertainty; (7) throttling fault scenario for longitudinal position and velocity; (8) re-planning result for longitudinal position and velocity; (9–10) feasibility probabilities for longitudinal position and velocity with and without re-planning; (11–13) DNN structure, training process, and DNN-based evaluation of re-planning results; (14–15) accuracy of the position and velocity controllable subsets; (16) CS analysis under different throttling faults; (17–19) mission re-planning under throttling faults with maximal throttling capabilities of 65%, 75%, and 85% of the nominal thrust magnitude.
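To make the "controllable set" idea concrete, the sketch below checks whether a single initial state admits a feasible powered landing by solving a small convexified 3-DoF problem with CVXPY; repeating such checks over many states (or over support directions of a polyhedron) is the flavor of the CS construction described in the abstract. The point-mass model, fixed flight time, single acceleration bound, and the omission of the nonconvex minimum-throttle constraint are simplifying assumptions of this sketch, not the paper's formulation.

```python
import cvxpy as cp
import numpy as np

def landing_feasible(r0, v0, tf=25.0, N=40, g=9.81, a_max=30.0):
    """Feasibility check for a powered landing from (r0, v0) to the origin at rest,
    using a discretized 3-DoF point-mass model with a convex acceleration bound
    (an illustrative stand-in for throttle limits)."""
    dt = tf / N
    g_vec = np.array([0.0, 0.0, -g])
    r = cp.Variable((N + 1, 3))
    v = cp.Variable((N + 1, 3))
    a = cp.Variable((N, 3))                       # commanded thrust acceleration
    cons = [r[0] == r0, v[0] == v0, r[N] == 0, v[N] == 0]
    for k in range(N):
        cons += [v[k + 1] == v[k] + dt * (a[k] + g_vec),
                 r[k + 1] == r[k] + dt * v[k] + 0.5 * dt ** 2 * (a[k] + g_vec),
                 cp.norm(a[k]) <= a_max]
    prob = cp.Problem(cp.Minimize(0), cons)       # pure feasibility problem
    prob.solve()
    return prob.status == cp.OPTIMAL

# e.g. landing_feasible(np.array([200.0, 0.0, 1500.0]), np.array([0.0, 0.0, -60.0]))
```
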
20 pages, 3878 KiB  
Article
Energy Scheduling of Hydrogen Hybrid UAV Based on Model Predictive Control and Deep Deterministic Policy Gradient Algorithm
by Haitao Li, Chenyu Wang, Shufu Yuan, Hui Zhu, Bo Li, Yuexin Liu and Li Sun
Algorithms 2025, 18(2), 80; https://doi.org/10.3390/a18020080 - 2 Feb 2025
Viewed by 555
Abstract
Energy scheduling for hybrid unmanned aerial vehicles (UAVs) is of critical importance to their safe and stable operation. However, traditional approaches, predominantly rule-based, often lack the dynamic adaptability and stability necessary to address the complexities of changing operational environments. To overcome these limitations, this paper proposes a novel energy scheduling framework that integrates the Model Predictive Control (MPC) with a Deep Reinforcement Learning algorithm, specifically the Deep Deterministic Policy Gradient (DDPG). The proposed method is designed to optimize energy management in hydrogen-powered UAVs across diverse flight missions. The energy system comprises a proton exchange membrane fuel cell (PEMFC), a lithium-ion battery, and a hydrogen storage tank, enabling robust optimization through the synergistic application of MPC and DDPG. The simulation results demonstrate that the MPC effectively minimizes electric power consumption under various flight conditions, while the DDPG achieves convergence and facilitates efficient scheduling. By leveraging advanced mechanisms, including continuous action space representation, efficient policy learning, experience replay, and target networks, the proposed approach significantly enhances optimization performance and system stability in complex, continuous decision-making scenarios. Full article
Figures: (1) UAV flight profile diagram; (2) hydrogen hybrid UAV energy system structure; (3–5) MPC and DDPG flowcharts; (6) DDPG policy network and value network structure; (7) mean and range of random operating conditions; (8) comparison between MPC predicted power and the reference value; (9) DDPG training cumulative reward; (10–12) hydrogen fuel cell and SOC scheduling during the 0–19, 19–36, and 346–360 min segments.
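As a hedged sketch of the DDPG side of such a scheme, the snippet below defines a minimal deterministic actor for a continuous power-split action and the Polyak (soft) target-network update that DDPG relies on; the assumed state layout (power demand, battery SOC, hydrogen level, flight-phase flag), network sizes, and tau are illustrative, and the MPC layer and reward are not reproduced.

```python
import torch
import torch.nn as nn

class PowerSplitActor(nn.Module):
    """Deterministic policy mapping an energy-system state (assumed here to be
    [power demand, battery SOC, hydrogen level, flight-phase flag]) to a
    normalized fuel-cell power command in [-1, 1]."""
    def __init__(self, state_dim=4, action_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh())

    def forward(self, state):
        return self.net(state)

def soft_update(target, source, tau=0.005):
    """Polyak averaging of target-network weights, as used in DDPG."""
    with torch.no_grad():
        for tp, sp in zip(target.parameters(), source.parameters()):
            tp.mul_(1.0 - tau).add_(tau * sp)

# actor, actor_target = PowerSplitActor(), PowerSplitActor()
# actor_target.load_state_dict(actor.state_dict()); soft_update(actor_target, actor)
```
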
18 pages, 1747 KiB  
Article
Deep Reinforcement Learning Algorithm with Long Short-Term Memory Network for Optimizing Unmanned Aerial Vehicle Information Transmission
by Yufei He, Ruiqi Hu, Kewei Liang, Yonghong Liu and Zhiyuan Zhou
Mathematics 2025, 13(1), 46; https://doi.org/10.3390/math13010046 - 26 Dec 2024
Viewed by 781
Abstract
The optimization of information transmission in unmanned aerial vehicles (UAVs) is essential for enhancing their operational efficiency across various applications. This issue is framed as a mixed-integer nonconvex optimization challenge, which traditional optimization algorithms and reinforcement learning (RL) methods often struggle to address effectively. In this paper, we propose a novel deep reinforcement learning algorithm that utilizes a hybrid discrete–continuous action space. To address the long-term dependency issues inherent in UAV operations, we incorporate a long short-term memory (LSTM) network. Our approach accounts for the specific flight constraints of fixed-wing UAVs and employs a continuous policy network to facilitate real-time flight path planning. A non-sparse reward function is designed to maximize data collection from internet of things (IoT) devices, thus guiding the UAV to optimize its operational efficiency. Experimental results demonstrate that the proposed algorithm yields near-optimal flight paths and significantly improves data collection capabilities, compared to conventional heuristic methods, achieving an improvement of up to 10.76%. Validation through simulations confirms the effectiveness and practicality of the proposed approach in real-world scenarios. Full article
(This article belongs to the Special Issue Artificial Intelligence: Large Language Models and Big Data Analysis)
Figures: (1) UAV communication sketch; (2) DRL model for UAV messaging; (3) illustration of UAV path planning and IoT data collection; (4) value loss functions at different learning rates; (5) reward function; (6) comparison of data collection under different numbers of channels and IoT devices; (7) flight paths of the UAV with different algorithms and IoT device locations (along the diagonal, in an "S" shape, or concentrated on either side of the diagonal) for β = 10^-3, λ = 10^-9; (8) amount of information collected by the UAV with different algorithms and IoT device locations.
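A minimal sketch of the policy architecture described in the abstract, an LSTM backbone feeding a discrete head and a continuous head, is shown below; the observation size, the number of discrete options (e.g., which IoT device or channel to serve), and the single continuous heading output are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class HybridLSTMPolicy(nn.Module):
    """LSTM policy with a hybrid discrete-continuous action space:
    logits over a discrete choice plus a bounded continuous control."""
    def __init__(self, obs_dim=8, n_discrete=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.discrete_head = nn.Linear(hidden, n_discrete)   # e.g., device/channel choice
        self.continuous_head = nn.Linear(hidden, 1)           # e.g., heading change

    def forward(self, obs_seq, hc=None):
        out, hc = self.lstm(obs_seq, hc)          # obs_seq: (batch, time, obs_dim)
        h = out[:, -1]                            # features at the last time step
        logits = self.discrete_head(h)
        heading = torch.tanh(self.continuous_head(h))
        return logits, heading, hc

# policy = HybridLSTMPolicy()
# logits, heading, state = policy(torch.zeros(1, 10, 8))
```
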
23 pages, 9966 KiB  
Article
SFFNet: Shallow Feature Fusion Network Based on Detection Framework for Infrared Small Target Detection
by Zhihui Yu, Nian Pan and Jin Zhou
Remote Sens. 2024, 16(22), 4160; https://doi.org/10.3390/rs16224160 - 8 Nov 2024
Cited by 1 | Viewed by 871
Abstract
Infrared small target detection (IRSTD) is the process of recognizing and distinguishing small targets from infrared images that are obstructed by crowded backgrounds. This technique is used in various areas, including ground monitoring, flight navigation, and so on. However, due to complex backgrounds and the loss of information in deep networks, infrared small target detection remains a difficult undertaking. To solve the above problems, we present a shallow feature fusion network (SFFNet) based on a detection framework. Specifically, we design the shallow-layer-guided feature enhancement (SLGFE) module, which guides multi-scale feature fusion with shallow-layer information, effectively mitigating the loss of information in deep networks. Then, we design the visual-Mamba-based global information extension (VMamba-GIE) module, which leverages a multi-branch structure combining the capability of convolutional layers to extract features in local space with the advantages of state space models in the exploration of long-distance information. The design significantly extends the network’s capacity to acquire global contextual information, enhancing its capability to handle complex backgrounds. Through the effective fusion of the SLGFE and VMamba-GIE modules, the exorbitant computation brought by the SLGFE module is substantially reduced. The experimental results on two publicly available infrared small target datasets demonstrate that the SFFNet surpasses other state-of-the-art algorithms. Full article
Figures: (1) illustration of 2D-Selective-Scan (SS2D); (2) overall SFFNet architecture and the SLGFE and VMamba-GIE modules; (3) backbone network architecture; (4) detection head architecture; (5) comparison of feature maps at different scales extracted by the backbone network; (6–7) visualization results of different infrared small target detection methods and object detection networks on the NUAA-SIRST dataset; (8–9) the same on the IRSTD-1K dataset; (10) partial feature maps at different stages.
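The sketch below is a toy stand-in for the shallow-layer-guided fusion idea: a deep feature map is upsampled and gated by a map computed from a shallow layer, so fine spatial detail steers the fusion. The channel sizes and exact gating form are assumptions for illustration, not the published SLGFE design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowGuidedFusion(nn.Module):
    """Fuse a deep (coarse, semantic) feature map with a shallow (fine, detailed)
    one: the shallow layer produces a sigmoid gate that modulates the upsampled
    deep features before the shallow projection is added back in."""
    def __init__(self, c_shallow=32, c_deep=128, c_out=64):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(c_shallow, c_out, 1), nn.Sigmoid())
        self.proj_deep = nn.Conv2d(c_deep, c_out, 1)
        self.proj_shallow = nn.Conv2d(c_shallow, c_out, 1)

    def forward(self, f_shallow, f_deep):
        f_up = F.interpolate(self.proj_deep(f_deep), size=f_shallow.shape[-2:],
                             mode="bilinear", align_corners=False)
        return self.gate(f_shallow) * f_up + self.proj_shallow(f_shallow)

# fused = ShallowGuidedFusion()(torch.zeros(1, 32, 128, 128), torch.zeros(1, 128, 32, 32))
```
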
32 pages, 10874 KiB  
Article
Advanced Cooperative Formation Control in Variable-Sweep Wing UAVs via the MADDPG–VSC Algorithm
by Zhengyang Cao and Gang Chen
Appl. Sci. 2024, 14(19), 9048; https://doi.org/10.3390/app14199048 - 7 Oct 2024
Viewed by 1250
Abstract
UAV technology is advancing rapidly, and variable-sweep wing UAVs are increasingly valuable because they can adapt to different flight conditions. However, conventional control methods often struggle with managing continuous action spaces and responding to dynamic environments, making them inadequate for complex multi-UAV cooperative formation control tasks. To address these challenges, this study presents an innovative framework that integrates dynamic modeling with morphing control, optimized by the multi-agent deep deterministic policy gradient for two-sweep control (MADDPG–VSC) algorithm. This approach enables real-time sweep angle adjustments based on current flight states, significantly enhancing aerodynamic efficiency and overall UAV performance. The precise motion state model for wing morphing developed in this study underpins the MADDPG–VSC algorithm’s implementation. The algorithm not only optimizes multi-UAV formation control efficiency but also improves obstacle avoidance, attitude stability, and decision-making speed. Extensive simulations and real-world experiments consistently demonstrate that the proposed algorithm outperforms contemporary methods in multiple aspects, underscoring its practical applicability in complex aerial systems. This study advances control technologies for morphing-wing UAV formation and offers new insights into multi-agent cooperative control, with substantial potential for real-world applications. Full article
(This article belongs to the Special Issue Collaborative Learning and Optimization Theory and Its Applications)
Figures: (1) dynamic relationships in a variable-sweep wing UAV; (2) the L-30A variable-sweep wing UAV structure, mass centers, and position vectors; (3) rotating mechanism at the wing–fuselage junction of the L-30A; (4) aerodynamic characteristics (lift/drag coefficients and lift-to-drag ratio) of the L-30A; (5) cooperative formation control in the variable-sweep wing UAV system; (6) structure of the MADDPG–VSC algorithm model; (7) simulation environment (terrain, simple and complex scenarios); (8–10) UAV trajectories, training reward curves, and network parameter variations for the MADDPG, MADDPG–VSC, and SAC algorithms; (11–12) the L-30A UAV platform, sensors, hardware platform, and task controller; (13) scenario map of the formation flight trajectories; (14–16) latency, energy consumption, and fault-tolerance/reliability comparisons across hardware platforms; (17) trajectory tracking error comparison between MADDPG and MADDPG–VSC.
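The core mechanism behind any MADDPG variant, centralized training with a critic that sees the joint observations and actions of all agents while each UAV executes its own actor, can be sketched as below; the number of agents, observation size, and an action vector that includes a sweep-angle command are illustrative assumptions, not the MADDPG–VSC implementation.

```python
import torch
import torch.nn as nn

class CentralizedCritic(nn.Module):
    """MADDPG-style Q-network over the joint observation-action of all UAVs.
    Each agent's action is assumed here to be [thrust, bank, sweep-angle]."""
    def __init__(self, n_agents=3, obs_dim=10, act_dim=3, hidden=128):
        super().__init__()
        joint_dim = n_agents * (obs_dim + act_dim)
        self.q = nn.Sequential(nn.Linear(joint_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))

    def forward(self, joint_obs, joint_act):
        # joint_obs: (batch, n_agents*obs_dim); joint_act: (batch, n_agents*act_dim)
        return self.q(torch.cat([joint_obs, joint_act], dim=-1))

# q_value = CentralizedCritic()(torch.zeros(8, 30), torch.zeros(8, 9))
```
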
22 pages, 19661 KiB  
Article
UAV Autonomous Navigation Based on Deep Reinforcement Learning in Highly Dynamic and High-Density Environments
by Yuanyuan Sheng, Huanyu Liu, Junbao Li and Qi Han
Drones 2024, 8(9), 516; https://doi.org/10.3390/drones8090516 - 23 Sep 2024
Viewed by 2520
Abstract
Autonomous navigation of Unmanned Aerial Vehicles (UAVs) based on deep reinforcement learning (DRL) has made great progress. However, most studies assume relatively simple task scenarios and do not consider the impact of complex task scenarios on UAV flight performance. This paper proposes a DRL-based autonomous navigation algorithm for UAVs, which enables autonomous path planning in high-density and highly dynamic environments. The algorithm introduces a state space representation method that contains position information and angle information, derived by analyzing the impact of UAV position changes and angle changes on navigation performance in complex environments. In addition, a dynamic reward function is constructed based on a non-sparse reward function to balance the agent’s conservative behavior and exploratory behavior during the model training process. The results of multiple comparative experiments show that the proposed algorithm not only has the best autonomous navigation performance but also has the optimal flight efficiency in complex environments. Full article
Figures: (1) UAV navigation mission diagram; (2) airborne LiDAR and the relative position of the UAV and the target point; (3) UAV movement and the corresponding changes in laser-measured data; (4) low-density, high-density, and dynamic obstacle scenarios; (5) network structure of the proposed algorithm; (6) training scenario; (7) reward convergence curves of the four algorithms; (8) high-density test scenarios (scenes 1–5 with increasing obstacle count and density); (9–10) UAV flight trajectories and average flight distance/steps in the high-density tests for DDPG, TD3, SAC, and the proposed method; (11–13) the highly dynamic test scenario and the corresponding trajectories and flight distance/steps; (14–18) reward convergence curves, flight trajectories, and average flight distance/steps for the three compared models in the high-density and highly dynamic tests.
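The non-sparse, dynamically weighted reward described in the abstract can be illustrated with a sketch like the one below, which combines progress toward the goal, heading alignment, and an obstacle-clearance penalty; the terms and weights are assumptions chosen for illustration, not the paper's reward.

```python
def navigation_reward(d_prev, d_now, heading_err, d_obs,
                      w_prog=1.0, w_head=0.2, w_obs=0.5, d_safe=2.0):
    """Shaped (non-sparse) navigation reward:
    d_prev, d_now : distance to the target before/after the step [m]
    heading_err   : angle between UAV heading and the target direction [rad]
    d_obs         : distance to the nearest obstacle [m]"""
    r_progress = w_prog * (d_prev - d_now)             # reward closing the distance
    r_heading = -w_head * abs(heading_err)             # penalize pointing away from the goal
    r_clearance = -w_obs * max(0.0, d_safe - d_obs)    # penalize flying too close to obstacles
    return r_progress + r_heading + r_clearance

# e.g. navigation_reward(d_prev=12.0, d_now=11.4, heading_err=0.1, d_obs=3.5)
```
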
21 pages, 6862 KiB  
Article
Research on Self-Learning Control Method of Reusable Launch Vehicle Based on Neural Network Architecture Search
by Shuai Xue, Zhaolei Wang, Hongyang Bai, Chunmei Yu and Zian Li
Aerospace 2024, 11(9), 774; https://doi.org/10.3390/aerospace11090774 - 20 Sep 2024
Cited by 1 | Viewed by 1697
Abstract
Reusable launch vehicles need to face complex and diverse environments during flight. Designing a rocket recovery control law with traditional deep reinforcement learning (DRL) makes it difficult to obtain a set of network architectures that can adapt to multiple scenarios and multi-parameter uncertainties, and the performance of a deep reinforcement learning algorithm depends on manual trial and error of hyperparameters. To solve this problem, this paper proposes a self-learning control method for launch vehicle recovery based on neural architecture search (NAS), which decouples deep network structure search from reinforcement learning hyperparameter optimization. First, using network architecture search technology based on a multi-objective hybrid particle swarm optimization algorithm, the deep network architecture of the proximal policy optimization (PPO) algorithm is automatically designed, and the search space is kept lightweight in the process. Secondly, in order to further improve the landing accuracy of the launch vehicle, the Bayesian optimization (BO) method is used to automatically optimize the hyperparameters of reinforcement learning, and the control law of the landing phase in the recovery process of the launch vehicle is obtained through training. Finally, the algorithm is transplanted to the rocket intelligent learning embedded platform for comparative testing to verify its online deployment capability. The simulation results show that the proposed method can satisfy the landing accuracy requirements of the launch vehicle recovery mission, and under untrained conditions with model parameter deviations and wind field interference the control effect is basically the same as the landing accuracy of the trained rocket model, which verifies the generalization of the proposed method. Full article
(This article belongs to the Special Issue Advanced GNC Solutions for VTOL Systems)
Figures: (1) relationship between engine thrust and angle; (2) general framework of self-learning control for launch vehicle recovery; (3) Actor and Critic network structure of the PPO-based control law training algorithm; (4–5) neural network architecture search framework and topology; (6) structure of the Bayesian optimization reinforcement learning model; (7) average reward function change; (8–16) and (17–25) trajectory, speed, mass, pitch/yaw/roll angle deviations, thrust magnitude, and thrust angles of the rocket for two simulation cases; (26) 100 recovered sample trajectories.
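As a hedged sketch of the hyperparameter-optimization stage, the snippet below wires a placeholder PPO training/evaluation routine into Optuna; note that Optuna's default TPE sampler stands in here for the Bayesian optimization used in the paper, and the hyperparameter names, ranges, and the `train_and_evaluate` stub are all assumptions.

```python
import optuna

def train_and_evaluate(lr, clip_ratio, hidden_size):
    """Placeholder: train the PPO landing-phase policy with these hyperparameters
    and return a scalar score (e.g., negative landing error)."""
    raise NotImplementedError("plug in the PPO training loop here")

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-3, log=True)
    clip_ratio = trial.suggest_float("clip_ratio", 0.1, 0.3)
    hidden_size = trial.suggest_categorical("hidden_size", [64, 128, 256])
    return train_and_evaluate(lr, clip_ratio, hidden_size)

# study = optuna.create_study(direction="maximize")
# study.optimize(objective, n_trials=30)
# print(study.best_params)
```
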
18 pages, 4668 KiB  
Article
Autonomous Trajectory Planning Method for Stratospheric Airship Regional Station-Keeping Based on Deep Reinforcement Learning
by Sitong Liu, Shuyu Zhou, Jinggang Miao, Hai Shang, Yuxuan Cui and Ying Lu
Aerospace 2024, 11(9), 753; https://doi.org/10.3390/aerospace11090753 - 13 Sep 2024
Cited by 1 | Viewed by 1304
Abstract
The stratospheric airship, as a near-space vehicle, is increasingly utilized in scientific exploration and Earth observation due to its long endurance and regional observation capabilities. However, due to the complex characteristics of the stratospheric wind field environment, trajectory planning for stratospheric airships is a significant challenge. Unlike lower atmospheric levels, the stratosphere presents a wind field characterized by significant variability in wind speed and direction, which can drastically affect the stability of the airship’s trajectory. Recent advances in deep reinforcement learning (DRL) have presented promising avenues for trajectory planning. DRL algorithms have demonstrated the ability to learn complex control strategies autonomously by interacting with the environment. In particular, the proximal policy optimization (PPO) algorithm has shown effectiveness in continuous control tasks and is well suited to the non-linear, high-dimensional problem of trajectory planning in dynamic environments. This paper proposes a trajectory planning method for stratospheric airships based on the PPO algorithm. The primary contributions of this paper include establishing a continuous action space model for stratospheric airship motion; enabling more precise control and adjustments across a broader range of actions; integrating time-varying wind field data into the reinforcement learning environment; enhancing the policy network’s adaptability and generalization to various environmental conditions; and enabling the algorithm to automatically adjust and optimize flight paths in real time using wind speed information, reducing the need for human intervention. Experimental results show that, within its wind resistance capability, the airship can achieve long-duration regional station-keeping, with a maximum station-keeping time ratio (STR) of up to 0.997. Full article
(This article belongs to the Section Astronautics & Space Science)
Figures: (1) displacement of the airship in an uncertain wind field environment; (2) displacement generated by the airship's own driving force; (3) minimum, maximum, and average monthly wind speeds for a typical region at 10° N latitude; (4) structure of the Actor and Critic networks; (5) flow chart of the training process; (6) training results (reward and episode duration over training steps); (7) flow chart of the testing process; (A1–A2) flight trajectory simulations based on Pygame for 2022 and 2023.
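The interaction between the airship's own control action and the time-varying wind field, together with a station-keeping reward, can be sketched as follows; the pure-kinematic ground-velocity model, the 50 km station radius, and the reward shaping are assumptions for illustration, not the paper's environment.

```python
import numpy as np

def airship_step(pos, heading_cmd, airspeed_cmd, wind_fn, t, dt=60.0):
    """One kinematic step: ground velocity = own air velocity + wind.
    wind_fn(pos, t) returns the horizontal wind vector (u, v) in m/s."""
    v_air = airspeed_cmd * np.array([np.cos(heading_cmd), np.sin(heading_cmd)])
    return pos + (v_air + wind_fn(pos, t)) * dt

def station_keeping_reward(pos, center, radius=50e3):
    """+1 while inside the station-keeping circle, a growing penalty outside."""
    d = np.linalg.norm(pos - center)
    return 1.0 if d <= radius else -(d - radius) / radius

# e.g. wind_fn = lambda p, t: np.array([10.0 * np.sin(t / 3600.0), 2.0])
```
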
25 pages, 9006 KiB  
Article
Large-Scale Solar-Powered UAV Attitude Control Using Deep Reinforcement Learning in Hardware-in-Loop Verification
by Yongzhao Yan, Huazhen Cao, Boyang Zhang, Wenjun Ni, Bo Wang and Xiaoping Ma
Drones 2024, 8(9), 428; https://doi.org/10.3390/drones8090428 - 26 Aug 2024
Cited by 1 | Viewed by 1218
Abstract
Large-scale solar-powered unmanned aerial vehicles possess the capacity to perform long-term missions at different altitudes from near-ground to near-space, and the huge spatial span poses strict challenges for attitude control, such as aerodynamic nonlinearity and environmental disturbances. The design efficiency and control performance are limited to some extent by the gain scheduling of the linear methods that are widely used on such aircraft at present. So far, deep reinforcement learning has been demonstrated to be a promising approach for training attitude controllers for small unmanned aircraft. In this work, a low-level attitude control method based on deep reinforcement learning is proposed for solar-powered unmanned aerial vehicles, which is able to interact with high-fidelity nonlinear systems to discover optimal control laws and can receive and track the target attitude input from an arbitrary high-level control module. Considering the risks of field flight experiments, a hardware-in-loop simulation platform is established that connects the on-board avionics stack with the neural network controller trained in a digital environment. Through flight missions under different altitudes and parameter perturbations, the results show that the controller, without re-training, has performance comparable to that of the traditional PID controller, even in the presence of physical delays and mechanical backlash. Full article
Figures: (1) the solar-powered UAV designed by the authors' team; (2) responses of the mathematical servo model and the real servo; (3) classical flight control architecture; (4) architecture of the policy neural network; (5–7) simulation environment composition, signal pathway of the hardware-in-loop platform, and experimental setup; (8) training curves (evaluation episode steps and normalized total reward); (9) attitude time histories evaluated under the mathematical model; (10) normalized actuator efficiency versus altitude at Veas = 10 m/s; (11–14) flight process, attitude tracking errors, and actuator responses of the NN and PID controllers in the wind at 10 km altitude; (15–16) roll/aileron and pitch/elevator responses under different parameter perturbations; (17–18) attitude tracking for routes at 1 km and 20 km altitude; (19–20) flight process under the ideal servo model without and with backlash.
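On the deployment side, running such a trained low-level policy amounts to assembling an observation from the attitude-tracking errors and body rates and mapping the network output to normalized surface deflections. The sketch below shows this single control step, with the observation layout, scaling, and output ordering (aileron, elevator, rudder) as assumptions rather than the paper's interface.

```python
import torch

@torch.no_grad()
def attitude_control_step(policy, phi_err, theta_err, beta_err, p, q, r):
    """One inference step of a trained attitude policy (a torch.nn.Module):
    inputs are roll/pitch/sideslip tracking errors and body rates; outputs
    are normalized aileron, elevator, and rudder deflections in [-1, 1]."""
    obs = torch.tensor([[phi_err, theta_err, beta_err, p, q, r]], dtype=torch.float32)
    action = policy(obs).clamp(-1.0, 1.0)
    delta_a, delta_e, delta_r = action.squeeze(0).tolist()
    return delta_a, delta_e, delta_r
```
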
26 pages, 5892 KiB  
Article
Landslide Assessment Classification Using Deep Neural Networks Based on Climate and Geospatial Data
by Yadviga Tynchenko, Vladislav Kukartsev, Vadim Tynchenko, Oksana Kukartseva, Tatyana Panfilova, Alexey Gladkov, Van Nguyen and Ivan Malashin
Sustainability 2024, 16(16), 7063; https://doi.org/10.3390/su16167063 - 17 Aug 2024
Cited by 8 | Viewed by 2010
Abstract
This study presents a method for classifying landslide triggers and sizes using climate and geospatial data. The landslide data were sourced from the Global Landslide Catalog (GLC), which identifies rainfall-triggered landslide events globally, regardless of size, impact, or location. Compiled from 2007 to 2018 at NASA Goddard Space Flight Center, the GLC includes various mass movements triggered by rainfall and other events. Climatic data for the 10 years preceding each landslide event, including variables such as rainfall amounts, humidity, pressure, and temperature, were integrated with the landslide data. This dataset was then used to classify landslide triggers and sizes using deep neural networks (DNNs) optimized through genetic algorithm (GA)-driven hyperparameter tuning. The optimized DNN models achieved accuracies of 0.67 and 0.82, respectively, in multiclass classification tasks. This research demonstrates the effectiveness of GA-driven optimization in enhancing landslide disaster risk management. Full article
Figures: (1) overall technical framework for landslide risk assessment; (2) distribution of landslide events by country; (3) experimental pipeline; (4) evolution of metrics for landslide size and trigger predictions across GA individuals; (5–6) confusion matrices for landslide size and trigger classification; (7) landslide risk assessment map of Viti Levu Island, Fiji; (8) research structure overview; (A1–A5) regional landslide maps (e.g., the Philippines, the Caribbean Basin, the Alps and Balkans, the Himalaya, and the USA).
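A toy version of GA-driven hyperparameter tuning for the DNN classifiers is sketched below: hyperparameters are encoded as a small dictionary "chromosome" and evolved by selection, crossover, and mutation against a user-supplied fitness (validation accuracy) function. The search space, population size, and operators are assumptions for illustration, not the study's configuration.

```python
import random

SEARCH_SPACE = {"hidden_layers": [1, 2, 3], "units": [32, 64, 128, 256],
                "dropout": [0.0, 0.2, 0.4], "learning_rate": [1e-4, 3e-4, 1e-3]}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b):
    # uniform crossover: each gene taken from one of the two parents
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, p=0.2):
    return {k: (random.choice(v) if random.random() < p else ind[k])
            for k, v in SEARCH_SPACE.items()}

def evolve(fitness, pop_size=12, generations=10):
    """fitness(ind) should train a DNN with the encoded hyperparameters and
    return its validation accuracy; the best individual found is returned."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```
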
19 pages, 7931 KiB  
Article
Improving Aerial Targeting Precision: A Study on Point Cloud Semantic Segmentation with Advanced Deep Learning Algorithms
by Salih Bozkurt, Muhammed Enes Atik and Zaide Duran
Drones 2024, 8(8), 376; https://doi.org/10.3390/drones8080376 - 6 Aug 2024
Cited by 2 | Viewed by 1747
Abstract
The integration of technological advancements has significantly impacted artificial intelligence (AI), enhancing the reliability of AI model outputs. This progress has led to the widespread utilization of AI across various sectors, including automotive, robotics, healthcare, space exploration, and defense. Today, air defense operations predominantly rely on laser designation. This process is entirely dependent on the capability and experience of human operators. Considering that UAV systems can have flight durations exceeding 24 h, this process is highly prone to errors due to the human factor. Therefore, the aim of this study is to automate the laser designation process using advanced deep learning algorithms on 3D point clouds obtained from different sources, thereby eliminating operator-related errors. Two different data sources were identified: dense 3D point clouds containing color information produced with photogrammetric methods, and point clouds produced with LiDAR systems. The photogrammetric point cloud data were generated from images captured by the Akinci UAV’s multi-axis gimbal camera system within the scope of this study. For the point cloud data obtained from the LiDAR system, the DublinCity LiDAR dataset was used for testing purposes. The segmentation of point cloud data utilized the PointNet++ and RandLA-Net algorithms. Distinct differences were observed between the evaluated algorithms. The RandLA-Net algorithm, relying solely on geometric features, achieved an approximate accuracy of 94%, while integrating color features significantly improved its performance, raising its accuracy to nearly 97%. Similarly, the PointNet++ algorithm, relying solely on geometric features, achieved an accuracy of approximately 94%. Notably, the model developed as a unique contribution in this study involved enriching the PointNet++ algorithm by incorporating color attributes, leading to significant improvements with an approximate accuracy of 96%. The obtained results demonstrate a notable improvement in the PointNet++ algorithm with the proposed approach. Furthermore, it was demonstrated that the methodology proposed in this study can be effectively applied directly to data generated from different sources in aerial scanning systems. Full article
Show Figures
Figure 1: The study area is the district of Çorlu, situated in Tekirdağ province, Türkiye. (a) A map of Türkiye’s provinces. (b) A map of Tekirdağ province’s districts. (c) An image of the study area located in the district of Çorlu.
Figure 2: Illustration of DublinCity LiDAR data hierarchical structure.
Figure 3: Sample of DublinCity LiDAR data.
Figure 4: Illustration of SfM algorithm.
Figure 5: Illustration of the conversion between the aircraft and gimbal axes.
Figure 6: Comparison of before and after axis transformations for the generated 3D point cloud. (a,b) Before axis transformation. (c) After axis transformation.
Figure 7: Example of CloudCompare labeling phase. (a) Label layers. (b) Regions in the labeling stage.
Figure 8: Illustration of the PointNet++ architecture for a single-scale point group.
Figure 9: Illustration of RandLA-Net architecture.
Figure 10: Sample of DublinCity LiDAR data with 6 classes. (a) Sample of PointNet++ training data. (b) Sample of PointNet++ predicted data.
Figure 11: Sample of DublinCity LiDAR data with 4 classes. (a) Sample of PointNet++ training data. (b) Sample of PointNet++ predicted data.
Figure 12: Sample of DublinCity LiDAR data with 3 classes. (a) Sample of PointNet++ training data. (b) Sample of PointNet++ predicted data.
Figure 13: Sample of DublinCity LiDAR data RandLA-Net prediction results. (a) Prediction results for 6 classes. (b) Prediction results for 4 classes. (c) Prediction results for 3 classes.
Figure 14: Sample of generated point cloud with 3 classes. (a) Sample of PointNet++ training data. (b) Sample of PointNet++ predicted data (minimum batch size: 16; epochs: 50).
Figure 15: Sample of generated point cloud PointNet++ results created with 3 classes (a) produced with a minimum batch size of 32 and 100 epochs, and (b) produced with a minimum batch size of 32 and 1000 epochs.
Figure 16: RandLA-Net with only geometric features: sample of a building used as test data and its original and predicted label views. (a) Original view of building. (b) Manually labeled building. (c) Predicted labels of the building.
Figure 17: RandLA-Net with color and geometric features: sample of a building used as test data and its original and predicted label views. (a) Original view of building. (b) Manually labeled building. (c) Predicted labels of the building.
22 pages, 7285 KiB  
Article
Design and Application of an Onboard Particle Identification Platform Based on Convolutional Neural Networks
by Chaoping Bai, Xin Zhang, Shenyi Zhang, Yueqiang Sun, Xianguo Zhang, Ziting Wang and Shuai Zhang
Appl. Sci. 2024, 14(15), 6628; https://doi.org/10.3390/app14156628 - 29 Jul 2024
Viewed by 914
Abstract
Space radiation particle detection plays a crucial role in scientific research and engineering practice, especially in particle species identification. Currently, commonly used in-orbit particle identification techniques include telescope methods, electrostatic analysis time of flight (ESA × TOF), time-of-flight energy (TOF × E), and pulse shape discrimination (PSD). However, these methods usually fail to utilize the full waveform information containing rich features, and their particle identification results may be affected by the random rise and fall of particle deposition and noise interference. In this study, a low-latency and lightweight onboard FPGA real-time particle identification platform based on full waveform information was developed, utilizing the superior target classification, robustness, and generalization capabilities of convolutional neural networks (CNNs). The platform constructs diversified input datasets based on the physical features of waveforms and uses the Optuna and PyTorch frameworks for model training. The hardware platform is responsible for the real-time inference of waveform data and the dynamic expansion of the dataset. The platform was used to train and test deep learning models on historical neutron and gamma-ray waveform data; inference on a single waveform takes 4.9 microseconds, with an accuracy of over 97%. The classification expectation FOM (figure-of-merit) value of this CNN model is 133, which is better than the traditional pulse shape discrimination (PSD) algorithm’s FOM value of 0.8. The development of this platform not only improves the accuracy and efficiency of space particle discrimination but also provides an advanced tool for future space environment monitoring, which is of great value for engineering applications. Full article
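To make the full-waveform discrimination idea concrete, here is a minimal sketch (an assumed architecture, not the flight FPGA model): a small 1-D CNN that classifies a digitized scintillator pulse as neutron or gamma. The waveform length, channel counts, and layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

class WaveformCNN(nn.Module):
    def __init__(self, waveform_len: int = 256, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(16 * (waveform_len // 4), num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, waveform_len) digitized pulse samples
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = WaveformCNN()
pulses = torch.rand(4, 1, 256)               # four example waveforms
scores = torch.softmax(model(pulses), dim=1) # per-waveform neutron/gamma scores
print(scores.shape)                          # torch.Size([4, 2])

For context, the FOM quoted above is conventionally computed as the separation between the neutron and gamma peaks of the discrimination variable divided by the sum of their FWHMs, which is why a well-separated CNN score distribution yields a much larger value than classic charge-integration PSD.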
Show Figures
Figure 1: Structural diagram of the convolutional neural network particle identification platform.
Figure 2: (a) Schematic diagram of waveform signals of different particles with the same energy; (b) neutron and gamma waveform schematics.
Figure 3: Dataset construction flowchart.
Figure 4: Forward inference architecture construction flowchart.
Figure 5: Block diagram of the data flow of the convolutional layer.
Figure 6: Pooling layer data flow block diagram.
Figure 7: Block diagram of the fully connected layer’s data flow.
Figure 8: Three types of training dataset.
Figure 9: Three types of validation dataset.
Figure 10: Three types of test dataset.
Figure 11: CNN network architecture diagram.
Figure 12: Training and validation set computation results.
Figure 13: Physical diagram of CNN operation platform.
Figure 14: (a) Distribution of neutron classification expectations. (b) Gamma classification expectation distribution.
Figure 15: The pulse shape and time window of neutron and gamma in CLYC.
Figure 16: Neutron–gamma PSD frequency curve.
20 pages, 24161 KiB  
Article
Deep Embedding Koopman Neural Operator-Based Nonlinear Flight Training Trajectory Prediction Approach
by Jing Lu, Jingjun Jiang and Yidan Bai
Mathematics 2024, 12(14), 2162; https://doi.org/10.3390/math12142162 - 10 Jul 2024
Viewed by 1491
Abstract
Accurate flight training trajectory prediction is a key task in automatic flight maneuver evaluation and flight operations quality assurance (FOQA), which is crucial for pilot training and aviation safety management. The task is extremely challenging due to the nonlinear chaos of trajectories, the unconstrained airspace maps, and the randomness of driving patterns. In this work, a deep learning model based on data-driven modern Koopman operator theory and dynamical system identification is proposed. The model does not require the manual selection of dictionaries and can automatically generate augmentation functions to achieve nonlinear trajectory space mapping. The model combines stacked neural networks to create a scalable deep approximator of the finite-dimensional Koopman operator. In addition, the model uses finite-dimensional operator evolution to achieve end-to-end adaptive prediction. In particular, the model can gain some physical interpretability through operator visualization and generative dictionary functions, which can be used for downstream pattern recognition and anomaly detection tasks. Experiments show that the model performs well, particularly on flight training trajectory datasets. Full article
(This article belongs to the Special Issue Data Mining and Machine Learning with Applications, 2nd Edition)
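A minimal sketch of the deep Koopman pattern the abstract describes (an assumed form, not the paper's DE-KNO implementation): an encoder learns the dictionary/observable functions, a single linear layer plays the role of the finite-dimensional Koopman operator, and a decoder maps the latent observables back to the flight state. The state and latent dimensions are assumptions.

import torch
import torch.nn as nn

class DeepKoopman(nn.Module):
    def __init__(self, state_dim: int = 6, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                     nn.Linear(64, latent_dim))
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)  # finite-dimensional operator
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                     nn.Linear(64, state_dim))

    def forward(self, x_t: torch.Tensor, steps: int = 1) -> torch.Tensor:
        # Advance the latent observables `steps` times under the linear operator K,
        # decoding each step back to the original state space.
        z = self.encoder(x_t)
        preds = []
        for _ in range(steps):
            z = self.K(z)
            preds.append(self.decoder(z))
        return torch.stack(preds, dim=1)  # (batch, steps, state_dim)

model = DeepKoopman()
x_t = torch.rand(8, 6)            # e.g. position and velocity at time t
x_future = model(x_t, steps=10)   # predicted trajectory segment
print(x_future.shape)             # torch.Size([8, 10, 6])

Training such a model typically combines reconstruction, latent-linearity, and multi-step prediction losses, and the learned operator weight can be inspected (e.g., via its eigenvalues) for the kind of physical interpretability mentioned above.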
Show Figures
Figure 1: Flight training trajectories from the CAFUC dataset.
Figure 2: Koopman measure-invariant subspace.
Figure 3: Deep Embedding Koopman Neural Operator (DE-KNO) general framework.
Figure 4: The CAFUC dataset visualized with 3D trajectory views from different viewpoints, where the blue line is the original part and the yellow line is the predicted part (top), and with 2D views showing the time series of the main parameters (bottom).
Figure 5: Experiment process: (a) CAFUC, (b) Lorenz.
Figure 6: Model stability test results: left, efficiency on CAFUC dataset; right, efficiency on Traffic dataset.
Figure 7: Model efficiency comparison: left, efficiency on CAFUC dataset; right, efficiency on Traffic dataset. The left graph records the training time, MSE, and memory usage of each model on the CAFUC dataset; the right graph records the same quantities on the Traffic dataset.
20 pages, 3704 KiB  
Article
Design of Entire-Flight Pinpoint Return Trajectory for Lunar DRO via Deep Neural Network
by Xuxing Huang, Baihui Ding, Bin Yang, Renyuan Xie, Zhengyong Guo, Jin Sha and Shuang Li
Aerospace 2024, 11(7), 566; https://doi.org/10.3390/aerospace11070566 - 10 Jul 2024
Viewed by 1212
Abstract
Lunar DRO pinpoint return is the final stage of manned deep space exploration via a lunar DRO station. A re-entry capsule suffers from complicated dynamic and thermal effects during the entire flight, and the optimization of the lunar DRO return trajectory exhibits strong non-linearity. To obtain a globally optimal return trajectory, an entire-flight lunar DRO pinpoint return model including a Moon–Earth transfer stage and an Earth atmosphere re-entry stage is constructed. A re-entry point on the atmosphere boundary is introduced to connect these two stages. Then, an entire-flight global optimization framework for lunar DRO pinpoint return is developed, so that the design of the entire-flight return trajectory is reduced to the optimization of the re-entry point. Moreover, to further improve the design efficiency, a rapid landing point prediction method for the Earth re-entry is developed based on a deep neural network. This prediction network maps the re-entry point at the atmosphere boundary to the landing point on Earth under optimal-control re-entry trajectories. Numerical simulations validate the optimization accuracy and efficiency of the proposed methods. The entire-flight return trajectory achieves high landing-point accuracy with low fuel consumption. Full article
(This article belongs to the Special Issue Deep Space Exploration)
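As a hedged illustration of how a DNN-based landing predictor can speed up the outer search over re-entry points, the sketch below (an assumed setup, not the paper's network) maps re-entry conditions at the atmosphere boundary to a predicted landing latitude and longitude, so candidate re-entry points can be scored without integrating the re-entry dynamics each time. The input and output dimensions and the example values are assumptions.

import torch
import torch.nn as nn

# Surrogate: re-entry state at the atmosphere boundary -> (landing latitude, longitude).
# The 5 inputs (lat, lon, velocity, flight-path angle, heading) are assumptions.
landing_net = nn.Sequential(
    nn.Linear(5, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

def landing_error(reentry_state: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Distance (in degrees) between the predicted and desired landing site; an outer
    # optimizer over candidate re-entry points can use this as its objective.
    predicted = landing_net(reentry_state)
    return torch.linalg.norm(predicted - target, dim=-1)

candidate = torch.tensor([[28.5, -80.6, 11.0, -6.0, 45.0]])  # illustrative re-entry state
target = torch.tensor([[40.0, 100.0]])                       # desired landing site (lat, lon)
print(landing_error(candidate, target))                      # untrained net: arbitrary value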
Show Figures
Figure 1: Coordinate system for Moon–Earth transfer.
Figure 2: Stages of lunar DRO pinpoint return.
Figure 3: Global optimization framework for Moon–Earth pinpoint return trajectory.
Figure 4: DNN-based landing prediction method.
Figure 5: The error of the predicted latitude and longitude of the landing point.
Figure 6: The distribution of predicted latitude and longitude values of the re-entry point.
Figure 7: The distribution of absolute errors of the predicted latitude and longitude values of the re-entry point.
Figure 8: Entire-flight lunar DRO return trajectory in CR3BP.
Figure 9: Optimization result of Earth re-entry trajectory.