Search Results (471)

Search Parameters:
Keywords = IMU navigation

20 pages, 1038 KiB  
Article
Accelerometer Bias Estimation for Unmanned Aerial Vehicles Using Extended Kalman Filter-Based Vision-Aided Navigation
by Djedjiga Belfadel and David Haessig
Electronics 2025, 14(6), 1074; https://doi.org/10.3390/electronics14061074 - 7 Mar 2025
Abstract
Accurate estimation of accelerometer biases in Inertial Measurement Units (IMUs) is crucial for reliable Unmanned Aerial Vehicle (UAV) navigation, particularly in GPS-denied environments. Uncompensated biases lead to an unbounded accumulation of position error and increased velocity error, resulting in significant navigation inaccuracies. This paper examines the effects of accelerometer bias on UAV navigation accuracy and introduces a vision-aided navigation system. The proposed system integrates data from an IMU, altimeter, and optical flow sensor (OFS), employing an Extended Kalman Filter (EKF) to estimate both the accelerometer biases and the UAV position and velocity. This approach reduces the accumulation of velocity and positional errors. The efficiency of this approach was validated through simulation experiments involving a UAV navigating in circular and straight-line trajectories. Simulation results show that the proposed approach significantly enhances UAV navigation performance, providing more accurate estimates of both the state and accelerometer biases while reducing error growth through the use of vision aiding from an Optical Flow Sensor.
(This article belongs to the Special Issue Precision Positioning and Navigation Communication Systems)
Figures:
Figure 1: Basic block diagram for a strapdown inertial navigation system, courtesy [16].
Figure 2: Strapdown inertial navigation with IMU acceleration input corrected for IMU bias.
Figure 3: System and Simulation Block Diagram.
Figure 4: Position Errors, Dead-Reckoning, biases not present.
Figure 5: Velocity Errors, Dead-Reckoning, biases not present.
Figure 6: Position Errors, Dead-Reckoning, biases present.
Figure 7: Velocity Errors, Dead-Reckoning, biases present.
Figure 8: Position Errors, Kalman est. on, biases not present, bias est. off.
Figure 9: Velocity Errors, Kalman est. on, biases not present, bias est. off.
Figure 10: Position Errors, Kalman est. on, biases present, bias est. off.
Figure 11: Velocity Errors, Kalman est. on, biases present, bias est. off.
Figure 12: Position Errors, Kalman est. on, biases present, bias est. on.
Figure 13: Velocity Errors, Kalman est. on, biases present, bias est. on.
Figure 14: Bias Estimates during Scheme 4.
Figure 15: Position Errors, Kalman est. on, biases present, bias est. off.
Figure 16: Velocity Errors, Kalman est. on, biases present, bias est. off.
Figure 17: Position Errors, Kalman est. on, biases present, bias est. on.
Figure 18: Velocity Errors, Kalman est. on, biases present, bias est. on.
28 pages, 4077 KiB  
Review
A Comprehensive Survey on Short-Distance Localization of UAVs
by Luka Kramarić, Niko Jelušić, Tomislav Radišić and Mario Muštra
Drones 2025, 9(3), 188; https://doi.org/10.3390/drones9030188 - 4 Mar 2025
Viewed by 168
Abstract
The localization of Unmanned Aerial Vehicles (UAVs) is a critical area of research, particularly in applications requiring high accuracy and reliability in Global Positioning System (GPS)-denied environments. This paper presents a comprehensive overview of short-distance localization methods for UAVs, exploring their strengths, limitations, and practical applications. Among short-distance localization methods, ultra-wideband (UWB) technology has gained significant attention due to its ability to provide accurate positioning, resistance to multipath interference, and low power consumption. Different approaches to the usage of UWB sensors, such as time of arrival (ToA), time difference of arrival (TDoA), and double-sided two-way ranging (DS-TWR), alongside their integration with complementary sensors like Inertial Measurement Units (IMUs), cameras, and visual odometry systems, are explored. Furthermore, this paper provides an evaluation of the key factors affecting UWB-based localization performance, including anchor placement, synchronization, and the challenges of combined use with other localization technologies. By highlighting current trends in UWB-related research, including its increasing use in swarm control, indoor navigation, and autonomous landing, this study can serve as a guide for researchers choosing appropriate localization techniques, and it emphasizes UWB technology's potential as a foundational technology in advanced UAV applications.
(This article belongs to the Special Issue Resilient Networking and Task Allocation for Drone Swarms)
Figures:
Figure 1: The steps in the process of designing a short-distance localization system for UAVs, from the choice of the application and the environment to the required performance.
Figure 2: The principle of the Extended Kalman Filter allows the usage of a linear filter in nonlinear state estimation [19].
Figure 3: The localization trajectories of a UAV, where the yellow, cyan, red, and blue curves represent the ground truth, UWB, QVIO, and AprilTag, respectively [35].
Figure 4: Landing locations by using different localization equipment [57].
Figure 5: The localization error with and without the use of UWB with GNSS/IMU shows that the combination of localization systems provides significantly better results [69].
Figure 6: A comparison of the trajectories from different combinations of aids to UWB technology: (a) integration between the camera and INS for 180 s of a complete signal outage; (b) INS dead reckoning solution compared against the reference trajectory for 60 s of GNSS signal outage; and (c) UWB-INS integration performance compared against the reference trajectory for 180 s of GNSS signal outage [73].
Figure 7: Message exchange for a single UAV-anchor pair using the DS-TWR [74].
Figure 8: The real vs. the predefined flight trajectory in the xy-plane [74].
18 pages, 780 KiB  
Article
Real-Time and Post-Mission Heading Alignment for Drone Navigation Based on Single-Antenna GNSS and MEMs-IMU Sensors
by João F. Contreras, Jitesh A. M. Vassaram, Marcos R. Fernandes and João B. R. do Val
Drones 2025, 9(3), 169; https://doi.org/10.3390/drones9030169 - 25 Feb 2025
Viewed by 224
Abstract
This paper presents a heading alignment procedure for drone navigation employing a single hover GNSS antenna combined with low-grade MEMS-IMU sensors. The design was motivated by the need for a drone-mounted differential interferometric SAR (DinSAR) application. Still, the methodology proposed here applies to any Unmanned Aerial Vehicle (UAV) application that requires high-precision navigation data for short-flight missions utilizing cost-effective MEMS sensors. The method proposed here involves a Bayesian parameter estimation based on a simultaneous cumulative Mahalanobis metric applied to the innovation process of Kalman-like filters, which are identical except for the initial heading guess. The procedure is then generalized to multidimensional parameters, thus called parametric alignment, referring to the fact that the strategy applies to alignment problems regarding some parameters, such as the initial heading value. The motivation for the multidimensional extension in this scenario is also presented. The method is highly applicable for cases where gyro-compassing is not available. It employs the most straightforward optimization techniques, which can be implemented using a real-time parallelism scheme. Experimental results obtained from a real UAV mission demonstrate that the proposed method can provide initial heading alignment when the heading is not directly observable during takeoff, while numerical simulations are used to illustrate the extension to the multidimensional case.
(This article belongs to the Special Issue Drones Navigation and Orientation)
Figures:
Figure 1: The diagram illustrates the invariance in the projections of the gravity vector (f_{ib,x}^b, f_{ib,y}^b, f_{ib,z}^b) on the body axes of a UAV after a change in the initial heading (yaw rotation). Thus, even before the flight, the readings would be the same for frames b_1 and b_2, making it impossible to determine the initial heading of the UAV.
Figure 2: The flight profile of a mission conducted in June 2024 in Sweden at the indicated map location. Two ZED-F9P GNSS receivers and an ADIS 16945 inertial system were used.
Figure 3: Illustrative diagram showing the steps involved in obtaining the heading estimate via the MAP of p(ψ_0 | y_{1:N}) in a real-time application of a drone navigation system.
Figure 4: Quadratic approximation (blue curve) obtained by sampling three distinct initial heading values. The black curve is the log-likelihood function obtained for the real data flight in Figure 2; note its remarkably precise sinusoidal profile. The minimum point of the quadratic approximation (in green) is taken as the estimate for the initial heading.
Figure 5: Two perspectives following the evolution of heading estimation: the first is a three-dimensional mesh representation of the parabolas fitted over time; the second is a view of the ψ_0 vs. time plane, illustrating the evolution of the estimated heading values (the parabola's minimum point).
Figure 6: Path convergence of Algorithm 2 in an ill-conditioned problem in two dimensions.
19 pages, 11821 KiB  
Article
Bias Estimation for Low-Cost IMU Including X- and Y-Axis Accelerometers in INS/GPS/Gyrocompass
by Gen Fukuda and Nobuaki Kubo
Sensors 2025, 25(5), 1315; https://doi.org/10.3390/s25051315 - 21 Feb 2025
Viewed by 213
Abstract
Inertial navigation systems (INSs) provide autonomous position estimation capabilities independent of global navigation satellite systems (GNSSs). However, the high cost of traditional sensors, such as fiber-optic gyroscopes (FOGs), limits their widespread adoption. In contrast, micro-electromechanical system (MEMS)-based inertial measurement units (IMUs) offer a low-cost alternative; however, their lower accuracy and sensor bias issues, particularly in maritime environments, remain considerable obstacles. This study proposes an improved method for bias estimation by comparing the estimated values from a trajectory generator (TG)-based acceleration and angular-velocity estimation system with actual measurements. Additionally, for X- and Y-axis accelerations, we introduce a method that leverages the correlation between altitude differences derived from an INS/GNSS/gyrocompass (IGG) and those obtained during the TG estimation process to estimate the bias. Simulation datasets from experimental voyages validate the proposed method by evaluating the mean, median, normalized cross-correlation, least squares, and fast Fourier transform (FFT). The Butterworth filter achieved the smallest angular-velocity bias estimation error. For the X- and Y-axis acceleration bias, altitude-based estimation achieved differences of 1.2 × 10⁻² m/s² and 1.0 × 10⁻⁴ m/s², respectively, when compared against the input bias using 30 min of data. These methods enhance the positioning and attitude estimation accuracy of low-cost IMUs, providing a cost-effective maritime navigation solution.
(This article belongs to the Special Issue INS/GNSS Integrated Navigation Systems)
Figures:
Figure 1: Roll and pitch comparison between reference and IGG with X- and Y-acc. initial bias.
Figure 2: Bias estimation process.
Figure 3: Details of the X and Y acceleration initial bias estimation section in Figure 2.
Figure 4: Image of the processing program at a particular segment and time.
Figure 5: Image of the processing program at a particular segment and time.
Figure 6: Experimental voyage track used for simulation.
Figure 7: AV plots for gyroscopes.
Figure 8: AV plots for accelerometers. For the GNSS simulation values, RTK positioning using u-blox F9P, as shown in Table 4, and NovAtel GNSS-802L is assumed.
Figure 9: Angular rate output with IMU and simulation.
Figure 10: Acceleration sensor output using the IMU and simulation.
Figure 11: Roll with each segment.
Figure 12: Pitch with each segment.
Figure 13: X-axis bias estimation.
Figure 14: Y-axis bias estimation.
21 pages, 4434 KiB  
Article
Scenario Generation and Autonomous Control for High-Precision Vineyard Operations
by Carlos Ruiz Mayo, Federico Cheli, Stefano Arrigoni, Francesco Paparazzo, Simone Mentasti and Marco Ezio Pezzola
AgriEngineering 2025, 7(2), 46; https://doi.org/10.3390/agriengineering7020046 - 18 Feb 2025
Viewed by 253
Abstract
Precision Farming (PF) in vineyards represents an innovative approach to vine cultivation that leverages the advantages of the latest technologies to optimize resource use and improve overall field management. This study investigates the application of PF techniques in a vineyard, focusing on sensor-based decision-making for autonomous driving. The goal of this research is to define a repeatable methodology for virtual testing of autonomous driving operations in a vineyard, considering realistic scenarios, efficient control architectures, and reliable sensors. The simulation scenario was created to replicate the conditions of a real vineyard, including elevation, banking profiles, and vine positioning. This provides a safe environment for training operators and testing tools such as sensors, algorithms, or controllers. This study also proposes an efficient control scheme, implemented as a state machine, to autonomously drive the tractor during two distinct phases of the navigation process: between rows and out of the field. The implementation demonstrates improvements in trajectory-following precision while reducing the intervention required by the farmer. The proposed system was extensively tested in a virtual environment, with a particular focus on evaluating the effects of micro and macro terrain irregularities on the results. A key feature of the control framework is its ability to achieve adequate accuracy while minimizing the number of sensors used, relying on a configuration of a Global Navigation Satellite System (GNSS) and an Inertial Measurement Unit (IMU) as a cost-effective solution. This minimal-sensor approach, which includes a state machine designed to seamlessly transition between in-field and out-of-field operations, balances performance and cost efficiency. The system was validated through a wide range of simulations, highlighting its robustness and adaptability to various terrain conditions. The main contributions of this work include the high fidelity of the simulation scenario, the efficient integration of the control algorithm and sensors for the two navigation phases, and the detailed analysis of terrain conditions. Together, these elements form a robust framework for testing autonomous tractor operations in vineyards.
(This article belongs to the Section Sensors Technology and Precision Agriculture)
Figures:
Figure 1: Comparison of the real plant distribution with the spin generation.
Figure 2: Elevation profile (a) and plant altitude distribution (b).
Figure 3: Perspective view of the digital twin in TruckMaker [26].
Figure 4: Increase in plant width as a function of maturity level.
Figure 5: Complete trajectory to be followed in the testing scenario.
Figure 6: Schematic of the elevation, slope, and camber profiles applied in the simulation environment.
Figure 7: Complete 3D trajectory in the testing scenario.
Figure 8: States of the overall decision-making algorithm.
Figure 9: Seven detection zones.
Figure 10: Scheme of the MPC states and feedback errors [30].
Figure 11: Reference X-Y trajectory to be followed during the U-turn maneuver.
Figure 12: Lateral distances measured by Free Space sensors (b) versus expected ones (a).
Figure 13: Recreated field using the Free Space sensor over the actual bounding box positions of the plants.
Figure 14: Comparison of the actual X and Y positions with the ones obtained with the blind model.
Figure 15: Comparison of the actual X position with the one computed by the GNSS.
Figure 16: Comparison of the actual orientation with the one computed by the GNSS.
Figure 17: Comparison between the real yaw angle and the one computed with the IMU sensor.
Figure 18: Comparison between the ideal and the GNSS-IMU model performance in terms of RMS of lateral deviation with respect to the testing trajectory in Figure 5.
Figure 19: Analysis of the effect of micro-irregularities, in the GNSS-IMU model, on the RMS of lateral deviation from the testing trajectory in Figure 5.
Figure 20: Analysis of the effect of macro-irregularities, in the GNSS-IMU model, on the RMS of lateral deviation from the testing trajectory in Figure 5.
Figure 21: Analysis of the combined effect of roughness and macro-irregularities, in the GNSS-IMU model, on the RMS of lateral deviation from the testing trajectory in Figure 5, considering low roughness (a) and high roughness (b) levels.
18 pages, 4517 KiB  
Article
Running Parameter Analysis in 400 m Sprint Using Real-Time Kinematic Global Navigation Satellite Systems
by Keisuke Onodera, Naoto Miyamoto, Kiyoshi Hirose, Akiko Kondo, Wako Kajiwara, Hiroshi Nakano, Shunya Uda and Masaki Takeda
Sensors 2025, 25(4), 1073; https://doi.org/10.3390/s25041073 - 11 Feb 2025
Viewed by 529
Abstract
Accurate measurement of running parameters, including the step length (SL), step frequency (SF), and velocity, is essential for optimizing sprint performance. Traditional methods, such as 2D video analysis and inertial measurement units (IMUs), face limitations in precision and practicality. This study introduces and evaluates two methods for estimating running parameters using real-time kinematic global navigation satellite systems (RTK GNSS) with 100 Hz sampling. Method 1 identifies mid-stance phases via vertical position minima, while Method 2 aligns with the initial contact (IC) events through vertical velocity minima. Two collegiate sprinters completed a 400 m sprint under controlled conditions, with RTK GNSS measurements validated against 3D video analysis and IMU data. Both methods estimated the SF, SL, and velocity, but Method 2 demonstrated superior accuracy, achieving a lower RMSE (SF: 0.205 Hz versus 0.291 Hz; SL: 0.143 m versus 0.190 m) and higher correlation with the reference data. Method 2 also exhibited improved performance in curved sections and detected stride asymmetries with higher consistency than Method 1. These findings highlight RTK GNSS, particularly the velocity minima approach, as a robust, drift-free, single-sensor solution for detailed per-step sprint analysis in outdoor conditions. This approach offers a practical alternative to IMU-based methods and enables training optimization and performance evaluation.
Figures:
Figure 1: Experimental measurement system. (A) Athlete wearing the measurement system, including an RTK GNSS receiver, an antenna, and IMUs. (B) RTK GNSS receiver stored in a waist pack. (C) Head-mounted GNSS antenna set-up, with a triple-band helical antenna connected via an SMA cable. (D) IMUs attached above the ankles for SF validation, with an additional unit on the headgear for synchronization.
Figure 2: GNSS-based step detection methods with IMU reference. A one-second snapshot of data from Subject A. The dashed vertical lines indicate camera-based IC. (A) Method 1 detected minima in vertical GNSS position (▽) near mid-stance. (B) Method 2 identified minima in vertical GNSS velocity (▼) aligned with IC. (C,D) Resultant accelerations from right and left foot IMUs, respectively, with asterisks (*) marking IC events. The grey line shows the raw data before applying the Butterworth filter.
Figure 3: Comparison of GNSS-based methods (Method 1 and Method 2) versus camera-based measurements of SF, SL, and running velocity. (A–C) Scatter plots with the line of unity. (D–F) Bland–Altman plots showing bias and LOA. Data points are color-coded by subject: Subject A (blue circles) and Subject B (red squares). Overlapping data points appear darker due to transparency settings, visually indicating areas of higher data density.
Figure 4: Comparison of GNSS-based methods (Method 1 and Method 2) versus IMU-derived SF in the 400 m sprint (left and right legs). (A,B) Scatter plots for Subject A (blue circles) and Subject B (red squares), with the SF for the left leg (top) and right leg (bottom) compared with the IMU-derived SF. (C,D) Corresponding Bland–Altman plots, indicating bias (solid line) and LOA (dashed lines).
Figure 5: Comparison of GNSS-based methods (Method 1 and Method 2) with IMU-derived SF across curved and straight track sections. (A,B) Scatter plots for Subject A (blue circles) and Subject B (red squares), with data separated into curved (top) and straight (bottom) sections. (C,D) Bland–Altman plots indicating bias (solid line) and LOA (dashed lines).
Figure 6: Step-by-step analysis of running parameters using RTK GNSS (Method 2) in a 400 m sprint. Panels show step-by-step changes in running velocity, SF, SL, and elapsed time for (A) Subject A (200 steps) and (B) Subject B (242 steps). The right steps are represented by filled circles (●) and the left steps by open circles (○). Shaded areas for Subject A indicate periods in 'Float' solution, highlighting reduced GNSS accuracy.
29 pages, 4682 KiB  
Article
LSAF-LSTM-Based Self-Adaptive Multi-Sensor Fusion for Robust UAV State Estimation in Challenging Environments
by Mahammad Irfan, Sagar Dalai, Petar Trslic, James Riordan and Gerard Dooly
Machines 2025, 13(2), 130; https://doi.org/10.3390/machines13020130 - 9 Feb 2025
Viewed by 598
Abstract
Unmanned aerial vehicle (UAV) state estimation is fundamental across applications like robot navigation, autonomous driving, virtual reality (VR), and augmented reality (AR). This research highlights the critical role of robust state estimation in ensuring safe and efficient autonomous UAV navigation, particularly in challenging environments. We propose a deep learning-based adaptive sensor fusion framework for UAV state estimation, integrating multi-sensor data from stereo cameras, an IMU, two 3D LiDARs, and GPS. The framework dynamically adjusts fusion weights in real time using a long short-term memory (LSTM) model, enhancing robustness under diverse conditions such as illumination changes, structureless environments, degraded GPS signals, or complete signal loss, where traditional single-sensor SLAM methods often fail. Validated on an in-house integrated UAV platform and evaluated against high-precision RTK ground truth, the algorithm incorporates deep learning-predicted fusion weights into an optimization-based odometry pipeline. The system delivers robust, consistent, and accurate state estimation, outperforming state-of-the-art techniques. Experimental results demonstrate its adaptability and effectiveness across challenging scenarios, showcasing significant advancements in UAV autonomy and reliability through the synergistic integration of deep learning and sensor fusion.
Figures:
Figure 1: Proposed architecture for LSTM-based self-adaptive multi-sensor fusion (LSAF).
Figure 2: An illustration of the proposed LSAF framework. The global estimator combines local estimations from various global sensors to achieve precise local accuracy and globally drift-free pose estimation, building upon our previous work [28].
Figure 3: Proposed LSTM-based multi-sensor fusion architecture for UAV state estimation.
Figure 4: LSTM cell architecture for adaptive multi-sensor fusion.
Figure 5: Training and validation loss of the proposed LSAF framework over 1000 epochs.
Figure 6: Training and validation MAE of the proposed LSAF framework over 1000 epochs.
Figure 7: Proposed block diagram for LSTM-based self-adaptive multi-sensor fusion (LSAF).
Figure 8: The experimental environment in different scenarios during data collection. Panels (a,b) show the UAV hardware along with the sensor integration, and panels (c,d) show the open-field dataset environment viewed from the stereo and LiDAR sensors, respectively, building upon our previous work [28].
Figure 9: Trajectory plots of the proposed LSAF method and comparison with FASTLIO2 and VINS-Fusion.
Figure 10: Box plots showing the overall APE of each strategy.
Figure 11: Absolute estimated position of the x, y, and z axes for various methods on the UAV car parking dataset.
Figure 12: Absolute position error of roll, yaw, and pitch for various methods on the UAV car parking dataset.
Figure 13: Trajectory plots of the proposed LSAF method and comparison with FASTLIO2 and VINS-Fusion on the UL outdoor handheld dataset.
Figure 14: Box plots showing the overall APE of each strategy.
Figure 15: Absolute estimated position of the x, y, and z axes for various methods on the UL outdoor handheld dataset.
Figure 16: Absolute position error of roll, yaw, and pitch for various methods on the UL outdoor handheld dataset.
Figure 17: Trajectory plots of the proposed LSAF method and comparison with FASTLIO2 and VINS-Fusion.
Figure 18: Absolute estimated position of the x, y, and z axes for various methods on the UAV car bridge dataset.
Figure 19: Absolute position error of roll, yaw, and pitch for various methods on the UAV car bridge dataset.
Figure 20: Box plots showing the overall APE of each strategy.
32 pages, 4386 KiB  
Article
Multi-Source, Fault-Tolerant, and Robust Navigation Method for Tightly Coupled GNSS/5G/IMU System
by Zhongliang Deng, Zhichao Zhang, Zhenke Ding and Bingxun Liu
Sensors 2025, 25(3), 965; https://doi.org/10.3390/s25030965 - 5 Feb 2025
Viewed by 671
Abstract
The global navigation satellite system (GNSS) struggles to deliver the precision and reliability required for positioning, navigation, and timing (PNT) services in environments with severe interference. Fifth-generation (5G) cellular networks, with their low latency, high bandwidth, and large capacity, offer a robust communication infrastructure, enabling 5G base stations (BSs) to extend coverage into regions where traditional GNSSs face significant challenges. However, frequent multi-sensor faults, including missing alarm thresholds, uncontrolled error accumulation, and delayed warnings, hinder the adaptability of navigation systems to the dynamic multi-source information of complex scenarios. This study introduces an advanced, tightly coupled GNSS/5G/IMU integration framework designed for distributed PNT systems, providing all-source fault detection with weighted, robust adaptive filtering. A weighted, robust adaptive filter (MCC-WRAF), grounded in the maximum correntropy criterion, was developed to suppress fault propagation, relax Gaussian noise constraints, and improve the efficiency of observational weight distribution in multi-source fusion scenarios. Moreover, we derived the intrinsic relationships of filtering innovations within wireless measurement models and proposed a time-sequential, observation-driven full-source FDE and sensor recovery validation strategy. This approach employs a sliding window which expands innovation vectors temporally based on source encoding, enabling real-time validation of isolated faulty sensors and adaptive adjustment of observational data in integrated navigation solutions. Additionally, a covariance-optimal, inflation-based integrity protection mechanism was introduced, offering rigorous evaluations of distributed PNT service availability. The experimental validation was carried out in a typical outdoor scenario, and the results highlight the proposed method's ability to mitigate undetected fault impacts, improve detection sensitivity, and significantly reduce alarm response times across step, ramp, and multi-fault mixed scenarios. Additionally, the dynamic positioning accuracy of the fusion navigation system improved to 0.83 m (1σ). Compared with standard extended Kalman filtering (EKF) and advanced multi-rate Kalman filtering (MRAKF), the proposed algorithm achieved 28.3% and 53.1% improvements in its 1σ error, respectively, significantly enhancing the accuracy and reliability of the multi-source fusion navigation system.
(This article belongs to the Section Navigation and Positioning)
Figures:
Figure 1: A weighted robust adaptive filtering and all-source fault detection framework for GNSS/5G/IMU tightly coupled navigation.
Figure 2: GNSS/5G/IMU seamless and continuous indoor and outdoor positioning solution.
Figure 3: State estimation procedure for different fault modes.
Figure 4: (a) Experimental data acquisition path. (b) Experimental data acquisition equipment.
Figure 5: Sky plot showing observable satellites. The pentagrams represent satellites.
Figure 6: Step fault detection statistic and positioning error curves for GNSS/5G: (a) Case 1, (b) Case 2, (c) Case 4.
Figure 7: Fault detection rate statistics for the two stages in Case 4.
Figure 8: Fault detection rate of SAT-17 under varying step errors.
Figure 9: IMU step fault detection statistic and positioning error curves: (a) Case 3.1, (b) Case 3.2, and (c) Case 3.3.
Figure 10: Ramp fault detection statistics and positioning error curves for Case 5. (a) Comparison of test statistics using different methods. (b) East-north-up positioning errors before and after FDE.
Figure 11: Ramp fault detection statistics and positioning error curves for Case 6. (a) Comparison of test statistics using different methods. (b) East-north-up positioning errors before and after FDE.
Figure 12: Boxplot distribution of east-north-up positioning errors before and after FDE in ramp fault conditions.
Figure 13: FDRs and RMSEs for various ramp fault conditions.
Figure 14: Ramp fault detection statistics and positioning error curves for Case 7. (a) Comparison of test statistics using different methods. (b) East-north-up positioning errors before and after FDE.
Figure 15: Protection level curves: (a) before FDE and (b) after FDE.
Figure 16: Stanford chart before and after FDE: (a) before FDE and (b) after FDE.
Figure 17: Comparison of fusion localization errors of different navigation sources.
Figure 18: Fusion methods for failure-free scenarios: (a) position error and (b) CDF curves.
55 pages, 11197 KiB  
Review
State-of-the-Art Navigation Systems and Sensors for Unmanned Underwater Vehicles (UUVs)
by Md Mainuddin Sagar, Menaka Konara, Nate Picard and Kihan Park
Appl. Mech. 2025, 6(1), 10; https://doi.org/10.3390/applmech6010010 - 2 Feb 2025
Viewed by 601
Abstract
Navigation systems and sensors for unmanned underwater vehicles (UUVs) are an active area of research, and velocity sensing in particular has a long research history. UUVs play significant roles in military, scientific, and commercial applications owing to their autonomy, which makes reliable navigation essential. Reliable navigation of a UUV depends on the quality of its state determination, which spans position, depth, and velocity; among these, velocity determination is now one of the most important navigational criteria. Water currents are also a key navigational consideration for deep-sea research projects. Five primary types of sensors are currently used to measure UUV velocity: Doppler Velocity Logger (DVL) sensors, paddlewheel sensors, optical sensors, electromagnetic sensors, and ultrasonic sensors. Of these, the DVL is the most popular and the most fully developed for UUVs in recent years. In this work, we offer an overview of navigation systems and sensors (especially velocity sensors) developed for UUVs, with attention to tidal current sensing in the UUV setting, covering their history, evolution, current research initiatives, and anticipated future.
Figures:
Figure 1: UUV navigation system diagram (reprinted with permission from [17]).
Figure 2: A specimen of an unmanned underwater vehicle (reprinted with permission from [35]).
Figure 3: High-resolution UUV MBES bathymetry of cold-seep structures in the deep-water northern Bay of Mexico (left) and the backscatter mapping (right). Abundant heart urchins around the lake margins (A), while zones of abundant mussels (B) corresponded to zones of elevated backscatter (dark tones) in the fluid expulsion center to the south (C). The latter site was also sampled (D) to investigate orange-stained mud visible on the seafloor (reprinted with permission from [44]).
Figure 4: Unmanned underwater vehicles' maximum operational speed and altitude (reprinted with permission from [52]).
Figure 5: DVL velocity measurement schematic diagram (reprinted with permission from [151]).
Figure 6: The USBL positioning measurement schematic illustration (reprinted with permission from [151]).
Figure 7: UUV layout (reprinted with permission from [151]).
Figure 8: A simple ultrasonic speed sensor (adapted from [175]).
Figure 9: A small (paddlewheel) speed sensor (reprinted with permission from [186]).
Figure 10: Doppler velocity log (DVL) sensor working process (reprinted with permission from [192]).
Figure 11: Optical velocity sensor working principle (reprinted with permission from [204]).
Figure 12: Visual navigation system layout (reprinted with permission from [12]).
Figure 13: An optical speed sensor (adapted from [213]).
Figure 14: US-based advanced UUV (reprinted with permission from [245]).
Figure 15: Advanced UUVs in European countries (reprinted with permission from [245]).
Figure 16: Advanced UUVs in Russia (reprinted with permission from [245]).
Figure 17: New technologies for low-cost autonomous underwater vehicles (reprinted with permission from [263]).
20 pages, 8888 KiB  
Article
E2-VINS: An Event-Enhanced Visual–Inertial SLAM Scheme for Dynamic Environments
by Jiafeng Huang, Shengjie Zhao and Lin Zhang
Appl. Sci. 2025, 15(3), 1314; https://doi.org/10.3390/app15031314 - 27 Jan 2025
Viewed by 643
Abstract
Simultaneous Localization and Mapping (SLAM) technology has garnered significant interest in the robotic vision community over the past few decades. The rapid development of SLAM technology has resulted in its widespread application across various fields, including autonomous driving, robot navigation, and virtual reality. Although SLAM, especially Visual–Inertial SLAM (VI-SLAM), has made substantial progress, most classic algorithms in this field are designed based on the assumption that the observed scene is static. In complex real-world environments, the presence of dynamic objects such as pedestrians and vehicles can seriously affect the robustness and accuracy of such systems. Event cameras, recently introduced motion-sensitive biomimetic sensors, efficiently capture scene changes (referred to as "events") with high temporal resolution, offering new opportunities to enhance VI-SLAM performance in dynamic environments. Integrating this kind of innovative sensor, we propose the first event-enhanced Visual–Inertial SLAM framework specifically designed for dynamic environments, termed E2-VINS. Specifically, the system uses a visual–inertial alignment strategy to estimate IMU biases and correct IMU measurements. The calibrated IMU measurements are used to assist in motion compensation, achieving spatiotemporal alignment of events. Event-based dynamicity metrics, which measure the dynamicity of each pixel, are then generated on these aligned events. Based on these metrics, the visual residual terms of different pixels are adaptively assigned weights, namely, dynamicity weights. Subsequently, E2-VINS jointly and alternately optimizes the system state (camera poses and map points) and dynamicity weights, effectively filtering out dynamic features through a soft-threshold mechanism. Our scheme enhances the robustness of classic VI-SLAM against dynamic features, which significantly enhances VI-SLAM performance in dynamic environments, resulting in an average improvement of 1.884% in the mean position error compared to state-of-the-art methods. The superior performance of E2-VINS is validated through both qualitative and quantitative experimental results. To ensure that our results are fully reproducible, all the relevant data and codes have been released.
(This article belongs to the Special Issue Advances in Audio/Image Signals Processing)
Figures:
Figure 1: The overall pipeline of our proposed E2-VINS system. The preprocessing stage includes IMU preintegration and feature tracking of RGB frames. During system initialization, E2-VINS performs geometric alignment between IMU measurements and vision-based structure-from-motion procedures while estimating the biases to obtain calibrated IMU measurements. The calibrated data are then applied for IMU-assisted event motion compensation. The compensated events are used to generate event-based dynamicity metrics, which assess the motion dynamics of each pixel. Based on these metrics, visual residuals are adaptively assigned dynamicity weights for different pixels. Finally, E2-VINS jointly and iteratively optimizes the system state (camera poses and map points) and the dynamicity weights.
Figure 2: The illustration of the visual–inertial alignment. By matching the visual structure with IMU preintegration, the system estimates the gyroscope bias and calibrates IMU measurements.
Figure 3: The differences between the distributions of the raw events and the motion-compensated ones (blue: positive events; red: negative events). The event camera moves freely and observes scenes of an office and a moving person. (a) shows the distribution of the raw events in the spatiotemporal space. (b) shows the event frame formed by the accumulation of raw events. (c,d) show the motion-compensated results corresponding to (a,b), respectively.
Figure 4: Illustration of self-motion compensation when the rotation angle on the Y axis is θ_k. The Y-axis compensation displacement occurs when the initial coordinate of an event is not at o. u_k is the coordinate of the event triggered by the object p. After the camera rotates by an angle θ_k around the Y axis, u_k moves to u_k'. This process can be seen as the motion of the 3D point p to the point p', which corresponds to u_k' in the original pose. In this view, the incident angles before and after the movement are ξ_x + θ_k and ξ_x, respectively. The angle ξ_x can be calculated using Equation (6). Furthermore, the compensation displacement t_x in the pixel plane can be obtained.
Figure 5: Visualization of some typical qualitative results. The weighted feature image frames obtained by E2-VINS on three test sequences from the VIODE dataset [42] (city_day, city_night, and parking_lot) and four sequences from the ECMD dataset [43] (Dense_street_day_easy_a, Dense_street_day_easy_b, Dense_street_night_easy_a, and Urban_road_day_easy_a) are shown on the upper side. Features with low weights (from dynamic objects) are plotted as red points, while features with high weights (from static objects) are shown as green ones. The trajectories of the compared algorithms and E2-VINS on the Dense_street_day_medium_a and Dense_street_night_easy_a sequences of the ECMD dataset [43] are illustrated on the lower side.
19 pages, 7788 KiB  
Article
Research on Outdoor Navigation of Intelligent Wheelchair Based on a Novel Layered Cost Map
by Jianwei Cui, Siji Yu, Yucheng Shang, Yuxiang Dai and Wenyi Zhang
Actuators 2025, 14(2), 46; https://doi.org/10.3390/act14020046 - 22 Jan 2025
Viewed by 535
Abstract
With the aging of the population and the increase in the number of people with disabilities, intelligent wheelchairs are essential in improving travel autonomy and quality of life. In this paper, we propose an autonomous outdoor navigation framework for intelligent wheelchairs based on hierarchical cost maps to address the challenges of wheelchair navigation in complex and dynamic outdoor environments. First, the framework integrates multiple sensors such as RTK high-precision GPS, an IMU, and 3D LiDAR; fuses RTK, IMU, and odometer data to realize high-precision positioning; and performs path planning and obstacle avoidance through dynamic hierarchical cost maps. Second, a drivable area layer is integrated into the traditional hierarchical cost map; the drivable area detection algorithm utilizes local plane fitting and elevation difference analysis to achieve efficient ground point cloud segmentation and real-time updating, which ensures the real-time safety of navigation. The experiments were validated in real outdoor scenes and simulation environments, and the results show that drivable region detection takes about 30 ms, the positioning accuracy of outdoor wheelchair navigation is better than 10 cm, and the active obstacle avoidance distance is 1 m. This study provides an effective solution for the autonomous navigation of intelligent wheelchairs in complex outdoor environments, with high robustness and application potential.
(This article belongs to the Topic Advances in Mobile Robotics Navigation, 2nd Volume)
Show Figures

Figure 1

Figure 1
<p>Intelligent wheelchair system architecture design. The multimode perception layer integrates a variety of sensor data and is responsible for environment perception and obstacle detection, providing the necessary real-time environmental information for autonomous navigation; the autonomous navigation layer is accountable for path planning and decision-making based on the environment perception information and scheduling the execution of the motion control layer; the communication transmission layer is accountable for realizing data exchange between different components according to the agreed communication protocol; the physical layer is the foundation layer of the whole system, which mainly includes Hardware components of the wheelchair.</p>
Full article ">Figure 2
<p>Intelligent wheelchair and sensor installation position.</p>
Full article ">Figure 3
<p>Intelligent wheelchair outdoor navigation software architecture, where the red arrows represent the transfer of data, the black arrows represent the role of the function packages, and the blue arrows represent the interaction between the hardware and the data.</p>
Full article ">Figure 4
<p>RTK localization architecture diagram.</p>
Full article ">Figure 5
<p>Schematic of LiDAR spherical coordinate system. This is a 16-line LiDAR with a horizontal field of view of 360°, a vertical field of view of 30°, a horizontal angular resolution of 1°, and a vertical angular resolution of 2°.</p>
Full article ">Figure 6
<p>Schematic of ground point screening based on PCA local ground fitting.</p>
Figure 7. The red box shows the point cloud processing: environmental point cloud data are collected by LiDAR, the cloud is ground-segmented, elevated points are extracted, and a drivable-area raster is computed and imported into the hierarchical cost map. The green box shows the bottom-to-top update of the hierarchical cost map layers: (a) the static map initializes the cost map, with blue cells marking static obstacles; (b) the obstacle layer is updated as the LiDAR detects environmental obstacles, with blue cells marking both static and dynamic obstacles; (c) the drivable-area layer is updated from the imported raster, with orange cells marking the computed drivable area and blue cells marking its boundary and the region behind it; (d) the expansion layer (gray cells) inflates detected obstacles so the wheelchair keeps clear of them during path planning; at this point the cost map is fully updated and the layers are merged into the complete total cost map. (e) The map updates in real time; the blue cells on the right change position as a dynamic obstacle moves.
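A minimal sketch of the layer merge described in Figure 7, assuming a ROS-costmap-style convention in which layers combine cell-wise by maximum and the drivable-area layer forces non-drivable cells to a lethal cost; the paper's exact merge rule is not stated in this listing.

```python
import numpy as np

LETHAL, FREE = 254, 0  # cost conventions borrowed from ROS costmap_2d

def merge_layers(static, obstacles, drivable_mask, expansion):
    """Combine hierarchical cost map layers bottom-to-top, as in Figure 7.

    static, obstacles, expansion: (H, W) cost arrays in [0, 254].
    drivable_mask: (H, W) boolean, True where the ground is drivable.
    Cells outside the drivable area are forced to lethal cost so the
    planner never routes through them.
    """
    total = np.maximum(static, obstacles)            # static + obstacle layers
    total = np.where(drivable_mask, total, LETHAL)   # drivable-area layer
    total = np.maximum(total, expansion)             # expansion layer on top
    return total
```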
Figure 8. Schematic diagram of the target point and wheelchair path-planning movement.
Figure 9. Intelligent wheelchair drivable area detection results.
Figure 10. Drivable-area hierarchical cost map experiment: (a) the actual scene, (b) the collected point cloud data, and (c) the cost map, where white cells have a cost of 0 and black cells mark the non-drivable region of the drivable-area layer; the light blue region is the obstacle layer, and the red region is the expansion layer.
Figure 11. Outdoor path planning experiment: (a) schematic diagram of the path planned on the Gaode map, and (b) path planning after ROS receives the first target point; the red points are laser points processed by the ground point cloud segmentation algorithm.
Figure 12. Outdoor path planning experiment: (a) obstacles in the actual scene, and (b) the cost map and path planning in the presence of obstacles.
16 pages, 5789 KiB  
Article
Research on EV Crawler-Type Soil Sample Robot Using GNSS Information
by Liangliang Yang, Chiaki Tomioka, Yohei Hoshino, Sota Kamata and Shunsuke Kikuchi
Sensors 2025, 25(3), 604; https://doi.org/10.3390/s25030604 - 21 Jan 2025
Viewed by 562
Abstract
In Japan, the declining number of agricultural workers and the aging of the workforce are pressing problems, creating demand for more efficient, labor-saving work. Furthermore, to counter the rising price of fertilizer and its growing environmental burden, more efficient fertilization is needed. We therefore aim to develop an electric soil sampling robot that can run autonomously using Global Navigation Satellite System (GNSS) information. GNSS and an Inertial Measurement Unit (IMU) are used as navigation sensors. The work machine is a crawler type, which reduces soil compaction. A route map was generated in advance from the coordinate values of the field, with soil sampling positions set at 10 m intervals. In the experiment, the robot traveled along the route map and stopped automatically at each position. The standard deviation of the lateral error was about 0.032 m, and the standard deviation of the interval between soil sampling positions was less than 0.05 m. This accuracy is sufficient for soil sampling, and even higher-density sampling is possible by setting finer sampling intervals. Full article
(This article belongs to the Special Issue INS/GNSS Integrated Navigation Systems)
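As one plausible reading of the route-map generation step, the abstract's 10 m spacing could be produced by placing sampling stops along a straight line between two field coordinates. The function below is a hypothetical sketch under that assumption, not the authors' code.

```python
import numpy as np

def sampling_waypoints(a, b, interval=10.0):
    """Place soil-sampling stops every `interval` metres along segment AB.

    a, b: (2,) arrays of field coordinates (e.g. metres in a local
    ENU frame derived from GNSS). Returns an (n+1, 2) array of stops
    starting at A and spaced `interval` metres apart along AB.
    """
    ab = b - a
    length = np.linalg.norm(ab)
    n = int(length // interval)                   # whole stops that fit on AB
    ts = np.arange(n + 1) * interval / length     # fractional positions on AB
    return a + ts[:, None] * ab

# usage sketch: sampling_waypoints(np.array([0.0, 0.0]), np.array([95.0, 0.0]))
```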
Figure 1. Points where soil samples were taken.
Figure 2. Drawing of the EV (electric vehicle) crawler-type soil sample robot used in the experiment.
Figure 3. Soil sampling equipment.
Figure 4. Soil sample collection procedure. The red arrows indicate the steps, and the blue arrows indicate the direction in which the mechanism moves.
Figure 5. Block diagram of the EV crawler-type soil sampling robot.
Figure 6. Flowchart of the program during automatic driving.
Figure 7. Geometry used for autonomous driving: the points and straight lines involved. Points A and B are points at the soil sample collection site in the field; the red line is the line through A and B, and the blue line is an auxiliary line obtained by translating line AB to the robot's current position P. Q is the perpendicular projection of P onto line AB. x and y denote the axes of the figure, d is the distance between P and Q, and l is the distance between P and B.
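The caption's quantities d and l follow directly from the stated geometry. The sketch below (with hypothetical names) returns the signed lateral error, the remaining distance to B, and the projection point Q; it implements the geometry only, not the authors' controller.

```python
import numpy as np

def guidance_errors(a, b, p):
    """Lateral error d and remaining distance l from Figure 7's geometry.

    a, b: (2,) endpoints of the reference line AB in field coordinates.
    p: (2,) current robot position. Q is the foot of the perpendicular
    from P onto AB; d is signed so a controller knows which side of
    the line the robot is on.
    """
    ab_unit = (b - a) / np.linalg.norm(b - a)
    rel = p - a
    q = a + np.dot(rel, ab_unit) * ab_unit            # Q: projection of P onto AB
    d = ab_unit[0] * rel[1] - ab_unit[1] * rel[0]     # signed lateral error |PQ|
    l = np.linalg.norm(b - p)                         # remaining distance |PB|
    return d, l, q
```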
Figure 8. EV crawler-type soil sample robot.
Figure 9. Changes in robot speed during automatic driving: (a) RPM value over time; (b) RPM value specified by the system over time (800 to 870 s).
Figure 10. Difference between the left and right RPM values over time.
Figure 11. Distance between the previous point and the next point.
Figure 12. Lateral error of Path 1, Path 2, and Path 3.
Figure 13. Measured distance between the previous point and the next point.
19 pages, 1682 KiB  
Article
Underwater DVL Optimization Network (UDON): A Learning-Based DVL Velocity Optimizing Method for Underwater Navigation
by Feihu Zhang, Shaoping Zhao, Lu Li and Chun Cao
Drones 2025, 9(1), 56; https://doi.org/10.3390/drones9010056 - 15 Jan 2025
Viewed by 559
Abstract
As the exploration of marine resources continues to deepen, the use of Autonomous Underwater Vehicles (AUVs) for marine resource surveys and underwater environmental mapping has become common practice. To accomplish exploration missions successfully, AUVs require high-precision underwater navigation information as support. A Strapdown Inertial Navigation System (SINS) can provide an AUV with accurate attitude and heading information, while a Doppler Velocity Log (DVL) measures its velocity vector; an integrated SINS/DVL navigation system can therefore furnish the navigational information an AUV requires. Because the DVL is susceptible to external environmental interference, which reduces its measurement accuracy, this paper proposes an end-to-end deep-learning approach to enhance the accuracy of DVL velocity vector measurements. Raw measurements from an Inertial Measurement Unit (IMU), comprising gyroscopes and accelerometers, assist the DVL in velocity vector estimation, and the estimate is refined towards the Global Positioning System (GPS) velocity vector; this compensates for the external interference affecting the DVL and thereby enhances navigation accuracy. To evaluate the proposed method, we conducted lake experiments using SINS and DVL equipment and organized the collected data into a dataset for training and assessing the model. The results show that the DVL velocity vector predicted by our model improves the root mean square error by up to 69.26% and the relative trajectory error by up to 78.62%. Full article
(This article belongs to the Special Issue Advances in Autonomous Underwater Drones)
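The network details are not given in this listing, so the following is only a plausible stand-in: a small PyTorch model that encodes a raw IMU window, fuses it with the raw DVL velocity, and regresses a corrected velocity trained against GPS velocity. Layer sizes, window length, and names are guesses, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DVLRefiner(nn.Module):
    """Hypothetical stand-in for an end-to-end DVL-correction network.

    Inputs: a window of raw IMU samples (3 gyro + 3 accel channels)
    and the raw DVL velocity; output: a corrected 3D velocity vector.
    Training with an MSE loss against GPS-derived velocity mirrors
    the paper's idea of refining DVL output towards GPS.
    """
    def __init__(self, imu_window=100):
        super().__init__()
        self.imu_net = nn.Sequential(              # encode the IMU window
            nn.Conv1d(6, 32, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(                 # fuse with raw DVL velocity
            nn.Linear(32 + 3, 64), nn.ReLU(), nn.Linear(64, 3),
        )

    def forward(self, imu, dvl_vel):
        feat = self.imu_net(imu)                   # imu: (B, 6, imu_window)
        return self.head(torch.cat([feat, dvl_vel], dim=1))

# training step sketch:
#   loss = nn.functional.mse_loss(model(imu, dvl_vel), gps_vel)
```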
Figure 1. Overview of the proposed method.
Figure 2. Schematic diagram of the DVL's four beams, which are arranged in an X-shaped layout.
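For an X-configured four-beam DVL like the one in Figure 2, each beam measures the projection of the body velocity onto its own axis, so the velocity vector follows from a small least-squares solve; four beams over three unknowns give useful redundancy. The 30° tilt below is a typical value, not taken from the paper.

```python
import numpy as np

def dvl_beams_to_velocity(beam_vels, tilt_deg=30.0):
    """Recover body-frame velocity from four X-configured DVL beams.

    beam_vels: (4,) along-beam velocities. Each beam points down at
    tilt_deg from vertical, at azimuths 45/135/225/315 degrees
    (the X layout of Figure 2). The overdetermined system is solved
    in a least-squares sense.
    """
    az = np.radians([45.0, 135.0, 225.0, 315.0])
    tilt = np.radians(tilt_deg)
    # rows: unit direction vector of each beam in the body frame
    A = np.column_stack([
        np.cos(az) * np.sin(tilt),
        np.sin(az) * np.sin(tilt),
        np.full(4, np.cos(tilt)),
    ])
    v, *_ = np.linalg.lstsq(A, beam_vels, rcond=None)
    return v  # (vx, vy, vz) in the DVL body frame
```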
Figure 3. Deep-learning network architecture of the proposed model.
Figure 4. The device on the left is the SINS; the device on the right is the DVL.
Figure 5. Satellite trajectory maps for Task-1 (top left), Task-2 (top right), Task-3 (bottom left), and Task-4 (bottom right); the small map in the center is a satellite image of the experimental area.
Figure 6. Left: the test vessel with the fixed mechanical structure installed. Upper right: coordinate directions of the SINS and DVL devices. Lower right: schematic of the data collection process.
Figure 7. Comparison of the original DVL velocity, model-improved velocity, and GPS output velocity for (a) Task-1, (b) Task-2, (c) Task-3, and (d) Task-4. VL denotes the lateral velocity of the DVL, and VF the forward velocity.
Figure 8. Trajectories obtained by dead reckoning using the original DVL velocity, original SINS velocity, model-improved velocity, and GPS output velocity for (a) Task-1, (b) Task-2, (c) Task-3, and (d) Task-4. East and North denote the eastward and northward displacements of the vehicle.
13 pages, 1881 KiB  
Article
Research on Inertial Isolation Rotation Modulation of Dual-Axis Inertial Navigation Based on Multi-Error Coupling Characteristics
by Bo Zhang, Changhua Hu, Silin Hou, Jianxun Zhang, Jianfei Zheng and Xuan Liu
Aerospace 2025, 12(1), 47; https://doi.org/10.3390/aerospace12010047 - 13 Jan 2025
Viewed by 568
Abstract
Currently, research on the rotational modulation of dual-axis inertial navigation for isolating carrier motion does not sufficiently address compensation of the gyroscope scale factor error caused by the Earth's rotation. Moreover, it has primarily been applied to ships with low maneuverability and has not yet been implemented in purely inertially guided weapons. A dual-axis inertial isolation rotation modulation method is proposed to address this issue, taking into account the application characteristics of long-endurance guided weapons. An analysis of the system error characteristics under the coupling of multiple error sources acting on the IMU shows that inertially isolating the carrier's angular velocity can significantly reduce the IMU output error. An installation error compensation algorithm for the dual-axis inertial isolation shaft system was designed, and the traditional sixteen-sequence rotation scheme was improved to compensate for the projections of the Earth's rotation and carrier motion onto the inner and outer frame rotation axes, realizing the inertial isolation rotation modulation function of dual-axis inertial navigation. Monte Carlo simulations based on the attitude changes of long-range guided weapons show that this scheme can improve inertial navigation accuracy by 10% to 20%. Full article
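A toy numerical illustration of why rotation modulation helps, assuming a constant sensor-frame bias and a single spin axis: projecting the bias into the non-rotating frame and integrating shows the in-plane components average out over whole revolutions. This is far simpler than the sixteen-sequence scheme in the paper; all names and values are illustrative.

```python
import numpy as np

def modulated_bias_drift(bias, rate_dps=6.0, duration=600.0, dt=0.01):
    """Toy illustration of rotation modulation averaging out a constant bias.

    bias: sequence of 3 constant sensor-frame bias terms (e.g. gyro drift).
    The IMU is spun about its z-axis at rate_dps (deg/s). The x/y bias
    components nearly cancel over whole revolutions, while the z component
    accumulates untouched, which is why a second modulation axis is needed.
    """
    bias = np.asarray(bias, dtype=float)
    t = np.arange(0.0, duration, dt)
    theta = np.radians(rate_dps) * t                   # accumulated spin angle
    bx = bias[0] * np.cos(theta) - bias[1] * np.sin(theta)
    by = bias[0] * np.sin(theta) + bias[1] * np.cos(theta)
    bz = np.full_like(t, bias[2])
    return np.stack([bx, by, bz], axis=1).sum(axis=0) * dt  # integrated error
```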
Figure 1. Coordinate relationships: (a) relationship between coordinate systems i, n, and p; (b) installation relationship between the IMU and the framework.
Figure 2. Scheme 1: (a) simulated output of the gyroscope angle increment; (b) simulated output of the accelerometer apparent velocity increment.
Figure 3. Scheme 2: (a) simulated output of the gyroscope angle increment; (b) simulated output of the accelerometer apparent velocity increment.
Figure 4. Scheme 3: (a) simulated output of the gyroscope angle increment; (b) simulated output of the accelerometer apparent velocity increment.
Figure 5. Scheme 4: (a) simulated output of the gyroscope angle increment; (b) simulated output of the accelerometer apparent velocity increment.
Figure 6. Integrated navigation error simulation results.
Figure 7. Navigation error: (a) east position error; (b) north position error; (c) height position error.
29 pages, 4271 KiB  
Article
Maximum Mixture Correntropy Criterion-Based Variational Bayesian Adaptive Kalman Filter for INS/UWB/GNSS-RTK Integrated Positioning
by Sen Wang, Peipei Dai, Tianhe Xu, Wenfeng Nie, Yangzi Cong, Jianping Xing and Fan Gao
Remote Sens. 2025, 17(2), 207; https://doi.org/10.3390/rs17020207 - 8 Jan 2025
Viewed by 538
Abstract
The safe operation of unmanned ground vehicles (UGVs) imposes fundamental requirements on continuous and reliable positioning performance. Traditional coupled navigation systems, combining the global navigation satellite system (GNSS) with an inertial navigation system (INS), provide continuous, drift-free position estimation. However, challenges such as GNSS signal interference and blockage in complex scenarios can significantly degrade system performance. Moreover, ultra-wideband (UWB) technology, known for its high precision, is increasingly used as a complement to the GNSS. To tackle these challenges, this paper proposes a novel tightly coupled INS/UWB/GNSS-RTK integrated positioning framework built on a variational Bayesian adaptive Kalman filter based on the maximum mixture correntropy criterion, providing a high-precision and robust navigation solution. By incorporating the maximum mixture correntropy criterion, the system effectively mitigates interference from anomalous measurements, while variational Bayesian estimation adaptively adjusts the noise statistics, enhancing the robustness and accuracy of the integrated system's state estimation. Furthermore, sensor measurements are tightly integrated with the inertial measurement unit (IMU), facilitating precise positioning even under interference from multiple signal sources. A series of real-world and simulation experiments on a UGV demonstrates that the approach provides superior accuracy and stability in state estimation, significantly mitigating position drift caused by uncertainty-induced disturbances. In the presence of non-Gaussian noise introduced by anomalous measurements, the approach effectively controls the error, showing substantial advantages in positioning accuracy and robustness. Full article
(This article belongs to the Topic Multi-Sensor Integrated Navigation Systems)
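As a rough sketch of how a mixture correntropy criterion robustifies a Kalman update: each innovation component is scored by a two-Gaussian kernel mixture, and low scores inflate the corresponding measurement noise so anomalous measurements are down-weighted rather than fully trusted. This simplified version assumes a diagonal R and omits the paper's variational Bayesian adaptation; bandwidths and names are illustrative.

```python
import numpy as np

def mmcc_update(x, P, z, H, r_diag, sig1=2.0, sig2=10.0, lam=0.5):
    """One Kalman measurement update reweighted by a mixture-correntropy kernel.

    x, P: prior state and covariance; z: measurement; H: measurement matrix;
    r_diag: diagonal of the nominal measurement noise covariance R.
    Small kernel scores (large residuals) inflate R channel-wise.
    """
    nu = z - H @ x                                          # innovation
    kern = lam * np.exp(-nu**2 / (2 * sig1**2)) \
         + (1 - lam) * np.exp(-nu**2 / (2 * sig2**2))       # mixture kernel score
    kern = np.maximum(kern, 1e-6)                           # guard tiny scores
    R_w = np.diag(r_diag / kern)                            # inflate suspect channels
    S = H @ P @ H.T + R_w
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ nu
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```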
Figure 1. Message exchange process in the DS-TWR.
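Figure 1's DS-TWR exchange yields a time of flight from four measured intervals via the standard asymmetric double-sided formula, which largely cancels the clock offset between the two nodes; a minimal sketch:

```python
SPEED_OF_LIGHT = 299_702_547.0  # approximate speed of light in air, m/s

def ds_twr_range(t_round1, t_reply1, t_round2, t_reply2):
    """Range from double-sided two-way ranging (DS-TWR) timestamps.

    t_round1/t_reply1 are measured at the initiator/responder for the
    first exchange, t_round2/t_reply2 for the second. The standard
    asymmetric DS-TWR estimate is

        tof = (t_round1 * t_round2 - t_reply1 * t_reply2)
              / (t_round1 + t_reply1 + t_round2 + t_reply2)

    Times in seconds; returns the distance in metres.
    """
    tof = (t_round1 * t_round2 - t_reply1 * t_reply2) / (
        t_round1 + t_reply1 + t_round2 + t_reply2)
    return tof * SPEED_OF_LIGHT
```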
Figure 2. Schematic diagram of multilateration positioning.
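The multilateration of Figure 2 can be sketched as a linear least-squares problem by differencing the range equations against one reference anchor, which eliminates the quadratic term in the unknown position. This is the textbook formulation; the paper instead couples raw UWB ranges tightly into the filter.

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Linear least-squares position fix from UWB anchor ranges.

    anchors: (N, 3) known anchor coordinates, N >= 4.
    ranges: (N,) measured distances to each anchor.
    Subtracting the first anchor's range equation from the others
    leaves the linear system 2(a_i - a_0)^T x = d_0^2 - d_i^2
    + |a_i|^2 - |a_0|^2, solved by least squares.
    """
    a0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```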
Figure 3. Overview of the TC INS/UWB/GNSS-RTK integrated positioning system.
Figure 4. Flowchart of the MMCC-based VBAKF algorithm.
Figure 5. Overview of the UGV equipment and reference trajectory: (a) experimental data collection platform; (b) top view of the reference trajectory.
Figure 6. Positioning error sequences in the ENU directions for various solution strategies in Case 1.
Figure 7. Estimated position trajectories for various solution strategies in Case 1.
Figure 8. CDF curves of horizontal positioning errors for various solution strategies in Case 1.
Figure 9. Distribution of horizontal positioning errors for various solution strategies in Case 1.
Figure 10. Percentage improvement of the proposed MMCC-VBAKF in Case 1.
Figure 11. Positioning error sequences in the ENU directions for various solution strategies in Case 2.
Figure 12. Estimated position trajectories for various solution strategies in Case 2.
Figure 13. CDF curves of horizontal positioning errors for various solution strategies in Case 2.
Figure 14. Distribution of horizontal positioning errors for various solution strategies in Case 2.
Figure 15. Percentage improvement of the proposed MMCC-VBAKF in Case 2.