Search Results (1,050)

Search Parameters:
Keywords = onboard sensors

35 pages, 27811 KiB  
Article
Machine Learning to Retrieve Gap-Free Land Surface Temperature from Infrared Atmospheric Sounding Interferometer Observations
by Fabio Della Rocca, Pamela Pasquariello, Guido Masiello, Carmine Serio and Italia De Feis
Remote Sens. 2025, 17(4), 694; https://doi.org/10.3390/rs17040694 - 18 Feb 2025
Abstract
Retrieving LST from infrared spectral observations is challenging because it needs separation from emissivity in surface radiation emission, which is feasible only when the state of the surface–atmosphere system is known. Thanks to its high spectral resolution, the Infrared Atmospheric Sounding Interferometer (IASI) instrument onboard Metop polar-orbiting satellites is the only sensor that can simultaneously retrieve LST, the emissivity spectrum, and atmospheric composition. Still, it cannot penetrate thick cloud layers, making observations blind to surface emissions under cloudy conditions, with surface and atmospheric parameters being flagged as voids. The present paper aims to discuss a downscaling–fusion methodology to retrieve LST missing values on a spatial field retrieved from spatially scattered IASI observations to yield level 3, regularly gridded data, using as proxy data LST from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) flying on Meteosat Second Generation (MSG) platform, a geostationary instrument, and from the Advanced Very High-Resolution Radiometer (AVHRR) onboard Metop polar-orbiting satellites. We address this problem by using machine learning techniques, i.e., Gradient Boosting, Random Forest, Gaussian Process Regression, Neural Network, and Stacked Regression. We applied the methodology over the Po Valley region, a very heterogeneous area that allows addressing the trained models’ robustness. Overall, the methods significantly enhanced spatial sampling, keeping errors in terms of Root Mean Square Error (RMSE) and bias (Mean Absolute Error, MAE) very low. Although we demonstrate and assess the results primarily using IASI data, the paper is also intended for applications to the IASI follow-on, that is, IASI Next Generation (IASI-NG), and much more to the Infrared Sounder (IRS), which is planned to fly this year, 2025, on the Meteosat Third Generation platform (MTG). Full article
(This article belongs to the Special Issue Remote Sensing in Geomatics (Second Edition))
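The stacked-regression gap filling described in the abstract can be sketched as follows. This is a minimal illustration assuming scikit-learn and a synthetic feature table of proxy predictors (e.g., SEVIRI/AVHRR LST and coordinates); it is not the authors' actual pipeline or feature set.

```python
# Minimal sketch of a stacked-regression gap filler, assuming scikit-learn and a
# feature table of proxy LST predictors; not the authors' actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import RidgeCV

# Hypothetical features: [SEVIRI_LST, AVHRR_LST, lon, lat, day_of_year]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = 0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(scale=0.1, size=500)  # synthetic IASI L2 LST

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),
        ("nn", MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)),
    ],
    final_estimator=RidgeCV(),   # meta-learner combining the base predictions
    cv=5,
)
stack.fit(X, y)                  # train on cloud-free IASI L2 retrievals
lst_l3 = stack.predict(X)        # predict on the regular L3 grid to fill gaps
```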
Figure 1. The red box indicates the Po Valley target region with the CLC 2018 as shapefile.
Figure 2. Flowchart of the proposed framework. (a) Retrieval of LST; (b) Training; (c) L3 LST.
Figure 3. Comparison of L2 IASI observations and the derived prediction mask for August 2022. The left panel shows the spatial distribution of L2 observations across the 9 years of data, while the right panel shows the spatial domain used for prediction.
Figure 4. Comparison of MAE cross validated errors of the tested ML algorithms: Random Forest (blue), Boosting (orange), Neural Network (yellow), Gaussian Process Regression (purple), and Stacked Regression (green). The numbers in the legend represent the average MAE for all methods calculated across all months.
Figure 5. Comparison of RMSE cross validated errors of the tested ML algorithms, with the same colour coding and legend convention as Figure 4.
Figure 6. Example of the L3 LST for the months January–June. The first column represents the IASI LST L2 observations, while the second column shows the LST L3 predicted with Stacked Regression; each row represents a month.
Figure 7. Example of the L3 LST for the months July–December, with the same layout as Figure 6.
Figure 8. Comparison between the predicted LST for August 2022 and the mean of nine years of AVHRR and SEVIRI data for the same month. The top-left panel shows the IASI L2 observations, while the right panel displays the difference maps with SEVIRI (top) and AVHRR (bottom), including also the L2 observations represented by the small black dots. The bottom-left panel presents the KDE plot of these differences, including the mean and standard deviation of the errors.
Figure 9. Comparison between the predicted LST for March 2022 and the nine-year AVHRR and SEVIRI means for the same month, with the same layout as Figure 8.
Figure 10. L2/L3 differences IASI − MODIS for the months January–June. The first column displays the L2/L3 differences using KDE plots with the mean and standard deviation: the red curves display the L2 errors, and the blue curves display the L3 errors. The second column shows the scatterplots between the predicted IASI L3 LST values and the MODIS L3 LST values, the linear fits, and the R² indexes.
Figure 11. L2/L3 differences IASI − MODIS for the months July–December, with the same layout as Figure 10.
Figure 12. Comparison of LSTs from IASI, AVHRR, SEVIRI and MODIS for the months January–June. The first column displays the L2 differences using KDE plots with the mean and standard deviation: the blue curves represent the L2 differences between IASI and AVHRR, the red curves between IASI and SEVIRI, and the yellow curves between IASI and MODIS. The second column shows the same differences using boxplots, with the R² indexes included on the x-axis.
Figure 13. Comparison of LSTs from IASI, AVHRR, SEVIRI and MODIS for the months July–December, with the same layout as Figure 12.
21 pages, 7555 KiB  
Article
Control of Multiple Mobile Robots Based on Data Fusion from Proprioceptive and Actuated Exteroceptive Onboard Sensors
by Arpit Joon, Wojciech Kowalczyk and Przemyslaw Herman
Electronics 2025, 14(4), 776; https://doi.org/10.3390/electronics14040776 - 17 Feb 2025
Abstract
This paper introduces a team of Automated Guided Vehicles (AGVs) equipped with open-source, perception-enhancing rotating devices. Each device has a set of ArUco markers, employed to compute the relative pose of other AGVs. These markers also serve as landmarks, delineating a path for the robots to follow. The authors combined various control methodologies to track the ArUco markers on another rotating device mounted on the AGVs. Behavior trees are implemented to facilitate task-switching or to respond to sudden disturbances, such as environmental obstacles. The Robot Operating System (ROS) is installed on the AGVs to manage high-level controls. The efficacy of the proposed solution is confirmed through a real experiment. This research contributes to the advancement of AGV technology and its potential applications in various fields for example in a warehouse with a restricted and known environment where AGVs can transport goods while avoiding other AGVs in the same environment. Full article
(This article belongs to the Special Issue Recent Advances in Robotics and Automation Systems)
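As a rough illustration of the marker-based relative-pose step the abstract refers to, the sketch below detects ArUco markers and recovers the marker-to-camera pose with OpenCV. The camera intrinsics, marker size, and image file are placeholders, and the OpenCV ArUco API is an assumed implementation choice rather than the paper's code.

```python
# Sketch of ArUco-based relative pose estimation (API names from recent
# opencv-contrib-python); intrinsics, marker size, and image path are placeholders.
import cv2
import numpy as np

K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])  # placeholder intrinsics
dist = np.zeros(5)                      # assume negligible lens distortion
marker_len = 0.05                       # assumed marker side length [m]

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# 3D marker corners in the marker frame (z = 0 plane), ordered as OpenCV returns them
obj_pts = 0.5 * marker_len * np.array(
    [[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]], dtype=np.float32)

frame = cv2.imread("camera_frame.png")  # hypothetical onboard image (placeholder path)
if frame is not None:
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is not None:
        for c in corners:
            ok, rvec, tvec = cv2.solvePnP(obj_pts, c.reshape(4, 2), K, dist)
            if ok:
                R, _ = cv2.Rodrigues(rvec)   # rotation: marker frame -> camera frame
                print("marker position in camera frame [m]:", tvec.ravel())
```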
Figure 1. Robots with rotating platform.
Figure 2. Behavior tree for rotating platform movement.
Figure 3. Exploded view of rotating platform assembly.
Figure 4. Axis representations between the camera and the ArUco marker.
Figure 5. Corner points of ArUco markers.
Figure 6. Top view of the rotating device.
Figure 7. When robots are too close to observe the markers on the cuboidal part.
Figure 8. Side-by-side comparison of (a) ArUco Diamond Marker and (b) environment with landmarks.
Figure 9. Experiment result 1. (a) (x,y) plot of all robots including OptiTrack data; (b) (x,y) plot of robot 1 with static robot 4; (c) (x,y) plot of all robots with activation regions; (d,e) sums of the collision avoidance components of robot 1 on the x- and y-axes; (f–h) global errors p_1x, p_1y, and p_1θ of robot 1 with respect to time; (i,j) linear velocity v_1 and angular velocity w_1 with respect to time; (k,l) sums of the collision avoidance components of robot 2 on the x- and y-axes; (m–o) p_2x, p_2y, and p_2θ of robot 2 with respect to time; (p,q) linear velocity v_2 and angular velocity w_2 with respect to time; (r,s) landmarks and other robots detected by robots 1 and 2.
28 pages, 10511 KiB  
Article
Weather-Adaptive Regenerative Braking Strategy Based on Driving Style Recognition for Intelligent Electric Vehicles
by Marwa Ziadia, Sousso Kelouwani, Ali Amamou and Kodjo Agbossou
Sensors 2025, 25(4), 1175; https://doi.org/10.3390/s25041175 - 14 Feb 2025
Abstract
This paper examines the energy efficiency of smart electric vehicles equipped with regenerative braking systems under challenging weather conditions. While Advanced Driver Assistance Systems (ADAS) are primarily designed to enhance driving safety, they often overlook energy efficiency. This study proposes a Weather-Adaptive Regenerative Braking Strategy (WARBS) system, which leverages onboard sensors and data processing capabilities to enhance the energy efficiency of regenerative braking across diverse weather conditions while minimizing unnecessary alerts. To achieve this, we develop driving style recognition models that integrate road conditions, such as weather and road friction, with different driving styles. Next, we propose an adaptive deceleration plan that aims to maximize the conversion of kinetic energy into electrical energy for the vehicle’s battery under varying weather conditions, considering vehicle dynamics and speed constraints. Given that the potential for energy recovery through regenerative braking is diminished on icy and snowy roads compared to dry ones, our approach introduces a driving context recognition system to facilitate effective speed planning. Both simulation and experimental validation indicate that this approach can significantly enhance overall energy efficiency. Full article
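The driving-style recognition step can be illustrated with a small classification sketch using a decision tree, one of the model families compared in the paper. The feature names and synthetic data below are assumptions for illustration only, not the authors' dataset.

```python
# Illustrative driving-style / road-condition classification with a decision tree;
# the onboard features and labels below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n = 600
X = np.column_stack([
    rng.normal(2.0, 0.8, n),    # braking deceleration [m/s^2]
    rng.normal(0.5, 0.2, n),    # brake-pedal application rate
    rng.normal(1.5, 0.5, n),    # speed variance over a window
    rng.uniform(0.1, 0.9, n),   # estimated tire-road friction coefficient
])
y = (X[:, 0] * X[:, 3] > 1.0).astype(int)   # synthetic label: 0 = calm, 1 = aggressive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X_tr, y_tr)
print(confusion_matrix(y_te, clf.predict(X_te)))   # analogous to the paper's confusion matrices
```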
Figure 1. Reactions to braking distance.
Figure 2. Reactions to power regeneration.
Figure 3. Weather-Adaptive Regeneration Braking Strategy Design.
Figure 4. Instrumented intelligent electric vehicle (Kia Soul 2017).
Figure 5. Road friction [51].
Figure 6. Confusion matrix of ANFIS.
Figure 7. Confusion matrix of Decision Tree.
Figure 8. Confusion matrix of SVM.
Figure 9. Comparing the RMSE of the proposed ANFIS model with that of the LSTM methodology, excluding classification.
Figure 10. Regeneration maximization on icy segments by WARBS.
Figure 11. Regeneration maximization on dry segments by WARBS.
19 pages, 4784 KiB  
Article
Cooperative Formation Control of a Multi-Agent Khepera IV Mobile Robots System Using Deep Reinforcement Learning
by Gonzalo Garcia, Azim Eskandarian, Ernesto Fabregas, Hector Vargas and Gonzalo Farias
Appl. Sci. 2025, 15(4), 1777; https://doi.org/10.3390/app15041777 - 10 Feb 2025
Abstract
The increasing complexity of autonomous vehicles has exposed the limitations of many existing control systems. Reinforcement learning (RL) is emerging as a promising solution to these challenges, enabling agents to learn and enhance their performance through interaction with the environment. Unlike traditional control algorithms, RL facilitates autonomous learning via a recursive process that can be fully simulated, thereby preventing potential damage to the actual robot. This paper presents the design and development of an RL-based algorithm for controlling the collaborative formation of a multi-agent Khepera IV mobile robot system as it navigates toward a target while avoiding obstacles in the environment by using onboard infrared sensors. This study evaluates the proposed RL approach against traditional control laws within a simulated environment using the CoppeliaSim simulator. The results show that the performance of the RL algorithm gives a sharper control law concerning traditional approaches without the requirement to adjust the control parameters manually. Full article
(This article belongs to the Special Issue Deep Reinforcement Learning for Multiagent Systems)
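A minimal sketch of a cooperative formation reward of the kind suggested by the figure captions (orientation error e, distance error d, and the summed followers' distance errors d_s) is shown below; the weights and exact functional form are assumptions, not the paper's reward.

```python
# Toy cooperative reward: penalize heading error, distance to target, and the
# followers' accumulated formation-distance error. Weights are assumed.
import numpy as np

def cooperative_reward(e, d, d_s, w_e=1.0, w_d=1.0, w_s=0.5):
    """Higher reward when the robot points at the target, is close to it,
    and the followers keep their formation distances."""
    return -(w_e * abs(e) + w_d * d + w_s * d_s)

# Example: 0.2 rad heading error, 0.8 m to the target, 0.3 m summed follower error.
print(cooperative_reward(e=0.2, d=0.8, d_s=0.3))
```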
Figure 1. CoppeliaSim simulator environment.
Figure 2. Position control variables for the differential robot.
Figure 3. Block diagram of the position control problem.
Figure 4. Block diagram showing the position control problem with obstacle avoidance.
Figure 5. Position formation control.
Figure 6. Actor and Critic neural networks.
Figure 7. Initial positions: CoppeliaSim environment.
Figure 8. Position history without cooperation. Target: blue cross, leader trajectory: orange, follower 1 trajectory: blue, follower 2 trajectory: green. Numbers indicate elapsed time in seconds and arrows represent the initial orientation of each robot.
Figure 9. Reward values for the non-cooperative approach, with e the orientation error and d the distance error. Leader: top, follower 1: centre, follower 2: bottom.
Figure 10. Leader left and right wheel speeds for the non-cooperative approach.
Figure 11. Position history for the cooperative approach. Target: blue cross, leader trajectory: orange, follower 1 trajectory: blue, follower 2 trajectory: green. Numbers indicate elapsed time in seconds and arrows represent the initial orientation of each robot.
Figure 12. Reward values for the cooperative approach, with e the orientation error, d the distance error, and d_s the added followers' distance errors. Leader: top, follower 1: centre, follower 2: bottom.
Figure 13. Leader left and right wheel speeds for the cooperative approach.
Figure 14. Initial positions for obstacle avoidance.
Figure 15. Position history for the cooperative approach with obstacle avoidance. Target: blue cross, leader and trajectory: orange, follower 1 and trajectory: blue, follower 2 and trajectory: green, obstacles: light brown. Numbers indicate elapsed time in seconds.
Figure 16. Reward values, with e the orientation error, d the distance error, and d_s the added followers' distance errors. Leader: top, follower 1: centre, follower 2: bottom.
Figure 17. Leader left and right wheel speeds.
Figure 18. Follower DRL control surfaces, showing the position history in blue and the starting point in red.
Figure 19. Follower Villela control surfaces, showing the position history in blue and the starting point in red.
Figure 20. Position histories in the horizontal plane moving from right to left; DRL: blue, Villela: red. The black cross represents the target and the black arrow represents the initial orientation of the robot.
Figure 21. Position history. Target: blue cross, leader and trajectory: orange, follower 1 and trajectory: blue, follower 2 and trajectory: green, obstacles: light brown. Segmented lines are the results for DRL. Numbers indicate elapsed time in seconds and the black arrows represent the initial orientation of each robot.
38 pages, 14791 KiB  
Article
Online High-Definition Map Construction for Autonomous Vehicles: A Comprehensive Survey
by Hongyu Lyu, Julie Stephany Berrio Perez, Yaoqi Huang, Kunming Li, Mao Shan and Stewart Worrall
J. Sens. Actuator Netw. 2025, 14(1), 15; https://doi.org/10.3390/jsan14010015 - 2 Feb 2025
Abstract
High-definition (HD) maps aim to provide detailed road information with centimeter-level accuracy, essential for enabling precise navigation and safe operation of autonomous vehicles (AVs). Traditional offline construction methods involve several complex steps, such as data collection, point cloud generation, and feature extraction, but these methods are resource-intensive and struggle to keep pace with the rapidly changing road environments. In contrast, online HD map construction leverages onboard sensor data to dynamically generate local HD maps, offering a bird’s-eye view (BEV) representation of the surrounding road environment. This approach has the potential to improve adaptability to spatial and temporal changes in road conditions while enhancing cost-efficiency by reducing the dependency on frequent map updates and expensive survey fleets. This survey provides a comprehensive analysis of online HD map construction, including the task background, high-level motivations, research methodology, key advancements, existing challenges, and future trends. We systematically review the latest advancements in three key sub-tasks: map segmentation, map element detection, and lane graph construction, aiming to bridge gaps in the current literature. We also discuss existing challenges and future trends, covering standardized map representation design, multitask learning, and multi-modality fusion, while offering suggestions for potential improvements. Full article
(This article belongs to the Special Issue Advances in Intelligent Transportation Systems (ITS))
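To make the view-transformation idea behind BEV map construction concrete, the toy sketch below projects a BEV grid into a camera image and bilinearly samples image features, in the spirit of projection-based methods such as Simple-BEV. All shapes, intrinsics, and the single-camera setup are illustrative assumptions.

```python
# Toy projection-based view transformation: project BEV grid points into the image
# with the camera matrix and bilinearly sample image features. All values are illustrative.
import numpy as np

H, W, C = 48, 64, 8
feat = np.random.rand(H, W, C).astype(np.float32)              # per-camera CNN feature map
K = np.array([[60.0, 0, W / 2], [0, 60.0, H / 2], [0, 0, 1]])  # toy intrinsics

# BEV grid in the ego frame (x forward, y left), at a fixed height z = 0
xs, ys = np.meshgrid(np.linspace(1, 30, 40), np.linspace(-15, 15, 40))
pts = np.stack([xs, ys, np.zeros_like(xs)], axis=-1).reshape(-1, 3)

# Camera frame: x right, y down, z forward (camera assumed at the ego origin)
cam = np.stack([-pts[:, 1], -pts[:, 2], pts[:, 0]], axis=-1)
uvw = cam @ K.T
u, v = uvw[:, 0] / uvw[:, 2], uvw[:, 1] / uvw[:, 2]

valid = (u >= 0) & (u < W - 1) & (v >= 0) & (v < H - 1) & (uvw[:, 2] > 0)
bev = np.zeros((pts.shape[0], C), dtype=np.float32)
u0, v0 = np.floor(u[valid]).astype(int), np.floor(v[valid]).astype(int)
du, dv = (u[valid] - u0)[:, None], (v[valid] - v0)[:, None]
bev[valid] = ((1 - du) * (1 - dv) * feat[v0, u0]
              + du * (1 - dv) * feat[v0, u0 + 1]
              + (1 - du) * dv * feat[v0 + 1, u0]
              + du * dv * feat[v0 + 1, u0 + 1])
bev_grid = bev.reshape(40, 40, C)    # BEV feature map fed to a map-decoding head
```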
Figure 1. Structure of this survey.
Figure 2. Pipeline of research methodology.
Figure 3. Comparison of the VT module in two projection-based map segmentation methods. (a) Simple-BEV [53] projects voxel grid points onto feature maps and uses bilinear sampling to extract features for constructing 3D voxel features. (b) Ego3RT [46] projects polarized grid queries onto feature maps and uses attention to extract features for constructing 3D voxel features.
Figure 4. Comparison of the VT module in two lift-based map segmentation methods. (a) PON [38] uses MLP to expand bottleneck features along the depth axis. (b) LSS [39] uses CNN to predict pixel-wise depth probability distributions.
Figure 5. Comparison of the VT module in two network-based map segmentation methods. (a) PYVA [42] uses two MLPs to enable bidirectional projection of feature maps between pixel space and BEV space. (b) BEVSegFormer [51] uses deformable cross-attention [102] to predict 2D reference points for sampling feature maps to refine BEV queries.
Figure 6. Comparison of the MD module in two CNN-based map element detection methods. (a) HDMapNet [21] uses an FCN [109] to decode semantic, instance, and direction masks, which are then post-processed into vectorized representations. (b) InstaGraM [59] uses two CNNs to detect vertices and edges, then employs an attentional GNN to associate the vertices, generating vectorized representations in an end-to-end manner.
Figure 7. Comparison of the pipelines of two Transformer-based map element detection methods. (a) MapTR [56] uses a single-stage DETR-like Transformer [110] for parallel decoding of ordered point sequences for map elements. (b) MGMap [67] uses instance masks to enhance element queries for precise localization and uses mask patches to refine point position predictions.
Figure 8. Comparison of temporal fusion (short-term and long-term) in two Transformer-based map element detection methods. (a) StreamMapNet [63] aligns and fuses BEV features from consecutive frames and propagates high-confidence element queries to the next frame. (b) HRMapNet [70] fuses BEV features with rasterized map features to enrich information and rasterizes vectorized map predictions to maintain a global historical map.
Figure 9. Comparison of the pipelines for two single-step-based lane graph construction methods. (a) TopoMLP [87] uses two Transformers for lane and traffic element queries, followed by MLPs to predict the topological relationships between paired queries. (b) TPLR [20] uses a Transformer to process lane and minimal cycle queries simultaneously, followed by joint decoding of the lane graph and the cover of minimal cycles.
Figure 10. Comparison of the TR module in two iteration-based lane graph construction methods. (a) TopoNet [85] uses two Transformers for lane and traffic element queries, followed by a GCN for iterative message passing and feature updating. (b) RoadNetTransformer [84] (semi-autoregressive) first predicts lane key points in parallel and then autoregressively generates local sequences for lane graphs.
Figure 11. Comparison of lane segment representation [86] with two alternative map representations.
Figure 12. Comparison of uncertainty-based map representations [116] integrated into various online HD map construction methods. (a) Ground truth. (b) MapTR [56]. (c) MapTRv2 [76]. (d) MapTRv2-CL [76]. (e) StreamMapNet [63].
Figure 13. Comparison of the MTL pipeline in two online HD map construction methods. (a) BEVerse [50] presents a unified framework for map segmentation, 3D object detection, and motion prediction. (b) BeMapNet [57] presents a unified framework for map segmentation, map element detection, and instance segmentation.
Figure 14. Comparison of the MMF pipeline in two online HD map construction methods. (a) BEVFusion [52] fuses camera and LiDAR features in the unified BEV space. (b) NMP [58] fuses BEV features with neural map priors from previous traversals.
26 pages, 6434 KiB  
Article
Motion and Inertia Estimation for Non-Cooperative Space Objects During Long-Term Occlusion Based on UKF-GP
by Rabiul Hasan Kabir and Xiaoli Bai
Sensors 2025, 25(3), 647; https://doi.org/10.3390/s25030647 - 22 Jan 2025
Viewed by 316
Abstract
This study addresses the motion and inertia parameter estimation problem of a torque-free, tumbling, non-cooperative space object (target) under long-term occlusions. To solve this problem, we employ a data-driven Gaussian process (GP) to simulate sensor measurements. In particular, we implement the multi-output GP to predict the projection measurements of a stereo-camera system onboard a chaser spacecraft. A product kernel, consisting of two periodic kernels, is used in the GP models to capture the periodic trends from non-periodic projection data. The initial guesses for the periodicity hyper-parameters of the GP models are intelligently derived from fast Fourier transform (FFT) analysis of the projection data. Additionally, we propose an unscented Kalman filter–Gaussian process (UKF-GP) fusion algorithm for target motion and inertia parameter estimation. The predicted projections from the GP models and their derivatives are used as the pseudo-measurements for UKF-GP during long-term occlusion. Results from Monte Carlo (MC) simulations demonstrate that, for varying tumbling frequencies, the UKF-GP can accurately estimate the target’s motion variables over hundreds of seconds, a capability the conventional UKF algorithm lacks. Full article
(This article belongs to the Section Physical Sensors)
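The GP part of the approach can be sketched as follows: a product of periodic kernels is fitted to a projection measurement history, with the periodicity hyper-parameter initialized from an FFT peak, and the fitted model then supplies pseudo-measurements during occlusion. The library choice (scikit-learn), the half-period second kernel, and the synthetic signal are assumptions, not the authors' implementation.

```python
# Illustrative GP fit with a product of periodic kernels and an FFT-based
# periodicity initial guess; the signal and kernel details are assumed.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

dt, f_true = 0.5, 0.075                      # sample time [s], tumbling frequency [Hz]
t = np.arange(0, 200, dt)
proj = (np.sin(2 * np.pi * f_true * t) + 0.3 * np.sin(4 * np.pi * f_true * t)
        + 0.01 * np.random.randn(t.size))    # synthetic stereo-projection coordinate

# FFT-based initial guess for the dominant period
spec = np.abs(np.fft.rfft(proj - proj.mean()))
freqs = np.fft.rfftfreq(t.size, d=dt)
p0 = 1.0 / freqs[np.argmax(spec[1:]) + 1]

kernel = (ExpSineSquared(periodicity=p0) * ExpSineSquared(periodicity=p0 / 2)
          + WhiteKernel(noise_level=1e-4))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t[:, None], proj)

t_occluded = np.arange(200, 400, dt)          # long-term occlusion window
pseudo_meas, sigma = gp.predict(t_occluded[:, None], return_std=True)
```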
Figure 1. Schematic of the simulation scenario. The left spacecraft (chaser) is a cooperative spacecraft carrying a stereo-camera system, and the right one (target) is a torque-free, tumbling, non-cooperative space object. I is the inertial frame, and B is the body frame of the target that is parallel to the principal axes of the target. The target's point features are indicated by the asterisk symbols.
Figure 2. Schematic of the stereo-vision camera system.
Figure 3. Flowchart of the UKF-GP algorithm.
Figure 4. Results from the MC simulations for the GP models. (a) RMSE box plots of the predicted projections for 2000 s, and (b) the training time of the GP model.
Figure 5. State variable estimation errors of UKF, Benchmark, and UKF-GP for f_T = 0.025 Hz.
Figure 6. State variable estimation errors of UKF, Benchmark, and UKF-GP for f_T = 0.075 Hz.
Figure 7. State variable estimation errors of UKF, Benchmark, and UKF-GP for f_T = 0.125 Hz.
Figure 8. State variable estimation errors of UKF, Benchmark, and UKF-GP for f_T = 0.175 Hz.
Figure 9. RMSEs of the state estimation errors of UKF, Benchmark, and UKF-GP for 1500 s of occlusion from MC simulations. All figures have the same legend; therefore, the legend is only provided in (a).
19 pages, 3639 KiB  
Article
Transfer Learning with Deep Neural Network Toward the Prediction of the Mass of the Charge in Underwater Explosion Events
by Jacopo Bardiani, Claudio Sbarufatti and Andrea Manes
J. Mar. Sci. Eng. 2025, 13(2), 190; https://doi.org/10.3390/jmse13020190 - 21 Jan 2025
Viewed by 494
Abstract
In practical applications, the prediction of the explosive mass of an underwater explosion represents a crucial aspect for defining extreme scenarios and for assessing damage, implementing defensive and security strategies, and ensuring the structural integrity of marine structures. In this study, a deep neural network (DNN) was developed to predict the mass of an underwater explosive charge, by means of the transfer learning technique (TL). Both DNN and TL methods utilized data collected through coupled Eulerian–Lagrangian numerical simulations performed through the suite MSC Dytran. Different positions and masses of the charge, seabed typology, and distance between the structure and seabed have been considered within the dataset. All the features considered as input for the machine learning model are information that the crew is aware of through onboard sensors and instrumentations, making the framework extremely useful in real-world scenarios. TL involves reconfiguring and retraining a new DNN model, starting from a pre-trained network model developed in a past study by the authors, which predicted the spatial position of the explosive. This study serves as a proof of concept that using transfer learning to create a DNN model from a pre-trained network requires less computational effort compared to building and training a model from scratch, especially considering the vast amount of data typically present in real-world scenarios. Full article
(This article belongs to the Special Issue Data-Driven Methods for Marine Structures)
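The transfer-learning step can be illustrated with the Keras sketch below: the hidden layers of a pre-trained position-prediction DNN are frozen and a new head regresses the charge mass. Layer sizes, data, and the framework choice are placeholders rather than the authors' architecture.

```python
# Conceptual transfer-learning sketch with Keras; layer sizes and data are placeholders.
import numpy as np
import tensorflow as tf

# Stand-in for the pre-trained position-prediction network (in practice loaded from disk).
inp = tf.keras.Input(shape=(20,))                        # onboard sensor features
h = tf.keras.layers.Dense(64, activation="relu", name="feat1")(inp)
h = tf.keras.layers.Dense(64, activation="relu", name="feat2")(h)
out = tf.keras.layers.Dense(3, name="position")(h)       # original task: charge position (x, y, z)
pretrained = tf.keras.Model(inp, out)

# Reuse and freeze the feature-extraction layers, then attach a new mass-regression head.
backbone = tf.keras.Model(pretrained.input, pretrained.get_layer("feat2").output)
backbone.trainable = False
mass_out = tf.keras.layers.Dense(1, name="mass")(backbone.output)
tl_model = tf.keras.Model(backbone.input, mass_out)
tl_model.compile(optimizer="adam", loss="mse")

# Fine-tune on the (smaller) mass-labelled dataset (synthetic placeholder here).
X = np.random.rand(256, 20).astype("float32")
y = np.random.rand(256, 1).astype("float32")
tl_model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```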
Figure 1. Scenario considered: (a) relative position between structure and charge and (b) 3-D view of the numerical model. Adapted from ref. [25].
Figure 2. Indication of the extraction process for a generic case inside the dataset: pressure and vertical displacement pattern for the central mesh element of the plate. Adapted from ref. [25].
Figure 3. Pressure and density field at different instants for the 3-D model: (a) density (kg/m³) at t = 10^−8 s, (b) pressure (Pa) at t = 0.00010 s, (c) pressure (Pa) at t = 0.00025 s, (d) pressure (Pa) at t = 0.00035 s, (e) pressure (Pa) at t = 0.00035 s, (f) pressure (Pa) at t = 0.00045 s, (g) pressure (MPa) at t = 0.00070 s and (h) pressure (MPa) at t = 0.00110 s (pictures taken from ParaView, rescaling of the legend limits to have a better view of the shocks).
Figure 4. Explanation of the transfer learning approach for this study: (a) pre-trained DNN for position prediction (developed in [25]) and (b) new DNN for mass prediction through transfer learning. Adapted from [41].
Figure 5. Diagram for the explanation of nested cross-validation method.
Figure 6. Workflow of the transfer learning strategy applied within this study.
Figure 7. Loss function history for the new single-task DNN: (a) with TL, (b) without TL (training from scratch), and (c) comparison of the two cases for the validation set.
18 pages, 4649 KiB  
Article
Development of an Aerial Manipulation System Using Onboard Cameras and a Multi-Fingered Robotic Hand with Proximity Sensors
by Ryuki Sato, Etienne Marco Badard, Chaves Silva Romulo, Tadashi Wada and Aiguo Ming
Sensors 2025, 25(2), 470; https://doi.org/10.3390/s25020470 - 15 Jan 2025
Viewed by 428
Abstract
Recently, aerial manipulations are becoming more and more important for the practical applications of unmanned aerial vehicles (UAV) to choose, transport, and place objects in global space. In this paper, an aerial manipulation system consisting of a UAV, two onboard cameras, and a multi-fingered robotic hand with proximity sensors is developed. To achieve self-contained autonomous navigation to a targeted object, onboard tracking and depth cameras are used to detect the targeted object and to control the UAV to reach the target object, even in a Global Positioning System-denied environment. The robotic hand can perform proximity sensor-based grasping stably for an object that is within a position error tolerance (a circle with a radius of 50 mm) from the center of the hand. Therefore, to successfully grasp the object, a requirement for the position error of the hand (=UAV) during hovering after reaching the targeted object should be less than the tolerance. To meet this requirement, an object detection algorithm to support accurate target localization by combining information from both cameras was developed. In addition, camera mount orientation and UAV attitude sampling rate were determined by experiments, and it is confirmed that these implementations improved the UAV position error to within the grasping tolerance of the robot hand. Finally, the experiments on aerial manipulations using the developed system demonstrated the successful grasping of the targeted object. Full article
(This article belongs to the Section Sensing and Imaging)
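A simplified sketch of the color pre-processing and detection idea is given below: a color mask isolates the target, the largest blob gives a bounding box, and the depth at its centre localizes the object in the camera frame. Thresholds, synthetic images, and intrinsics are placeholders, not values from the paper.

```python
# Simplified color-mask detection plus depth-based localization; inputs are synthetic.
import cv2
import numpy as np

# Synthetic stand-ins for an onboard RGB frame and an aligned depth map [m].
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.circle(rgb, (400, 300), 40, (0, 0, 255), -1)        # a red target object
depth = np.full((480, 640), 1.2, dtype=np.float32)
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0             # placeholder intrinsics

hsv = cv2.cvtColor(rgb, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))   # assumed HSV range for the target colour
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    u, v = x + w // 2, y + h // 2                       # bounding-box centre in pixels
    z = float(depth[v, u])                              # range from the depth camera
    target_cam = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
    print("object position in camera frame [m]:", target_cam)
```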
Figure 1. Aerial manipulation scheme adopted for the developed UAV system equipped with a multi-fingered robotic hand with proximity sensors.
Figure 2. Overview of the developed UAV and multi-fingered robotic hand.
Figure 3. System architecture diagram of the aerial manipulation system.
Figure 4. Aerial manipulation procedure.
Figure 5. Schematic diagram of the object detection and localization algorithm.
Figure 6. Object detection output including the color pre-processing stage: (a) the RGB output, (b) the RGB output after applying a mask without the pre-processing step, (c) the RGB output after applying a mask with the pre-processing step. The green boxes in (b,c) represent the bounding box and indicate objects detected by the algorithm.
Figure 7. Compensation for object localization based on UAV position and orientation (represented on a 2D plane for clarity).
Figure 8. Targeted object position estimation before and after compensation.
Figure 9. Camera configurations: (a) down-facing configuration, (b) front-facing configuration.
Figure 10. Flight experiment result using (a) the down-facing configuration and (b) the front-facing configuration. The grey cross markers indicate the object's position and the green dots represent the mean position of the UAV's flight path.
Figure 11. UAV attitudes during aerial manipulation Experiment 1: object at P_O1 = (x_O1, y_O1) = (0.250, 0.400) [m].
Figure 12. UAV attitudes during aerial manipulation Experiment 2: object at P_O2 = (x_O2, y_O2) = (−0.200, −0.100) [m].
Figure 13. UAV attitudes during aerial manipulation Experiment 3: object at P_O3 = (x_O3, y_O3) = (0.350, 0.150) [m].
27 pages, 2171 KiB  
Article
Robust Onboard Orbit Determination Through Error Kalman Filtering
by Michele Ceresoli, Andrea Colagrossi, Stefano Silvestrini and Michèle Lavagna
Aerospace 2025, 12(1), 45; https://doi.org/10.3390/aerospace12010045 - 12 Jan 2025
Viewed by 445
Abstract
Accurate and robust on-board orbit determination is essential for enabling autonomous spacecraft operations, particularly in scenarios where ground control is limited or unavailable. This paper presents a novel method for achieving robust on-board orbit determination by integrating a loosely coupled GNSS/INS architecture with an on-board orbit propagator through error Kalman filtering. This method is designed to continuously estimate and propagate a spacecraft’s orbital state, leveraging real-time sensor measurements from a global navigation satellite system (GNSS) receiver and an inertial navigation system (INS). The key advantage of the proposed approach lies in its ability to maintain orbit determination integrity even during GNSS signal outages or sensor failures. During such events, the on-board orbit propagator seamlessly continues to predict the spacecraft’s trajectory using the last known state information and the error estimates from the Kalman filter, which were adapted here to handle synthetic propagated measurements. The effectiveness and robustness of the method are demonstrated through comprehensive simulation studies under various operational scenarios, including simulated GNSS signal interruptions and sensor anomalies. Full article
(This article belongs to the Special Issue New Concepts in Spacecraft Guidance Navigation and Control)
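The loosely coupled error-state filtering concept can be sketched in a few lines: an onboard propagator carries the nominal state, while a small Kalman filter estimates the error from GNSS fixes and feeds it back in closed loop; during outages the propagator alone carries the solution. The 1-D dynamics, noise levels, and outage pattern below are illustrative assumptions only.

```python
# Highly simplified loosely coupled error-state Kalman filter (1-D position/velocity);
# dynamics, noise levels, and the outage pattern are illustrative only.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])       # error-state transition (pos, vel)
H = np.array([[1.0, 0.0]])                  # GNSS measures position only
Q = np.diag([1e-4, 1e-5])                   # propagation (process) noise
R = np.array([[4.0]])                       # GNSS position noise [m^2]

x_nom = np.array([7000e3, 7.5e3])           # nominal state from the onboard propagator
dx = np.zeros(2)                            # error state
P = np.eye(2)

def propagate_nominal(x):
    """Stand-in for the onboard orbit/INS propagator."""
    return F @ x

for k in range(60):
    x_nom = propagate_nominal(x_nom)
    dx = F @ dx
    P = F @ P @ F.T + Q

    gnss_pos = x_nom[0] + np.random.normal(0.0, 2.0)    # synthetic GNSS fix
    gnss_available = (k % 10 != 9)                       # simulate occasional outages
    if gnss_available:
        y = gnss_pos - H @ (x_nom + dx)                  # innovation on the error state
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        dx = dx + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        x_nom, dx = x_nom + dx, np.zeros(2)              # closed-loop feedback correction
    # during outages the propagator alone carries the solution forward
```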
Figure 1. Error calibration methods: open loop (feedforward) or closed loop (feedback).
Figure 2. Loosely (left) and tightly (right) coupled systems.
Figure 3. GCRF-to-ITRF position rotation errors in LEO and GEO.
Figure 4. Position (left) and velocity (right) errors as functions of the delay time for an LEO satellite in a circular 550 km orbit.
Figure 5. GNSS receiver model.
Figure 6. Inertial navigation system model.
Figure 7. Sensor modelling scheme.
Figure 8. Orbit determination with onboard orbit propagation.
Figure 9. Operational timeline of a traditional predict-update Kalman filter. At each step, the filter uses the solution at x_(k−1) to estimate x_k, provides the solution at t_s and holds it until t_(k+1). In practice, the entire estimation process is affected by several delays due to the processing time requested by the algorithms and the latency over the spacecraft communication buses ΔC.
Figure 10. Position estimation error in a nominal LEO scenario.
Figure 11. Velocity estimation error in a nominal LEO scenario.
Figure 12. Comparison between the position error for the solution with the GNSS post-processing propagation step (orange line) and without the post-processing (blue line).
Figure 13. Position estimation error in a nominal GEO scenario over 12 h.
Figure 14. Position estimation error in case of GNSS outages in LEO. A degradation of the position accuracy is visible as the GNSS signal becomes unavailable.
Figure 15. Position estimation error in case of GNSS outages in GEO. A degradation of the position accuracy is visible as the GNSS signal becomes unavailable.
Figure 16. Position estimation error when orbital maneuvers were performed during periods of GNSS outages in LEO. The top image displays the results obtained with the proposed implementation, which accounted for the control acceleration measured by the IMU. In the bottom image, the solution during the GNSS outage was obtained by solely propagating the latest available GNSS state.
20 pages, 18304 KiB  
Article
Assessment of Radiometric Calibration Consistency of Thermal Emissive Bands Between Terra and Aqua Moderate-Resolution Imaging Spectroradiometers
by Tiejun Chang, Xiaoxiong Xiong, Carlos Perez Diaz, Aisheng Wu and Hanzhi Lin
Remote Sens. 2025, 17(2), 182; https://doi.org/10.3390/rs17020182 - 7 Jan 2025
Viewed by 378
Abstract
Moderate-Resolution Imaging Spectroradiometer (MODIS) sensors onboard the Terra and Aqua spacecraft have been in orbit for over 24 and 22 years, respectively, providing continuous observations of the Earth’s surface. Among the instrument’s 36 bands, 16 of them are thermal emissive bands (TEBs) with wavelengths that range from 3.75 to 14.24 μm. Routine post-launch calibrations are performed using the sensor’s onboard blackbody and space view port, the moon, and vicarious targets that include the ocean, Dome Concordia (Dome C) in Antarctica, and quasi-deep convective clouds (DCC). The calibration consistency between the satellite measurements from the two instruments is essential in generating a multi-year data record for the long-term monitoring of the Earth’s Level 1B (L1B) data. This paper presents the Terra and Aqua MODIS TEB comparison for the upcoming Collection 7 (C7) L1B products using measurements over Dome C and the ocean, as well as the double difference via simultaneous nadir overpasses with the Infrared Atmospheric Sounding Interferometer (IASI) sensor. The mission-long trending of the Terra and Aqua MODIS TEB is presented, and their cross-comparison is also presented and discussed. Results show that the calibration of the two MODIS sensors and their respective Earth measurements are generally consistent and within their design specifications. Due to the electronic crosstalk contamination, the PV LWIR bands show slightly larger drifts for both MODIS instruments across different Earth measurements. These drifts also have an impact on the Terra-to-Aqua calibration consistency. This thorough assessment serves as a robust record containing a summary of the MODIS calibration performance and the consistency between the two MODIS sensors over Earth view retrievals. Full article
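The double-difference comparison via simultaneous nadir overpasses can be illustrated as follows: each MODIS sensor is differenced against collocated IASI brightness temperatures, and the two single differences are then differenced to assess Terra-to-Aqua consistency. The arrays are synthetic placeholders, not mission data.

```python
# Toy double-difference computation at simultaneous nadir overpasses (SNOs);
# the collocated samples below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
bt_iasi  = rng.normal(270.0, 10.0, 500)                     # IASI BT for one MODIS band [K]
bt_terra = bt_iasi + 0.15 + rng.normal(0, 0.1, 500)         # Terra MODIS BT at the same SNOs
bt_aqua  = bt_iasi + 0.05 + rng.normal(0, 0.1, 500)         # Aqua MODIS BT at the same SNOs

terra_minus_iasi = np.mean(bt_terra - bt_iasi)
aqua_minus_iasi  = np.mean(bt_aqua - bt_iasi)
double_difference = aqua_minus_iasi - terra_minus_iasi      # Aqua-to-Terra consistency
print(f"Aqua-IASI: {aqua_minus_iasi:+.3f} K, Terra-IASI: {terra_minus_iasi:+.3f} K, "
      f"double difference: {double_difference:+.3f} K")
```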
Figure 1. MODTRAN profile over an ocean scene simulated using MODIS Atmospheric Profile product as input. MODIS RSR are superimposed over the MODTRAN simulation.
Figure 2. Aqua (left) and Terra (right) brightness temperature series over Dome C for MODIS C7 bands 20, 25, 29, 30, 31, and 33. All bands are referenced to AWS. Results are monthly averaged.
Figure 3. Aqua minus Terra brightness temperature series over Dome C for MODIS C7 bands 20, 25, 29, 30, 31, and 33. All bands are referenced to AWS. Red dashed horizontal line defines the average Aqua minus Terra BT differences. Results are monthly averaged.
Figure 4. Aqua (left) and Terra (right) brightness temperature series over ocean for MODIS C7 bands 20, 25, 29, 30, 31, and 33. All bands except band 31 are normalized to band 31 (BT(band) = BT(band) − BT(band 31) + avg BT(band 31)). Results are monthly averaged.
Figure 5. Aqua minus Terra brightness temperature series over ocean for MODIS C7 bands 20, 25, 29, 30, 31, and 33. All bands except band 31 are normalized to band 31 (BT(band) = BT(band) − BT(band 31) + avg BT(band 31)). Red dashed horizontal line defines the average Aqua minus Terra BT difference. Results are monthly averaged.
Figure 6. Aqua-IASI and Terra-IASI BT difference time series for MODIS sample bands 20, 25, 29, 30, 31, and 33. The average value for every SNO crossover between MODIS and IASI is shown. Empty markers represent differences with IASI-A, while filled markers denote IASI-C.
Figure 7. Aqua-IASI minus Terra-IASI BT difference for MODIS sample bands 20, 25, 29, 30, 31, and 33. Red dashed horizontal line defines the average Aqua-IASI minus Terra-IASI BT difference. Results are monthly averaged.
Figure 8. Terra-IASI BT difference as a function of Terra MODIS BT for MODIS sample bands 20, 25, 29, 30, 31, and 33.
Figure 9. Aqua-IASI BT difference as a function of Aqua MODIS BT for MODIS sample bands 20, 25, 29, 30, 31, and 33.
30 pages, 60239 KiB  
Article
Retrieval and Evaluation of Global Surface Albedo Based on AVHRR GAC Data of the Last 40 Years
by Shaopeng Li, Xiongxin Xiao, Christoph Neuhaus and Stefan Wunderle
Remote Sens. 2025, 17(1), 117; https://doi.org/10.3390/rs17010117 - 1 Jan 2025
Viewed by 669
Abstract
In this study, the global land surface albedo product, referred to as GAC43, was retrieved for the years 1979 to 2020 using Advanced Very High Resolution Radiometer (AVHRR) global area coverage (GAC) data from instruments onboard National Oceanic and Atmospheric Administration (NOAA) and Meteorological Operational (MetOp) satellites. We describe the GAC43 retrieval process in detail, followed by a comprehensive assessment against in situ measurements and three widely used satellite-based albedo products: the third edition of the CM SAF cLoud, Albedo and surface RAdiation (CLARA-A3) record, the Copernicus Climate Change Service (C3S) albedo product, and the MODIS BRDF/albedo product (MCD43). Our quantitative evaluations indicate that GAC43 demonstrates the best stability, with a linear trend of ±0.002 per decade at nearly all pseudo invariant calibration sites (PICS) from 1982 to 2020. In contrast, CLARA-A3 exhibits significant noise before the 2000s due to the limited availability of observations, while C3S shows substantial biases during the same period due to imperfect sensor intercalibration. Extensive validation at globally distributed homogeneous sites shows that GAC43 has comparable accuracy to C3S, with an overall RMSE of approximately 0.03, but a smaller positive bias of 0.012. Comparatively, MCD43C3 shows the lowest RMSE (~0.023) and minimal bias, while CLARA-A3 displays the highest RMSE (~0.042) and bias (0.02). Furthermore, GAC43, CLARA-A3, and C3S exhibit overestimation in forests, with positive biases exceeding 0.023 and RMSEs of at least 0.028. In contrast, MCD43C3 shows negligible bias and a smaller RMSE of 0.015. For grasslands and shrublands, GAC43 and MCD43C3 demonstrate comparable estimation uncertainties of approximately 0.023, with similar positive biases near 0.09, whereas C3S and CLARA-A3 exhibit higher RMSEs and biases exceeding 0.032 and 0.022, respectively. All four albedo products show significant RMSEs around 0.035 over croplands but achieve their highest estimation accuracy (better than 0.020) over deserts. It is worth noting that significant biases are typically attributed to insufficient spatial representativeness of the measurement sites. Globally, GAC43 and C3S exhibit similar spatial distribution patterns across most land surface conditions, including an overestimation compared to MCD43C3 and an underestimation compared to CLARA-A3 in forested areas. In addition, GAC43, C3S, and CLARA-A3 estimate higher albedo values than MCD43C3 in low-vegetation regions, such as croplands, grasslands, savannas, and woody savannas. Although the new GAC43 product shows the best stability over the last 40 years, the higher proportion of backup inversions before 2000 has to be taken into account. Overall, GAC43 offers a promising, consistent long-term albedo record with good accuracy for future studies of global climate change, energy balance, and land management policy. Full article
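As a minimal sketch of the evaluation quantities used above (RMSE and bias against matched in situ samples, and the per-decade linear trend used for the stability check at PICS), the following Python snippet shows the arithmetic only; the function names and inputs are assumptions for illustration, not code from the paper.

    import numpy as np

    def validation_metrics(retrieved, in_situ):
        """RMSE and bias (mean of retrieved minus measured) over matched albedo samples."""
        diff = np.asarray(retrieved, dtype=float) - np.asarray(in_situ, dtype=float)
        return {"rmse": float(np.sqrt(np.mean(diff ** 2))), "bias": float(np.mean(diff))}

    def trend_per_decade(years, albedo):
        """Least-squares linear trend of an albedo time series, expressed per decade,
        as used for stability checks at pseudo invariant calibration sites."""
        slope_per_year = np.polyfit(np.asarray(years, dtype=float),
                                    np.asarray(albedo, dtype=float), 1)[0]
        return 10.0 * slope_per_year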
Show Figures

Figure 1: Local solar times and solar zenith angles of equator observations for all AVHRR-carrying NOAA and MetOp satellites used to generate GAC43 albedo products, shown in (a) and (b), respectively. SZA > 90° indicates night conditions.
Figure 2: Globally distributed sites with homogeneous characteristics and corresponding land cover types defined by the IGBP from the MCD12C1 product. Purple squares located in the desert are used to evaluate temporal stability, while the other sites are utilized for direct validation.
Figure 3: Flowchart for this study.
Figure 4: The performance of full inversions and of full and backup inversions at various IGBP land cover types.
Figure 5: The performance of the GAC43 albedo with full inversions at various land cover types, where panels (a–h) represent the land cover types BSV, CRO, DBF, EBF, ENF, GRA, OSH and WSA, respectively. In the plots, the red solid line represents the 1:1 line, and the green dotted and purple solid lines represent the deviation limits of ±0.02 and ±0.04, respectively.
Figure 6: Google Earth™ images used to visually illustrate the heterogeneity surrounding selected homogeneous sites representing various land cover types: (a) EBF, (b) BSV, (c) CRO and (d) GRA, as defined by the MCD12C1 IGBP classification. The red circle in each image denotes a radius of 2.5 km.
Figure 7: Inter-comparison performance among the four satellite-based albedo products. The top four subfigures (a–d) show the accuracy of all available matching samples between in situ measurements and estimated albedo values derived from the satellite products, while the bottom four subfigures (e–h) give the performance using the same samples.
Figure 8: The performance of the four satellite-based albedo products using the same samples across various land surface types, evaluated in terms of (a) RMSE and (b) bias. The x-axis represents the land cover types, classified as forest, grassland or shrublands, cropland, and desert, and the corresponding available samples.
Figure 9: The temporal performance of the four satellite-based albedo products relative to in situ measurements; each subplot represents one case of a different land cover surface: (a) EBF, (b) ENF, (c) DBF, (d) GRA, and (e) CRO. The grey shaded areas depict situations with snow cover.
Figure 10: Spatial distributions of GAC43 BSA in July 2013 are shown in subgraph (a), with corresponding differences from (b) CLARA-A3, (c) C3S, and (d) MCD43C3 in the same month.
Figure 11: Percentage difference in BSA values between (a) GAC43 and CLARA-A3, (b) GAC43 and C3S, and (c) GAC43 and MCD43C3 in July 2013.
Figure 12: Scatter plots between GAC43 BSA and (a) CLARA-A3 BSA, (b) C3S BSA, and (c) MCD43C3 BSA using all snow-free monthly pixels in July 2013, where the red lines indicate the 1:1 line.
Figure 13: The monthly BSA for the four satellite-based products across various land cover types in July 2013, where panels (a–i) represent the land cover types CRO, DBF, DNF, EBF, ENF, GRA, MF, SAV and WSA, respectively. In the plots, the bottom values for each albedo product are the median of all corresponding land cover estimates; the top values give the available samples.
Figure 14: Monthly BSA from GAC43, MCD43C3, C3S, and CLARA-A3 at three randomly selected PICS sites: (a) Arabia 2, 20.19°N, 51.63°E; (b) Libya 3, 23.22°N, 23.23°E; and (c) Sudan 1, 22.11°N, 28.11°E, all characterized by BSV land surfaces as defined by the IGBP.
Figure 15: Box plots of the slope per decade for GAC43, CLARA-A3, C3S, and MCD43C3 at all PICS sites, where (a–d) represent the corresponding statistics during 1982–1990, 1991–2000, 2001–2010 and 2011–2020, respectively, and the three dashed grey lines represent the 75%, 50%, and 25% quantiles. Red dotted lines indicate the horizontal line where the slope is 0.
Figure 16: Percentage of full inversions for the years 2004, 2008, 2012, and 2016 based on GAC43 (top) and MCD43A3 (bottom).
Figure 17: Percentage of full inversions of GAC43 for various continents from 1979 to 2020.
Figure A1: Spatial distributions of GAC43 BSA in July 2004 are shown in subgraph (a), with corresponding differences from (b) CLARA-A3, (c) C3S, and (d) MCD43C3 in the same month.
Figure A2: Spatial distributions of GAC43 BSA in July 2008 are shown in subgraph (a), with corresponding differences from (b) CLARA-A3, (c) C3S, and (d) MCD43C3 in the same month.
Figure A3: Spatial distributions of GAC43 BSA in July 2012 are shown in subgraph (a), with corresponding differences from (b) CLARA-A3, (c) C3S, and (d) MCD43C3 in the same month.
Figure A4: Spatial distributions of GAC43 BSA in July 2016 are shown in subgraph (a), with corresponding differences from (b) CLARA-A3, (c) C3S, and (d) MCD43C3 in the same month.
Figure A5: Percentages of full inversions for the years between 1979 and 2020 based on the GAC43 data record.
24 pages, 1256 KiB  
Article
Automatic Cleaning of Time Series Data in Rural Internet of Things Ecosystems That Use Nomadic Gateways
by Jerzy Dembski, Agata Kołakowska and Bogdan Wiszniewski
Sensors 2025, 25(1), 189; https://doi.org/10.3390/s25010189 - 1 Jan 2025
Viewed by 451
Abstract
A serious limitation to the deployment of IoT solutions in rural areas may be the lack of telecommunications infrastructure enabling the continuous collection of measurement data. A nomadic computing system, using a UAV carrying an on-board gateway, can handle this; it leads, however, to a number of technical challenges. One is the intermittent collection of data from ground sensors, since UAV measurement missions are governed by weather conditions. Therefore, each sensor should be equipped with software that allows the collected data to be cleaned of erroneous, misleading, or otherwise redundant records before transmission to the fly-over nomadic gateway, so that their volume is minimized and fits within the limited transmission window. This task, however, may be a barrier for end devices constrained in several ways, such as a limited energy reserve, the insufficient computational capability of their MCUs, and the short transmission range of their RAT modules. In this paper, a comprehensive approach to these problems is proposed, which enables the implementation of an anomaly detector for time series data with low computational demand. The proposed solution relies on the analysis of the physics of the measured signals and is based on a simple anomaly model whose parameters can be optimized using popular AI techniques. It was validated during a full 10-month vegetation period in a real Rural IoT system deployed by Gdańsk Tech. Full article
(This article belongs to the Special Issue Application of UAV and Sensing in Precision Agriculture)
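To make the idea of a low-compute, rule-based cleaner concrete, the following Python sketch flags out-of-range values and abrupt jumps and then aggregates the surviving minute samples into hourly means; the thresholds, function names, and rules are illustrative assumptions only, not the anomaly model or parameters from the paper.

    def detect_anomalies(samples, lo, hi, max_step):
        """Return a 0/1 label per sample: 1 for out-of-range values or abrupt
        sample-to-sample jumps. Thresholds are signal-specific and assumed known."""
        labels = [0] * len(samples)
        for i, x in enumerate(samples):
            if not (lo <= x <= hi):
                labels[i] = 1                                   # out-of-range value
            elif i > 0 and abs(x - samples[i - 1]) > max_step:
                labels[i] = 1                                   # jump between consecutive samples
        return labels

    def clean_and_aggregate(samples, labels, per_hour=60):
        """Drop flagged samples and average the rest into hourly values to shrink
        the payload for the short UAV fly-over window."""
        hourly = {}
        for i, (x, flag) in enumerate(zip(samples, labels)):
            if flag == 0:
                hourly.setdefault(i // per_hour, []).append(x)
        return {hour: sum(vals) / len(vals) for hour, vals in hourly.items()}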
Show Figures

Figure 1: Nomadic computing in rural areas. Measurement sensors scattered over a large area without access to telecommunications infrastructure need an intermediary in the form of a mobile gateway carried by a UAV. Due to the limited Δt fly-over window, the transmitted data samples should not contain redundant, erratic, or otherwise misleading data.
Figure 2: Time series data with power gaps. A nomadic gateway that connects to a sensor irregularly is not able to automatically detect power outages if the latter is not equipped with a continuously powered system clock.
Figure 3: Absolute error in the moisture signal. Anomalous "out-of-range" values most often have internal causes related to the incorrect calibration of the sensor probes of measuring devices.
Figure 4: Single peak in the moisture signal. Although some instability of the PV signal is visible, with abrupt changes in the values of its samples 61–91, no other peaks of the moisture signal are present. Apparently, the cause of the single peak observed has its source in the external environment of the moisture sensor probe.
Figure 5: Jumps in the temperature and moisture signals. Their occurrence in slowly changing signals (see Table 1) means that, for the rest of the daily period, either a given soil sensor probe was turned on or reset, or it stopped working for some internal reason.
Figure 6: Bumps in the moisture and pH signals. Note the correlation of both signals, where the moisture signal reached its local maximum at sample 70 before the pH signal reached its local maximum twice (samples 75 and 82); most likely, the end device was temporarily flooded.
Figure 7: Instabilities in the temperature and moisture signals. Most likely, the temperature and moisture sensing probes were subject to small disturbances in the available power due to small variations in loads on the PV circuit caused by an undercharged battery.
Figure 8: Generic anomaly model.
Figure 9: Daily time series data anomaly detection and cleaning. After cleaning, minute samples are aggregated into hourly samples.
Figure 10: Exemplary labeling of anomalies as "true" or "detected". Sequence t of ground truth labels shows anomalies in a given (analyzed) signal marked in green, whereas sequence y of labels is generated by the anomaly detector (in red). Anomalous samples are indicated by 1s; otherwise, they are correct and indicated by 0s. In this example, the first anomaly marked in green was only partially recognized because its red counterpart only partially matches it, while the second anomaly marked in green perfectly matches its red counterpart. Moreover, the third anomaly marked in green was not detected at all, and the other two anomalies marked in red were falsely detected.
Figure 11: Reduction in the average error E = (E_smp + E_sqn)/2 calculated on the basis of training data during parameter optimization. Taking into account both E_smp and E_sqn helps to avoid local minima during optimization.
Figure 12: Distances from the reference series. With heuristic values of the anomaly parameters, the distance to the reference series was reduced by 16.34% on average, whereas after their optimization, it decreased by 24.95% on average.
26 pages, 34170 KiB  
Article
Navigating ALICE: Advancements in Deployable Docking and Precision Detection for AUV Operations
by Yevgeni Gutnik, Nir Zagdanski, Sharon Farber, Tali Treibitz and Morel Groper
Robotics 2025, 14(1), 5; https://doi.org/10.3390/robotics14010005 - 31 Dec 2024
Viewed by 675
Abstract
Autonomous Underwater Vehicles (AUVs) operate independently using onboard batteries and data storage, necessitating periodic recovery for battery recharging and data transfer. Traditional surface-based launch and recovery (L&R) operations pose significant risks to personnel and equipment, particularly in adverse weather conditions. Subsurface docking stations provide a safer alternative but often involve complex fixed installations and costly acoustic positioning systems. This work introduces a comprehensive docking solution featuring the following two key innovations: (1) a novel deployable docking station (DDS) designed for rapid deployment from vessels of opportunity, operating without active acoustic transmitters; and (2) an innovative sensor fusion approach that combines the AUV’s onboard forward-looking sonar and camera data. The DDS comprises a semi-submersible protective frame and a subsurface, heave-compensated docking component equipped with backlit visual markers, an electromagnetic (EM) beacon, and an EM lifting device. This adaptable design is suitable for temporary installations and in acoustically sensitive or covert operations. The positioning and guidance system employs a multi-sensor approach, integrating range and azimuth data from the sonar with elevation data from the vision camera to achieve precise 3D positioning and robust navigation in varying underwater conditions. This paper details the design considerations and integration of the AUV system and the docking station, highlighting their innovative features. The proposed method was validated through software-in-the-loop simulations, controlled seawater pool experiments, and preliminary open-sea trials, including several docking attempts. While further sea trials are planned, current results demonstrate the potential of this solution to enhance AUV operational capabilities in challenging underwater environments while reducing deployment complexity and operational costs. Full article
(This article belongs to the Special Issue Navigation Systems of Autonomous Underwater and Surface Vehicles)
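The core of the fusion idea described above (sonar supplying range and azimuth, camera supplying elevation) reduces to a spherical-to-Cartesian conversion once both sensors are expressed in a common frame. The Python sketch below assumes an idealized shared forward-looking body frame (x forward, y starboard, z down) and already co-registered angles; it illustrates the geometry only and is not the paper's implementation.

    import math

    def fuse_sonar_camera(range_m, azimuth_rad, elevation_rad):
        """Combine sonar range/azimuth with a camera-derived elevation angle into a
        3D target position in an assumed forward-looking body frame
        (x forward, y starboard, z down)."""
        horizontal = range_m * math.cos(elevation_rad)   # projection onto the horizontal plane
        x = horizontal * math.cos(azimuth_rad)
        y = horizontal * math.sin(azimuth_rad)
        z = range_m * math.sin(elevation_rad)
        return x, y, z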
Show Figures

Figure 1: The DDS and ALICE AUV deployed from a small vessel during an experiment at sea.
Figure 2: An overview of the DDS components.
Figure 3: DDS surface platform and docking component.
Figure 4: Components of the subsurface docking component.
Figure 5: The surface platform components.
Figure 6: ALICE AUV (disassembled) with FLS and FLC in the front payload section.
Figure 7: ALICE during an experiment at sea.
Figure 8: Three guidance phases of the docking sequence, defined by the detection range and field of view of the FLS, the FLC, and the magnetometers.
Figure 9: FLS coordinate frame.
Figure 10: (a) The FLS data in the seawater pool experiment; (b) adaptive beam mask; (c) two-frame averaged and median-blurred image; and (d) thresholded binary image.
Figure 11: The ArUco markers, as detected by the ArUco detection algorithm.
Figure 12: (a) The DDS as detected by the FLC in the pool experiment. (b) Extracted objects using the HSV threshold.
Figure 13: FLC coordinate frame.
Figure 14: FLC and FLS coordinate frames.
Figure 15: The docking sequence as displayed by the SMACH viewer.
Figure 16: (a) Stonefish scene featuring the ALICE AUV, the DDS, and a support vessel. (b) The processed FLS image, with the green rectangle representing the detected DDS and the blue rectangle the tracked entity. (c) Simulated FLC image with the ROI (blue rectangle) defined according to the detection of the DDS in the FLS image. The green and red lines mark the coordinate frame of the FLC image.
Figure 17: ALICE's path and the positioning of the docking component as computed by the FLS and FLC detection and fusion algorithms in the Stonefish simulation.
Figure 18: DDS and ALICE in a pool experiment.
Figure 19: (a) The docking component as obtained by the FLS, set to the maximal range of 6 m, with the green rectangle representing the detected object and the blue rectangle indicating the most probable docking component entity. (b) The docking component as obtained by the FLC, with the blue rectangle representing the ROI defined by the fusion algorithm and the green circle indicating the HSV-based detection of the docking component. The red and green lines represent the axes of the image frame.
Figure 20: The positioning of the DDS docking component as computed by the FLS, the FLS-FLC fusion, and the ArUco marker detection algorithms.
Figure 21: (a) DDS deployment from a support vessel at sea. (b) ALICE AUV and the DDS during the experiment at sea.
Figure 22: The FLS image, configured at its maximum range of 25 m, captures the DDS surface platform as observed at sea. The green rectangle represents the detected object and the blue rectangle indicates the tracked entity.
Figure 23: Results of the experiment at sea: (a) the detection of the docking component, indicated by a red circle within the ROI; (b) the detection of the ArUco markers; (c) collision with the docking component.
Figure 24: ALICE's trajectory, the docking component's position, and the goal points as recorded during the first sea docking attempt.
Figure 25: The detection of the docking component by the HSV filter, marked by the red circle, and the ROI, marked by the blue rectangle.
20 pages, 7144 KiB  
Article
A Study of NOAA-20 VIIRS Band M1 (0.41 µm) Striping over Clear-Sky Ocean
by Wenhui Wang, Changyong Cao, Slawomir Blonski and Xi Shao
Remote Sens. 2025, 17(1), 74; https://doi.org/10.3390/rs17010074 - 28 Dec 2024
Viewed by 427
Abstract
The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the National Oceanic and Atmospheric Administration-20 (NOAA-20) satellite was launched on 18 November 2017. The on-orbit calibration of the NOAA-20 VIIRS visible and near-infrared (VisNIR) bands has been very stable over time. However, NOAA-20 operational M1 (a dual-gain band with a center wavelength of 0.41 µm) sensor data records (SDRs) have exhibited persistent scene-dependent striping over clear-sky ocean (high gain, low radiance) since the beginning of the mission, unlike the other VisNIR bands. This paper studies the root causes of the striping in the operational NOAA-20 M1 SDRs. Two potential factors were analyzed: (1) polarization effect-induced striping over clear-sky ocean and (2) imperfect on-orbit radiometric calibration-induced striping. NOAA-20 M1 is more sensitive to polarized light than the other NOAA-20 short-wavelength bands and the similar bands on the Suomi NPP and NOAA-21 VIIRS instruments, with detector- and scan-angle-dependent polarization sensitivity up to ~6.4%. The VIIRS M1 top-of-atmosphere radiance is dominated by Rayleigh scattering over clear-sky ocean and can be up to ~70% polarized. In this study, the impact of the polarization effect on M1 striping was investigated using radiative transfer simulation and a polarization correction method similar to that developed by the NOAA ocean color team. Our results indicate that the prelaunch-measured polarization sensitivity and the polarization correction method work well and can effectively reduce striping over clear-sky ocean scenes by up to ~2% in near-nadir zones. Moreover, no significant change in NOAA-20 M1 polarization sensitivity was observed based on the data analyzed in this study. After the correction of the polarization effect, residual M1 striping over clear-sky ocean suggests that there exists half-angle mirror (HAM)-side- and detector-dependent striping, which may be caused by on-orbit radiometric calibration errors. HAM-side- and detector-dependent striping correction factors were analyzed using deep convective cloud (DCC) observations (low gain, high radiance) and verified over the homogeneous Libya-4 desert site (low gain, mid-level radiance); neither of these targets is significantly affected by the polarization effect. The imperfect on-orbit radiometric calibration-induced striping in the NOAA operational M1 SDRs has been relatively stable over time. After the correction of the polarization effect, the DCC-based striping correction factors can further reduce striping over clear-sky ocean scenes by ~0.5%. The polarization correction method used in this study is only effective over clear-sky ocean scenes that are dominated by Rayleigh scattering radiance. The DCC-based striping correction factors work well at all radiance levels; therefore, they can be deployed operationally to improve the quality of NOAA-20 M1 SDRs. Full article
(This article belongs to the Collection The VIIRS Collection: Calibration, Validation, and Application)
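The polarization correction described above follows the general form used in ocean color processing, dividing the measured radiance by a factor built from the simulated Rayleigh Stokes ratios and the prelaunch sensitivity terms. The Python sketch below shows only that general form, with argument names chosen for illustration; the exact implementation in the paper may differ.

    def polarization_correct(l_measured, q_over_i, u_over_i, m12, m13):
        """Remove the instrument polarization effect from a measured top-of-atmosphere radiance.
        q_over_i and u_over_i are simulated (Rayleigh-dominated) Stokes ratios for the pixel
        geometry; m12 and m13 are the detector-, HAM-side- and scan-angle-dependent
        polarization sensitivity terms characterized prelaunch."""
        return l_measured / (1.0 + m12 * q_over_i + m13 * u_over_i)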
Show Figures

Figure 1: Monthly DCC reflectance (mode) time series for NOAA-20 VIIRS bands M1–M4 from May 2018 to June 2024.
Figure 2: NOAA-20 M1 (a) detector-level relative spectral response (RSR) functions, represented by different colors, and (b) operational F-factors on 31 December 2023.
Figure 3: Example of 6SV-simulated Stokes vectors (a) I, (b) Q, (c) U, and (d) DoLP for a NOAA-20 VIIRS M1 granule on 9 January 2024, 20:36–20:38 UTC (Pacific Coast, latitude 29.27°, longitude −116.95°).
Figure 4: 6SV-simulated degree of linear polarization (DoLP, unitless) over clear-sky ocean at a surface pressure of 1013.5 hPa and a wind speed of 5 m/s: (a) DoLP as a function of view zenith angle (VZA) and relative azimuth angle (RAA) at a solar zenith angle (SZA) of 22.5°; (b) DoLP as a function of SZA and RAA at a VZA of 22.5°.
Figure 5: 6SV-simulated DoLP (black dots) for NOAA-20 M1 over clear-sky ocean as a function of scattering angle, at a surface pressure of 1013.5 hPa and a wind speed of 5 m/s. The blue vertical dashed line marks the 90° scattering angle.
Figure 6: Polar plots of NOAA-20 VIIRS M1 prelaunch polarization sensitivity and phase angle at different scan angles for (a) HAM-A and (b) HAM-B. Polarization sensitivity (in percent) is represented by the length of a vector on the polar plot, while the polarization phase angle is represented by the direction of the vector. Scan angle is represented by different colors; detector is represented by different symbols.
Figure 7: NOAA-20 VIIRS M1 detector- and HAM-side-dependent m12 (left panel) and m13 (right panel) terms as a function of scan angle, derived using the prelaunch-characterized polarization amplitude and phase angle.
Figure 8: NOAA-20 M1 striping over a clear-sky ocean scene on 9 January 2024, 20:36 UTC: (a) operational SDR image; (b) HAM-side and detector-level reflectance divergence in the operational SDR; (c) operational reflectance ratios between individual detectors and the band-averaged value; (d–f) are similar to (a–c), but after applying the polarization correction. The gray horizontal dashed lines in (c,f) mark reflectance ratio values of 0.99, 1.00, and 1.01 to assist understanding only.
Figure 9: Similar to Figure 8, but for a NOAA-20 M1 clear-sky ocean scene on 23 September 2018, 06:12 UTC (Indian Ocean, west coast of Australia): (a) operational SDR image; (b) HAM-side and detector-level reflectance divergence in the operational SDR; (c) operational reflectance ratios between individual detectors and the band-averaged value; (d–f) are similar to (a–c), but after applying the polarization correction. The gray horizontal dashed lines in (c,f) mark reflectance ratio values of 0.99, 1.00, and 1.01 to assist understanding only.
Figure 10: Comparison of NOAA-20 M1 DCC-based striping correction factors (a) considering detector-dependent striping only and (b) considering both HAM-side- and detector-dependent striping.
Figure 11: Impacts of the DCC-based striping correction factors for NOAA-20 M1 over the Libya-4 desert site (30 March 2024, 11:32 UTC): (a) operational SDR image; (b) HAM-side and detector-level reflectance divergence in the operational SDR; (c) operational reflectance ratios between individual detectors and the band-averaged value; (d–f) are similar to (a–c), but after applying the DCC-based striping correction. The gray horizontal dashed lines in (c,f) mark reflectance ratio values of 0.99, 1.00, and 1.01 to assist understanding only.
Figure 12: Similar to Figure 8, but after applying both the DCC-based striping correction and the polarization correction: (a) SDR image; (b) HAM-side and detector-level reflectance divergence; (c) reflectance ratios between individual detectors and the band-averaged value. The gray horizontal dashed lines in (c) mark reflectance ratio values of 0.99, 1.00, and 1.01 to assist understanding only.
Figure 13: Similar to Figure 9, but after applying both the DCC-based striping correction and the polarization correction: (a) SDR image; (b) HAM-side and detector-level reflectance divergence; (c) reflectance ratios between individual detectors and the band-averaged value. The gray horizontal dashed lines in (c) mark reflectance ratio values of 0.99, 1.00, and 1.01 to assist understanding only.
13 pages, 901 KiB  
Article
CubeSat Imaging Payload Design for Environmental Monitoring of Greenland
by Paul D. Rosero-Montalvo and Julian Charles Philip Priest
Electronics 2025, 14(1), 18; https://doi.org/10.3390/electronics14010018 - 25 Dec 2024
Viewed by 570
Abstract
Climate change affects the Earth's ecosystems, and understanding human impact on sparsely populated polar regions, especially on glacial dynamics, is crucial. Nanosatellites can play an essential role in monitoring remote regions due to their flexibility in adding remote sensors for Earth observation. However, they face hardware constraints such as physical space limitations, low power generation, and low bandwidth, as well as the environmental challenges of vacuum, heat, cold, and radiation. This paper details the preliminary system design of an imaging payload integrated into a nanosatellite for monitoring field study sites in Greenland. The payload is capable of supporting advanced image processing and Machine Learning (ML) applications. Key design elements are presented, including the selection of imaging sensors, onboard processing units, and data transmission systems, all optimized for the constraints of a nanosatellite platform. As a result, we present a novel imaging payload system design, which represents a significant step towards leveraging space technology for environmental research. Full article
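Since downlink bandwidth is one of the constraints driving the design, one plausible use of the onboard processing unit is to triage frames before transmission; the Python sketch below is purely illustrative (the thresholds, function name, and decision rule are assumptions, not taken from the paper) and simply skips frames in which too few pixels fall inside a usable intensity range.

    import numpy as np

    def worth_downlinking(frame, usable_range=(0.05, 0.95), min_usable_fraction=0.5):
        """Toy onboard triage: keep a normalized grayscale frame for downlink only if
        enough pixels fall inside a usable (non-saturated, non-dark) range.
        All thresholds are illustrative assumptions."""
        pixels = np.asarray(frame, dtype=float)
        usable = (pixels >= usable_range[0]) & (pixels <= usable_range[1])
        return float(np.mean(usable)) >= min_usable_fraction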
Show Figures

Figure 1: V-model methodology applied to the payload design. Process: (→). Iteration: (⤎).
Figure 2: Imaging payload electronic design proposal.
Figure 3: PCB electronic design: the SoMs' connection with the USB switch and the cameras.
Figure 4: Imaging payload PCB developed with a redundant SoM.
Figure 5: Full imaging payload with cameras, brackets and harness.