Search Results (970)

Search Parameters:
Keywords = sensor bias

27 pages, 2578 KiB  
Article
A Novel Approach for Kalman Filter Tuning for Direct and Indirect Inertial Navigation System/Global Navigation Satellite System Integration
by Adalberto J. A. Tavares Jr. and Neusa M. F. Oliveira
Sensors 2024, 24(22), 7331; https://doi.org/10.3390/s24227331 - 16 Nov 2024
Abstract
This work presents an innovative approach for tuning the Kalman filter in INS/GNSS integration, combining states from the inertial navigation system (INS) and data from the Global Navigation Satellite System (GNSS) to enhance navigation accuracy. The INS uses measurements from accelerometers and gyroscopes, which are subject to uncertainties in scale factor, misalignment, non-orthogonality, and bias, as well as temporal, thermal, and vibration variations. The GNSS receiver faces challenges such as multipath, temporary signal loss, and susceptibility to high-frequency noise. The novel approach for Kalman filter tuning involves first performing Monte Carlo simulations using ideal data from a predetermined trajectory, applying the inertial sensor error model. For the indirect filter, errors from inertial sensors are used, while, for the direct filter, navigation errors in position, velocity, and attitude are also considered to obtain the process noise covariance matrix Q. This methodology is tested and validated with real data from Castro Leite Consultoria's commercial platforms, PINA-F and PINA-M. The results demonstrate the efficiency and consistency of the estimation technique, highlighting its applicability in real scenarios.
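As a rough illustration of the tuning idea described above (not the authors' implementation), the sketch below runs Monte Carlo simulations of a toy one-axis error model over an ideal trajectory and derives a process noise covariance Q from the ensemble of per-step error increments. The error-model parameters, state layout, and function name are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_monte_carlo_q(n_runs=200, n_steps=1000, dt=0.01):
    """Estimate a process-noise covariance Q from Monte Carlo runs.

    Toy 3-state error model [position, velocity, accel bias]; the
    'truth' trajectory is ideal, and each run perturbs it with an
    assumed accelerometer bias-plus-white-noise error model.
    """
    errors = []
    for _ in range(n_runs):
        bias = rng.normal(0.0, 0.02)                  # assumed accel bias (m/s^2)
        pos_err, vel_err = 0.0, 0.0
        run = []
        for _ in range(n_steps):
            accel_err = bias + rng.normal(0.0, 0.05)  # bias + white noise
            vel_err += accel_err * dt
            pos_err += vel_err * dt
            run.append((pos_err, vel_err, bias))
        errors.append(run)
    errors = np.asarray(errors)                       # (runs, steps, 3)
    # Per-step increments of the error state across the ensemble;
    # their covariance plays the role of Q in the discrete filter.
    increments = np.diff(errors, axis=1).reshape(-1, 3)
    return np.cov(increments.T)                       # 3x3 Q estimate

Q = run_monte_carlo_q()
print(np.round(Q, 10))
```

In the paper's indirect and direct variants, the ensemble states would be the inertial-sensor errors and the navigation (position, velocity, attitude) errors, respectively, rather than this toy triplet.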
(This article belongs to the Special Issue INS/GNSS Integrated Navigation Systems)
Figures: (1) Flow diagram of the Monte Carlo simulation algorithm. (2) Top-down view of the test trajectory highlighted in blue. (3) Castro Leite Consultoria's inertial platforms. (4)–(19) Four repeated sets of result plots, one per test case: geodetic position; positions in the geodetic frame; velocities in the navigation frame; Euler angles.
17 pages, 2888 KiB  
Article
Research on Fault Diagnosis of Agricultural IoT Sensors Based on Improved Dung Beetle Optimization–Support Vector Machine
by Sicheng Liang, Pingzeng Liu, Ziwen Zhang and Yong Wu
Sustainability 2024, 16(22), 10001; https://doi.org/10.3390/su162210001 - 16 Nov 2024
Abstract
The accuracy of data perception in Internet of Things (IoT) systems is fundamental to achieving scientific decision-making and intelligent control. Given the frequent occurrence of sensor failures in complex environments, a rapid and accurate fault diagnosis and handling mechanism is crucial for ensuring the stable operation of the system. Addressing the challenges of insufficient feature extraction and sparse sample data that lead to low fault diagnosis accuracy, this study explores the construction of a fault diagnosis model tailored for agricultural sensors, with the aim of accurately identifying and analyzing various sensor fault modes, including but not limited to bias, drift, accuracy degradation, and complete failure. This study proposes an improved dung beetle optimization–support vector machine (IDBO-SVM) diagnostic model, leveraging the optimization capabilities of the former to finely tune the parameters of the Support Vector Machine (SVM) to enhance fault recognition under conditions of limited sample data. Case analyses were conducted using temperature and humidity sensors in air and soil, with comprehensive performance comparisons made against mainstream algorithms such as the Backpropagation (BP) neural network, Sparrow Search Algorithm–Support Vector Machine (SSA-SVM), and Elman neural network. The results demonstrate that the proposed model achieved an average diagnostic accuracy of 94.91%, significantly outperforming other comparative models. This finding fully validates the model's potential in enhancing the stability and reliability of control systems. The research results not only provide new ideas and methods for fault diagnosis in IoT systems but also lay a foundation for achieving more precise, efficient intelligent control and scientific decision-making.
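The tuning loop behind IDBO-SVM, a metaheuristic search over SVM hyperparameters scored by cross-validation, can be sketched as below, assuming scikit-learn. A simple best-pull random search stands in for the improved dung beetle optimizer, and the data are a synthetic stand-in for the fault features; none of this is the authors' code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Stand-in for sensor fault features: 4 classes (bias, drift,
# accuracy degradation, complete failure), small sample size.
X, y = make_classification(n_samples=200, n_features=8, n_informative=6,
                           n_classes=4, random_state=1)

def fitness(log_c, log_gamma):
    """Cross-validated accuracy of an RBF SVM for given hyperparameters."""
    clf = SVC(C=10.0**log_c, gamma=10.0**log_gamma, kernel="rbf")
    return cross_val_score(clf, X, y, cv=5).mean()

# Population-based search over (log10 C, log10 gamma); a DBO would
# replace this update rule with its rolling/foraging/stealing moves.
pop = rng.uniform([-1, -3], [3, 1], size=(12, 2))
best, best_fit = None, -np.inf
for _ in range(10):
    for lc, lg in pop:
        f = fitness(lc, lg)
        if f > best_fit:
            best, best_fit = (lc, lg), f
    # Pull the population toward the current best with some noise.
    pop = best + rng.normal(0.0, 0.3, size=pop.shape)
    pop = np.clip(pop, [-1, -3], [3, 1])

print(f"best C={10**best[0]:.3g}, gamma={10**best[1]:.3g}, CV acc={best_fit:.3f}")
```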
Figures: (1) IoT sensing device. (2) Sensor fault waveform characteristics. (3) Performance comparison of optimization algorithms. (4) IDBO-SVM troubleshooting flow. (5) Confusion matrices for temperature, humidity, soil temperature, and soil humidity sensor fault prediction. (6) Fault diagnosis model accuracy comparison.
20 pages, 9833 KiB  
Article
Reconstruction of Hourly Gap-Free Sea Surface Skin Temperature from Multi-Sensors
by Qianguang Tu, Zengzhou Hao, Dong Liu, Bangyi Tao, Liangliang Shi and Yunwei Yan
Remote Sens. 2024, 16(22), 4268; https://doi.org/10.3390/rs16224268 - 15 Nov 2024
Abstract
The sea surface skin temperature (SSTskin) is of critical importance with regard to air–sea interactions and marine carbon circulation. At present, no single remote sensor is capable of providing a gap-free SSTskin. The use of data fusion techniques is therefore essential for the purpose of filling these gaps. The extant fusion methodologies frequently fail to account for the influence of depth disparities and the diurnal variability of sea surface temperatures (SSTs) retrieved from multi-sensors. We have developed a novel approach that integrates depth and diurnal corrections and employs advanced data fusion techniques to generate hourly gap-free SST datasets. The General Ocean Turbulence Model (GOTM) is employed to model the diurnal variability of the SST profile, incorporating depth and diurnal corrections. Subsequently, the corrected SSTs at the same observed time and depth are blended using the Markov method, and the remaining data gaps are filled with optimal interpolation. The overall precision of the hourly gap-free SSTskin generated demonstrates a mean bias of −0.14 °C and a root mean square error of 0.57 °C, which is comparable to the precision of satellite observations. The hourly gap-free SSTskin is vital for improving our comprehension of air–sea interactions and monitoring critical oceanographic processes with high-frequency variability.
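A minimal sketch of the two fusion steps named in the abstract, assuming gridded, time- and depth-corrected SSTs from several sensors: a minimum-variance (Markov-style) blend of co-located estimates, followed by a Gaussian-weighted interpolation as a simplified stand-in for full optimal interpolation. Grids, variances, and the length scale are illustrative.

```python
import numpy as np

def blend_markov(estimates, variances):
    """Minimum-variance blend of co-located multi-sensor SSTs.

    estimates, variances: (n_sensors, ny, nx) arrays; NaN where a
    sensor has no data. Weights are inverse error variances.
    """
    w = 1.0 / variances
    w[np.isnan(estimates)] = 0.0
    est = np.nan_to_num(estimates)
    wsum = w.sum(axis=0)
    return np.where(wsum > 0,
                    (w * est).sum(axis=0) / np.where(wsum > 0, wsum, 1.0),
                    np.nan)

def oi_fill(field, xs, ys, L=1.5):
    """Fill NaNs with a Gaussian-covariance weighted average
    (a simplification of optimal interpolation)."""
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    obs = ~np.isnan(field)
    xo, yo, vo = xx[obs], yy[obs], field[obs]
    filled = field.copy()
    for i, j in zip(*np.where(~obs)):
        d2 = (xx[i, j] - xo) ** 2 + (yy[i, j] - yo) ** 2
        w = np.exp(-d2 / (2 * L**2))
        filled[i, j] = np.sum(w * vo) / np.sum(w)
    return filled

# Toy usage: two sensors with partial, overlapping coverage.
est = np.full((2, 4, 4), np.nan)
est[0, :2] = 18.0
est[1, 1:3] = 18.5
var = np.array([0.3, 0.2])[:, None, None] * np.ones((2, 4, 4))
sst = blend_markov(est, var)
sst = oi_fill(sst, np.arange(4.0), np.arange(4.0))
```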
Graphical abstract. Figures: (1) Overall flowchart of multi-sensor fusion for SSTskin. (2) Diurnal variation of SSTskin modeled by GOTM on 8 May 2007. (3) Histogram of the difference between MTSAT-observed and GOTM diurnal variation on 8 May 2007. (4) GOTM SST at 2 p.m. on 8 May 2007: profile at 122°E, 35.25°N, and the spatial difference between SSTskin and SSTsubskin. (5) Original and diurnal variation-corrected (normalized) hourly MTSAT SST on 8 May 2007. (6) Number of sensors available on 8 May 2007, and the fused SST at 10:30 a.m. using Markov estimation. (7) Covariance structure functions of the East China Sea estimated from MTSAT in 2007: zonal and meridional spatial covariance, and temporal correlation with time lags. (8) The hourly gap-free SSTskin on 8 May 2007. (9) Diurnal variation of SSTskin at 124°E, 28°N on 8 May 2007. (10) Scatter plot of in situ versus fused SSTskin, and the hourly mean bias and standard deviation during 2007.
18 pages, 2082 KiB  
Systematic Review
The Use of Wearable Sensors and Machine Learning Methods to Estimate Biomechanical Characteristics During Standing Posture or Locomotion: A Systematic Review
by Isabelle J. Museck, Daniel L. Brinton and Jesse C. Dean
Sensors 2024, 24(22), 7280; https://doi.org/10.3390/s24227280 - 14 Nov 2024
Abstract
Balance deficits are present in a variety of clinical populations and can negatively impact quality of life. The integration of wearable sensors and machine learning (ML) technology provides unique opportunities to quantify biomechanical characteristics related to balance outside of a laboratory setting. This article provides a general overview of recent developments in using wearable sensors and ML to estimate or predict biomechanical characteristics such as center of pressure (CoP) and center of mass (CoM) motion. This systematic review was conducted according to PRISMA guidelines. The Scopus, PubMed, CINAHL, Trip PRO, Cochrane, and OTseeker databases were searched for publications on the use of wearable sensors combined with ML to predict biomechanical characteristics. Fourteen publications met the inclusion criteria and were included in this review. From each publication, information on study characteristics, testing conditions, ML models applied, estimated biomechanical characteristics, and sensor positions was extracted. Additionally, the study type, level of evidence, and Downs and Black scale score were reported to evaluate methodological quality and bias. Most studies tested subjects during walking and utilized some type of neural network (NN) ML model to estimate biomechanical characteristics. Many of the studies focused on minimizing the necessary number of sensors and placed them on areas near or below the waist. Nearly all studies reporting RMSE and correlation coefficients had values <15% and >0.85, respectively, indicating strong ML model estimation accuracy. Overall, this review can help guide the future development of ML algorithms and wearable sensor technologies to estimate postural mechanics.
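To make the review's accuracy criteria concrete, here is a toy regression in the spirit of the surveyed studies, assuming scikit-learn: a small neural network maps synthetic IMU-like channels to a CoP-like target and is scored with the same RMSE-percent and correlation thresholds the review reports. The data and network size are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic stand-in: 6 IMU channels (acc x/y/z, gyro x/y/z) -> CoP excursion.
X = rng.normal(size=(2000, 6))
y = 0.8 * X[:, 0] - 0.5 * X[:, 4] + 0.1 * rng.normal(size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = np.sqrt(np.mean((pred - y_te) ** 2))
rmse_pct = 100 * rmse / (y_te.max() - y_te.min())  # RMSE as % of range
r = np.corrcoef(pred, y_te)[0, 1]
print(f"RMSE% = {rmse_pct:.1f}, r = {r:.3f}")      # review thresholds: <15%, >0.85
```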
(This article belongs to the Section Wearables)
Figures: (1) Pyramid of artificial intelligence scientific evidence [24] (adapted from Bellini, V., et al., "The artificial intelligence evidence-based medicine pyramid," World J Crit Care Med 2023, 12(2): 89–91; reprinted with permission). (2) PRISMA flow diagram. (3) Summary of the ML models applied in the reviewed studies. (4) Sensor placement locations and the percentage of reviewed studies using each position. (5) Wearable sensor characteristics.
17 pages, 37894 KiB  
Article
High-Precision Rotor Position Fitting Method of Permanent Magnet Synchronous Machine Based on Hall-Effect Sensors
by Kaining Qu, Pengfei Pang and Wei Hua
Energies 2024, 17(22), 5625; https://doi.org/10.3390/en17225625 - 10 Nov 2024
Abstract
The high-performance vector control technology of permanent magnet synchronous machines (PMSMs) relies on high-precision rotor position. The Hall-effect sensor has the advantages of low cost, simple installation, and strong anti-interference ability. However, it can only provide six discrete rotor angles in an electrical cycle, which makes high-precision vector control of PMSMs difficult. Hence, to obtain the necessary rotor position of PMSMs, a rotor position fitting method combining the Hall signal and machine flux information is proposed. Firstly, the rotor position signal output by the Hall-effect sensors is used to calibrate and update the stator flux obtained under pure integration. Then, based on the corrected stator flux and its relationship with current and angle, the rotor position and speed are obtained. Experimental verification shows that the rotor position observer combining the Hall signal and flux information reduces the initial-value bias and integral drift caused by the hysteresis of the traditional average-speed method and by pure-integration flux calculation, and can quickly and accurately track and estimate the rotor position, achieving high-performance vector control of PMSMs.
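A simplified sketch of the flux-plus-Hall idea, on a synthetic back-EMF signal: pure integration of the back-EMF yields a drifting flux estimate, and each Hall transition supplies an exact angle that re-anchors the estimate. In a real drive the flux integrator itself would be corrected, as in the paper's observer; here only the angle is re-anchored, and all signals, indices, and names are invented.

```python
import numpy as np

def estimate_rotor_angle(e_alpha, e_beta, hall_events, dt):
    """Rotor angle from back-EMF integration, re-anchored at Hall edges.

    e_alpha, e_beta : back-EMF samples in the stationary frame
    hall_events : {sample_index: exact_angle_rad} from the six Hall
        transitions per electrical cycle (known 60-degree boundaries)
    """
    # Pure integration gives the flux linkage but accumulates an
    # initial-value bias and integrator drift.
    psi_a = np.cumsum(e_alpha) * dt
    psi_b = np.cumsum(e_beta) * dt
    theta = np.unwrap(np.arctan2(psi_b, psi_a))
    # Each Hall transition provides an exact angle; remove the wrapped
    # error from that sample onward.
    for k in sorted(hall_events):
        err = (theta[k] - hall_events[k] + np.pi) % (2 * np.pi) - np.pi
        theta[k:] -= err
    return theta

# Synthetic demo: 50 Hz electrical angle; EMF leads the flux by 90 deg,
# so the flux components come out as (cos, sin) of the angle.
dt = 1e-4
t = np.arange(0.0, 0.1, dt)
w = 2 * np.pi * 50
e_a, e_b = -np.sin(w * t), np.cos(w * t)
events = {k: (w * t[k]) % (2 * np.pi) for k in range(100, 1000, 167)}
theta = estimate_rotor_angle(e_a, e_b, events, dt)
```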
(This article belongs to the Special Issue Designs and Control of Electrical Machines and Drives)
Figures: (1) Hall-effect sensors. (2) 120° Hall installation mode and output signals. (3) Relation of Hall signal and rotor position. (4) Estimation principle of the average-velocity method. (5) Hall signal delay under a digital control system. (6) Structure of the position vector tracking observer. (7) Bode diagram of the position vector tracking observer based on back-EMF. (8) Stator flux observer combining flux-linkage information and the Hall signal. (9) Input voltage calculation for the flux observer. (10) Rotor position observer combining flux-linkage information and the Hall signal. (11) Experiment platform. (12) Flux-linkage waveforms before correction, at discrete Hall points, and under Hall-signal correction. (13) No-load starting rotor positions: average speed, vector tracking, and flux-Hall. (14) On-rated-load starting rotor positions. (15) Estimated rotor angle errors for three initial positions: the Hall intermediate angle, 25° advanced, and 25° delayed. (16) dq-frame current waveforms: average speed versus flux-Hall. (17) Rotor positions under a speed change from 300 r/min to 1800 r/min. (18) Rotor position during forward and reverse switching. (19) Stator current and rotor position at 750 r/min. (20) Rotor position under inductance mismatch (L̂ = 0.8L, 1.2L, and L). (21) Rotor position under resistance mismatch (R̂ = 0.8R, 1.2R, and R).
25 pages, 23247 KiB  
Article
Infrared and Visible Camera Integration for Detection and Tracking of Small UAVs: Systematic Evaluation
by Ana Pereira, Stephen Warwick, Alexandra Moutinho and Afzal Suleman
Drones 2024, 8(11), 650; https://doi.org/10.3390/drones8110650 - 6 Nov 2024
Abstract
Given the recent proliferation of Unmanned Aerial Systems (UASs) and the consequent importance of counter-UAS systems, this project aims to perform the detection and tracking of small non-cooperative UASs using Electro-optical (EO) and Infrared (IR) sensors. Two data integration techniques, at the decision and pixel levels, are compared with the use of each sensor independently to evaluate the system robustness in different operational conditions. The data are submitted to a YOLOv7 detector merged with a ByteTrack tracker. For training and validation, additional efforts are made towards creating datasets of spatially and temporally aligned EO and IR annotated Unmanned Aerial Vehicle (UAV) frames and videos. These consist of the acquisition of real data captured from a workstation on the ground, followed by image calibration, image alignment, the application of bias-removal techniques, and data augmentation methods to artificially create images. The performance of the detector across datasets shows an average precision of 88.4%, recall of 85.4%, and mAP@0.5 of 88.5%. Tests conducted on the decision-level fusion architecture demonstrate notable gains in recall and precision, although at the expense of lower frame rates. Precision, recall, and frame rate are not improved by the pixel-level fusion design.
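Decision-level fusion as described here can be sketched independently of the detector: match EO and IR detections by IoU, merge matched pairs, and pass unmatched detections through, which is what recovers targets that one sensor misses. The box format and score-combination rule below are assumptions, not the paper's exact scheme.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse_decisions(eo_dets, ir_dets, iou_thr=0.5):
    """Decision-level fusion of per-sensor detections.

    Each detection is (box, score). Matched pairs get an averaged box
    and a combined score; unmatched detections pass through.
    """
    fused, used_ir = [], set()
    for box_e, s_e in eo_dets:
        best_j, best_iou = -1, iou_thr
        for j, (box_i, _) in enumerate(ir_dets):
            if j not in used_ir and iou(box_e, box_i) >= best_iou:
                best_j, best_iou = j, iou(box_e, box_i)
        if best_j >= 0:
            box_i, s_i = ir_dets[best_j]
            used_ir.add(best_j)
            # Noisy-OR score: either sensor's confidence supports the target.
            fused.append((np.add(box_e, box_i) / 2.0,
                          1 - (1 - s_e) * (1 - s_i)))
        else:
            fused.append((np.asarray(box_e, float), s_e))
    fused += [(np.asarray(b, float), s) for j, (b, s) in enumerate(ir_dets)
              if j not in used_ir]
    return fused
```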
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones 2nd Edition)
Figures: (1) Decision-level data fusion stages. (2) Pixel-level data fusion stages. (3) FusionGAN application: input EO image, input IR image, and pixel-fused output. (4) Experimental setup with the sensors highlighted. (5) Calibration procedure using the custom calibration board: EO and IR images at close and far range. (6) UAVs captured during the flight experiments: Mini-E, DJI Mavic 2, MIMIQ, DJI Inspire 1, Zeta FX-61 Phantom Wing, and DJI Mini 3 Pro. (7) Schematics of flight paths: waypoints (blue) and workstation (red). (8) Bias-removal algorithm: original IR image, bias-corrected image, and estimated bias. (9) Artificial image-pair creation algorithm (EO and IR images). (10) Dataset examples: EO, IR, bias-removed IR, pixel-fused, and bias-removed pixel-fused images. (11) UAV recorded at twilight: EO and IR images. (12) Independent-model detection and tracking on higher-robustness target cases: blurry and partially cut UAV images. (13) Independent-model detection and tracking on lower-robustness target cases: intra-class variation, presence of birds, and textured background. (14) Alignment failure on pixel-fused images due to vertical shift of the FusionGAN inputs. (15)–(17) Data-fusion detection and tracking on the intra-class variation, presence-of-birds, and textured-background target cases (EO-IR, IR-EO, and pixel-level fused architectures).
17 pages, 13227 KiB  
Article
Robot Localization Method Based on Multi-Sensor Fusion in Low-Light Environment
by Mengqi Wang, Zengzeng Lian, María Amparo Núñez-Andrés, Penghui Wang, Yalin Tian, Zhe Yue and Lingxiao Gu
Electronics 2024, 13(22), 4346; https://doi.org/10.3390/electronics13224346 - 6 Nov 2024
Abstract
When robots perform localization in indoor low-light environments, factors such as weak and uneven lighting can degrade image quality. This degradation results in a reduced number of feature extractions by the visual odometry front end and may even cause tracking loss, thereby impacting the algorithm's positioning accuracy. To enhance the localization accuracy of mobile robots in indoor low-light environments, this paper proposes a visual inertial odometry method (L-MSCKF) based on the multi-state constraint Kalman filter. Addressing the challenges of low-light conditions, we integrated Inertial Measurement Unit (IMU) data with stereo vision odometry. The algorithm includes an image enhancement module and a gyroscope zero-bias correction mechanism to facilitate feature matching in stereo vision odometry. We conducted tests on the EuRoC dataset and compared our method with other similar algorithms, thereby validating the effectiveness and accuracy of L-MSCKF.
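A sketch of the image-enhancement front end suggested by the paper's figures (homomorphic filtering followed by CLAHE), assuming OpenCV; the gains, cutoff, and contrast threshold are illustrative placeholders in the spirit of the parameters explored in Figure 2, not the paper's tuned values.

```python
import cv2
import numpy as np

def enhance_low_light(gray, gain_high=1.6, gain_low=0.3, cutoff=30,
                      clip_limit=4.0):
    """Homomorphic filtering followed by CLAHE, as a low-light front end.

    gray: uint8 single-channel image. gain_high boosts reflectance
    (high frequencies), gain_low suppresses illumination (low
    frequencies); cutoff is the filter radius in frequency pixels.
    """
    img = np.log1p(gray.astype(np.float32))
    spec = np.fft.fftshift(np.fft.fft2(img))
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    d2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
    # Gaussian high-frequency emphasis filter.
    filt = gain_low + (gain_high - gain_low) * (1 - np.exp(-d2 / (2 * cutoff**2)))
    out = np.real(np.fft.ifft2(np.fft.ifftshift(spec * filt)))
    out = np.expm1(out)
    out = cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    return clahe.apply(out)
```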
Figures: (1) Algorithm procedure. (2) Selection of image-enhancement parameters (high- and low-frequency gains, sharpening coefficient, and contrast threshold). (3) Feature-point extraction on the original image, after CLAHE, after homomorphic filtering, and after both. (4) Estimated gyroscope bias for L-MSCKF and MSCKF-VIO on the MH02 and V203 sequences. (5) Trajectories and X/Y/Z triaxial values on the V103 and V203 sequences of the EuRoC dataset. (6) Absolute trajectory errors of each algorithm on the low-light sequence V203. (7) Computational efficiency of each algorithm: average CPU usage and total running time.
7 pages, 5470 KiB  
Communication
Ferroelectric Domain Modulation with Tip-Poling Engineering in BiFeO3 Films
by Xiaojun Qiao, Yuxuan Wu, Wenping Geng and Xiujian Chou
Micromachines 2024, 15(11), 1352; https://doi.org/10.3390/mi15111352 - 5 Nov 2024
Abstract
BiFeO3 (BFO) films with ferroelectricity are among the most promising candidates for the next generation of storage devices and sensors. A comprehensive understanding of ferroelectric switching properties is challenging and critical to robust domain wall nanoelectronics. Herein, the domain dynamics were explored in detail under external bias conditions using scanning probe microscopy, which is meaningful for the understanding of domain dynamics and the foundation of ferroelectric devices. The results show that domain reversal occurred under external electric fields with sufficient energy excitation, combined with the existence of a charged domain wall. These findings extend the understanding of domain dynamics and current paths in ferroelectric films and shed light on potential applications for ferroelectric devices.
Figures: (1) Domain reversal using the predefined pattern under a 15 V tip bias: out-of-plane and in-plane phases and amplitudes, domain-wall conductivity and section-line profile, domain distribution in both directions, and coercive-field characterization. (2) Conductivity of BFO films in the as-grown state and under 1 V, 2 V, and 2.5 V biases, with the domain-wall conductivity profile from c-AFM. (3) Domain dynamics in the previously poled region under biases of −5 V to −7 V (phase and amplitude images; 8 μm × 8 μm central poling followed by 14 μm × 14 μm series poling, scanned over 19 μm × 19 μm). (4) Domain dynamics under point-step poling: out-of-plane and in-plane phases and amplitudes for biases from −1 V to −13 V with a 0.1 s poling increment, and at a constant −14 V with a 0.2 s poling increment.
22 pages, 6236 KiB  
Article
Varying Performance of Low-Cost Sensors During Seasonal Smog Events in Moravian-Silesian Region
by Václav Nevrlý, Michal Dostál, Petr Bitala, Vít Klečka, Jiří Sléžka, Pavel Polách, Katarína Nevrlá, Melánie Barabášová, Růžena Langová, Šárka Bernatíková, Barbora Martiníková, Michal Vašinek, Adam Nevrlý, Milan Lazecký, Jan Suchánek, Hana Chaloupecká, David Kiča and Jan Wild
Atmosphere 2024, 15(11), 1326; https://doi.org/10.3390/atmos15111326 - 3 Nov 2024
Abstract
Air pollution monitoring in industrial regions like Moravia-Silesia faces challenges due to complex environmental conditions. Low-cost sensors offer a promising, cost-effective alternative for supplementing data from regulatory-grade air quality monitoring stations. This study evaluates the accuracy and reliability of a prototype node containing low-cost sensors for carbon monoxide (CO) and particulate matter (PM), specifically tailored for the local conditions of the Moravian-Silesian Region during winter and spring periods. An analysis of the reference data observed during the winter evaluation period showed a strong positive correlation between PM, CO, and NO2 concentrations, attributable to common pollution sources under low ambient temperature conditions and increased local heating activity. The Sensirion SPS30 sensor exhibited high linearity during the winter period but showed a systematic positive bias in PM10 readings during Polish smog episodes, likely due to fine particles from domestic heating. Conversely, during Saharan dust storm episodes, the sensor showed a negative bias, underestimating PM10 levels due to the prevalence of coarse particles. Calibration adjustments, based on the PM1/PM10 ratio derived from Alphasense OPC-N3 data, were initially explored to reduce these biases. For the first time, this study quantifies the influence of particle size distribution on the SPS30 sensor's response during smog episodes of varying origin, under the given local and seasonal conditions. In addition to sensor evaluation, we analyzed the potential use of data from the Copernicus Atmospheric Monitoring Service (CAMS) as an alternative to increasing sensor complexity. Our findings suggest that, with appropriate calibration, selected low-cost sensors can provide reliable data for monitoring air pollution episodes in the Moravian-Silesian Region and may also be used for future adjustments of CAMS model predictions.
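The calibration adjustment described above can be sketched as a ratio-gated rescaling of the SPS30 PM10 reading. The thresholds and correction factors below are invented placeholders; in the study they would come from regression against the reference instruments.

```python
import numpy as np

def pm_ratio_correction(sps30_pm10, pm1, pm10, fine_thr=0.8, coarse_thr=0.3,
                        fine_factor=0.85, coarse_factor=1.25):
    """Adjust SPS30 PM10 readings using the PM1/PM10 size-distribution ratio.

    pm1, pm10: co-located masses (e.g. derived from OPC-N3 bins) used
    only to form the ratio. The factors are illustrative placeholders.
    """
    ratio = np.asarray(pm1) / np.maximum(np.asarray(pm10), 1e-9)
    corrected = np.asarray(sps30_pm10, dtype=float).copy()
    # Fine-dominated aerosol ("Polish smog"): sensor over-reads PM10.
    corrected[ratio >= fine_thr] *= fine_factor
    # Coarse-dominated aerosol (Saharan dust): sensor under-reads PM10.
    corrected[ratio <= coarse_thr] *= coarse_factor
    return corrected
```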
Figures: (1) LCS sensor node on the roof of the reference air-quality monitoring station of the health institute in Ostrava (Mariánské Hory district). (2) Temporal evolution of wind speed, wind direction, and reference PM10 concentrations during the S1 and S2 episodes (wind direction: 0°/360° = north, 90° = east, 180° = south, 270° = west). (3) Schematic of the datasets and quantities used for the exploratory and regression analyses, with temporal resolution (datasets and processing tools, including interactive Python notebooks, are available at the Zenodo repository; see the Data Availability Statement). (4) Correlation matrix with scatter plots, linear regressions and Pearson correlation coefficients (red), and probability density functions for reference pollutant concentrations from the winter evaluation period (scales in μg/m3). (5) Comparison of selected hourly data series from the winter evaluation period. (6) Diurnal variations in CO and PM10 during the winter evaluation period from the reference instrument, LCS node, and CAMS model, with means and interquartile ranges. (7) Seasonal variation in size distribution (normalized particle volume by bin, Alphasense OPC-N3), with the mass-weighted PM1/PM10 ratio estimated for each co-location month from the median of 24 h averages. (8) Linear regression of Alphasense CO-B4 response versus the reference instrument (HORIBA) for episode S1 and for the winter evaluation period. (9) Linear regression of Sensirion SPS30 response versus the reference instrument (TEOM) for episode S1 and for the winter evaluation period. (10) Linear regression of Sensirion SPS30 response versus the reference instrument (FIDAS) for PM10 and PM1 during the spring Saharan dust storm episode S2. (11) Correlation of reference measurements and LCS response during the winter evaluation period, colored by ambient temperature. (12) Size distribution of normalized particle volume by bin, with mass-weighted PM1/PM10 ratios from 24 h averages on selected days during the S1 and S2 episodes. (A1) Rendered 3D model of the LCS node enclosure. (A2) Schematic of the data-logging framework.
24 pages, 12756 KiB  
Article
An Empirical Algorithm for Estimating the Absorption of Colored Dissolved Organic Matter from Sentinel-2 (MSI) and Landsat-8 (OLI) Observations of Coastal Waters
by Vu Son Nguyen, Hubert Loisel, Vincent Vantrepotte, Xavier Mériaux and Dinh Lan Tran
Remote Sens. 2024, 16(21), 4061; https://doi.org/10.3390/rs16214061 - 31 Oct 2024
Abstract
Sentinel-2/MSI and Landsat-8/OLI sensors enable the mapping of ocean color-related bio-optical parameters of surface coastal and inland waters. While many algorithms have been developed to estimate the Chlorophyll-a concentration, Chl-a, and the suspended particulate matter, SPM, from OLI and MSI data, the absorption by colored dissolved organic matter, acdom, a key parameter to monitor the concentration of dissolved organic matter, has received less attention. Herein we present an inverse model (hereafter referred to as AquaCDOM) for estimating acdom at the wavelength 412 nm (acdom (412)), within the surface layer of coastal waters, from measurements of ocean remote sensing reflectance, Rrs (λ), for these two high spatial resolution (around 20 m) sensors. Combined with a water class-based approach, several empirical algorithms were tested on a mixed dataset of synthetic and in situ data collected from global coastal waters. The selection of the final algorithms was performed with an independent validation dataset, using in situ, synthetic, and satellite Rrs (λ) measurements, but also by testing their respective sensitivity to typical noise introduced by atmospheric correction algorithms. It was found that the proposed algorithms could estimate acdom (412) with a median absolute percentage difference of ~30% and a median bias of 0.002 m−1 from the in situ and synthetic datasets. While similar performances have been shown with two other algorithms based on different methodological developments, we have shown that AquaCDOM is much less sensitive to atmospheric correction uncertainties, mainly due to the use of band ratios in its formulation. After the application of the top-of-atmosphere gains and of the same atmospheric correction algorithm, excellent agreement has been found between the OLI- and MSI-derived acdom (412) values for various coastal areas, enabling the application of these algorithms for time series analysis. An example application of our algorithms for the time series analysis of acdom (412) is provided for a coastal transect in the south of Vietnam.
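Structurally, a water class-based band-ratio algorithm like AquaCDOM reduces to: classify each pixel, then apply a per-class empirical formula to a reflectance ratio. The sketch below shows only that structure; the band choice, class rule, and coefficients are hypothetical, not the published AquaCDOM values.

```python
import numpy as np

# Hypothetical per-class polynomial coefficients (log10 space); the
# published AquaCDOM coefficients differ and are class/sensor specific.
COEFFS = {1: (-0.9, -1.4), 2: (-0.3, -2.1)}

def acdom_412(rrs_green, rrs_red, class_thr=0.6):
    """Water class-based band-ratio estimate of acdom(412) [1/m]."""
    x = np.log10(np.asarray(rrs_green) / np.asarray(rrs_red))
    # Simple two-class split on the same ratio, standing in for the
    # paper's classification flowchart.
    cls = np.where(x > class_thr, 1, 2)
    a = np.empty_like(x, dtype=float)
    for c, (b0, b1) in COEFFS.items():
        m = cls == c
        a[m] = 10.0 ** (b0 + b1 * x[m])
    return a
```

The appeal of band ratios, as the abstract notes, is that multiplicative atmospheric-correction errors partially cancel in the ratio, which is why this form is less sensitive to those uncertainties than single-band inversions.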
(This article belongs to the Special Issue Remote Sensing Band Ratios for the Assessment of Water Quality)
Graphical abstract. Figures: (1) Global distribution of the in situ and match-up data points for acdom (λ) and Rrs (λ), and locations of near-simultaneous Landsat-8/Sentinel-2 image pairs used for calibration of the acdom (412) product and for time-series analysis. (2) RGB of the Landsat-8 and Sentinel-2 images used for the sensitivity analysis; the red line marks the transect in front of the Ganh Hao River, Vietnam. (3) Flowchart of the proposed water-classification method. (4) Histograms of acdom (412) for DSM, DSM-1 (Class 1), and DSM-2 (Class 2). (5) ACOLITE-derived versus measured Rrs (λ) for the OLI and MSI match-up datasets. (6) Radar plots of algorithm performance for MSI and OLI in Class 1 and Class 2 waters. (7) OLI maps of the standard deviation of modeled acdom (412) for the algorithm combinations in Table 2. (8) MSI maps of the standard deviation of modeled acdom (412) at 60 m resolution for the combinations in Table 3. (9) AquaCDOM-derived versus measured acdom (412) for OLI and MSI from the DSM-D and DSM-V datasets, with histograms. (10) AquaCDOM versus SAVE for OLI on the DSM-V and Mu-CDOM datasets, with standard-deviation maps. (11) Same as Figure 10 but for MSI. (12) AquaCDOM versus MDN for acdom (440) on OLI, with standard-deviation maps. (13) Same as Figure 12 but for MSI. (14) OLI-derived versus MSI-derived acdom (412) over the 11 near-simultaneous scenes. (15) Temporal variability of acdom (412) along the Ganh Hao transect in relation to monthly accumulated rainfall and the monthly mean intraday high/low tide-height difference. (A1) Locations of the acdom (412) match-up points for OLCI on Sentinel-3, with in situ measurements from SeaBASS. (A2) AquaCDOM-derived versus measured acdom (412) for the OLCI match-up dataset. (A3) RGB composite of an OLI image over a Vietnamese coastal area, the spatial distribution of the two water classes, the acdom (412) distribution from AquaCDOM, and the values along a cross-shore transect.
13 pages, 4002 KiB  
Article
Waste Material Classification Based on a Wavelength-Sensitive Ge-on-Si Photodetector
by Anju Manakkakudy Kumaran, Andrea De Iacovo, Andrea Ballabio, Jacopo Frigerio, Giovanni Isella and Lorenzo Colace
Sensors 2024, 24(21), 6970; https://doi.org/10.3390/s24216970 - 30 Oct 2024
Abstract
Waste material classification is critical for efficient recycling and waste management. This study proposes a novel, low-cost material classification system based on a single, voltage-tunable Ge-on-Si photodetector operating across the visible and short-wave infrared (SWIR) spectral regions. Thanks to its tunability, the sensor is able to extract spectral information, and the system effectively distinguishes between seven different materials, including plastics, aluminum, glass, and paper. The system operates with a broadband illuminator, and material identification is obtained through the processing of the photocurrent signal at different bias voltages with classification algorithms. Here, we demonstrate the basic system functionality and near real-time classification of different waste materials.
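The processing chain implied by the figures (photocurrent sampled at several bias voltages, projected onto three principal components, then classified with KNN) can be sketched with scikit-learn on synthetic bias-response curves. The number of bias points and the classifier settings are assumptions consistent with, but not taken from, the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Synthetic stand-in: photocurrent at 16 bias voltages for 7 materials,
# each with a characteristic bias response plus measurement noise.
n_bias, n_materials, n_rep = 16, 7, 40
signatures = rng.normal(size=(n_materials, n_bias))
X = np.vstack([sig + 0.15 * rng.normal(size=(n_rep, n_bias))
               for sig in signatures])
y = np.repeat(np.arange(n_materials), n_rep)

clf = make_pipeline(StandardScaler(), PCA(n_components=3),
                    KNeighborsClassifier(n_neighbors=5))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.3f}")
```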
(This article belongs to the Section Electronic Sensors)
Figures: (1) Schematic of the device and its principle of operation, epitaxial structure, spectral responsivity of the dual-band photodetector at different voltage biases, and reflectance spectra of the waste materials. (2) Samples used in the research. (3) Schematic of the experimental setup. (4) Average photocurrent spectra for each material as a function of voltage bias. (5) Clustering results based on the first three principal components across a six-day dataset, colored by material. (6) Confusion matrix of combined data for the KNN classifier. (7) Confusion matrices of the classification model with 500 ms and 100 ms time delays. (8) Impact of voltage resolution on the overall accuracy of the classification model. (9) Confusion matrices for models 1–3 with a 500 ms time delay.
20 pages, 8023 KiB  
Article
Channel Interaction and Transformer Depth Estimation Network: Robust Self-Supervised Depth Estimation Under Varied Weather Conditions
by Jianqiang Liu, Zhengyu Guo, Peng Ping, Hao Zhang and Quan Shi
Sustainability 2024, 16(20), 9131; https://doi.org/10.3390/su16209131 - 21 Oct 2024
Abstract
Monocular depth estimation provides low-cost environmental information for intelligent systems such as autonomous vehicles and robots, supporting sustainable development by reducing reliance on expensive, energy-intensive sensors and making technology more accessible and efficient. However, in practical applications, monocular vision is highly susceptible to adverse weather conditions, significantly reducing depth perception accuracy and limiting its ability to deliver reliable environmental information. To improve the robustness of monocular depth estimation in challenging weather, this paper first utilizes generative models to adjust image exposure and generate synthetic images of rainy, foggy, and nighttime scenes, enriching the diversity of the training data. Next, a channel interaction module and a Multi-Scale Fusion Module are introduced. The former enhances information exchange between channels, while the latter effectively integrates multi-level feature information. Finally, an enhanced consistency loss is added to the loss function to prevent the depth estimation bias caused by data augmentation. Experiments on datasets such as DrivingStereo, Foggy CityScapes, and NuScenes-Night demonstrate that our method, CIT-Depth, exhibits superior generalization across various complex conditions.
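The consistency-loss idea can be illustrated with a short PyTorch sketch: predict depth on the original and the weather-augmented frame and penalize structural disagreement, since augmentation changes appearance but not geometry. The `depth_net` placeholder, the scale normalization, and the detached clean branch are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def consistency_loss(depth_net, img, img_aug, weight=0.1):
    """Penalize disagreement between the depths of an image and its
    weather/exposure-augmented version."""
    d_clean = depth_net(img)          # (B, 1, H, W) depth maps
    d_aug = depth_net(img_aug)
    # Scale-normalize so the term constrains structure, not absolute scale
    # (monocular depth is only defined up to scale).
    d_clean = d_clean / d_clean.mean(dim=(2, 3), keepdim=True)
    d_aug = d_aug / d_aug.mean(dim=(2, 3), keepdim=True)
    # Detaching the clean branch makes it the teacher: the augmented
    # prediction is pulled toward it, not the other way around.
    return weight * F.l1_loss(d_aug, d_clean.detach())
```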
(This article belongs to the Section Sustainable Transportation)
Show Figures

Figure 1

Figure 1
<p>Overview of our CIT-Depth architecture. Our CIT-Depth is composed of two parts: Depth Network and Pose Network [<a href="#B11-sustainability-16-09131" class="html-bibr">11</a>]. The Depth Network utilizes both convolutional layers and Transformer architecture.</p>
Full article ">Figure 2
<p>The depth encoder employs CNN and Transformer modules; each encoder stage consists of one convolutional module and three Transformer modules.</p>
Full article ">Figure 3
<p>Structure of the channel interaction module.</p>
Full article ">Figure 4
<p>Structure of the Global Channel Pooling module.</p>
Full article ">Figure 5
<p>Structure of the Multi-Scale Fusion Module.</p>
Full article ">Figure 6
<p>The enhanced images generated by GAN under various weather and time conditions are displayed in the following order from top to bottom: original input image, mixed exposure image, rainy image, foggy image, and nighttime image.</p>
Full article ">Figure 7
<p>Qualitative results on the KITTI dataset. The first row shows the input image, followed by the predicted results of Monodepth2 [<a href="#B11-sustainability-16-09131" class="html-bibr">11</a>], HR-Depth [<a href="#B38-sustainability-16-09131" class="html-bibr">38</a>], DIFFNet [<a href="#B42-sustainability-16-09131" class="html-bibr">42</a>], MonoViT [<a href="#B32-sustainability-16-09131" class="html-bibr">32</a>], and CIT-Depth (ours).</p>
Full article ">Figure 8
<p>Qualitative results on the Make3D dataset. CIT-Depth is compared with the predicted results of Monodepth2 [<a href="#B11-sustainability-16-09131" class="html-bibr">11</a>] and MonoViT [<a href="#B32-sustainability-16-09131" class="html-bibr">32</a>].</p>
Full article ">Figure 9
<p>A demonstration of the qualitative results from the DrivingStereo dataset. From left to right are images from the dataset representing sunny, rainy, foggy, and cloudy conditions.</p>
Full article ">Figure 10
<p>Qualitative results on the Foggy CityScapes dataset. The depth prediction results of Monodepth2 [<a href="#B11-sustainability-16-09131" class="html-bibr">11</a>], DIFFNet [<a href="#B42-sustainability-16-09131" class="html-bibr">42</a>], MonoViT [<a href="#B32-sustainability-16-09131" class="html-bibr">32</a>], and CIT-Depth under foggy conditions.</p>
Full article ">Figure 11
<p>Qualitative results on the NuScenes-Night dataset. The depth prediction results of Monodepth2 [<a href="#B11-sustainability-16-09131" class="html-bibr">11</a>], ADDS-Depth [<a href="#B48-sustainability-16-09131" class="html-bibr">48</a>], MonoViT [<a href="#B32-sustainability-16-09131" class="html-bibr">32</a>], and CIT-Depth under nighttime conditions.</p>
Full article ">
23 pages, 3210 KiB  
Article
Limb Temperature Observations in the Stratosphere and Mesosphere Derived from the OMPS Sensor
by Pedro Da Costa Louro, Philippe Keckhut, Alain Hauchecorne, Mustapha Meftah, Glen Jaross and Antoine Mangin
Remote Sens. 2024, 16(20), 3878; https://doi.org/10.3390/rs16203878 - 18 Oct 2024
Viewed by 726
Abstract
Molecular scattering (Rayleigh scattering) has been extensively used from the ground with lidars and from space to observe the limb, thereby deriving vertical temperature profiles between 30 and 80 km. In this study, we investigate how temperature can be measured using the new [...] Read more.
Molecular scattering (Rayleigh scattering) has been extensively used from the ground with lidars and from space to observe the limb, thereby deriving vertical temperature profiles between 30 and 80 km. In this study, we investigate how temperature can be measured using the new Ozone Mapping and Profiler Suite (OMPS) sensor, aboard the Suomi NPP and NOAA-21 satellites. The OMPS consists of three instruments whose main purpose is to study the composition of the stratosphere. One of these, the Limb Profiler (LP), measures the radiance of the limb of the middle atmosphere (stratosphere and mesosphere, 12 to 90 km altitude) at wavelengths from 290 to 1020 nm. This new data set has been used with a New Simplified Radiative Transfer Model (NSRTM) to derive temperature profiles with a vertical resolution of 1 km. To validate the method, the OMPS-derived temperature profiles were compared with data from four ground-based lidars and the ERA5 and MSIS models. The results show that OMPS and the lidars are in agreement within a range of about 5 K from 30 to 80 km. Comparisons with the models also show similar results, except for ERA5 beyond 50 km. We investigated various sources of bias, such as different attenuation sources, which can produce errors of up to 120 K in the UV range, instrumental errors around 0.8 K and noise problems of up to 150 K in the visible range for OMPS. This study also highlighted the interest in developing a new miniaturised instrument that could provide real-time observation of atmospheric vertical temperature profiles using a constellation of CubeSats with our NSRTM. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
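The retrieval principle in this abstract, deriving temperature from Rayleigh-scattered limb radiance, follows the classic scheme also used with Rayleigh lidars: the radiance gives a relative density profile, and temperature comes from downward hydrostatic integration plus the ideal gas law, seeded at an initialisation altitude. The NumPy sketch below illustrates only that core step on a synthetic exponential density profile; the constants and seed value are assumptions, and the paper's NSRTM additionally corrects for O3 and NO2 absorption, which this sketch omits.

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant [J/K]
m_air = 4.81e-26     # mean mass of an air molecule [kg] (assumed)
g = 9.5              # gravity, roughly constant over 30-80 km [m/s^2]

def temperature_from_density(z, n_rel, T_seed):
    """z: altitudes [m], ascending; n_rel: relative number density
    (proportional to Rayleigh-scattered radiance); T_seed: seed
    temperature [K] at the top (initialisation) altitude."""
    T = np.empty_like(n_rel)
    T[-1] = T_seed
    p = n_rel[-1] * k_B * T_seed          # seed pressure via ideal gas law
    for i in range(len(z) - 2, -1, -1):
        dz = z[i + 1] - z[i]
        # hydrostatic increment integrated downward: dp = rho * g * dz
        p += 0.5 * (n_rel[i] + n_rel[i + 1]) * m_air * g * dz
        T[i] = p / (n_rel[i] * k_B)       # ideal gas law at each layer
    return T

z = np.arange(30e3, 81e3, 1e3)            # 30-80 km, 1 km resolution
n_rel = 1e21 * np.exp(-(z - 30e3) / 7e3)  # synthetic exponential density
print(temperature_from_density(z, n_rel, T_seed=200.0)[:3])
```

Because only ratios of density enter the result, the absolute calibration of the radiance cancels, which is why the method needs a seed temperature rather than absolute radiometry.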
Show Figures

Figure 1

Figure 1
<p>In our case, the green part of the profile is unused, the red part is the profile measured by OMPS, and the violet part is simulated with an inverse exponential fitted to the red part. The initialisation altitude marks the ‘boundary’ between the measured and simulated parts.</p>
Full article ">Figure 2
<p>Example of the effect of noise correction on a daily profile in relation to the position of the site on La Réunion where the lidar is located. Temperature inversions are performed using several wavelength bands available on the OMPS instrument. On the left are the profiles with noise estimated from the latest channels. On the right are the profiles with noise estimated using the MSIS model as described in <a href="#sec2-remotesensing-16-03878" class="html-sec">Section 2</a>.</p>
Full article ">Figure 3
<p>Representation of the horizontal resolution for three tangent heights; each point represents a layer measured by OMPS. For example, on the blue curve, the first point at 113 km represents the distance observed by OMPS between the observed layer, here 30.5 km, and the next layer at 31.5 km; the second point at 43 km still represents the distance observed by OMPS at 30.5 km, but this time between the layers at 31.5 and 32.5 km; and so on.</p>
Full article ">Figure 4
<p>This diagram illustrates the path taken by the radiance in each layer observed by OMPS. At each layer, the radiance is scattered by air molecules and absorbed by ozone and nitrogen dioxide molecules. <math display="inline"><semantics> <msub> <mi>R</mi> <msub> <mi>Couche</mi> <mi>i</mi> </msub> </msub> </semantics></math> represents the radius from the Earth to the given layer, and <math display="inline"><semantics> <msub> <mi>D</mi> <msub> <mrow> <mi mathvariant="normal">c</mi> <mo>-</mo> <mi>Sat</mi> </mrow> <mi>i</mi> </msub> </msub> </semantics></math> represents the distance that the radiance in layer <span class="html-italic">i</span> crosses to reach the satellite. Similarly, <math display="inline"><semantics> <msub> <mi>D</mi> <msub> <mrow> <mi>sol</mi> <mo>-</mo> <mi mathvariant="normal">c</mi> </mrow> <mi>i</mi> </msub> </msub> </semantics></math> represents the distance travelled by the radiance arriving from the sun to the layer.</p>
Full article ">Figure 5
<p>The left figure shows the correction applied to the radiance profile per cm at different wavelengths, while the right figure shows the effect of this correction in kelvin on the temperature profiles at the same wavelengths.</p>
Full article ">Figure 6
<p>Share of Rayleigh scattering in total signal attenuation in % at different wavelengths. This figure should be read in conjunction with <a href="#remotesensing-16-03878-f005" class="html-fig">Figure 5</a>, Figure 8 and Figure 10 and provides a better understanding of the roles of Rayleigh scattering and O<sub>3</sub> and NO<sub>2</sub> absorption in the corrections applied to the radiance profile and, by extension, to the temperature profiles.</p>
Full article ">Figure 7
<p>Example of an O<sub>3</sub> profile measured by OMPS in the middle atmosphere.</p>
Full article ">Figure 8
<p>Share of O<sub>3</sub> absorption in total signal attenuation in % at different wavelengths.</p>
Full article ">Figure 9
<p>Example of a NO<sub>2</sub> profile in the middle atmosphere. WACCM gives an average profile per month for each year.</p>
Full article ">Figure 10
<p>Share of NO<sub>2</sub> absorption in total signal attenuation in % at different wavelengths.</p>
Full article ">Figure 11
<p>The upper figure shows a temperature profile obtained by OMPS without correction of the radiance profile; the lower figure is the same temperature profile but with correction of the radiance profile by our NSRTM. The temperature profiles of the ERA5 and MSIS 2.0 models and the lidar profile (in this case, the Réunion lidar) obtained on the same day show the extent of the correction.</p>
Full article ">Figure 12
<p>Example of Earth limb radiances measured by OMPS on 13 August 2012 at 45°N latitude [<a href="#B41-remotesensing-16-03878" class="html-bibr">41</a>].</p>
Full article ">Figure 13
<p>Annual temperature difference between OMPS and the lidar on Réunion Island at different wavelengths.</p>
Full article ">Figure 14
<p>Annual temperature difference between OMPS and the MSIS 2.0 model at different wavelengths. The effect of aerosols on temperature profiles can be seen between 20 and 30 km. As the wavelength increases, aerosol scattering takes precedence over molecular scattering. Lidars do not provide temperature data below 30 km, so we show this phenomenon using MSIS differences.</p>
Full article ">Figure 15
<p>Scatterplot and mean standard deviation of the temperature inversion method with OMPS obtained with each wavelength as a function of altitude.</p>
Full article ">Figure 16
<p>Comparisons of OMPS temperature profiles with ERA5, MSIS 2.0 and lidar. On the left are the differences between OMPS and the various sources compared; in the centre the standard deviation; and on the right the uncertainty on the standard deviation. In order from first to last line, the study sites are OHP, RUN, MLO and HOH.</p>
Full article ">Figure 17
<p>Deviation in temperature between OMPS and OHP. The red zone represents the calculated expected differences. The blue and yellow curves represent the temperature differences between OMPS and OHP at 2 and 3 standard deviations.</p>
Full article ">
12 pages, 2427 KiB  
Article
Validity and Reliability of a New Wearable Chest Strap to Estimate Respiratory Frequency in Elite Soccer Athletes
by Adriano Di Paco, Diego A. Bonilla, Rocco Perrotta, Raffaele Canonico, Erika Cione and Roberto Cannataro
Sports 2024, 12(10), 277; https://doi.org/10.3390/sports12100277 - 12 Oct 2024
Viewed by 1414
Abstract
Assessing respiratory frequency (fR) is practical in monitoring training progress in competitive athletes, especially during exercise. This study aimed to validate a new wearable chest strap (wCS) to estimate fR against ergospirometry as a criterion device in soccer players. [...] Read more.
Assessing respiratory frequency (fR) is practical for monitoring training progress in competitive athletes, especially during exercise. This study aimed to validate a new wearable chest strap (wCS) to estimate fR against ergospirometry as a criterion device in soccer players. A total of 26 elite professional soccer players (mean [standard deviation]: 23.6 [4.8] years; 180.6 [5.7] cm; 77.2 [5.4] kg) from three Italian Serie A League teams participated in this cross-sectional study. The sample included attackers, midfielders, and defenders. fR was assessed during a maximal cardiopulmonary exercise test (CPET) on a treadmill using (i) a breath-by-breath gas exchange analyzer (Vyntus® CPX, Vyaire Medical) and (ii) a novel wCS with sensors designed to assess breath frequency following chest expansions. Pearson’s correlation coefficient (r), adjusted coefficient of determination (aR2), Bland–Altman plot analysis, and Lin’s concordance correlation coefficient (ρc) were used for comparative analysis (correlation and concordance) among the methods. The repeated measures correlation coefficient (rrm) was used to assess the strength of the linear association between the methods. The intraclass correlation coefficient (ICC) and the Finn coefficient (rF) were used for inter-rater reliability. All statistical analyses were performed within the R statistical computing environment, with 95% confidence intervals (95% CIs) reported and statistical significance set at p < 0.05. A total of 16,529 comparisons were performed after collecting the CPET data. The robust time series analysis with Hodges–Lehmann estimation showed no significant differences between the two methods (p > 0.05). Correlation among devices was statistically significant and very large (r [95% CI]: 0.970 [0.970, 0.971], p < 0.01; aR2 [95% CI]: 0.942 [0.942, 0.943], p < 0.01) with strong evidence supporting consistency of the new wCS (BF10 > 100). In addition, a high concordance was found (ρc [95% CI]: 0.970 [0.969, 0.971], bias correction factor: 0.999). Vyntus™ CPX, as a standard criterion, showed moderate agreement with wCS after Bland–Altman analysis (bias [95% lower to upper limit of agreement]; % agreement: 0.170 [−4.582 to 4.923] breaths·min−1; 69.9%). A strong association between measurements (rrm [95% CI]: 0.960 [0.959, 0.961]), a high absolute agreement between methods (ICC [95% CI]: 0.970 [0.970, 0.971]), and high inter-rater reliability (rF: 0.947) were found. With an RMSE = 2.42 breaths·min−1, the new wCS appears to be a valid and reliable in-field method to evaluate fR compared to a breath-by-breath gas exchange analyzer. Nevertheless, caution is advised when the methods are used interchangeably until further external validation is performed. Full article
(This article belongs to the Special Issue Promoting and Monitoring Physical Fitness in All Contexts)
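The agreement statistics reported in the abstract (Bland–Altman bias and limits of agreement, Lin's ρc, RMSE) are straightforward to compute. The NumPy sketch below reproduces them on synthetic data whose noise level merely mimics the reported figures; the inputs are illustrative, not the study's measurements.

```python
import numpy as np

def bland_altman(ref, est):
    """Bias and 95% limits of agreement between two methods."""
    diff = est - ref
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

def lins_ccc(ref, est):
    """Lin's concordance correlation coefficient."""
    r = np.corrcoef(ref, est)[0, 1]
    s_r, s_e = ref.std(), est.std()
    return (2 * r * s_r * s_e) / (s_r**2 + s_e**2 + (ref.mean() - est.mean())**2)

rng = np.random.default_rng(0)
f_ref = rng.uniform(15, 60, 500)             # criterion fR, breaths/min
f_est = f_ref + rng.normal(0.17, 2.4, 500)   # strap estimate with noise
bias, lo, hi = bland_altman(f_ref, f_est)
print(f"bias={bias:.2f}, LoA=[{lo:.2f}, {hi:.2f}], ccc={lins_ccc(f_ref, f_est):.3f}")
print(f"RMSE={np.sqrt(np.mean((f_est - f_ref)**2)):.2f} breaths/min")
```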
Show Figures

Figure 1

Figure 1
<p>Novel wearable chest strap to measure respiratory frequency.</p>
Full article ">Figure 2
<p>Electronic board, both sides, with and without built-in cover.</p>
Full article ">Figure 3
<p>Measured and estimated respiratory frequency (<span class="html-italic">f</span><sub>R</sub>) values. The <span class="html-italic">f</span><sub>R</sub> is reported in breaths·min<sup>−1</sup>. The scatter plot shows individual measurements over time, with a smooth regression line highlighting the trend for both devices.</p>
Full article ">Figure 4
<p>(<b>A</b>) Pairwise scatter plot matrix, distribution, and Pearson correlation coefficient. The correlation plot includes histograms, density distributions, and a smooth regression line of the estimated and measured respiratory frequency (<span class="html-italic">f</span><sub>R</sub>) values. *** Statistical significance at <span class="html-italic">p</span> ≤ 0.001. (<b>B</b>) Repeated measures correlation concordance plot for each participant. Separate parallel lines are fitted to the data from each participant, and the corresponding line is shown in a different color. The blue dashed line is the fit of the simple correlation.</p>
Full article ">Figure 5
<p>Bland–Altman plot for differences between measured and estimated respiratory frequency (<span class="html-italic">f</span><sub>R</sub>) values. Individual differences between measured and estimated <span class="html-italic">f</span><sub>R</sub> values are plotted against the mean of the measured and estimated values.</p>
Full article ">
23 pages, 14242 KiB  
Article
EHNet: Efficient Hybrid Network with Dual Attention for Image Deblurring
by Quoc-Thien Ho, Minh-Thien Duong, Seongsoo Lee and Min-Cheol Hong
Sensors 2024, 24(20), 6545; https://doi.org/10.3390/s24206545 - 10 Oct 2024
Viewed by 775
Abstract
The motion of an object or camera platform makes the acquired image blurred. This degradation is a major reason to obtain a poor-quality image from an imaging sensor. Therefore, developing an efficient deep-learning-based image processing method to remove the blur artifact is desirable. [...] Read more.
The motion of an object or of the camera platform blurs the acquired image; this degradation is a major cause of poor-quality images from imaging sensors. Therefore, developing an efficient deep-learning-based image processing method to remove the blur artifact is desirable. Deep learning has recently demonstrated significant efficacy in image deblurring, primarily through convolutional neural networks (CNNs) and Transformers. However, the limited receptive fields of CNNs restrict their ability to capture long-range structural dependencies. In contrast, Transformers excel at modeling these dependencies, but they are computationally expensive for high-resolution inputs and lack the appropriate inductive bias. To overcome these challenges, we propose an Efficient Hybrid Network (EHNet) that employs CNN encoders for local feature extraction and Transformer decoders with a dual-attention module to capture spatial and channel-wise dependencies. This synergy facilitates the acquisition of rich contextual information for high-quality image deblurring. Additionally, we introduce the Simple Feature-Embedding Module (SFEM) to replace the pointwise and depthwise convolutions, generating simplified embedding features in the self-attention mechanism. This innovation substantially reduces computational complexity and memory usage while maintaining overall performance. Finally, through comprehensive experiments, our compact model yields promising quantitative and qualitative results for image deblurring on various benchmark datasets. Full article
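Per the abstract and the Figure 4 caption below, the SFEM replaces the usual pointwise-plus-depthwise pair with a single depthwise convolution whose output is split into attention branches. The PyTorch sketch below follows that description; the kernel size, channel multiplier, and branch ordering are assumptions beyond what the abstract states, so treat it as a plausible reading rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class SFEM(nn.Module):
    """Sketch of a Simple Feature-Embedding Module: one depthwise
    convolution (channel multiplier 3, no pointwise convolution)
    produces local features that are split channel-wise into
    query/key/value branches for self-attention."""
    def __init__(self, channels: int):
        super().__init__()
        # groups=channels keeps this a pure depthwise convolution
        self.dwconv = nn.Conv2d(channels, 3 * channels, kernel_size=3,
                                padding=1, groups=channels)

    def forward(self, x: torch.Tensor):
        return self.dwconv(x).chunk(3, dim=1)  # -> q, k, v

x = torch.randn(1, 32, 64, 64)
q, k, v = SFEM(32)(x)
print(q.shape, k.shape, v.shape)  # each torch.Size([1, 32, 64, 64])
```

Dropping the pointwise projection is what cuts the embedding cost: the depthwise layer has only 9 weights per output channel, versus the quadratic channel mixing of a 1 × 1 convolution.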
Show Figures

Figure 1

Figure 1
<p>Comparison of the proposed Efficient Hybrid Network (EHNet) and the state-of-the-art deep-learning methods on the GoPro dataset in terms of peak signal-to-noise ratio (PSNR), complexity, and network parameters. The circle size indicates the model size of each method.</p>
Full article ">Figure 2
<p>The overall architecture of our proposed EHNet for image deblurring.</p>
Full article ">Figure 3
<p>The detailed structure of the main components in the proposed network.</p>
Full article ">Figure 4
<p>The feature extraction process in self-attention modules. SFEM extracts simple local features via a single depthwise convolution, splits them into branches, and passes them to the next self-attention step.</p>
Full article ">Figure 5
<p>The structure of the Multi-Head Transposed Attention (MHTA) module.</p>
Full article ">Figure 6
<p>The detailed structure of MWSA.</p>
Full article ">Figure 7
<p>Qualitative comparisons of zoomed-in patches on the GoPro dataset.</p>
Full article ">Figure 8
<p>Qualitative comparisons of zoomed-in patches on the HIDE dataset.</p>
Full article ">Figure 9
<p>Qualitative comparisons of zoomed-in patches on the RealBlur-J dataset.</p>
Full article ">Figure 10
<p>Qualitative comparisons of zoomed-in patches on the RealBlur-R dataset.</p>
Full article ">Figure 11
<p>Visualization of the effectiveness of proposed modules.</p>
Full article ">