
Next Issue: Volume 24, September-2
Previous Issue: Volume 24, August-2
Sensors, Volume 24, Issue 17 (September-1 2024) – 409 articles

Cover Story (view full-size image): Triple-negative breast cancer (TNBC) is an aggressive type of breast cancer that is difficult to diagnose; early detection is therefore crucial for treatment and prognosis. Current diagnostic methods cannot identify the disease at an early stage, leading to poor survival rates, so tools and methods that detect TNBC earlier are critically needed. An electrochemical biosensor capable of identifying specific micro-RNA biomarkers for TNBC offers a promising solution. The biosensor platform developed in this work uses single-stranded DNA molecules to detect specific micro-RNA biomarkers for TNBC. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
28 pages, 7195 KiB  
Article
MEEAFusion: Multi-Scale Edge Enhancement and Joint Attention Mechanism Based Infrared and Visible Image Fusion
by Yingjiang Xie, Zhennan Fei, Da Deng, Lingshuai Meng, Fu Niu and Jinggong Sun
Sensors 2024, 24(17), 5860; https://doi.org/10.3390/s24175860 - 9 Sep 2024
Viewed by 1244
Abstract
Infrared and visible image fusion can integrate rich edge details and salient infrared targets, resulting in high-quality images suitable for advanced tasks. However, most available algorithms struggle to fully extract detailed features and overlook the interaction of complementary features across different modal images during the feature fusion process. To address this gap, this study presents a novel fusion method based on multi-scale edge enhancement and a joint attention mechanism (MEEAFusion). Initially, convolution kernels of varying scales were utilized to obtain shallow features with multiple receptive fields unique to the source image. Subsequently, a multi-scale gradient residual block (MGRB) was developed to capture the high-level semantic information and low-level edge texture information of the image, enhancing the representation of fine-grained features. Then, the complementary feature between infrared and visible images was defined, and a cross-transfer attention fusion block (CAFB) was devised with joint spatial attention and channel attention to refine the critical supplemental information. This allowed the network to obtain fused features that were rich in both common and complementary information, thus realizing feature interaction and pre-fusion. Lastly, the features were reconstructed to obtain the fused image. Extensive experiments on three benchmark datasets demonstrated that the MEEAFusion proposed in this research has considerable strengths in terms of rich texture details, significant infrared targets, and distinct edge contours, and it achieves superior fusion performance. Full article
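The abstract's "convolution kernels of varying scales" for multi-receptive-field edge extraction can be made concrete with a minimal NumPy sketch. This is illustrative only, not the authors' code: the larger receptive field is approximated here by blurring before a 3 × 3 Sobel, rather than by the paper's 5 × 5 gradient operator.

```python
import numpy as np

def conv2d(img, kernel):
    """'Same'-size 2-D convolution via zero padding (pure NumPy, for illustration)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

SOBEL3_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def gradient_magnitude(img, kernel_x):
    """Edge strength from horizontal and vertical Sobel responses."""
    gx = conv2d(img, kernel_x)
    gy = conv2d(img, kernel_x.T)
    return np.hypot(gx, gy)

def multi_scale_edges(img):
    """Edge maps at two receptive fields: a fine 3x3 Sobel, and the same Sobel
    applied after a 5x5 box blur (a stand-in for a larger gradient kernel)."""
    fine = gradient_magnitude(img, SOBEL3_X)
    coarse = gradient_magnitude(conv2d(img, np.ones((5, 5)) / 25.0), SOBEL3_X)
    return fine, coarse
```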
Figures:
Figure 1. Display of fusion results. IR and VIS denote the infrared and visible images; (a–g) show the fusion results of FusionGAN [8], IPLF [9], STDFusionNet [10], DenseFuse [11], RFN-Nest [12], PMGI [13], and FLFuse-Net [14], respectively. The red and green boxes outline the salient targets and detail regions.
Figure 2. MEEAFusion—overall framework.
Figure 3. MGRB module structure.
Figure 4. Gradient convolution results of the visible image. (a,b) show the 3 × 3 and 5 × 5 Sobel convolution results, respectively.
Figure 5. CAFB module structure.
Figure 6. Visual display of fusion results for scene 00537D.
Figure 7. Visual display of fusion results for scene 00878N.
Figure 8. Visual display of fusion results for scene 01024N.
Figure 9. Data distribution of fusion results for 36 pairs of MSRS images over the eight objective evaluation criteria. Each point (x, y) means that (100 × x)% of the fused images have metric values not exceeding y.
Figure 10. Visual display of fusion results for the bench scene. The salient regions are highlighted with red boxes.
Figure 11. Visual display of fusion results for the Kaptein_1123 scene. The salient and detailed regions are highlighted with red and green boxes.
Figure 12. Data distribution of fusion results for 20 pairs of TNO images over the eight objective evaluation criteria. Each point (x, y) means that (100 × x)% of the fused images have metric values not exceeding y.
Figure 13. Visual display of fusion results for scene FLIR_00006. The detailed regions are highlighted with red boxes.
Figure 14. Visual display of fusion results for scene FLIR_06570. The salient and detailed regions are highlighted with red and green boxes.
Figure 15. Data distribution of fusion results for 30 pairs of RoadScene images over the eight objective evaluation criteria. Each point (x, y) means that (100 × x)% of the fused images have metric values not exceeding y.
Figure 16. Visual results of the ablation experiment. The salient and detailed regions are highlighted with red and green boxes.
Figure 17. Visual display of YOLOv5s prediction results for fused images of scene 00479D.
Figure 18. Visual display of YOLOv5s prediction results for fused images of scene 01348N.
18 pages, 7620 KiB  
Article
Multi-Sensing Inspection System for Thermally Induced Micro-Instability in Metal-Based Selective Laser Melting
by Xing Peng, Rongjie Liao and Ziyan Zhu
Sensors 2024, 24(17), 5859; https://doi.org/10.3390/s24175859 - 9 Sep 2024
Viewed by 677
Abstract
Additive manufacturing (AM) excels in engineering intricate shapes, pioneering functional components, and lightweight structures. Nevertheless, components fabricated through AM often exhibit elevated residual stresses and a range of thermally induced micro-instabilities, including cracking, incomplete fusion, crazing, porosity, spheroidization, and inclusions. In response, this study proposed a sophisticated multi-sensing inspection system specifically tailored for the inspection of thermally induced micro-instabilities at the micro–nano scale. Simulation results substantiate that the modulation transfer function (MTF) values for each field of view in both the visible and infrared optical channels surpass the benchmark of 0.3, ensuring imaging fidelity sufficient for detailed examination. Furthermore, the system can discern and accurately capture data pertaining to thermally induced micro-instabilities across the visible and infrared spectra, seamlessly integrating this information into a backend image processing system within operational parameters of a 380–450 mm working distance and a 20–70 °C temperature range. Notably, the system's design is aligned with processing and assembly requirements, representing a significant advancement in the inspection of thermally induced micro-instabilities in AM components. Full article
(This article belongs to the Section Physical Sensors)
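As a rough illustration of the MTF benchmark quoted in the abstract: the MTF can be computed as the normalized Fourier magnitude of a line spread function (LSF), and the 0.3 threshold checked at a chosen spatial frequency. This sketch uses synthetic Gaussian LSFs, not the paper's optical model; the frequency index and widths are arbitrary assumptions.

```python
import numpy as np

def mtf_from_lsf(lsf):
    """MTF as the magnitude of the Fourier transform of the line spread
    function, normalized to 1 at zero spatial frequency."""
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

def meets_benchmark(lsf, freq_index, threshold=0.3):
    """Check the 0.3 MTF benchmark at one (illustrative) spatial frequency bin."""
    return mtf_from_lsf(lsf)[freq_index] >= threshold

# Synthetic Gaussian LSFs: a narrower LSF yields a higher MTF at any frequency.
x = np.linspace(-1.0, 1.0, 256)
sharp_lsf = np.exp(-(x / 0.05) ** 2)
blurred_lsf = np.exp(-(x / 0.20) ** 2)
```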
Figures:
Figure 1. Schematic diagram of the multi-sensor inspection system. IL: infrared imaging channel; VL: visible imaging channel; DM: beam splitter; RM: reflective mirror; ID: infrared imaging detector; VD: visible imaging detector; HUB: hub; PC: computer; SU: sensor unit.
Figure 2. Optical path diagram of the visible imaging channel.
Figure 3. Imaging quality evaluation of the visible channel: (a) MTF diagram; (b) field curvature and distortion diagram; (c) spot diagrams; (d) enclosed energy diagram; (e) relative illuminance diagram; (f) wavefront map.
Figure 4. MTF profiles of the visible imaging channel at different working distances.
Figure 5. Enclosed energy of the visible imaging channel at different working distances.
Figure 6. Optical path diagram of the infrared imaging channel.
Figure 7. Imaging quality evaluation of the infrared channel: (a) MTF diagram; (b) field curvature and distortion diagram; (c) spot diagrams; (d) enclosed energy diagram; (e) relative illuminance diagram; (f) wavefront map.
Figure 8. MTF profiles of the infrared imaging channel at different working distances.
Figure 9. Enclosed energy of the infrared imaging channel at different working distances.
Figure 10. MTF profiles of the visible channel imaging system at various working temperatures.
Figure 11. MTF profiles of the infrared channel imaging system at various working temperatures.
Figure 12. Tolerance analysis of the visible channel imaging system.
Figure 13. Tolerance analysis results of the infrared channel imaging system.
22 pages, 5266 KiB  
Article
Self-Supervised Dam Deformation Anomaly Detection Based on Temporal–Spatial Contrast Learning
by Yu Wang and Guohua Liu
Sensors 2024, 24(17), 5858; https://doi.org/10.3390/s24175858 - 9 Sep 2024
Viewed by 757
Abstract
The detection of anomalies in dam deformation is paramount for evaluating structural integrity and facilitating early warnings, representing a critical aspect of dam health monitoring (DHM). Conventional data-driven methods for dam anomaly detection depend extensively on historical data; however, obtaining annotated data is both expensive and labor-intensive. Consequently, methodologies that leverage unlabeled or semi-labeled data are increasingly gaining popularity. This paper introduces a spatiotemporal contrastive learning pretraining (STCLP) strategy designed to extract discriminative features from unlabeled dam deformation datasets. STCLP combines spatial contrastive learning with temporal contrastive learning to capture representations embodying both spatial and temporal characteristics. Building upon this, a novel anomaly detection method for dam deformation utilizing STCLP is proposed. This method transfers pretrained parameters to the targeted downstream classification task and leverages prior knowledge for enhanced fine-tuning. For validation, an arch dam serves as the case study. The results reveal that the proposed method demonstrates excellent performance, surpassing other benchmark models. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
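Contrastive pretraining of the kind the abstract describes rests on pulling an anchor toward its positive sample and away from negatives. A minimal InfoNCE/NT-Xent-style loss for one anchor can be sketched as below; the function name, temperature value, and toy vectors are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def nt_xent_pair_loss(z_i, z_j, negatives, temperature=0.5):
    """InfoNCE/NT-Xent-style loss for one anchor z_i, its positive z_j, and a
    list of negative embeddings. Cosine similarity, softmax over all candidates;
    lower loss means the anchor is closer to its positive than to negatives."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Positive similarity first, then the negatives.
    sims = np.array([cos(z_i, z_j)] + [cos(z_i, n) for n in negatives])
    sims = sims / temperature
    sims -= sims.max()                       # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()
    return -np.log(probs[0])                 # positive sits at index 0
```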
Figures:
Figure 1. Overall framework of the proposed method.
Figure 2. Diagram of STCLP.
Figure 3. Diagram of data augmentation.
Figure 4. Definition of positive and negative samples.
Figure 5. Perspectives of the arch dam.
Figure 6. Pendulum systems for monitoring horizontal deformation.
Figure 7. Deformation series of monitoring data for selected sensors.
Figure 8. Matrix graph of the deformation correlation coefficients between different monitoring points.
Figure 9. Loss and accuracy curves.
Figure 10. Anomaly detection results of seven representative measuring points.
Figure 11. Comparison with other state-of-the-art methods.
Figure 12. Sensitivity analysis results: (a) the impact of time steps on performance; (b) the impact of weight changes on performance.
28 pages, 8907 KiB  
Article
Design and Development of an Automatic Layout Algorithm for Laser GNSS RTK
by Jiazhi Tang, Xuan Sun, Xianjian Lu, Jiguang Jia and Shihua Tang
Sensors 2024, 24(17), 5857; https://doi.org/10.3390/s24175857 - 9 Sep 2024
Viewed by 576
Abstract
At the current stage, the automation level of GNSS RTK equipment is low, and manual operation leads to decreased accuracy and efficiency in setting out. To address these issues, this paper designs an automatic setting-out algorithm that resolves the common problem of reduced accuracy in conventional RTK. First, the laser rotation center is calculated using relevant parameters to calibrate the instrument's posture and angle. Then, by analyzing the posture information, the relative position and direction of the instrument with respect to the point to be set out are determined, and the rotation angles in the horizontal and vertical directions are calculated. Next, the data are analyzed and the obtained rotation angles are output to achieve automatic control of the instrument. Finally, a rotating laser composed of servo motors and laser modules steers the GNSS RTK equipment to the set-out point, thereby determining its position on the ground and displaying it in real time. Compared to traditional GNSS RTK equipment, the proposed automatic setting-out algorithm and the developed GNSS laser RTK equipment reduce the setting-out error from 15 mm to 10.3 mm. This lowers the barrier to using GNSS RTK equipment, minimizes human influence, enhances the work efficiency of setting-out measurements, and ensures high efficiency and stability under complex conditions. Full article
(This article belongs to the Section Optical Sensors)
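The horizontal and vertical rotation angles toward the set-out point follow from simple geometry once the laser rotation center is known. The sketch below is a hypothetical simplification in local east/north/up coordinates; the paper's posture and angle calibration is omitted, and the function and parameter names are assumptions.

```python
import math

def rotation_angles(instrument_enu, target_enu, heading_deg=0.0):
    """Horizontal rotation (relative to the instrument heading, clockwise from
    north) and vertical rotation angle, in degrees, from the laser rotation
    centre to the point to be set out. Coordinates are local east/north/up."""
    de = target_enu[0] - instrument_enu[0]
    dn = target_enu[1] - instrument_enu[1]
    du = target_enu[2] - instrument_enu[2]
    azimuth = math.degrees(math.atan2(de, dn))          # clockwise from north
    horizontal = (azimuth - heading_deg) % 360.0
    vertical = math.degrees(math.atan2(du, math.hypot(de, dn)))
    return horizontal, vertical
```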
Figures:
Figure 1. GNSS laser RTK inertial navigation schematic.
Figure 2. Main content.
Figure 3. Centering pole in an inclined posture.
Figure 4. (a) Front, (b) left, and (c) top views of the motor and laser installation method.
Figure 5. System in the plumb state.
Figure 6. Top view of the center of laser rotation and the RTK phase center in the plumb case.
Figure 7. Schematic diagram of the pole tip inclination angle.
Figure 8. Schematic diagram of motor rotation points.
Figures 9–12. Error distribution charts for roll/pitch angle errors of 0.05° or 0.1° combined with heading angle errors of 0.2° or 1.0°.
Figures 13–16. Error distribution charts for the same four combinations of roll/pitch and heading angle errors.
Figure 17. Schematic diagram of motor rotation error.
Figure 18. Motor rotation angle error m′ = 0.02°.
Figure 19. Motor rotation angle error m′ = 0.05°.
Figure 20. GNSS laser RTK automatic layout engineering prototype.
Figure 21. P1–P15 point distribution.
34 pages, 14710 KiB  
Article
Research on Spatiotemporal Continuous Information Perception of Overburden Compression–Tensile Strain Transition Zone during Mining and Integrated Safety Guarantee System
by Gang Cheng, Ziyi Wang, Bin Shi, Tianlu Cai, Minfu Liang, Jinghong Wu and Qinliang You
Sensors 2024, 24(17), 5856; https://doi.org/10.3390/s24175856 - 9 Sep 2024
Viewed by 767
Abstract
The mining of deep underground coal seams induces movement, failure, and collapse of the overlying rock–soil body, and the propagation of this damage to the surface causes ground fissures and ground subsidence. To ensure safety throughout the life cycle of the mine, fully distributed, real-time, and continuous sensing and early warning are essential. However, because mining is a process that evolves in time and space, the overburden movement and collapse induced by mining activities often show a time lag effect. A key step in investigating the overburden deformation mechanism and mining subsidence is therefore to overcome the discontinuity of existing overburden deformation monitoring technology, obtain spatiotemporally continuous information on the strata overlying the coal seam accurately and in real time, and clarify the whole deformation process in the overburden compression–tensile strain transition zone. On this basis, firstly, the advantages and disadvantages of in situ observation technologies for the mine rock–soil body were compared and analyzed at five levels (survey, remote sensing, testing, exploration, and monitoring), and a deformation and failure perception technology based on spatiotemporal continuity was proposed. Secondly, the evolution characteristics and deformation failure mechanism of the overburden compression–tensile strain transition zone were summarized from three aspects: the typical modes of deformation and collapse of the overlying rock–soil body, the key controlling factors of deformation and failure in the transition zone, and the stability evaluation of overburden based on reliability theory. Finally, the spatiotemporal continuous perception technology of overburden deformation based on DFOS is introduced in detail, and an integrated safety guarantee system for coal seam mining overburden is proposed. The results of this research can provide an important evaluation basis for the design of mining intensity, emergency decisions, and risk disposal, and they can also guide the assessment of ground geological and ecological restoration and management necessitated by underground coal mining. Full article
(This article belongs to the Special Issue Recent Advances in Optical Sensor for Mining)
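For the DFOS technologies the abstract mentions, Brillouin-based sensing (BOTDR/BOFDA) recovers strain from the measured Brillouin frequency shift. The sketch below uses typical calibration constants for standard single-mode fibre near 1550 nm (roughly 0.05 MHz per microstrain); both the reference frequency and the coefficient are assumptions that must be calibrated for a real sensing cable, and the zone classifier is a crude illustrative stand-in for locating the compression–tensile transition zone.

```python
def botdr_strain_ue(nu_b_mhz, nu_b0_mhz=10850.0, c_eps_mhz_per_ue=0.05):
    """Strain (microstrain) from a measured Brillouin frequency (MHz), using a
    typical zero-strain reference frequency and strain coefficient for standard
    single-mode fibre near 1550 nm. Calibrate both for a real cable."""
    return (nu_b_mhz - nu_b0_mhz) / c_eps_mhz_per_ue

def compression_tension_zones(strains_ue, threshold_ue=50.0):
    """Label each sampling point along the cable as tension (+), compression (-),
    or neutral, relative to an arbitrary illustrative threshold."""
    labels = []
    for s in strains_ue:
        if s > threshold_ue:
            labels.append("tension")
        elif s < -threshold_ue:
            labels.append("compression")
        else:
            labels.append("neutral")
    return labels
```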
Figures:
Figure 1. Structure of energy consumption in China, 2019–2023.
Figure 2. Mine accidents and geological disasters caused by mining: (a) roadway deformation; (b) mine water inrush; (c) ground subsidence; (d) induced landslide.
Figure 3. Statistics of coal mine accidents in China, 2014–2023.
Figure 4. Development and evolution of the stope structure model.
Figure 5. Theoretical model of overburden stress distribution, where σz is the peak stress of coal pillars, k is the stress concentration coefficient, γ is the average bulk density of the rock layer overlying the coal seam (kN/m³), and h is the burial depth of the coal seam (m).
Figure 6. Typical types of overburden deformation: (a) bending and tensile failure; (b) overall shear failure; (c) shear and sliding failure.
Figure 7. The process of gray relational analysis.
Figure 8. Schematic diagram of the probability integration method.
Figure 9. Prediction process of overburden stability.
Figure 10. Bayesian-based overburden stability evaluation.
Figure 11. Principles of FBG and DFOS technologies: (a) FBG (fiber Bragg grating); (b) UWFBG (ultra-weak fiber Bragg grating); (c) BOTDR (Brillouin optical time-domain reflectometry); (d) DTS (distributed temperature sensing).
Figure 12. Temperature test results at different leakage pressures (a–d), and strain changes in sensing cables in different layers at a leakage pressure of 1 MPa (① bottom layer; ② middle layer; ③ top layer).
Figure 13. Set-up of the digital BOFDA system (DAC: digital-to-analog converter; ADC: analog-to-digital converter).
Figure 14. Monitoring system layout and results.
Figure 15. Strain curve of the decoupling test.
Figure 16. Strain distribution of overburden deformation.
Figure 17. Backfilling material test model.
Figure 18. Sensing cable layout for the ground monitoring system: (a) cable layout; (b) borehole backfill; (c) cable coupled with borehole; (d) cable protection.
Figure 19. Sensing cable layout for the underground monitoring system: (a) cable layout; (b) borehole drilling; (c) grouting; (d) cable implantation; (e) optical fiber monitoring result; (f) electrical method monitoring result.
Figure 20. Layout process of the pullout test and distribution of sensing cable strain data.
Figure 21. Three-stage model of the pullout force–displacement relationship. The blue lines indicate different pullout force distributions; the five Roman numerals mark the pure elastic, elastic–softening, pure softening, softening–residual, and pure residual stages.
Figure 22. Coupling test for sensing cable–soil under controllable confining pressure: (a) diagram of the test device; (b) curves of ground subsidence and calculated ground pressure [40].
Figure 23. Integrated safety guarantee system for coal mining.
Figure 24. Neural perception of the rock–soil body.
Figure 25. Self-diagnostic, self-healing FBG sensing network system. The fiber can self-heal and self-diagnose: the cross mark indicates that after the upper fiber breaks, monitoring switches to the fiber below, so monitoring is uninterrupted; the dashed lines indicate that the two fibers can be switched.
Figure 26. Modeling of overburden deformation prediction based on machine learning.
Figure 27. Early warning levels for overburden stability.
Figure 28. Integrated spatiotemporal continuous sensing system.
21 pages, 5536 KiB  
Article
A Machine Learning Approach for Path Loss Prediction Using Combination of Regression and Classification Models
by Ilia Iliev, Yuliyan Velchev, Peter Z. Petkov, Boncho Bonev, Georgi Iliev and Ivaylo Nachev
Sensors 2024, 24(17), 5855; https://doi.org/10.3390/s24175855 - 9 Sep 2024
Viewed by 1147
Abstract
One of the key parameters in radio link planning is the propagation path loss. Most existing methods for its prediction do not strike a good balance between accuracy, generality, and low computational complexity. To address this problem, a machine learning approach for path loss prediction is presented in this study. The novelty is a compound model consisting of two regression models and one classifier. The first regression model is adequate when a line-of-sight scenario holds for radio wave propagation, whereas the second is appropriate for non-line-of-sight conditions. The classification model provides a probabilistic output through which the outputs of the regression models are combined. Only five input parameters are used; they are related to the distance, the antenna heights, and the statistics of the terrain profile and line-of-sight obstacles. The proposed approach allows the creation of a generalized model that is valid for various types of areas and terrains, different antenna heights, and both line-of-sight and non-line-of-sight propagation conditions. An experimental dataset was collected by measurements over a variety of relief types (flat, hilly, mountain, and foothill) and in rural, urban, and suburban areas. The experimental results show excellent performance, with a root-mean-square prediction error as low as 7.3 dB and a coefficient of determination as high as 0.702. Although the study covers only one operating frequency, 433 MHz, the proposed model can be trained and applied at any frequency in the decimeter wavelength range. This operating frequency was chosen because many wireless systems of different types operate in that range, including Internet of Things (IoT) devices, machine-to-machine (M2M) mesh radio networks, and power-efficient long-range communication such as Low-Power Wide-Area Network (LPWAN) LoRa. Full article
(This article belongs to the Topic Advances in Wireless and Mobile Networking)
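The abstract's "soft" and "hard" ways of combining the LOS and NLOS regression outputs via the classifier's probabilistic output, and the RMSE figure it quotes, can be sketched as follows. This is an illustration of the idea only; the function names and the 0.5 decision threshold are assumptions, not the paper's code.

```python
def combine_path_loss(p_los, loss_los_db, loss_nlos_db, soft=True):
    """Combine the LOS-model and NLOS-model path-loss predictions (dB) using the
    classifier's LOS probability: a probability-weighted average ("soft") or
    winner-takes-all selection ("hard")."""
    if soft:
        return p_los * loss_los_db + (1.0 - p_los) * loss_nlos_db
    return loss_los_db if p_los >= 0.5 else loss_nlos_db

def rmse_db(true_db, pred_db):
    """Root-mean-square prediction error in dB over a set of samples."""
    n = len(true_db)
    return (sum((t - p) ** 2 for t, p in zip(true_db, pred_db)) / n) ** 0.5
```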
Figures:
Figure 1. Architecture of the compound model for path loss prediction, in two variants (a,b) of combining the outputs of the two regression models.
Figure 2. Architectures of the proposed individual models (a,b).
Figure 3. Measurement setup used to create the experimental dataset.
Figure 4. The J-pole antenna used; the length of each element is given in the accompanying table.
Figure 5. Map of geographical points associated with measurement records for the town of Septemvri (Bulgaria), rural and suburban areas.
Figure 6. Map of geographical points associated with measurement records for the town of Belogradchik (Bulgaria), rural and suburban areas.
Figure 7. Map of geographical points associated with measurement records for two selected places in Sofia (Bulgaria), urban and suburban areas.
Figure 8. Loss function minimization during training of Models A and B (a,b).
Figure 9. Loss and accuracy (Acc) curves during training and validation of Model P.
Figure 10. ROC curve of Model P (red line), AUC = 0.889; the black dashed line is the random-chance line.
Figure 11. Confusion matrix of Model P (label "A" corresponds to the LOS scenario; "B" to NLOS).
Figure 12. Boxplot diagrams of the prediction error of Models A and B.
Figure 13. Boxplot diagrams of the prediction error of the compound model with "soft" and "hard" combinations of the outputs.
Figure 14. Histograms of the prediction error of the compound model with (a) "soft" and (b) "hard" combinations of the outputs, normalized by the total number of elements.
Figure 15. True and predicted outputs of the compound model with the "soft" combination versus distance d and effective antenna height h_eff (all samples from the dataset).
35 pages, 9672 KiB  
Article
Design and Modelling of MEMS Vibrating Internal Ring Gyroscopes for Harsh Environments
by Waqas Amin Gill, Ian Howard, Ilyas Mazhar and Kristoffer McKee
Sensors 2024, 24(17), 5854; https://doi.org/10.3390/s24175854 - 9 Sep 2024
Viewed by 846
Abstract
This paper presents a design, model, and comparative analysis of two internal MEMS vibrating ring gyroscopes for harsh environmental conditions. The proposed design investigates the symmetric structure of the vibrating ring gyroscopes that operate at the identical shape of wine glass mode resonance frequencies for both driving and sensing purposes. This approach improves the gyroscope’s sensitivity and precision in rotational motion. The analysis starts with an investigation of the dynamic behaviour of the vibrating ring gyroscope with the detailed derivation of motion equations. The design geometry, meshing technology, and simulation results were comprehensively evaluated on two internal vibrating ring gyroscopes. The two designs are distinguished by their support spring configurations and internal ring structures. Design I consists of eight semicircular support springs and Design II consists of sixteen semicircular support springs. These designs were modelled and analyzed using finite element analysis (FEA) in Ansys 2023 R1 software. This paper further evaluates static and dynamic performance, emphasizing mode matching and temperature stability. The results reveal that Design II, with additional support springs, offers better mode matching, higher resonance frequencies, and better thermal stability compared to Design I. Additionally, electrostatic, modal, and harmonic analyses highlight the gyroscope’s behaviour under varying DC voltages and environmental conditions. Furthermore, this study investigates the impact of temperature fluctuations on performance, demonstrating the robustness of the designs within a temperature range from −100 °C to 100 °C. These research findings suggest that the internal vibrating ring gyroscopes are highly suitable for harsh conditions such as high temperature and space applications. Full article
(This article belongs to the Special Issue Application of MEMS/NEMS-Based Sensing Technology)
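The reported stability of the resonance frequencies from −100 °C to 100 °C follows from the temperature dependence of silicon's Young's modulus: a resonator's frequency scales with the square root of the modulus, so a linear modulus model gives a quick first-order estimate of the shift. A minimal sketch, assuming a nominal temperature coefficient of about −60 ppm/K for silicon (an illustrative value, not taken from the paper):

```python
import math

def resonance_shift(f0_hz, temp_c, ref_temp_c=25.0, tce_per_k=-60e-6):
    """Estimate the temperature-shifted resonance frequency of a silicon
    resonator, assuming f scales with sqrt(E) and a linear temperature
    coefficient of Young's modulus (tce_per_k; -60 ppm/K is a commonly
    quoted value for silicon, used here purely for illustration)."""
    e_ratio = 1.0 + tce_per_k * (temp_c - ref_temp_c)  # E(T) / E(T_ref)
    return f0_hz * math.sqrt(e_ratio)
```

Under this assumption, a 50 kHz mode drops by only a few tens of hertz over a 75 K rise, consistent with the small frequency drifts the thermal analyses examine.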
Show Figures

Figure 1
<p>A schematic illustration of a dynamic system of a MEMS vibrating ring gyroscope.</p>
Full article ">Figure 2
<p>An illustration of the reference frames in terms of a ring resonator.</p>
Full article ">Figure 3
<p>A resonant structure model for the MEMS internal ring gyroscope, shown with eight separate masses.</p>
Full article ">Figure 4
<p>An elliptical mode of vibration for a ring resonator and displacement illustration for primary and secondary vibrational modes.</p>
Full article ">Figure 5
<p>Driving and sensing mechanism of a ring resonator.</p>
Full article ">Figure 6
<p>A cross-sectional view of the ring structure with centreline radius R and other design parameters.</p>
Full article ">Figure 7
<p>An illustration of the force reactions on the semicircular support springs: (<b>a</b>) forces and moments experienced by the semicircular support springs; (<b>b</b>) normal forces; (<b>c</b>) moments.</p>
Full article ">Figure 8
<p>Design I: MEMS vibrating internal ring gyroscope with eight semicircular support springs.</p>
Full article ">Figure 9
<p>Design II: MEMS vibrating internal ring gyroscope with sixteen semicircular support springs.</p>
Full article ">Figure 10
<p>Electrostatic actuation modelling results for MEMS vibrating internal ring gyroscope for Design I and Design II.</p>
Full article ">Figure 11
<p>The FEA modal analysis for Design I.</p>
Full article ">Figure 12
<p>The FEA modal analysis for Design II.</p>
Full article ">Figure 13
<p>A schematic representation of the electrode scheme for electrostatic actuation; the red dotted arrows indicate an enlarged view of the driving electrode.</p>
Full article ">Figure 14
<p>A harmonic response analysis for the MEMS vibrating internal ring gyroscope Design I.</p>
Full article ">Figure 15
<p>A harmonic response analysis for the MEMS vibrating internal ring gyroscope Design II.</p>
Full article ">Figure 16
<p>The effect of temperature fluctuations on the Young’s modulus of silicon; red crosses mark the values at different temperatures.</p>
Full article ">Figure 17
<p>Design I: thermal stresses developed at (<b>a</b>) −100 °C, (<b>b</b>) +100 °C.</p>
Full article ">Figure 18
<p>Design II: thermal stresses developed at (<b>a</b>) −100 °C, (<b>b</b>) +100 °C.</p>
Full article ">Figure 19
<p>Design I: thermal strains developed at (<b>a</b>) −100 °C, (<b>b</b>) +100 °C.</p>
Full article ">Figure 20
<p>Design II: thermal strains developed at (<b>a</b>) −100 °C, (<b>b</b>) +100 °C.</p>
Full article ">Figure 21
<p>Thermal stresses vs. temperature changes for Design I and Design II.</p>
Full article ">Figure 22
<p>Thermal strains vs. temperature changes for Design I and Design II.</p>
Full article ">Figure 23
<p>Design I: thermal deformation vs. temperature changes. (<b>a</b>) Deformation at −100 °C. (<b>b</b>) Deformation at +100 °C.</p>
Full article ">Figure 24
<p>Design II: thermal deformation vs. temperature changes. (<b>a</b>) Deformation at −100 °C. (<b>b</b>) Deformation at +100 °C.</p>
Full article ">Figure 25
<p>Thermal deformation vs. temperature changes for MEMS vibrating internal ring gyroscopes.</p>
Full article ">Figure 26
<p>Design I: effect of temperature changes on resonance frequencies.</p>
Full article ">Figure 27
<p>Design II: effect of temperature changes on resonance frequencies.</p>
Full article ">Figure 28
<p>Design I: mode mismatch resonance frequencies.</p>
Full article ">Figure 29
<p>Design II: mode mismatch resonance frequencies.</p>
Full article ">
22 pages, 4716 KiB  
Article
Designing of Airspeed Measurement Method for UAVs Based on MEMS Pressure Sensors
by Zhipeng Chen, Haojie Li, Hang Yu, Yuan Zhao, Jing Ma, Chuanhao Zhang and He Zhang
Sensors 2024, 24(17), 5853; https://doi.org/10.3390/s24175853 - 9 Sep 2024
Viewed by 3291
Abstract
Airspeed measurement is crucial for UAV control. To achieve accurate airspeed measurements for UAVs, this paper calculates airspeed data by measuring changes in air pressure and temperature. Based on this, a data processing method based on mechanical filtering and the improved AR-SHAKF algorithm [...] Read more.
Airspeed measurement is crucial for UAV control. To achieve accurate airspeed measurements for UAVs, this paper calculates airspeed from measured changes in air pressure and temperature. Based on this, a data processing method combining mechanical filtering with an improved AR-SHAKF algorithm is proposed to indirectly measure airspeed with high precision. First, a mathematical model of the airspeed measurement system was established, and an installation method for the pressure sensors was designed to measure the total pressure, static pressure, and temperature. Secondly, the measurement principle of the sensor was analyzed, and a metal tube was installed to act as a mechanical filter, which is particularly useful where the aircraft itself significantly disturbs the surrounding flow field. Furthermore, a time series model was used to establish the sensor state equation and the initial noise values. The Sage–Husa adaptive filter was also enhanced to counteract the unavoidable errors introduced by the initial noise values: by constraining the range of the measurement noise, it achieves adaptive noise estimation. To validate the superiority of the proposed method, a low-complexity airspeed measurement device based on MEMS pressure sensors was designed. The results demonstrate that the airspeed measurement device and the designed velocity measurement method can effectively calculate airspeed with high measurement accuracy and strong interference resistance. Full article
(This article belongs to the Section Physical Sensors)
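The indirect measurement principle — airspeed computed from total pressure, static pressure, and temperature — can be sketched with the incompressible Bernoulli relation and the ideal gas law. A simplified illustration only; the paper's mechanical filtering and AR-SHAKF processing of the raw signals are not reproduced here:

```python
import math

R_DRY_AIR = 287.05  # specific gas constant for dry air, J/(kg*K)

def airspeed_from_pressures(p_total_pa, p_static_pa, temp_k):
    """Airspeed from pitot (total) and static pressure using the
    incompressible Bernoulli relation v = sqrt(2*q/rho), with air
    density rho obtained from the ideal gas law."""
    q = p_total_pa - p_static_pa              # dynamic pressure, Pa
    if q <= 0.0:
        return 0.0
    rho = p_static_pa / (R_DRY_AIR * temp_k)  # air density, kg/m^3
    return math.sqrt(2.0 * q / rho)
```

At sea-level standard conditions (101,325 Pa, 288.15 K), a dynamic pressure of about 551 Pa corresponds to roughly 30 m/s, which shows why small MEMS pressure errors translate into noticeable airspeed errors and motivate the filtering stages.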
Show Figures

Figure 1
<p>The airspeed measurement flowchart.</p>
Full article ">Figure 2
<p>Schematic diagram of airspeed measurement device structure.</p>
Full article ">Figure 3
<p>Schematic diagram of improved structure for airspeed measurement device.</p>
Full article ">Figure 4
<p>Comparison diagram of air pressure sensor retrofit with metal tube. (<b>a</b>) Air pressure sensor; (<b>b</b>) Retrofitting metal tube air pressure sensor.</p>
Full article ">Figure 5
<p>Sensor static experiment setup.</p>
Full article ">Figure 6
<p>Sensor static measurement data.</p>
Full article ">Figure 7
<p>Sensor Allan variance plot; (<b>a</b>) Sensor 1; (<b>b</b>) Sensor 2.</p>
Full article ">Figure 8
<p>ACF and PACF plots for Sensor 1 and Sensor 2; (<b>a</b>) ACF plot for Sensor 1; (<b>b</b>) PACF plot for Sensor 1; (<b>c</b>) ACF plot for Sensor 2; (<b>d</b>) PACF plot for Sensor 2.</p>
Full article ">Figure 9
<p>Comparison of static data filtering.</p>
Full article ">Figure 10
<p>Allan variance plot of filtered static sensor data; (<b>a</b>) Sensor 1; (<b>b</b>) Sensor 2.</p>
Full article ">Figure 11
<p>Wind tunnel test diagram of airspeed measurement device.</p>
Full article ">Figure 12
<p>Experiment 1 data.</p>
Full article ">Figure 13
<p>Comparison of sensor signal filtering; (<b>a</b>) Total pressure signal; (<b>b</b>) Static pressure signal.</p>
Full article ">Figure 14
<p>Comparison of calculated airspeed results with true airspeed values.</p>
Full article ">Figure 15
<p>Wind tunnel test diagram of airspeed measurement device without metal tubes.</p>
Full article ">Figure 16
<p>Experiment 2 data.</p>
Full article ">Figure 17
<p>Comparison of airspeed calculation results with true airspeed.</p>
Full article ">
18 pages, 4323 KiB  
Article
One-Dimensional ZnO Nanorod Array Grown on Ag Nanowire Mesh/ZnO Composite Seed Layer for H2 Gas Sensing and UV Detection Applications
by Fang-Hsing Wang, An-Jhe Li, Han-Wen Liu and Tsung-Kuei Kang
Sensors 2024, 24(17), 5852; https://doi.org/10.3390/s24175852 - 9 Sep 2024
Cited by 1 | Viewed by 613
Abstract
Photodetectors and gas sensors are vital in modern technology, spanning from environmental monitoring to biomedical diagnostics. This paper explores the UV detection and gas sensing properties of a zinc oxide (ZnO) nanorod array (ZNA) grown on silver nanowire mesh (AgNM) using a hydrothermal [...] Read more.
Photodetectors and gas sensors are vital in modern technology, spanning applications from environmental monitoring to biomedical diagnostics. This paper explores the UV detection and gas sensing properties of a zinc oxide (ZnO) nanorod array (ZNA) grown on a silver nanowire mesh (AgNM) using a hydrothermal method. We examined the impact of different zinc acetate precursor concentrations on their properties. Results show that the AgNM forms a network with high transparency (79%) and low sheet resistance (7.23 Ω/□). A sol–gel ZnO thin film was coated on this mesh, providing a seed layer with a hexagonal wurtzite structure. Increasing the precursor concentration alters the diameter, length, and area density of the ZNAs, affecting their performance. The ZNA-AgNM-based photodetector shows enhanced dark current and photocurrent with increasing precursor concentration, achieving a maximum photoresponsivity of 114 A/W at 374 nm and a detectivity of 6.37 × 10<sup>14</sup> Jones at 0.05 M zinc acetate. For gas sensing, the resistance of the ZNA-AgNM-based sensors decreases with temperature, with the best hydrogen response (2.71) obtained at 300 °C and a 0.04 M precursor concentration. These findings highlight the potential of ZNA-AgNM for high-performance UV photodetectors and hydrogen gas sensors, offering an alternative route for the development of future sensing devices with enhanced performance and functionality. Full article
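The two headline figures of merit can be illustrated with their usual definitions: the hydrogen response of an n-type oxide sensor exposed to a reducing gas is commonly taken as the resistance ratio R_air/R_gas, and the photoresponsivity as the net photocurrent per unit incident optical power. Both definitions are conventions assumed here, not quoted from the paper:

```python
def gas_response(r_air_ohm, r_gas_ohm):
    """Response of an n-type oxide sensor to a reducing gas such as H2,
    taken as R_air / R_gas (a common convention; the paper's exact
    definition may differ)."""
    return r_air_ohm / r_gas_ohm

def photoresponsivity(photo_a, dark_a, power_w):
    """Responsivity in A/W: net photocurrent divided by incident optical
    power at the measurement wavelength."""
    return (photo_a - dark_a) / power_w
```

With these conventions, a response of 2.71 simply means the film's resistance in air is 2.71 times its resistance in the hydrogen-containing atmosphere.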
Show Figures

Figure 1
<p>Experimental procedure flow chart and schematic of ZNA-AgNM-based MSM devices.</p>
Full article ">Figure 2
<p>Cross-section view and plane-view FE-SEM images of (<b>a</b>,<b>b</b>) the silver nanowire mesh (AgNM) and (<b>c</b>,<b>d</b>) the ZnO seed layer coated on AgNM.</p>
Full article ">Figure 3
<p>XRD patterns of the AgNM and the ZnO seed layer.</p>
Full article ">Figure 4
<p>Plane-view FE-SEM images of the ZNAs with different precursor concentrations: (<b>a</b>) 0.02 M, (<b>b</b>) 0.03 M, (<b>c</b>) 0.04 M, and (<b>d</b>) 0.05 M. The inset shows their cross-section view.</p>
Full article ">Figure 5
<p>Dimensional parameters of ZNAs with different precursor concentrations.</p>
Full article ">Figure 6
<p>XRD patterns of the ZNA-AgNMs grown with different precursor concentrations.</p>
Full article ">Figure 7
<p>Room-temperature photoluminescence (PL) spectra of ZNA-AgNMs grown with different zinc acetate concentrations.</p>
Full article ">Figure 8
<p>Resistance of the ZNA-AgNM-based gas sensors as a function of operating temperature.</p>
Full article ">Figure 9
<p>(<b>a</b>) H<sub>2</sub> gas response of ZNA-AgNM-based devices at different zinc acetate concentrations. (<b>b</b>) H<sub>2</sub> response curves of ZNA-AgNM-based devices with a zinc acetate concentration of 0.04 M at 300 °C. The inset highlights the response curve at 2000 ppm H<sub>2</sub>.</p>
Full article ">Figure 10
<p>(<b>a</b>) Dark current and (<b>b</b>) photocurrents of the ZNA-AgNM-based devices prepared with different zinc acetate solution concentrations.</p>
Full article ">Figure 11
<p>Photoresponsivity of the ZNA-AgNM-based devices as a function of light wavelength at a bias of 5 V.</p>
Full article ">
30 pages, 1978 KiB  
Article
RDSC: Range-Based Device Spatial Clustering for IoT Networks
by Fouad Achkouty, Laurent Gallon and Richard Chbeir
Sensors 2024, 24(17), 5851; https://doi.org/10.3390/s24175851 - 9 Sep 2024
Viewed by 669
Abstract
The Internet of Things (IoT) has become a crucial area of modern research. While the increasing number of IoT devices has driven significant advancements, it has also introduced several challenges, such as data storage, data privacy, communication protocols, complex network [...] Read more.
The Internet of Things (IoT) has become a crucial area of modern research. While the increasing number of IoT devices has driven significant advancements, it has also introduced several challenges, such as data storage, data privacy, communication protocols, complex network topologies, and IoT device management. In essence, managing IoT devices is increasingly challenging, especially given their limited capacity and power. Having limited storage, the devices cannot hold information about the entire environment at once, and device power consumption can affect network performance and stability. Grouping and managing devices according to their sensing areas can simplify further networking tasks and improve response quality through data aggregation and correction techniques. Most existing work, in fact, seeks to extend network lifetimes by relying on devices with high power capabilities. This paper proposes a device spatial clustering technique that addresses these crucial IoT challenges. Our approach groups the dispersed devices into clusters of connected devices while considering their coverage, storage capacities, and power. A new clustering protocol alongside a new clustering algorithm is introduced, resolving the aforementioned challenges. Moreover, a technique for extracting non-sensed areas is presented. The efficiency of the proposed approach has been evaluated with extensive experiments that gave notable results. Our technique was also compared with other clustering algorithms, illustrating how their results differ. Full article
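The core spatial-grouping step — clustering devices whose coverage areas touch — can be sketched with a union-find pass over pairwise range checks. This is only the geometric kernel of the idea; RDSC additionally weighs storage capacity, power, and cluster shape, which is omitted here:

```python
import math

def range_clusters(devices):
    """Group devices whose circular coverage areas overlap, i.e. the
    distance between two devices is at most the sum of their ranges.
    `devices` is a list of (x, y, range) tuples; returns index clusters."""
    parent = list(range(len(devices)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # Union every pair whose coverage disks intersect.
    for i, (xi, yi, ri) in enumerate(devices):
        for j in range(i + 1, len(devices)):
            xj, yj, rj = devices[j]
            if math.hypot(xi - xj, yi - yj) <= ri + rj:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(devices)):
        clusters.setdefault(find(i), []).append(i)
    return sorted(clusters.values())
```

For small networks the quadratic pairwise check is fine; a spatial index would be needed at the 1000-device scale evaluated in the paper.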
Show Figures

Figure 1
<p>Deployment of devices in the Chiberta forest.</p>
Full article ">Figure 2
<p>Our approach architecture.</p>
Full article ">Figure 3
<p>Coverage range transformation.</p>
Full article ">Figure 4
<p>Objective function application use cases.</p>
Full article ">Figure 5
<p>Clustering algorithm schema.</p>
Full article ">Figure 6
<p>Uncovered zone example.</p>
Full article ">Figure 7
<p>Cluster surface area for <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.9</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 8
<p>Cluster vertex number for <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.9</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 9
<p>Clusters surface area for <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.75</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.25</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 10
<p>Cluster vertex number for <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.75</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.25</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 11
<p>Cluster surface area for <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 12
<p>Cluster vertex number for <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 13
<p>Cluster power for <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.9</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 14
<p>Cluster vertex number for <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.9</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 15
<p>Cluster power for <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.75</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.25</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 16
<p>Cluster vertex number for <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.75</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.25</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 17
<p>Cluster power for <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 18
<p>Cluster vertex number for <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 19
<p>Merge factor impact.</p>
Full article ">Figure 20
<p>RDSC device power result for 1000 devices.</p>
Full article ">Figure 21
<p>RDSC zone surface result for 1000 devices.</p>
Full article ">Figure 22
<p>RDSC device number per cluster result for 1000 devices.</p>
Full article ">Figure 23
<p>RDSC number-of-vertices result for 1000 devices.</p>
Full article ">Figure 24
<p>RDSC result for 20 devices.</p>
Full article ">Figure 25
<p>RDSC result for 100 devices.</p>
Full article ">Figure 26
<p>DBSCAN result for 20 devices.</p>
Full article ">Figure 27
<p>DBSCAN result for 100 devices.</p>
Full article ">Figure 28
<p>K-means result for 20 devices.</p>
Full article ">Figure 29
<p>K-means result for 100 devices.</p>
Full article ">
19 pages, 10893 KiB  
Article
An Improved YOLOv8 OBB Model for Ship Detection through Stable Diffusion Data Augmentation
by Sang Feng, Yi Huang and Ning Zhang
Sensors 2024, 24(17), 5850; https://doi.org/10.3390/s24175850 - 9 Sep 2024
Viewed by 1513
Abstract
Unmanned aerial vehicles (UAVs) with cameras offer extensive monitoring capabilities and exceptional maneuverability, making them ideal for real-time ship detection and effective ship management. However, ship detection by camera-equipped UAVs faces challenges from multiple viewpoints, multiple scales, environmental variability, and dataset scarcity. [...] Read more.
Unmanned aerial vehicles (UAVs) with cameras offer extensive monitoring capabilities and exceptional maneuverability, making them ideal for real-time ship detection and effective ship management. However, ship detection by camera-equipped UAVs faces challenges from multiple viewpoints, multiple scales, environmental variability, and dataset scarcity. To overcome these challenges, we propose a data augmentation method based on stable diffusion to generate new images for expanding the dataset. Additionally, we improve the YOLOv8n OBB model by incorporating the BiFPN structure and the EMA module, enhancing its ability to detect multi-viewpoint and multi-scale ship instances. Through multiple comparative experiments, we evaluated the effectiveness of the proposed data augmentation method and the improved model. The results indicate that the proposed data augmentation method is effective for low-volume datasets with complex object features. The proposed YOLOv8n-BiFPN-EMA OBB model performed well in detecting multi-viewpoint and multi-scale ship instances, achieving an mAP (@0.5) of 92.3% and an mAP (@0.5:0.95) of 77.5% with 0.8 million fewer model parameters and a detection speed that satisfies real-time ship detection requirements. Full article
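The BiFPN structure incorporated into the model fuses feature maps with fast normalized weights, w_i / (Σ_j w_j + ε), so each input's learned contribution stays non-negative and bounded. A scalar stand-in for the tensor operation (in the real network the weights are learned and the inputs are feature maps):

```python
def fast_normalized_fusion(features, weights, eps=1e-4):
    """BiFPN-style fast normalized fusion: each input is scaled by a
    non-negative weight normalized so the weights sum to ~1,
    w_i / (sum_j w_j + eps)."""
    ws = [max(0.0, w) for w in weights]  # ReLU keeps weights non-negative
    total = sum(ws) + eps
    return sum(w / total * f for w, f in zip(ws, features))
```

The ε term guards against division by zero when all weights collapse, and the ReLU clipping is why a badly initialized (negative) weight simply drops its input out of the fusion.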
Show Figures

Figure 1
<p>A horizontal bounding box and an oriented bounding box enclose the same ship instance. (<b>a</b>) Original image; (<b>b</b>) Horizontal bounding box; (<b>c</b>) Oriented bounding box.</p>
Full article ">Figure 2
<p>Each of the five classes of ship instances.</p>
Full article ">Figure 3
<p>The structure of stable diffusion model.</p>
Full article ">Figure 4
<p>The improved data augmentation method.</p>
Full article ">Figure 5
<p>The architecture of the improved YOLOv8 OBB model.</p>
Full article ">Figure 6
<p>The structures of PANet, BiFPN, and improved BiFPN. (<b>a</b>) PANet; (<b>b</b>) BiFPN; (<b>c</b>) Improved BiFPN. Circles of different colors represent different feature maps.</p>
Full article ">Figure 7
<p>The structure of the EMA module.</p>
Full article ">Figure 8
<p>Some example images of the augmented dataset.</p>
Full article ">Figure 9
<p>Comparison of the models.</p>
Full article ">Figure 10
<p>YOLOv8n (Original).</p>
Full article ">Figure 11
<p>YOLOv8n (Ours).</p>
Full article ">Figure 12
<p>YOLOv8n-BiFPN-EMA (Ours).</p>
Full article ">
12 pages, 1198 KiB  
Article
Visual Deprivation’s Impact on Dynamic Posture Control of Trunk: A Comprehensive Sensing Information Analysis of Neurophysiological Mechanisms
by Anna Sasaki, Honoka Nagae, Yukio Furusaka, Kei Yasukawa, Hayato Shigetoh, Takayuki Kodama and Junya Miyazaki
Sensors 2024, 24(17), 5849; https://doi.org/10.3390/s24175849 - 9 Sep 2024
Viewed by 755
Abstract
Visual information affects static postural control, but how it affects dynamic postural control is not yet fully understood. This study investigated the effect of proprioception weighting, influenced by the presence or absence of visual information, on dynamic posture control during voluntary trunk [...] Read more.
Visual information affects static postural control, but how it affects dynamic postural control is not yet fully understood. This study investigated the effect of proprioception weighting, influenced by the presence or absence of visual information, on dynamic posture control during voluntary trunk movements. We recorded trunk movement angle and angular velocity, center of pressure (COP), electromyographic, and electroencephalography signals from 35 healthy young adults performing a standing trunk flexion–extension task under two conditions (Vision and No-Vision). A random forest analysis identified the 10 most important variables for classifying the conditions, followed by a Wilcoxon signed-rank test. The results showed a smaller maximum forward COP displacement and trunk flexion angle, and a faster maximum flexion angular velocity, in the No-Vision condition. Additionally, the alpha/beta ratio at POz during the switch phase was higher in the No-Vision condition. These findings suggest that visual deprivation affects cognition- and sensory-integration-related brain regions during movement phases, indicating that sensory re-weighting due to visual deprivation impacts motor control. The effects of visual deprivation on motor control may be used for evaluation and therapeutic interventions in the future. Full article
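The alpha/beta ratio reported for POz is a simple band-power quotient over the channel's power spectrum. A sketch using the conventional 8–13 Hz alpha and 13–30 Hz beta bands (the exact band edges used in the study are an assumption here):

```python
def band_power(psd, freqs, lo, hi):
    """Sum of power-spectral-density values whose frequency lies in [lo, hi)."""
    return sum(p for p, f in zip(psd, freqs) if lo <= f < hi)

def alpha_beta_ratio(psd, freqs):
    """Alpha/beta power ratio of an EEG channel, using the conventional
    8-13 Hz alpha and 13-30 Hz beta bands (band edges assumed, not taken
    from the paper)."""
    return band_power(psd, freqs, 8, 13) / band_power(psd, freqs, 13, 30)
```

A higher ratio during the switch phase means relatively more alpha power than beta power, which is the quantity the study compares between the Vision and No-Vision conditions.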
Show Figures

Figure 1
<p>Experimental Environment Setup: (<b>A</b>) Research equipment and attachment locations. The two green rectangles represent the two arms of the electronic goniometer. (<b>B</b>) Condition setup. In the No-Vision condition, participants wore an eye mask to deprive vision. (<b>C</b>) The curves are from one repetition of one subject’s movement tasks and a time series of measurement indicators. The APA, flexion, switch, and extension phases are classified based on angular velocity and COP-AP baseline values to analyze each measurement indicator. EEG: electroencephalography; EMG: electromyography (Fp1: left side prefrontal, Fp2: right side prefrontal, Cz: center of the parietal, POz: back center of the parietal); RA: rectus abdominis; ES: erector spinae; COP: center of pressure; AP: anterior–posterior; APA: anticipatory postural adjustments.</p>
Full article ">Figure 2
<p>Variable importance for classifying conditions with and without visual information. Max: maximum; APA: anticipatory postural adjustments; COP: center of pressure; RA: rectus abdominis; ES: erector spinae; CCI: co-contraction index.</p>
Full article ">Figure 3
<p>Scalograms of EEG for each channel in the Vision and No-vision conditions. The scalograms are from one repetition of one subject’s movement tasks. The Fp panels indicate the average of Fp1 and Fp2. The closer the color is to red, the higher the power value in the frequency band; the closer to blue, the lower the power value in the frequency band. Black lines in the figure indicate APA offset (flexion onset), flexion offset, and extension onset, from left to right.</p>
Full article ">
13 pages, 7338 KiB  
Article
A Combined Sensor Design Applied to Large-Scale Measurement Systems
by Xiao Pan, Huashuai Ren, Fei Liu, Jiapei Li, Pengfei Cheng and Zhongwen Deng
Sensors 2024, 24(17), 5848; https://doi.org/10.3390/s24175848 - 9 Sep 2024
Viewed by 560
Abstract
The photoelectric sensing unit in a large-space measurement system primarily determines the measurement accuracy of the system. Aiming to resolve the problem whereby existing sensing units have difficulty accurately measuring the hidden points and free-form surfaces in large components, in this study, we [...] Read more.
The photoelectric sensing unit in a large-space measurement system primarily determines the measurement accuracy of the system. To resolve the problem whereby existing sensing units have difficulty accurately measuring hidden points and free-form surfaces on large components, in this study we designed a combined sensor with multi-node fusion. Firstly, a multi-node-fusion hidden-point measurement model and a solution model are established; simulation shows that the measurement results converge when the number of nodes reaches nine. Secondly, an adaptive front-end photoelectric conditioning circuit, including signal amplification, filtering, and an adjustable level, is designed, and the correctness of the circuit's function is verified. Then, a nonlinear least-squares calibration method is proposed by combining the constraints of multi-position vector cones, and the internal parameters of the probe, in relation to the various detection nodes, are calibrated. Finally, a distributed system and a laser tracking system are introduced to establish a fusion experimental validation platform. The results show that the standard deviation and accuracy of the three-axis measurement of the test point of the combined sensor in a measurement area of 7000 mm × 7000 mm × 3000 mm are better than 0.026 mm and 0.24 mm, respectively, and the accuracy of the length measurement is within 0.28 mm. Further, the measurement accuracy at the hidden points and free-form surface of an aircraft hood is better than 0.26 mm, which can meet most industrial measurement needs and expands the application field of large-space measurement systems. Full article
(This article belongs to the Section Intelligent Sensors)
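The hidden-point idea behind the combined sensor can be illustrated with the textbook stylus construction: two visible markers on the probe axis are measured, and the occluded tip is extrapolated along that axis. A geometric sketch only; the paper's nine-node fusion and calibrated probe parameters are not modelled here:

```python
import math

def hidden_point(p1, p2, tip_offset):
    """Estimate a hidden (occluded) point measured with a stylus probe:
    two visible markers p1 and p2 lie on the probe axis, and the tip sits
    `tip_offset` beyond p2 along that axis. Points are (x, y, z) tuples."""
    d = [b - a for a, b in zip(p1, p2)]           # axis direction p1 -> p2
    norm = math.sqrt(sum(c * c for c in d))        # marker separation
    return tuple(b + tip_offset * c / norm for b, c in zip(p2, d))
```

Because the tip position is an extrapolation, marker-measurement noise is amplified by roughly (tip_offset / marker separation), which is one reason fusing multiple sensing nodes improves the hidden-point accuracy.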
Show Figures

Figure 1
<p>Schematic diagram of the system measurement model.</p>
Full article ">Figure 2
<p>Combined sensor measurement model.</p>
Full article ">Figure 3
<p>Schematic diagram of combined sensor modeling.</p>
Full article ">Figure 4
<p>Error curves for different numbers of sensing units.</p>
Full article ">Figure 5
<p>Pre-stage photoelectric conditioning circuit.</p>
Full article ">Figure 6
<p>Band-stop filter circuit.</p>
Full article ">Figure 7
<p>Level comparison circuit.</p>
Full article ">Figure 8
<p>Test waveforms of analog and pulse signals. (<b>a</b>) Reference signal. (<b>b</b>) Sector signal. (<b>c</b>) Reference pulse signal. (<b>d</b>) Sector pulse signal.</p>
Full article ">Figure 9
<p>(<b>a</b>) Photoelectric conditioning circuit board. (<b>b</b>) Physical diagram of the combined sensor.</p>
Full article ">Figure 10
<p>Experimental measurement scenario.</p>
Full article ">Figure 11
<p>Schematic diagram of measurement points.</p>
Full article ">Figure 12
<p>(<b>a</b>) The standard deviation of 500 repeated measurements. (<b>b</b>) Measurement error compared to the laser tracking system.</p>
Full article ">Figure 13
<p>Scenarios of experiments on hidden spots and shaped surfaces of aircraft engine hoods.</p>
Full article ">Figure 14
<p>Triaxial measurement error at the hidden point of the airplane hood. (<b>a</b>) X-direction. (<b>b</b>) Y-direction. (<b>c</b>) Z-direction.</p>
Full article ">
18 pages, 5824 KiB  
Article
A Fusion Tracking Algorithm for Electro-Optical Theodolite Based on the Three-State Transition Model
by Shixue Zhang, Houfeng Wang, Liduo Song, Hongwen Li and Shuai Liu
Sensors 2024, 24(17), 5847; https://doi.org/10.3390/s24175847 - 9 Sep 2024
Viewed by 511
Abstract
This study presents a novel approach to address the autonomous stable tracking issue in electro-optical theodolites operating in closed-loop mode. The proposed methodology includes a multi-sensor adaptive weighted fusion algorithm and a fusion tracking algorithm based on a three-state transition model. A refined [...] Read more.
This study presents a novel approach to address the autonomous stable tracking issue in electro-optical theodolites operating in closed-loop mode. The proposed methodology includes a multi-sensor adaptive weighted fusion algorithm and a fusion tracking algorithm based on a three-state transition model. A refined recursive formula for error covariance estimation is developed by integrating attenuation factors and least-squares extrapolation. This formula is employed to formulate a multi-sensor weighted fusion algorithm that utilizes the error covariance estimates. By assigning weighted coefficients to calculate the residual of the newly introduced error term, and defining each sensor's state based on these coefficients, a fusion tracking algorithm grounded in the three-state transition model is introduced. In cases of interference or sensor failure, the algorithm either computes the weighted fusion value of the multi-sensor measurements or triggers autonomous sensor switching to ensure autonomous and stable measurement by the theodolite. Experimental results indicate that when a specific sensor is affected by interference, or the off-target amount cannot be extracted, the algorithm can swiftly switch to an alternative sensor. This capability facilitates the precise and consistent generation of data, thereby ensuring the stable operation of the tracking system. Furthermore, the algorithm demonstrates robustness across various measurement scenarios. Full article
(This article belongs to the Section Optical Sensors)
Figure 1. Illustration of state transition.
Figure 2. Illustration of effective sampling time.
Figure 3. The processing of the sensor at the initial effective time.
Figure 4. Flowchart for fusion tracking algorithms with valid bits in measure.
Figure 5. The setting of simulation experiment.
Figure 6. The sensor selection of method 1.
Figure 7. The azimuth output value of method 1.
Figure 8. The sensor selection of method 2.
Figure 9. The azimuth output value of method 2.
Figure 10. The sensor selection of method 3.
Figure 11. The azimuth output value of method 3.
Figure 12. The sensor selection of method 1.
Figure 13. The azimuth output value of method 1.
Figure 14. The sensor selection of method 2.
Figure 15. The azimuth output value of method 2.
Figure 16. The sensor selection of method 3.
Figure 17. The azimuth output value of method 3.
19 pages, 2917 KiB  
Article
Comparative Analysis of Machine Learning Techniques for Water Consumption Prediction: A Case Study from Kocaeli Province
by Kasim Görenekli and Ali Gülbağ
Sensors 2024, 24(17), 5846; https://doi.org/10.3390/s24175846 - 9 Sep 2024
Viewed by 767
Abstract
This study presents a comparative analysis of various Machine Learning (ML) techniques for predicting water consumption using a comprehensive dataset from Kocaeli Province, Turkey. Accurate prediction of water consumption is crucial for effective water resource management and planning, especially considering the significant impact of the COVID-19 pandemic on water usage patterns. A total of four ML models, Artificial Neural Networks (ANN), Random Forest (RF), Support Vector Machines (SVM), and Gradient Boosting Machines (GBM), were evaluated. Additionally, optimization techniques such as Particle Swarm Optimization (PSO) and the Second-Order Optimization (SOO) Levenberg–Marquardt (LM) algorithm were employed to enhance the performance of the ML models. These models incorporate historical data from previous months to enhance model accuracy and generalizability, allowing for robust predictions that account for both short-term fluctuations and long-term trends. The performance of each model was assessed using cross-validation. The R2 and correlation values obtained in this study for the best-performing models are highlighted in the results section. For instance, the GBM model achieved an R2 value of 0.881, indicating a strong capability in capturing the underlying patterns in the data. This study is one of the first to conduct a comprehensive analysis of water consumption prediction using machine learning algorithms on a large-scale dataset of 5000 subscribers, including the unique conditions imposed by the COVID-19 pandemic. The results highlight the strengths and limitations of each technique, providing insights into their applicability for water consumption prediction. This study aims to enhance the understanding of ML applications in water management and offers practical recommendations for future research and implementation. Full article
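The R² value quoted for the GBM model is the standard coefficient of determination; a minimal sketch of how it is computed from observed and predicted consumption values (plain Python, illustrative only):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.

    1.0 means perfect prediction; 0.0 means no better than
    predicting the mean of the observed values.
    """
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)   # total variance
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # residuals
    return 1.0 - ss_res / ss_tot
```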
(This article belongs to the Section Internet of Things)
Figure 1. Mean River Discharge for the Year 2022 Compared to the Period 1991–2020 [6].
Figure 2. Monthly Water Consumption Trends.
Figure 3. Correlation Matrices for (a) Commercial; (b) Official; (c) Residential.
Figure 4. Comparative Performance of Models.
Figure 5. Distribution of Water Consumption Volumes and Total Consumption Across Groups by Subscriber Type.
Figure 6. Scatter Plots of key 2-variable correlations for (a) commercial (m³); (b) official (m³); (c) residential (m³).
Figure 7. Impact of COVID-19 on Water Consumption by Subscriber Type.
19 pages, 5464 KiB  
Article
A Multi-Scale Liver Tumor Segmentation Method Based on Residual and Hybrid Attention Enhanced Network with Contextual Integration
by Liyan Sun, Linqing Jiang, Mingcong Wang, Zhenyan Wang and Yi Xin
Sensors 2024, 24(17), 5845; https://doi.org/10.3390/s24175845 - 9 Sep 2024
Viewed by 829
Abstract
Liver cancer is one of the malignancies with high mortality rates worldwide, and its timely detection and accurate diagnosis are crucial for improving patient prognosis. To address the limitations of traditional image segmentation techniques and the U-Net network in capturing fine image features, this study proposes an improved model based on the U-Net architecture, named RHEU-Net. By replacing traditional convolution modules in the encoder and decoder with improved residual modules, the network’s feature extraction capabilities and gradient stability are enhanced. A Hybrid Gated Attention (HGA) module is integrated before the skip connections, enabling the parallel processing of channel and spatial attentions, optimizing the feature fusion strategy, and effectively replenishing image details. A Multi-Scale Feature Enhancement (MSFE) layer is introduced at the bottleneck, utilizing multi-scale feature extraction technology to further enhance the expression of receptive fields and contextual information, improving the overall feature representation effect. Testing on the LiTS2017 dataset demonstrated that RHEU-Net achieved Dice scores of 95.72% for liver segmentation and 70.19% for tumor segmentation. These results validate the effectiveness of RHEU-Net and underscore its potential for clinical application. Full article
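The Dice scores reported for liver and tumor segmentation measure overlap between predicted and reference masks; a minimal sketch, assuming a binary mask is represented as a set of foreground pixel coordinates (illustrative, not the paper's evaluation code):

```python
def dice(pred, target):
    """Dice coefficient: 2|P ∩ T| / (|P| + |T|) for binary masks."""
    p, t = set(pred), set(target)
    if not p and not t:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * len(p & t) / (len(p) + len(t))
```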
(This article belongs to the Section Intelligent Sensors)
Figure 1. Architecture of RHEU-Net, where Module A denotes the residual module, Module B refers to the Multi-Scale Feature Enhancement module (MSFE), Module C indicates the Hybrid Gated Attention module (HGA), and Module D includes convolution operations.
Figure 2. (a) Structure of the residual module in ResNet; (b) structure of the residual module used in the encoder; (c) structure of the residual module used in the decoder.
Figure 3. Structure of the Hybrid Gated Attention module.
Figure 4. Structure of the Channel Attention Module.
Figure 5. Structure of the spatial attention module.
Figure 6. Structure of the Hybrid Gated Attention Module.
Figure 7. Structure of the Multi-Scale Feature Enhancement Module.
Figure 8. (a) Original image; (b) flip horizontal; (c) flip vertical; (d) left rotation; (e) right rotation.
Figure 9. Segmentation results from various networks on selected test set images in the ablation experiment. From left to right: (a) original CT image, (b) gold standard, (c) U-Net, (d) Res+U-Net, (e) HGA+U-Net, (f) MSFE+U-Net, (g) Res+HGA+U-Net, and (h) RHEU-Net (method described in this study).
Figure 10. Training loss trends of different models.
Figure 11. Comparison of liver segmentation results from different networks against the gold standard. From left to right, the images represent: (a) original CT image, (b) gold standard, (c) Unet, (d) AttentionUnet, (e) ResUnet-a, (f) CAUnet, (g) Res Unet++, (h) RIUNet, (i) RHEUnet (method described in this study).
17 pages, 6764 KiB  
Article
Construction of Cu2Y2O5/g-C3N4 Novel Composite for the Sensitive and Selective Trace-Level Electrochemical Detection of Sulfamethazine in Food and Water Samples
by Rajendran Surya, Subramanian Sakthinathan, Ganesh Abinaya Meenakshi, Chung-Lun Yu and Te-Wei Chiu
Sensors 2024, 24(17), 5844; https://doi.org/10.3390/s24175844 - 9 Sep 2024
Viewed by 653
Abstract
Sulfamethazine (SMZ) is the most frequently used sulfonamide; it is often found in foods made from livestock, posing a hazard to consumers. Here, we have developed an easy, quick, selective, and sensitive analytical technique to efficiently detect SMZ. Recently, transition metal oxides have attracted many researchers as promising sensor materials for SMZ analysis because of their superior redox activity, electrocatalytic activity, electroactive sites, and electron transfer properties. Further, Cu-based oxides have resilient electrical conductivity; to boost it further, a composite with two-dimensional (2D) graphitic carbon nitride (g-C3N4) nanosheets was constructed (denoted as g-C3N4/Cu2Y2O5). Moreover, several techniques, including X-ray diffraction analysis, scanning electron microscopy analysis, energy-dispersive X-ray spectroscopy, Fourier transform infrared spectroscopy, and Raman spectroscopy, were employed to analyze the composites. The electrochemical measurements revealed that the constructed g-C3N4/Cu2Y2O5 composites exhibit great electrochemical activity. The sensor achieved outstanding repeatability and reproducibility alongside a low limit of detection (LOD) of 0.23 µM, a long linear range of 2 to 276 µM, and an electrode sensitivity of 8.86 µA µM−1 cm−2. Finally, the proposed GCE/g-C3N4/Cu2Y2O5 electrode proved highly effective for detection of SMZ in food samples, with acceptable recoveries, and has been successfully applied to SMZ detection in food and water samples. Full article
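The limit of detection reported above is conventionally estimated from the calibration curve; a sketch under the common 3σ/slope convention, with a least-squares slope fit (the numbers and names here are illustrative, not the paper's data):

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def limit_of_detection(blank_sd, slope):
    """Classic LOD estimate: 3 * (std. dev. of blank) / calibration slope."""
    return 3.0 * blank_sd / slope
```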
(This article belongs to the Special Issue Advances and Applications of Electrochemical Sensors and Biosensors)
Graphical abstract.
Figure 1. (A) XRD patterns of g-C3N4, Cu2Y2O5, and g-C3N4/Cu2Y2O5. (B) FT-IR spectra of g-C3N4, Cu2Y2O5, and g-C3N4/Cu2Y2O5 composite. (C) FESEM images of (a–c) g-C3N4, (d–f) Cu2Y2O5, and (g–i) g-C3N4/Cu2Y2O5 composite. (D) Elemental mapping region of (a) g-C3N4/Cu2Y2O5 composite; elemental mapping of (b) Y, (c) C, (d) Cu, (e) N, (f) O; and (g) EDX spectra of g-C3N4/Cu2Y2O5 composite.
Figure 2. (a) CV for bare GCE, GCE/Cu2Y2O5, GCE/g-C3N4, and GCE/Cu2Y2O5/g-C3N4 in 0.1 M KCl containing 5 mM [Fe(CN)6]3−/4−; (b) CV studies of Cu2Y2O5/g-C3N4 at various scan rates (different colors indicate different scan rates); (c) calibration plot for potential vs. peak current; (d) EIS spectra of bare GCE, Cu2Y2O5/GCE, g-C3N4/GCE, and Cu2Y2O5/g-C3N4/GCE.
Figure 3. (a) Cyclic voltammogram of bare GCE, GCE/g-C3N4, GCE/Cu2Y2O5, and GCE/g-C3N4/Cu2Y2O5 in 0.05 M phosphate-buffered solution (pH 7) with 50 μM of SMZ. (b) Bar diagram of different electrode responses. (c) CV curves of GCE/g-C3N4/Cu2Y2O5 electrode at different concentrations of SMZ (10–200 μM) (different colors indicate different concentrations). (d) Corresponding calibration plot vs. SMZ concentrations.
Figure 4. (a) CV responses of the GCE/g-C3N4/Cu2Y2O5 electrode in phosphate-buffered solution with 50 μM of SMZ at various scan rates (different colors indicate different scan rates); (b) calibration plot of the scan rate vs. SMZ determination peak current; (c) cyclic voltammogram of the Cu2Y2O5/g-C3N4 electrode in phosphate-buffered solution containing 50 μM SMZ at various pH levels (3–11); (d) pH vs. Ep; (e) pH vs. Ip.
Figure 5. (a) Amperometry response for oxidation of SMZ at various concentrations in phosphate-buffered solution (pH 7.0) at the RRDE/g-C3N4/Cu2Y2O5-modified electrode. (b) Calibration plot for the peak current vs. concentration. (c) Selectivity of the GCE/g-C3N4/Cu2Y2O5 in phosphate-buffered solution with the addition of a 10-fold concentration of interference molecules by DPV. (d) Operational stability studies of the RRDE/g-C3N4/Cu2Y2O5 electrodes.
Figure 6. (a) Repeatability test of the GCE/g-C3N4/Cu2Y2O5 electrode in the presence of SMZ and (b) bar diagram of the GCE/g-C3N4/Cu2Y2O5 electrode with different segments. (c) Reproducibility test of the GCE/g-C3N4/Cu2Y2O5 electrode in the presence of SMZ and (d) bar diagram of different modified GCE/g-C3N4/Cu2Y2O5 electrodes. (e) Cyclic stability of the GCE/g-C3N4/Cu2Y2O5 electrode.
Figure 7. The determination of SMZ in (a) milk, (b) honey, and (c) river water samples (different colors indicate different concentrations).
Scheme 1. Schematic illustration of GCE/g-C3N4/Cu2Y2O5 electrode preparation and application to SMZ detection.
Scheme 2. Irreversible electro-oxidation reaction mechanism of SMZ over the GCE/g-C3N4/Cu2Y2O5 electrode.
Scheme 3. Real sample analysis and reaction mechanism of the proposed GCE/g-C3N4/Cu2Y2O5 electrode sensor.
13 pages, 3091 KiB  
Article
Measurement Method of Stress in High-Voltage Cable Accessories Based on Ultrasonic Longitudinal Wave Attenuation
by Jingang Su, Peng Zhang, Xingwang Huang and Xianhai Pang
Sensors 2024, 24(17), 5843; https://doi.org/10.3390/s24175843 - 9 Sep 2024
Viewed by 529
Abstract
High-voltage cables are the main arteries of urban power supply. Cable accessories are connecting components between different sections of cables or between cables and other electrical equipment. The stress in the cold shrink tube of cable accessories is a key parameter to ensure the stable operation of the power system. This paper attempts to explore a method for measuring the stress in the cold shrink tube of high-voltage cable accessories based on ultrasonic longitudinal wave attenuation. Firstly, a pulse ultrasonic longitudinal wave testing system based on FPGA is designed, where the ultrasonic sensor operates in a single-transmit, single-receive mode with a frequency of 3 MHz, a repetition frequency of 50 Hz, and a data acquisition and transmission frequency of 40 MHz. Then, through experiments and theoretical calculations, the transmission and attenuation characteristics of ultrasonic longitudinal waves in multi-layer elastic media are studied, revealing an exponential relationship between ultrasonic wave attenuation and the thickness of the cold shrink tube. Finally, by establishing a theoretical model of the radial stress of the cold shrink tube, using the thickness of the cold shrink tube as an intermediate variable, an effective measurement of the stress of the cold shrink tube was achieved. Full article
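Given the exponential relationship between longitudinal-wave attenuation and cold shrink tube thickness described above, thickness can in principle be recovered by inverting A = A₀·exp(−αd); a hypothetical sketch (the amplitudes and attenuation coefficient here are illustrative, not the paper's fitted values):

```python
import math

def thickness_from_amplitude(a0, a, alpha):
    """Invert A = A0 * exp(-alpha * d) to recover the layer thickness d.

    a0:    reference (unattenuated) amplitude
    a:     measured amplitude after passing through the layer
    alpha: attenuation coefficient of the layer material
    """
    return math.log(a0 / a) / alpha
```

With the thickness recovered, the paper's radial-stress model then maps thickness to stress.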
(This article belongs to the Section Physical Sensors)
Figure 1. Schematic diagram of multi-layer structure of cable accessories: (a) front view and (b) side view (1: cold shrink tube; 2: PVC outer sheath; 3: wrapping tape; 4: copper armor; 5: XLPE insulation layer; and 6: aluminum core).
Figure 2. Schematic diagram of ultrasonic probe fixing bracket: (a) front view and (b) side view (7: bracket; 8: through hole for placing ultrasonic probe; and 9: air gap).
Figure 3. Schematic diagram of the measurement system for cold shrink tube stress based on ultrasonic longitudinal wave attenuation.
Figure 4. Measurement system for the cold shrink tube stress: (a) ultrasonic probes and (b) FPGA module and data acquisition and transmission module.
Figure 5. Schematic diagram of ultrasonic longitudinal wave reflection and refraction at interfaces (10: ultrasonic emission probe; 11: ultrasonic reception probe; 12: coupling agent).
Figure 6. Schematic diagram of standard pieces for cold shrink tube thickness measurement: (a) δ = 2.2 mm; (b) δ = 1.6 mm; (c) δ = 1 mm; and (d) δ = 2.2 mm, air calibration.
Figure 7. Cold shrink tube wrapping PVC standard pieces (showing only 11 pieces).
Figure 8. Ultrasonic pulse sequences under different cold shrink tube thicknesses: (a) δ = 2.2 mm; (b) δ = 1.6 mm; (c) δ = 1 mm; and (d) δ = 2.2 mm, air calibration.
Figure 9. Relationship between ultrasonic wave attenuation and the relative thickness of the cold shrink tube.
Figure 10. Radial stress diagram of the cold shrink tube: (a) the cross-sectional view of the cold shrink tube; (b) the axial cross-sectional view of the cold shrink tube; and (c) the cross-sectional view of the upper half of the cold shrink tube.
Figure 11. Relationship between the resultant radial force in the y-direction and the thickness of the cold shrink tube.
Figure 12. Relationship between radial stress and thickness of cold shrink tube.
Figure 13. Measurement results of cold shrink tube stress.
10 pages, 1699 KiB  
Article
Ultrashort-Echo-Time MRI of the Disco-Vertebral Junction: Modulation of Image Contrast via Echo Subtraction and Echo Times
by Karen C. Chen, Palanan Siriwananrangsun and Won C. Bae
Sensors 2024, 24(17), 5842; https://doi.org/10.3390/s24175842 - 9 Sep 2024
Viewed by 819
Abstract
Introduction: The disco-vertebral junction (DVJ) of the lumbar spine contains thin structures with short T2 values, including the cartilaginous endplate (CEP) sandwiched between the bony vertebral endplate (VEP) and the nucleus pulposus (NP). We previously demonstrated that ultrashort-echo-time (UTE) MRI, compared to conventional MRI, is able to depict the tissues at the DVJ with improved contrast. In this study, we sought to further optimize UTE MRI by characterizing the contrast-to-noise ratio (CNR) of these tissues when either single echo or echo subtraction images are used and with varying echo times (TEs). Methods: In four cadaveric lumbar spines, we acquired 3D Cones (a UTE sequence) images at varying TEs from 0.032 ms to 16 ms. Additionally, spin echo T1- and T2-weighted images were acquired. The CNRs of CEP-NP and CEP-VEP were measured in all source images and 3D Cones echo subtraction images. Results: In the spin echo images, it was challenging to distinguish the CEP from the VEP, as both had low signal intensity. However, the 3D Cones source images at the shortest TE of 0.032 ms provided an excellent contrast between the CEP and the VEP. As the TE increased, the contrast decreased in the source images. In contrast, the 3D Cones echo subtraction images showed increasing CNR values as the second TE increased, reaching statistical significance when the second TE was above 10 ms (p < 0.05). Conclusions: Our study highlights the feasibility of incorporating UTE MRI for the evaluation of the DVJ and its advantages over conventional spin echo sequences for improving the contrast between the CEP and adjacent tissues. Additionally, modulation of the contrast for the target tissues can be achieved using either source images or subtraction images, as well as by varying the echo times. Full article
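The echo subtraction and CNR calculations described in the Methods can be sketched as follows (illustrative, assuming mean ROI signal intensities and a noise standard deviation have already been measured):

```python
def echo_subtract(first_echo, second_echo):
    """Pixel-wise subtraction of a later-TE echo from the shortest-TE echo.

    Short-T2 tissue (e.g., the CEP) keeps signal only at the shortest TE,
    so it stands out in the difference image.
    """
    return [a - b for a, b in zip(first_echo, second_echo)]

def cnr(signal_a, signal_b, noise_sd):
    """Contrast-to-noise ratio between two tissue ROI signal means."""
    return abs(signal_a - signal_b) / noise_sd
```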
(This article belongs to the Special Issue Biomedical Sensing System Based on Image Analysis)
Figure 1. Conventional sagittal MR imaging of the human lumbar spine performed with (A) spin echo T1-weighted and (B) spin echo T2-weighted sequences. The cartilage endplate at the disco-vertebral junction (interface between vertebral body and intervertebral disc) has low signal intensity and is indistinguishable from the vertebral endplate, the cortical bone of the vertebral body.
Figure 2. 3D Cones (an ultrashort-echo-time, UTE, sequence) images acquired at echo times (TEs) of (A) 0.032 ms, (B) 2.5 ms, (C) 6.7 ms, (D) 11 ms, and (E) 16 ms. The cartilage endplate at the disco-vertebral junction has a high signal intensity in (A), appearing distinct from the adjacent vertebral endplate (VEP) and the intervertebral disc. In later TE source images (B–E), the CEP becomes progressively darker and indistinguishable from the VEP.
Figure 3. 3D Cones subtraction images, acquired by digitally subtracting the 2nd echo image (at various TEs) from the 1st echo image at TE = 0.032 ms. Subtraction images using the 2nd TE of (A) 2.5 ms, (B) 6.7 ms, (C) 11 ms, and (D) 16 ms. By increasing the 2nd TE, visual improvement in the contrast between the cartilage endplate (arrows) and the adjacent vertebral endplate (arrowheads) and nucleus pulposus (dotted line) can be seen.
Figure 4. Regions of interest for the bony vertebral endplate (VEP), cartilage endplate (CEP), and nucleus pulposus (NP), indicated with arrowheads, arrows, and dotted line, respectively. For each specimen, the average signal intensity within each ROI was determined.
Figure 5. Contrast-to-noise ratio (CNR) of cartilage endplate (CEP) minus nucleus pulposus (NP) (A,B) and CEP minus vertebral endplate (VEP) (C,D) in the 3D Cones source images (A,C) and subtraction images (B,D), as a function of varying echo times (TEs).
16 pages, 3130 KiB  
Article
AEPF: Attention-Enabled Point Fusion for 3D Object Detection
by Sachin Sharma, Richard T. Meyer and Zachary D. Asher
Sensors 2024, 24(17), 5841; https://doi.org/10.3390/s24175841 - 9 Sep 2024
Viewed by 1261
Abstract
Current state-of-the-art (SOTA) LiDAR-only detectors perform well for 3D object detection tasks, but point cloud data are typically sparse and lack semantic information. Detailed semantic information obtained from camera images can be combined with existing LiDAR-based detectors to create a robust 3D detection pipeline. With two different data types, a major challenge in developing multi-modal sensor fusion networks is to achieve effective data fusion while managing computational resources. With separate 2D and 3D feature extraction backbones, feature fusion can become more challenging as these modalities generate different gradients, leading to gradient conflicts and suboptimal convergence during network optimization. To this end, we propose a 3D object detection method, Attention-Enabled Point Fusion (AEPF). AEPF uses images and voxelized point cloud data as inputs and estimates the 3D bounding boxes of object locations as outputs. An attention mechanism is introduced to an existing feature fusion strategy to improve 3D detection accuracy and two variants are proposed. These two variants, AEPF-Small and AEPF-Large, address different needs. AEPF-Small, with a lightweight attention module and fewer parameters, offers fast inference. AEPF-Large, with a more complex attention module and increased parameters, provides higher accuracy than baseline models. Experimental results on the KITTI validation set show that AEPF-Small maintains SOTA 3D detection accuracy while inferencing at higher speeds. AEPF-Large achieves mean average precision scores of 91.13, 79.06, and 76.15 for the car class’s easy, medium, and hard targets, respectively, in the KITTI validation set. Results from ablation experiments are also presented to support the choice of model architecture. Full article
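An attention step of the kind the abstract describes typically re-weights fused features with softmax-normalized scores; a minimal, framework-free sketch (illustrative only, not the paper's actual module):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention logits."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(features, scores):
    """Scale each feature by its softmax attention weight."""
    weights = softmax(scores)
    return [w * f for w, f in zip(weights, features)]
```

Equal logits yield uniform weights; a larger logit lets the corresponding feature dominate the fused representation.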
(This article belongs to the Section Sensors and Robotics)
Figure 1. Architecture for AEPF: Attention-Enabled Point Fusion for 3D object detection. Blocks illustrate processes from Section 3.1, Section 3.2, Section 3.3 and Section 3.4. Attention mechanisms for AEPF-Small and AEPF-Large are also shown.
Figure 2. Visualization of detection results for two AEPF variants. Panels (A,B) display results for AEPF-S, while panels (C,D) show results for AEPF-L. False positives and missed detections from AEPF-S, highlighted by dotted yellow lines, are effectively addressed by AEPF-L. Red bounding boxes indicate cars and purple bounding boxes indicate pedestrians.
Figure 3. Visualization of detection results for (A) MVXNet (obtained from [53]), (B) AEPF-S, and (C) AEPF-L. Dotted green lines indicate false negatives, while dotted yellow lines indicate false positives. AEPF-L effectively resolves false negatives identified by MVXNet and AEPF-S. Purple bounding boxes indicate pedestrians and red bounding boxes indicate cars.
20 pages, 3340 KiB  
Article
Implementing Autonomous Control in the Digital-Twins-Based Internet of Robotic Things for Remote Patient Monitoring
by Sangeen Khan, Sehat Ullah, Khalil Ullah, Sulaiman Almutairi and Sulaiman Aftan
Sensors 2024, 24(17), 5840; https://doi.org/10.3390/s24175840 - 9 Sep 2024
Viewed by 1237
Abstract
Conventional patient monitoring methods require skin-to-skin contact, continuous observation, and long working shifts, causing physical and mental stress for medical professionals. Remote patient monitoring (RPM) assists healthcare workers in monitoring patients distantly using various wearable sensors, reducing stress and infection risk. RPM can be enabled by using the Digital Twins (DTs)-based Internet of Robotic Things (IoRT) that merges robotics with the Internet of Things (IoT) and creates a virtual twin (VT) that acquires sensor data from the physical twin (PT) during operation to reflect its behavior. However, manual navigation of PT causes cognitive fatigue for the operator, affecting trust dynamics, satisfaction, and task performance. Also, operating manual systems requires proper training and long-term experience. This research implements autonomous control in the DTs-based IoRT to remotely monitor patients with chronic or contagious diseases. This work extends our previous paper that required the user to manually operate the PT using its VT to collect patient data for medical inspection. The proposed decision-making algorithm enables the PT to autonomously navigate towards the patient’s room, collect and transmit health data, and return to the base station while avoiding various obstacles. Rather than manually navigating, the medical personnel direct the PT to a specific target position using the Menu buttons. The medical staff can monitor the PT and the received sensor information in the pre-built virtual environment (VE). Based on the operator’s preference, manual control of the PT is also achievable. The experimental outcomes and comparative analysis verify the efficiency of the proposed system. Full article
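The triangle-based distance reasoning in the paper's obstacle-avoidance figures (∆ABC and the "calculated straight distance") suggests a law-of-cosines step for recovering the straight-line distance after a detour; the sketch below is a hypothetical illustration of that geometric idea, not the paper's exact algorithm:

```python
import math

def straight_distance(ab, bc, angle_b_deg):
    """Straight distance AC in triangle ABC via the law of cosines.

    ab, bc:      lengths of the two detour legs
    angle_b_deg: interior angle at B (the turn point), in degrees
    """
    b = math.radians(angle_b_deg)
    return math.sqrt(ab ** 2 + bc ** 2 - 2.0 * ab * bc * math.cos(b))
```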
(This article belongs to the Section Internet of Things)
Show Figures

Figure 1
<p>(<b>a</b>) VT in the VE. (<b>b</b>) PT in the RE.</p>
Full article ">Figure 2
<p>Graphical abstract of the proposed system.</p>
Full article ">Figure 3
<p>(<b>a</b>) ∆ABC for avoiding front, front and right obstacles. (<b>b</b>) ∆ABC for avoiding front and left obstacles. (<b>c</b>) Calculated straight distance after avoiding front, front and right obstacles. (<b>d</b>) Calculated straight distance (AC) after avoiding front and left obstacles.</p>
Full article ">Figure 4
<p>(<b>a</b>) Right trapezoid ABCD. (<b>b</b>) Right trapezoid with rectangle XBCD, and ∆AXD. (<b>c</b>) AY.</p>
Full article ">Figure 5
<p>Flow chart for the decision-making algorithm.</p>
Full article ">Figure 6
<p>Experimental scenario of the proposed system.</p>
Full article ">Figure 7
<p>(<b>a</b>) ME and SD values of Cat 1; (<b>b</b>) ME and SD values of Cat 2.</p>
Full article ">
16 pages, 1712 KiB  
Article
Evaluation of Smartphone Technology on Spatiotemporal Gait in Older and Diseased Adult Populations
by Coby Contreras, Ethan C. Stanley, Chanc Deschamps-Prescott, Susan Burnap, Madison Hopkins, Bennett Browning and Jesse C. Christensen
Sensors 2024, 24(17), 5839; https://doi.org/10.3390/s24175839 - 9 Sep 2024
Viewed by 894
Abstract
Objective: Advancements in smartphone technology provide availability to evaluate movement in a more practical and feasible manner, improving clinicians’ ability to diagnose and treat adults at risk for mobility loss. The purpose of this study was to evaluate the validity and reliability of a smartphone application to measure spatiotemporal outcomes during level (primary) and uphill/downhill (secondary) walking with and without an assistive device for older adults (OAs), Parkinson’s Disease (PD) and cerebrovascular accident (CVA) populations. Methods: A total of 50 adults (OA = 20; PD = 15; CVA = 15) underwent gait analysis at self-selected gait speeds under 0-degree, 5-degree uphill and 5-degree downhill environments. The validity and reliability of the smartphone outcomes were compared to a motion-capture laboratory. Bland–Altman analysis was used to evaluate limits of agreement between the two systems. Intraclass correlation coefficients (ICCs) were used to determine absolute agreement, and Pearson correlation coefficients (r) were used to assess the strength of the association between the two systems. Results: For level walking, Bland–Altman analysis revealed relatively equal estimations of spatiotemporal outcomes between systems for OAs without an assistive device and slight to mild under- and overestimations of outcomes between systems for PD and CVA with and without an assistive device. Moderate to very high correlations between systems (without an assistive device: OA r-range, 0.72–0.99; PD r-range, 0.87–0.97; CVA r-range, 0.56–0.99; with an assistive device: PD r-range, 0.35–0.98; CVA r-range, 0.50–0.99) were also observed. Poor to excellent ICCs for reliability between systems (without an assistive device: OA ICC range, 0.71–0.99; PD ICC range, 0.73–0.97; CVA ICC range, 0.56–0.99; with an assistive device: PD ICC range, 0.22–0.98; CVA ICC range, 0.44–0.99) were observed across all outcomes. 
Conclusions: This smartphone application can be clinically useful in detecting most spatiotemporal outcomes in various walking environments for older and diseased adults at risk for mobility loss. Full article
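The agreement statistics named in the abstract (Bland–Altman mean bias with 95% limits of agreement, and Pearson r) can be computed from paired measurements. A minimal sketch with hypothetical gait-speed values (the data and names are illustrative, not the study's):

```python
import math

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two measurement systems."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def pearson_r(a, b):
    """Pearson correlation coefficient between two paired samples."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

# Hypothetical gait speeds (m/s): smartphone app vs. motion capture.
phone = [1.10, 1.25, 0.98, 1.32, 1.05]
mocap = [1.12, 1.24, 1.01, 1.30, 1.08]
bias, lo, hi = bland_altman(phone, mocap)
print(round(bias, 3), round(lo, 3), round(hi, 3))
print(round(pearson_r(phone, mocap), 3))
```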
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
Show Figures

Figure 1
<p>Flow chart for data collection and processing.</p>
Full article ">Figure 2
<p>Marker and smartphone placement for modified Plug-In-Gait marker set and iPhone anterior thigh placement ((<b>A</b>) anterior, (<b>B</b>) posterior). Image supplied by C-Motion, Inc. (Germantown, MD, USA) used with permission.</p>
Full article ">Figure 3
<p>Bland–Altman plots comparing the smartphone application and motion-capture system measurements in assessing spatiotemporal outcomes for level walking without an assistive device across OAs, PD and CVA. Mean bias is displayed as a solid line, and 95% limits of agreement are displayed as dashed lines.</p>
Full article ">Figure 4
<p>Bland–Altman plots comparing the smartphone application and motion-capture system measurements in assessing spatiotemporal outcomes for level walking with an assistive device across PD and CVD. Mean bias is displayed as a solid line, and 95% limits of agreement are displayed as dashed lines.</p>
Full article ">
22 pages, 894 KiB  
Article
Enhancing Unmanned Aerial Vehicle Security: A Zero-Knowledge Proof Approach with Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge for Authentication and Location Proof
by Athanasios Koulianos, Panagiotis Paraskevopoulos, Antonios Litke and Nikolaos K. Papadakis
Sensors 2024, 24(17), 5838; https://doi.org/10.3390/s24175838 - 8 Sep 2024
Viewed by 1390
Abstract
UAVs are increasingly being used in various domains, from personal and commercial applications to military operations. Ensuring the security and trustworthiness of UAV communications is crucial, and blockchain technology has been explored as a solution. However, privacy remains a challenge, especially in public blockchains. In this work, we propose a novel approach utilizing zero-knowledge proof techniques, specifically zk-SNARKs, which are non-interactive cryptographic proofs. This approach allows UAVs to prove their authenticity or location without disclosing sensitive information. We generated zk-SNARK proofs using the Zokrates tool on a Raspberry Pi, simulating a drone environment, and analyzed power consumption and CPU utilization. The results are promising, especially in the case of larger drones with higher battery capacities. Ethereum was chosen as the public blockchain platform, with smart contracts developed in Solidity and tested on the Sepolia testnet using Remix IDE. This proposed approach paves the way for a new line of research in the UAV area. Full article
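Before any proof can be generated, the statement must be expressed as an arithmetic circuit; Figure 2's example computes x(x + y) with one addition gate and one multiplication gate. A sketch of the witness evaluation a prover would perform (plain Python, not an actual zk-SNARK — in the paper this role is played by a Zokrates program):

```python
def evaluate_circuit(x: int, y: int) -> dict:
    """Evaluate the Figure-2 example circuit gate by gate, returning every
    wire value (the 'witness' that a proving system consumes)."""
    w_add = x + y      # addition gate
    w_out = x * w_add  # multiplication gate: x * (x + y)
    return {"x": x, "y": y, "add": w_add, "out": w_out}

print(evaluate_circuit(3, 4))  # {'x': 3, 'y': 4, 'add': 7, 'out': 21}
```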
(This article belongs to the Special Issue UAV Secure Communication for IoT Applications)
Show Figures

Figure 1
<p>Header and body structure of a block in a typical blockchain.</p>
Full article ">Figure 2
<p>This circuit takes as input x, y and computes the result of <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>(</mo> <mi>x</mi> <mo>+</mo> <mi>y</mi> <mo>)</mo> </mrow> </semantics></math> using the addition and multiplication gates.</p>
Full article ">Figure 3
<p>Proposed architecture for private UAV communications utilizing the zk-SNARK algorithm.</p>
Full article ">Figure 4
<p>A comprehensive UML sequence diagram illustrating the entire process of UAV authentication and operation area verification using the zk-SNARK algorithm.</p>
Full article ">Figure 5
<p>Proof file, as was generated by Zokrates using the Groth16 scheme.</p>
Full article ">Figure 6
<p>Verification smart contract deployed on Sepolia testnet.</p>
Full article ">Figure 7
<p>The main smart contract including the verifier.sol, as given by the Algorithm 2, deployed on the Sepolia testnet.</p>
Full article ">Figure 8
<p>Verification transaction for the UAV authentication. The red line outlines a call to the verifyProof function within the UAVVerification smart contract.</p>
Full article ">Figure 9
<p>Smart contract’s interface for UAV authentication.</p>
Full article ">Figure 10
<p>An example where the UAV generates a valid proof for its location. The boolean value <tt>true</tt> returned by the smart contract, as highlighted by the red line in the figure, confirms the validity of the proof.</p>
Full article ">Figure 11
<p>An example where the proof is not valid. The boolean value <tt>false</tt> returned by the smart contract, as highlighted by the red line in the figure, indicates that the proof is invalid.</p>
Full article ">Figure 12
<p>Cost for deploying and interacting with the proposed SCs in Ethereum Sepolia testnet.</p>
Full article ">Figure 13
<p>Power consumption over time for CPU operations in zk-SNARKs.</p>
Full article ">Figure 14
<p>CPU utilization over time for CPU operations in zk-SNARKS.</p>
Full article ">
16 pages, 1521 KiB  
Article
A Novel End-to-End Deep Learning Framework for Chip Packaging Defect Detection
by Siyi Zhou, Shunhua Yao, Tao Shen and Qingwang Wang
Sensors 2024, 24(17), 5837; https://doi.org/10.3390/s24175837 - 8 Sep 2024
Viewed by 1445
Abstract
As semiconductor chip manufacturing technology advances, chip structures are becoming more complex, leading to an increased likelihood of void defects in the solder layer during packaging. However, identifying void defects in packaged chips remains a significant challenge due to the complex chip background, varying defect sizes and shapes, and blurred boundaries between voids and their surroundings. To address these challenges, we present a deep-learning-based framework for void defect segmentation in chip packaging. The framework consists of two main components: a solder region extraction method and a void defect segmentation network. The solder region extraction method includes a lightweight segmentation network and a rotation correction algorithm that eliminates background noise and accurately captures the solder region of the chip. The void defect segmentation network is designed for efficient and accurate defect segmentation. To cope with the variability of void defect shapes and sizes, we propose a Mamba model-based encoder that uses a visual state space module for multi-scale information extraction. In addition, we propose an interactive dual-stream decoder that uses a feature correlation cross gate module to fuse the streams’ features to improve their correlation and produce more accurate void defect segmentation maps. The effectiveness of the framework is evaluated through quantitative and qualitative experiments on our custom X-ray chip dataset. Furthermore, the proposed void defect segmentation framework for chip packaging has been applied to a real factory inspection line, achieving an accuracy of 93.3% in chip qualification. Full article
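Void-defect segmentation maps like those in Figure 7 are typically scored against ground-truth masks with overlap metrics; the abstract does not name its metric, so the Dice/IoU sketch below is illustrative only:

```python
def dice_iou(pred, truth):
    """Dice coefficient and IoU between two binary masks (flat 0/1 lists)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# Hypothetical 6-pixel masks: predicted void region vs. ground truth.
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
print(dice_iou(pred, truth))  # (0.6666666666666666, 0.5)
```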
(This article belongs to the Section Fault Diagnosis & Sensors)
Show Figures

Figure 1
<p>(<b>a</b>) X-ray inspection machine. (<b>b</b>) The schematic diagram of X-ray image acquisition principle.</p>
Full article ">Figure 2
<p>The raw X-ray images obtained by X-ray inspection machine.</p>
Full article ">Figure 3
<p>The overall architecture of RSALite-UNet.</p>
Full article ">Figure 4
<p>The architecture of our proposed DM-UNet.</p>
Full article ">Figure 5
<p>(<b>a</b>) VSS Block. (<b>b</b>) FCCG Module.</p>
Full article ">Figure 6
<p>Visualization results of chip cover plate segmentation.</p>
Full article ">Figure 7
<p>Visualization results of chip void defect segmentation.</p>
Full article ">Figure 8
<p>The schematic of chip qualification judgment.</p>
Full article ">
23 pages, 21985 KiB  
Article
Impact of Land Use and Land Cover (LULC) Changes on Carbon Stocks and Economic Implications in Calabria Using Google Earth Engine (GEE)
by Yasir Hassan Khachoo, Matteo Cutugno, Umberto Robustelli and Giovanni Pugliano
Sensors 2024, 24(17), 5836; https://doi.org/10.3390/s24175836 - 8 Sep 2024
Viewed by 1241
Abstract
Terrestrial ecosystems play a crucial role in global carbon cycling by sequestering carbon from the atmosphere and storing it primarily in living biomass and soil. Monitoring terrestrial carbon stocks is essential for understanding the impacts of changes in land use on carbon sequestration. This study investigates the potential of remote sensing techniques and the Google Earth Engine to map and monitor changes in the forests of Calabria (Italy) over the past two decades. Using satellite-sourced Corine land cover datasets and the InVEST model, changes in Land Use Land Cover (LULC) and carbon concentrations are analyzed, providing insights into the carbon dynamics of the region. Furthermore, cellular automata and Markov chain techniques are used to simulate the future spatial and temporal dynamics of LULC. The results reveal notable fluctuations in LULC; specifically, settlement and bare land have expanded at the expense of forested and grassland areas. These land use and land cover changes significantly reduced the overall carbon stocks in Calabria between 2000 and 2024, resulting in notable economic impacts. The region experienced periods of both decline and growth in carbon concentration, with overall losses resulting in economic impacts up to EUR 357.57 million and carbon losses equivalent to 6,558,069.68 Mg of CO2 emissions during periods of decline. Conversely, during periods of carbon gain, the economic benefit reached EUR 41.26 million, with sequestered carbon equivalent to 756,919.47 Mg of CO2 emissions. This research aims to highlight the critical role of satellite data in enhancing our understanding and development of comprehensive strategies for managing carbon stocks in terrestrial ecosystems. Full article
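The CO2-equivalent figures in the abstract follow from the standard 44/12 molecular-mass ratio between CO2 and elemental carbon. A sketch of that conversion with a hypothetical carbon price (the price and the function name are ours, not the paper's):

```python
def carbon_loss_impact(carbon_mg: float, price_eur_per_mg_co2: float):
    """Convert a carbon-stock change (Mg C) into CO2-equivalent mass
    (Mg CO2) and a monetary value at a given carbon price."""
    co2e = carbon_mg * 44.0 / 12.0  # molecular-mass ratio of CO2 to C
    return co2e, co2e * price_eur_per_mg_co2

# Hypothetical: 3 Mg of carbon lost, priced at EUR 50 per Mg CO2.
co2e, eur = carbon_loss_impact(3.0, 50.0)
print(co2e, eur)  # 11.0 550.0
```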
(This article belongs to the Special Issue Metrology for Living Environment 2024)
Show Figures

Figure 1
<p>Location of the AOI: The Italian peninsula with regional boundaries on the left, and a detailed, zoomed-in view of Calabria on the right, reporting its provinces and Digital Elevation Model (DEM).</p>
Full article ">Figure 2
<p>Methodological flowchart: the preprocessing stage (yellow) uses GEE for data clipping and reclassification. The prediction stage (purple) employs the CA–Markov model in TerrSet to generate future land use maps. The carbon assessment stage (green) utilizes InVEST to estimate carbon stocks based on land use maps. Each color corresponds to a different software used in that stage.</p>
Full article ">Figure 3
<p>LULC maps of the AOI.</p>
Full article ">Figure 4
<p>Carbon storage maps of the AOI.</p>
Full article ">
26 pages, 14062 KiB  
Article
Off-Grid Underwater Acoustic Source Direction-of-Arrival Estimation Method Based on Iterative Empirical Mode Decomposition Interval Threshold
by Chuanxi Xing, Guangzhi Tan and Saimeng Dong
Sensors 2024, 24(17), 5835; https://doi.org/10.3390/s24175835 - 8 Sep 2024
Viewed by 789
Abstract
To solve the problem that the hydrophone arrays are disturbed by ocean noise when collecting signals in shallow seas, resulting in reduced accuracy and resolution of target orientation estimation, a direction-of-arrival (DOA) estimation algorithm based on iterative EMD interval thresholding (EMD-IIT) and off-grid sparse Bayesian learning is proposed. Firstly, the noisy signal acquired by the hydrophone array is denoised by the EMD-IIT algorithm. Secondly, the singular value decomposition is performed on the denoised signal, and then an off-grid sparse reconstruction model is established. Finally, the maximum a posteriori probability of the target signal is obtained by the Bayesian learning algorithm, and the DOA estimate of the target is derived to achieve the orientation estimation of the target. Simulation analysis and sea trial data results show that the algorithm achieves a resolution probability of 100% at an azimuthal separation of 8° between adjacent signal sources. At a low signal-to-noise ratio of −9 dB, the resolution probability reaches 100%. Compared with the conventional MUSIC-like and OGSBI-SVD algorithms, this algorithm can effectively eliminate noise interference and provides better performance in terms of localization accuracy, algorithm runtime, and algorithm robustness. Full article
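The core of EMD interval thresholding is that each intrinsic mode function (IMF) is split at its zero crossings, and an interval is zeroed only when its extremum stays below the threshold, so noise-dominated oscillations are removed whole rather than clipped sample by sample. A simplified single-IMF sketch of that step (the full EMD-IIT algorithm iterates this over all IMFs and recombines them):

```python
def interval_threshold(imf, thresh):
    """Zero out each zero-crossing interval of an IMF whose extremum
    magnitude stays below `thresh` -- the core step of EMD interval
    thresholding, shown here on a single IMF."""
    out = list(imf)
    start = 0
    for i in range(1, len(imf) + 1):
        # An interval ends at a sign change or at the end of the signal.
        if i == len(imf) or imf[i] * imf[start] < 0:
            if max(abs(v) for v in imf[start:i]) < thresh:
                for j in range(start, i):
                    out[j] = 0.0
            start = i
    return out

# Low-amplitude (noise) intervals are removed whole; large ones survive.
noisy_imf = [0.1, 0.2, -1.5, -2.0, 0.05, 0.02, -3.0]
print(interval_threshold(noisy_imf, 0.5))
# [0.0, 0.0, -1.5, -2.0, 0.0, 0.0, -3.0]
```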
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1
<p>Model diagram of array received signal.</p>
Full article ">Figure 2
<p>Iteration flow chart of EMD algorithm.</p>
Full article ">Figure 3
<p>Flowchart of EMD-IIT algorithm.</p>
Full article ">Figure 4
<p>Flowchart of algorithm for iterative EMD interval thresholding and off-grid sparse Bayesian learning.</p>
Full article ">Figure 5
<p>The time–frequency spectrum of the original signal.</p>
Full article ">Figure 6
<p>The time–frequency spectrum of each array when the noisy signal is received. (<b>a</b>) The time–frequency spectrum of the first to the fourth array when the noisy signal is received. (<b>b</b>) The time–frequency spectrum of the fifth to the eighth array when the noisy signal is received.</p>
Full article ">Figure 7
<p>The time–frequency spectrum of each array after EMD-IIT denoising. (<b>a</b>) The time–frequency spectrum of the first to the fourth array after EMD-IIT denoising. (<b>b</b>) The time–frequency spectrum of the fifth to the eighth array after EMD-IIT denoising.</p>
Full article ">Figure 8
<p>The spatial power spectrum of three algorithms.</p>
Full article ">Figure 9
<p>RMSE vs. number of Monte Carlo trials.</p>
Full article ">Figure 10
<p>RMSE vs. signal-to-noise ratios.</p>
Full article ">Figure 11
<p>RMSE vs. snaps.</p>
Full article ">Figure 12
<p>Variation plots of RMSE with signal-to-noise ratios under different grid distances.</p>
Full article ">Figure 13
<p>Spatial power spectrum of compact sound source.</p>
Full article ">Figure 14
<p>Discriminative probabilities at different DOA intervals.</p>
Full article ">Figure 15
<p>Discriminative probabilities at different signal-to-noise ratios.</p>
Full article ">Figure 16
<p>Runtimes at different grid spacings.</p>
Full article ">Figure 17
<p>The profile of sound speed.</p>
Full article ">Figure 18
<p>The transmitted signal.</p>
Full article ">Figure 19
<p>The time–frequency spectrum of the sound source.</p>
Full article ">Figure 20
<p>Sea trial deployment diagram.</p>
Full article ">Figure 21
<p>The time–frequency spectrum of each array at 14:57. (<b>a</b>) The time–frequency spectrum of the first to the fourth array. (<b>b</b>) The time–frequency spectrum of the fifth to the eighth array.</p>
Figure 21 Cont.">
Full article ">Figure 22
<p>The time–frequency spectrum of each array at 16:41. (<b>a</b>) The time–frequency spectrum of the first to the fourth array. (<b>b</b>) The time–frequency spectrum of the fifth to the eighth array.</p>
Figure 22 Cont.">
Full article ">Figure 23
<p>Estimation of the spatial power spectrum at the second position. (<b>a</b>) The snapshot count is 512. (<b>b</b>) The snapshot count is 1024.</p>
Full article ">Figure 24
<p>Estimation of the spatial power spectrum at the third position. (<b>a</b>) The snapshot count is 512. (<b>b</b>) The snapshot count is 1024.</p>
Full article ">
25 pages, 1972 KiB  
Article
FL-DSFA: Securing RPL-Based IoT Networks against Selective Forwarding Attacks Using Federated Learning
by Rabia Khan, Noshina Tariq, Muhammad Ashraf, Farrukh Aslam Khan, Saira Shafi and Aftab Ali
Sensors 2024, 24(17), 5834; https://doi.org/10.3390/s24175834 - 8 Sep 2024
Viewed by 1202
Abstract
The Internet of Things (IoT) is a significant technological advancement that allows for seamless device integration and data flow. The development of the IoT has led to the emergence of several solutions in various sectors. However, rapid popularization also brings challenges, and one of the most serious is the security of the IoT, particularly routing attacks in the core network, which may cause severe damage due to information loss. Routing Protocol for Low-Power and Lossy Networks (RPL), a routing protocol used for IoT devices, is faced with selective forwarding attacks. In this paper, we present a federated learning-based detection technique for detecting selective forwarding attacks, termed FL-DSFA. A lightweight model involving the IoT Routing Attack Dataset (IRAD), which comprises Hello Flood (HF), Decreased Rank (DR), and Version Number (VN), is used in this technique to increase the detection efficiency. These attacks threaten the security of the IoT system since they target essential elements of RPL, including control messages, routing topologies, repair procedures, and resources within sensor networks. Binary classification approaches have been used to assess the training efficiency of the proposed model. The training step includes the implementation of machine learning algorithms, including logistic regression (LR), K-nearest neighbors (KNN), support vector machine (SVM), and naive Bayes (NB). The comparative analysis illustrates that this study, with SVM and KNN classifiers, exhibits the highest accuracy during training and achieves the most efficient runtime performance. The proposed system demonstrates exceptional performance, achieving a prediction precision of 97.50%, an accuracy of 95%, a recall rate of 98.33%, and an F1 score of 97.01%. It outperforms the current leading research in this field, with its classification results, scalability, and enhanced privacy. Full article
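In the federated setting described above, clients train locally and only model parameters are aggregated, so raw sensor data never leaves the device. A minimal FedAvg-style aggregation sketch (the two-client parameter vectors and sample counts are hypothetical):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg-style aggregation: average each model parameter across
    clients, weighted by the number of local training samples."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[k] * n for w, n in zip(client_weights, client_sizes)) / total
        for k in range(dim)
    ]

# Two hypothetical clients with 2-parameter local models.
local_models = [[0.25, 1.0], [0.75, 3.0]]
sample_counts = [100, 300]
print(fed_avg(local_models, sample_counts))  # [0.625, 2.5]
```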
Show Figures

Figure 1
<p>IoT network.</p>
Full article ">Figure 2
<p>Research framework.</p>
Full article ">Figure 3
<p>Feature importance from Linear Discriminant Analysis (LDA).</p>
Full article ">Figure 4
<p>KNN training model.</p>
Full article ">Figure 5
<p>KNN confusion matrix.</p>
Full article ">Figure 6
<p>LR training model.</p>
Full article ">Figure 7
<p>LR Confusion Matrix.</p>
Full article ">Figure 8
<p>SVM Training Model.</p>
Full article ">Figure 9
<p>SVM Confusion Matrix.</p>
Full article ">Figure 10
<p>NB training model.</p>
Full article ">Figure 11
<p>NB confusion matrix.</p>
Full article ">Figure 12
<p>Global models’ performance on test data.</p>
Full article ">Figure 13
<p>Global models’ performance on validation data.</p>
Full article ">
11 pages, 2723 KiB  
Article
Validity of Valor Inertial Measurement Unit for Upper and Lower Extremity Joint Angles
by Jacob Smith, Dhyey Parikh, Vincent Tate, Safeer Farrukh Siddicky and Hao-Yuan Hsiao
Sensors 2024, 24(17), 5833; https://doi.org/10.3390/s24175833 - 8 Sep 2024
Viewed by 1000
Abstract
Inertial measurement units (IMUs) are increasingly utilized to capture biomechanical measures such as joint kinematics outside traditional biomechanics laboratories. These wearable sensors have been proven to help clinicians and engineers monitor rehabilitation progress, improve prosthesis development, and record human performance in a variety of settings. The Valor IMU aims to offer a portable motion capture alternative to provide reliable and accurate joint kinematics when compared to industry gold standard optical motion capture cameras. However, IMUs can have disturbances in their measurements caused by magnetic fields, drift, and inappropriate calibration routines. Therefore, the purpose of this investigation is to validate the joint angles captured by the Valor IMU in comparison to an optical motion capture system across a variety of movements. Our findings showed mean absolute differences between the Valor IMU and Vicon motion capture across all subjects’ joint angles and tasks ranging from 1.81 to 17.46 degrees; root mean squared errors ranged from 1.89 to 16.62 degrees, and intraclass correlation coefficient agreements ranged from 0.57 to 0.99. The results in the current paper further promote the usage of the IMU system outside traditional biomechanical laboratories. Future examinations of this IMU should include smaller, modular IMUs with non-slip Velcro bands and further validation regarding transverse plane joint kinematics such as joint internal/external rotations. Full article
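The validity metrics reported above (mean absolute difference and root mean squared error between IMU and optical joint angles) can be sketched as follows, with hypothetical knee-flexion samples standing in for the study's data:

```python
import math

def mad_rmse(imu, mocap):
    """Mean absolute difference and RMSE between two joint-angle series
    sampled at the same instants (degrees)."""
    diffs = [a - b for a, b in zip(imu, mocap)]
    mad = sum(abs(d) for d in diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mad, rmse

# Hypothetical knee-flexion samples (degrees): IMU vs. optical capture.
imu_deg   = [10.0, 35.0, 62.0, 41.0]
mocap_deg = [12.0, 33.0, 65.0, 40.0]
mad, rmse = mad_rmse(imu_deg, mocap_deg)
print(round(mad, 3), round(rmse, 3))  # 2.0 2.121
```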
(This article belongs to the Special Issue Advanced Wearable Sensor for Human Movement Monitoring)
Show Figures

Figure 1
<p>Representative illustration of Valor inertial measurement unit placement. Ten Valor Velcro straps are used to secure each Valor IMU to the subjects’ ankles, shanks, thighs, waist, trunk, and upper arms, as marked by the red circles.</p>
Full article ">Figure 2
<p>Left and right ankle, knee, and hip joint angles are represented by the average curves between Vicon motion capture (V3D), shown in red, and the Valor inertial measurement unit (Valor), shown in blue. Data from a representative subject performing the vertical jump task.</p>
Full article ">Figure 3
<p>Left and right shoulder joint abduction and flexion angles are represented by the average curves between Vicon motion capture (V3D) shown in red, and the Valor inertial measurement unit (Valor), shown in blue. Data from a representative subject performing the vertical jump task.</p>
Figure 3 Cont.">
Full article ">
18 pages, 2308 KiB  
Article
Impact of a Precision Intervention for Vascular Health in Middle-Aged and Older Postmenopausal Women Using Polar Heart Rate Sensors: A 24-Week RCT Study Based on the New Compilation of Tai Chi (Bafa Wubu)
by Xiaona Wang, Yanli Han, Haojie Li, Xin Wang and Guixian Wang
Sensors 2024, 24(17), 5832; https://doi.org/10.3390/s24175832 - 8 Sep 2024
Cited by 1 | Viewed by 1023
Abstract
(1) Background: This study utilized a 24-week intervention incorporating heart rate sensors for real-time monitoring of intervention training, aiming to comprehensively assess the effects of Tai Chi on vascular endothelial function, atherosclerosis progression, and lipid metabolism. The insights gained may inform personalized non-pharmacological interventions to enhance the management of cardiovascular health in this population to provide sustainable benefits and improve quality of life. (2) Methods: Forty postmenopausal middle-aged and elderly women were randomly assigned to an exercise or control group. The exercise group underwent a 24-week Tai Chi (BaFa WuBu) training intervention with real-time heart rate monitoring using Polar sensors. Pre- and post-intervention assessments included body composition, blood pressure, vascularity, and blood parameters measured with the Inbody 720, Vascular Endothelial Function Detector, and Arteriosclerosis. Data were analyzed using SPSS 26.0 and mixed-design ANOVA to assess the effects of time, group, and their interactions on study outcomes. (3) Results: After the 24-week Tai Chi (BaFa WuBu) intervention, compared with the control group, systolic blood pressure in the exercise group was significantly lower (p < 0.05), and the difference between left and right arm pulse pressure, left and right ankle mean arterial pressure, left and right side baPWV, left and right side ABI, TC, TG, LDL, and plasma viscosity were all very significantly lower (p < 0.01), and the diastolic blood pressure was significantly higher (p < 0.05).
Compared with baseline values in the exercise group, systolic blood pressure, right and left arm pulse pressure difference, right and left ankle mean arterial pressure, right and left side baPWV, right and left side ABI, TC, TG, LDL, and plasma viscosity decreased very significantly (p < 0.01) and diastolic blood pressure and FMD increased very significantly (p < 0.01) in the exercise group after the intervention. (4) Conclusions: In our study, a 24-week Tai Chi (BaFa WuBu) program significantly improved vascular health in middle-aged and older postmenopausal women. This simplified Tai Chi form is gentle and effective, ideal for older adults. Regular practice led to reduced vascular obstruction, improved lipid metabolism, and enhanced vascular endothelial function, crucial for preventing vascular diseases. The real-time heart rate sensors used were pivotal, enabling precise monitoring and adjustment of exercise intensity, thereby enhancing the study’s scientific rigor and supporting Tai Chi (BaFa WuBu) as a beneficial therapeutic exercise. Full article
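Real-time heart rate monitoring of this kind is typically used to keep participants inside a prescribed intensity zone. One common prescription is the Karvonen (heart-rate-reserve) formula, sketched below as an illustration only (the paper does not state which formula it used, and all numbers are hypothetical):

```python
def karvonen_zone(age: int, resting_hr: float, lo: float, hi: float):
    """Target heart-rate zone from the Karvonen (heart-rate-reserve)
    formula: HR_target = HR_rest + intensity * (HR_max - HR_rest),
    with HR_max approximated as 220 - age."""
    hr_max = 220 - age
    reserve = hr_max - resting_hr
    return (resting_hr + lo * reserve, resting_hr + hi * reserve)

# Hypothetical participant: age 60, resting HR 70 bpm, 50-75% intensity.
print(karvonen_zone(60, 70.0, 0.5, 0.75))  # (115.0, 137.5)
```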
Show Figures

Figure 1
<p>Polar heart rate sensor.</p>
Full article ">Figure 2
<p>Effects of 24 weeks of Tai Chi (BaFa WuBu) intervention on systolic and diastolic blood pressure in middle-aged and elderly postmenopausal women, with the control group in blue and the exercise group in pink.</p>
Full article ">Figure 3
<p>Effects of 24 weeks of Tai Chi (BaFa WuBu) intervention on pulse pressure difference in middle-aged and elderly postmenopausal women, control group in blue and exercise group in pink.</p>
Full article ">Figure 4
<p>Effects of 24 weeks of Tai Chi (BaFa WuBu) intervention on mean arterial pressure in middle-aged and elderly postmenopausal women, blue is control group, pink is exercise group.</p>
Full article ">Figure 5
<p>Effects of 24 weeks of Tai Chi (BaFa WuBu) intervention on vascular stiffness in middle-aged and older postmenopausal women. baPWV is the arm–ankle pulse wave conduction velocity and ABI is the ankle–brachial index; blue is the control group and pink is the exercise group.</p>
Full article ">Figure 6
<p>Effects of 24 weeks of Tai Chi (BaFa WuBu) intervention on vascular endothelial function in middle-aged and older postmenopausal women, FMD is brachial artery flow-mediated vasodilatation function; control group in blue, exercise group in pink.</p>
Full article ">Figure 7
<p>Effects of 24 weeks of Tai Chi (BaFa WuBu) intervention on lipids in middle-aged and older postmenopausal women; TC is total cholesterol, TG is triglyceride, LDL is low-density lipoprotein, and HDL is high-density lipoprotein; blue is the control group and pink is the exercise group.</p>
Full article ">Figure 8
<p>Effect of 24 weeks of Tai Chi (BaFa WuBu) intervention on plasma viscosity in middle-aged and older postmenopausal women, control group in blue and exercise group in pink.</p>
Full article ">
20 pages, 5766 KiB  
Article
High-Accuracy Calibration Method of a Thermal Camera Using Two Reference Blackbodies
by Tomasz Sosnowski, Mariusz Kastek, Krzysztof Sawicki, Andrzej Ligienza, Sławomir Gogler and Bogusław Więcek
Sensors 2024, 24(17), 5831; https://doi.org/10.3390/s24175831 - 8 Sep 2024
Viewed by 3145
Abstract
Body temperature is one of the most important physiological parameters used to assess a person's basic vital functions. In medical practice, various measuring instruments are used to measure it, such as liquid thermometers, electronic thermometers, non-contact ear thermometers, and non-contact forehead thermometers. These techniques either require attaching sensors to the person or, in the case of non-contact thermometers, operate only over short distances and force the person into a specific position during the measurement. As a result, it is practically impossible to measure the body temperature of a moving person with these methods. A thermal imaging camera can effectively measure the temperature of moving objects, but the remote measurement of human body temperature with a thermal camera is affected by many factors that are difficult to control. Accurate remote measurement of human body temperature therefore requires a measurement system that implements a specialized temperature determination algorithm. This article presents a model of a measurement system that facilitates the development of a highly accurate temperature measurement method. The model's parameters were determined on a calibration stand, and the correct operation of the developed method and the effectiveness of the temperature measurement were confirmed by tests on a test stand using reference radiation sources. Full article
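The core idea behind calibrating a camera against two reference blackbodies is a two-point fit: with the blackbodies held at two known temperatures, the camera's output at each gives a gain and offset mapping signal to temperature. The paper's actual method involves a full radiometric scene model; the sketch below shows only this basic two-point idea, with entirely illustrative numbers:

```python
def two_point_calibration(s1, t1, s2, t2):
    """Fit a linear signal-to-temperature map from two reference blackbodies.

    s1, s2: camera output (e.g. digital counts) while viewing blackbodies
    held at known temperatures t1, t2 (°C). Returns (gain, offset) such
    that temperature ≈ gain * signal + offset.
    """
    gain = (t2 - t1) / (s2 - s1)
    offset = t1 - gain * s1
    return gain, offset

# Illustrative: blackbodies at 30 °C and 40 °C produce 8000 and 9000 counts.
gain, offset = two_point_calibration(8000.0, 30.0, 9000.0, 40.0)
print(gain * 8500.0 + offset)  # a target reading of 8500 counts maps to ~35 °C
```

Because both references are in the scene, drifts common to the whole optical path (e.g. housing temperature changes) are partly absorbed into the per-frame gain and offset, which is what makes the two-blackbody arrangement attractive for high-accuracy measurement.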
Figure 1: Block diagram of an assembly for remote temperature measurement using a thermal imaging camera.
Figure 2: General concept of a detector module broken down into basic functional systems.
Figure 3: Infrared radiation model of the scene in thermographic measurements.
Figure 4: Geometric relationships for the radiation transfer between two surfaces dA1 and dA2.
Figure 5: Schematic of the thermal imaging camera model.
Figure 6: Block diagram of the station for determining the parameters and calibration of the radiometric thermal camera.
Figure 7: View of the workstation for determining the parameters and calibration of the radiometric thermal camera.
Figure 8: View of the thermal imaging camera inside the climate chamber.
Figure 9: View of blackbodies at the test stand (a) and an example of a thermal image recorded at the stand for testing the accuracy of temperature measurement (b).
Figure 10: Microbolometer thermal camera with the thermal sensors (a) and values of the configuration factors F_dD−Q between the surface of the detector array D and the surface Q (b).
Figure 11: View of the measurement situation with four test blackbodies.
Figure 12: Plot of the measured temperature for the test blackbodies.
Figure 13: Absolute error of temperature measurement for test blackbodies.
Figure 14: Relative error of temperature measurement for test blackbodies.
Figure 15: Expanded uncertainty with 95% confidence level for test blackbodies.
Figure 16: Radiant power of the scene (P_sc, blue) and the summed power of all other interfering signals (P_sh, red) as a function of temperature.