Sensors, Volume 24, Issue 12 (June-2 2024) – 332 articles

Cover Story: Interest in developing portable and personal air quality measurement devices has grown, especially due to the need for ventilation after COVID-19. Monitoring hazardous chemicals is crucial for safety standards and human welfare. Public institutions use precise but costly equipment that requires constant calibration and maintenance by qualified personnel. These reference stations offer low spatial resolution, since their high cost limits their number, and low temporal resolution, with a rate of only a few samples per hour. This paper presents a home-designed and developed personal device (smartwatch) with an LCD screen, lithium battery, Bluetooth, and MEMS-based gas and ambient sensors, offering personalized, real-time air quality information in an attractive and modern device design.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open it.
10 pages, 1749 KiB  
Article
The Effect of Caffeine on Movement-Related Cortical Potential Morphology and Detection
by Mads Jochumsen, Emma Rahbek Lavesen, Anne Bruun Griem, Caroline Falkenberg-Andersen and Sofie Kirstine Gedsø Jensen
Sensors 2024, 24(12), 4030; https://doi.org/10.3390/s24124030 - 20 Jun 2024
Viewed by 882
Abstract
Movement-related cortical potential (MRCP) is observed in EEG recordings prior to a voluntary movement. It has been used, for example, for quantifying motor learning and for brain–computer interfaces (BCIs). The MRCP amplitude is affected by various factors, but the effect of caffeine is underexplored. The aim of this study was to investigate whether a cup of coffee with 85 mg of caffeine modulated the MRCP amplitude and the classification of MRCPs versus idle activity, which estimates BCI performance. Twenty-six healthy participants performed 2 × 100 ankle dorsiflexions separated by a 10-min break before a cup of coffee was consumed, followed by another 100 movements. EEG was recorded during the movements and divided into epochs, which were averaged to extract three average MRCPs that were compared. Idle-activity epochs were also extracted. Features were extracted from the epochs and classified using random forest analysis. The MRCP amplitude did not change after consuming caffeine. There was a slight increase of two percentage points in the classification accuracy after consuming caffeine. In conclusion, a cup of coffee with 85 mg of caffeine does not affect the MRCP amplitude and improves MRCP-based BCI performance only slightly. The findings suggest that drinking coffee is only a minor confounder in MRCP-related studies.
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces and Sensors)
Figure 1. (A) Channel locations of the recorded EEG channels. (B) Continuous EEG and EMG were recorded while visually cued ankle dorsiflexions were performed. (C) Overview of the progression of the experiment.
Figure 2. The continuous EEG (top) and EMG (bottom) are displayed for three movements. The dashed black line in the EMG plot shows the participant-specific threshold. The shaded areas indicate the extracted epochs.
Figure 3. Grand average plot of idle brain activity, the two recording sessions prior to caffeine intake, and the recording session post caffeine intake. The solid lines indicate the mean across the participants, and the shaded area is the standard error across the participants.
15 pages, 6042 KiB  
Article
A Ground-Based Electrostatically Suspended Accelerometer
by Hanxiao Liu, Xiaoxia He, Chenhui Wu and Rong Zhang
Sensors 2024, 24(12), 4029; https://doi.org/10.3390/s24124029 - 20 Jun 2024
Viewed by 632
Abstract
In this study, we developed an electrostatically suspended accelerometer (ESA) specifically designed for ground use. To ensure sufficient overload capacity and minimize noise resulting from a high suspension voltage, we introduced a proof mass design featuring a hollow, thin-walled cylinder with a thin flange fixed at the center, offering the highest surface-area-to-mass ratio among various typical proof mass structures. The preload voltage is applied directly to the proof mass via a gold wire, effectively reducing the maximum supply voltage required for suspension. The arrangement of suspension electrodes, offering five degrees of freedom and minimizing cross-talk, was designed to prioritize simplicity and maximize the utilization of electrode area for suspension purposes. The displacement detection and electrostatic suspension force were accurately modeled based on the structure. A controller incorporating an inverse winding mechanism was developed and simulated using Simulink. The simulation results demonstrate the successful completion of the stable initial levitation process and suspension under ±1 g overload.
(This article belongs to the Special Issue Advanced Inertial Sensors: Advances, Challenges and Applications)
Figure 1. Schematic sketch of the electrostatic suspension system overview and its operation principle in one DOF.
Figure 2. Typical proof mass structural diagrams: (a) hollow sphere; (b) hollow hexahedron; (c) six thin hollow plates; (d) hollow cylinder; (e) hollow cylinder with outer flange; (f) hollow cylinder with inner flange.
Figure 3. Schematic diagram of the proof mass structure.
Figure 4. Schematic diagram of the electrode structure.
Figure 5. The numbering definition of electrodes: (a) numbering definition of planar electrodes in the disk direction; (b) numbering definition of cylindrical electrodes in the cylinder direction.
Figure 6. Schematic diagram of displacement detection: (a) in the Z DOF; (b) in the X & Y DOFs; (c) in the θ & ϕ DOFs.
Figure 7. Schematic diagram of the suspension control principle: voltage load and electrostatic force suspension scheme (a) in the Z DOF; (b) in the X & Y DOFs; (c) in the θ & ϕ DOFs.
Figure 8. Schematic diagram of a single-degree-of-freedom electrostatic suspension.
Figure 9. Bode diagram of the open-loop system.
Figure 10. Controller with inverse "winding".
Figure 11. Simulink model of the system structure.
Figure 12. Simulink simulation results of the initial levitation process: (a) PIDPL alone; (b) PIDPL with inverse "winding".
Figure 13. Simulink simulation results of the suspension stage: (a) result in the Z DOF; (b) in the X & Y DOFs; (c) in the θ & ϕ DOFs.
Figure 14. Step response simulation results in the robustness analysis: (a) result in the Z DOF; (b) in the X & Y DOFs; (c) in the θ & ϕ DOFs.
Figure 15. Initial levitation simulation results in the robustness analysis: (a) result in the Z DOF; (b) in the X & Y DOFs.
15 pages, 5391 KiB  
Article
Determinants of Maximum Magnetic Anomaly Detection Distance
by Hangcheng Li, Jiaming Luo, Jiajun Zhang, Jing Li, Yi Zhang, Wenwei Zhang and Mingji Zhang
Sensors 2024, 24(12), 4028; https://doi.org/10.3390/s24124028 - 20 Jun 2024
Viewed by 805
Abstract
The maximum detection distance is usually the primary concern of magnetic anomaly detection (MAD). Intuition tells us that a larger object size, stronger magnetization, and finer measurement resolution guarantee a greater detectable distance. However, the quantitative relationship between detection distance and the above determinants is seldom studied. In this work, unmanned aerial vehicle-based MAD field experiments are conducted on cargo vessels and NdFeB magnets as typical magnetic objects to give a set of visualized magnetic flux density images. Isometric finite element models are established, calibrated, and analyzed according to the experiment configuration. A maximum detectable distance map as a function of target size and measurement resolution is then obtained from parametric sweeping on an experimentally calibrated finite element analysis model. We find that the logarithm of detectable distance is positively proportional to the logarithm of object size and negatively proportional to the logarithm of resolution, within the ranges of 1 m~500 m and 1 pT~1 μT, respectively. A three-parameter empirical formula (namely, the distance-size-resolution logarithmic relationship) is first developed to determine the most economical sensor configuration for a given detection task, to estimate the maximum detection distance for a given magnetic sensor and object, or to evaluate the minimum detectable object size in a given magnetic anomaly detection scenario.
(This article belongs to the Special Issue Advances in Magnetic Anomaly Sensing Systems)
Figure 1. Unmanned aerial vehicle magnetic anomaly detection schematic for shipwreck rescue.
Figure 2. Magnetic anomaly detection system based on an unmanned aerial vehicle.
Figure 3. Equivalent magnetic noise spectral density of the magnetic sensor.
Figure 4. Cargo ship physical diagram.
Figure 5. The measured results of the |B| distribution of the cargo ship, where black dots are the flight track and data collection points: (a) D1 is the x-z cross-section of the cargo ship, (b) D2 is the y-z cross-section of the cargo ship, (c) L3 is the altitude intersection line of the cross-sections, and (d) diagrammatic sketch of the cargo ship MAD measurement.
Figure 6. NdFeB magnet physical diagram.
Figure 7. The measured results of the magnetic field distribution of the sandwich-structured, self-made NdFeB magnet, in which the white dots indicate flight tracks and data collection points: (a) D1 and (b) D2 are the cross-sectional magnetic field distributions of the magnet, (c) L3 shows the variation of magnetic field intensity along altitude over the magnet, and (d) diagrammatic sketch of the magnet MAD measurement.
Figure 8. FEA simulation model mesh generation and geometric model of the cargo ship.
Figure 9. The simulated results of the |B| distribution of the cargo ship: (a) D1 is the x-z cross-section of the cargo ship, (b) D2 is the y-z cross-section of the cargo ship, (c) L3 is the altitude intersection line of the cross-sections, and (d) diagrammatic sketch of the cargo ship MAD simulation.
Figure 10. The simulated results of the |B| distribution of the NdFeB magnet: (a) D1 is the x-z cross-section, (b) D2 is the y-z cross-section, (c) L3 is the altitude intersection line of the cross-sections, and (d) diagrammatic sketch of the magnet MAD simulation.
Figure 11. Experimentally calibrated FEA results of the size-resolution determinants of maximum detection distance.
Figure 12. Analytical results of the size-resolution determinants of maximum detection distance. The numbers corresponding to the platforms in Table 1 are enclosed within circles, with the adjacent labels indicating each platform's S, R, and D, respectively.
22 pages, 7497 KiB  
Article
Experimental and Numerical Investigation of Bogie Hunting Instability for Railway Vehicles Based on Multiple Sensors
by Biao Zheng, Lai Wei, Jing Zeng and Dafu Zhang
Sensors 2024, 24(12), 4027; https://doi.org/10.3390/s24124027 - 20 Jun 2024
Cited by 3 | Viewed by 1008
Abstract
Bogie hunting instability is one of the common faults in railway vehicles. It not only affects ride comfort but also threatens operational safety. Due to the lower operating speed of metro vehicles, their bogie hunting stability is often overlooked. However, as wheel tread wear increases, metro vehicles with high-conicity wheel–rail contact can also experience bogie hunting instability. In order to enhance the operational safety of metro vehicles, this paper conducts field tests and simulation calculations to study the bogie hunting instability behavior of metro vehicles and proposes corresponding solutions from the perspective of wheel–rail contact relationships. Acceleration and displacement sensors are installed on metro vehicles to collect data, which are processed in real time in 2 s intervals. The lateral acceleration of the frame is analyzed to determine whether bogie hunting instability has occurred. Based on calculated safety indicators, it is determined whether deceleration is necessary to ensure the safety of vehicle operation. For metro vehicles in the later stages of wheel wear (after 300,000 km), the stability of their bogies should be monitored in real time. To improve the stability of metro vehicle bogies while ensuring the longevity of wheelsets, metro vehicle wheel treads should be reprofiled regularly, with a recommended reprofiling interval of 350,000 km.
(This article belongs to the Special Issue Sensors for Real-Time Condition Monitoring and Fault Diagnosis)
Figure 1. Sensor layout points for the field test.
Figure 2. Brake caliper suspended on the frame.
Figure 3. Mounting sensors on the tested vehicle: (a) sensors on the left of wheelset 1; (b) sensor on the brake caliper; (c) sensors on the carbody floor; (d) sensor on the bogie frame.
Figure 4. Field test of bogie hunting: (a) tested metro vehicle and line; (b) interference between the brake caliper and wheelset; (c) tread profile detection; (d) data acquisition equipment.
Figure 5. Investigation of wheel–rail contact nonlinearity: (a) comparison of new and worn wheel profiles; (b) contact conicity.
Figure 6. STEFT spectrum of lateral accelerations: (a) axle box; (b) bogie frame; (c) carbody.
Figure 7. Dynamic evaluation results of the tested vehicle: (a) wheel axle forces; (b) ride index.
Figure 8. Measured displacements of the metro vehicle: (a,b) primary lateral displacements; (c) primary longitudinal displacements; (d) lateral displacements between the caliper and wheelset.
Figure 9. Relative motion between the brake caliper and wheel (red lines indicate the distance between the brake caliper and wheel).
Figure 10. Dynamic evaluation results of the tested vehicle: (a) wheel axle force; (b) lateral ride index; (c) vertical ride index.
Figure 11. Vehicle system model: (a) scheme; nonlinear characteristics of (b) the damper and (c) the secondary lateral stop.
Figure 12. Model of the brake caliper: (a) three-dimensional model; (b) dynamic model.
Figure 13. Comparison between field test and simulation: time history (a) and spectrogram (b) of lateral accelerations of the bogie; (c) primary lateral displacements; (d) lateral displacements between the caliper and wheelset.
Figure 14. Effect of the cardanic stiffness of the caliper on lateral displacements between the caliper and wheelset.
Figure 15. Comparison between new wheel and worn wheel: time history (a) and spectrogram (b) of lateral accelerations of the bogie; (c) primary lateral displacements; (d) lateral displacements between the caliper and wheelset.
19 pages, 11350 KiB  
Article
Preparation of CNT/CNF/PDMS/TPU Nanofiber-Based Conductive Films Based on Centrifugal Spinning Method for Strain Sensors
by Shunqi Mei, Bin Xu, Jitao Wan and Jia Chen
Sensors 2024, 24(12), 4026; https://doi.org/10.3390/s24124026 - 20 Jun 2024
Cited by 3 | Viewed by 994
Abstract
Flexible conductive films are a key component of strain sensors, and their performance directly affects the overall quality of the sensor. However, existing flexible conductive films struggle to maintain high conductivity while simultaneously ensuring excellent flexibility, hydrophobicity, and corrosion resistance, thereby limiting their use in harsh environments. In this paper, a novel method is proposed to fabricate flexible conductive films via centrifugal spinning, generating thermoplastic polyurethane (TPU) nanofiber substrates and employing carbon nanotubes (CNTs) and carbon nanofibers (CNFs) as conductive fillers. These fillers are anchored to the nanofibers through ultrasonic dispersion and impregnation techniques and subsequently modified with polydimethylsiloxane (PDMS). This study focuses on the effect of different ratios of CNTs to CNFs on the film properties. The research demonstrated that at a 1:1 ratio of CNTs to CNFs, with TPU at a 20% concentration and a 2 wt% PDMS solution, the conductive films crafted from these blended fillers exhibited outstanding performance, characterized by high electrical conductivity (31.4 S/m), elongation at break (217.5%), and tensile cycling stability (800 cycles at 20% strain). Furthermore, the nanofiber-based conductive films were tested by attaching them to various parts of the human body. The tests demonstrated that these films effectively respond to motion changes at the wrist, elbow joints, and chest cavity, underscoring their potential as core components of strain sensors.
(This article belongs to the Section Nanosensors)
Figure 1. Flow chart for the preparation of CNT/CNF/PDMS/TPU nanofiber-based conductive films: (a) schematic diagram of the centrifugal spinning equipment; (b) TPU nanofiber film; (c) CNT/CNF/TPU nanofiber film; (d) CNT/CNF/PDMS/TPU nanofiber-based conductive film.
Figure 2. SEM images of nanofiber-based conductive films: (a) pure TPU film; (b) CNT/TPU film; (c,d) CNT/CNF/TPU films; (e,f) CNT/CNF/PDMS/TPU films.
Figure 3. (a) FTIR spectra; (b) XRD diffractograms; (c) TGA curves; and (d) DTG curves for the pure TPU film, CNT/PDMS/TPU film, and CNT/CNF/PDMS/TPU film.
Figure 4. Conductivity of different thin-film samples: (a) CNTs/PDMS/TPU; (b) CNTs:CNFs = 4:1/PDMS/TPU; (c) CNTs:CNFs = 3:2/PDMS/TPU; (d) CNTs:CNFs = 1:1/PDMS/TPU; (e) CNTs:CNFs = 2:3/PDMS/TPU; (f) CNTs:CNFs = 1:4/PDMS/TPU.
Figure 5. Elongation at break of different film samples: (a) CNTs/PDMS/TPU; (b) CNTs:CNFs = 4:1/PDMS/TPU; (c) CNTs:CNFs = 3:2/PDMS/TPU; (d) CNTs:CNFs = 1:1/PDMS/TPU; (e) CNTs:CNFs = 2:3/PDMS/TPU; (f) CNTs:CNFs = 1:4/PDMS/TPU.
Figure 6. Contact angle of different samples: (a) CNTs/TPU; (b) CNTs/PDMS/TPU; (c) CNTs:CNFs = 4:1/PDMS/TPU; (d) CNTs:CNFs = 3:2/PDMS/TPU; (e) CNTs:CNFs = 1:1/PDMS/TPU; (f) CNTs:CNFs = 2:3/PDMS/TPU; (g) CNTs:CNFs = 1:4/PDMS/TPU.
Figure 7. (a) Conductivity and contact angle of CNT/CNF/PDMS/TPU films immersed in an acidic solution (pH = 1) over time. (b) Variation in conductivity and contact angle of CNT/CNF/PDMS/TPU films with the number of cycles at 20% strain.
Figure 8. Resistance change versus tensile strain curves for each sample: (a) CNTs/PDMS/TPU; (b) CNTs:CNFs = 4:1/PDMS/TPU; (c) CNTs:CNFs = 3:2/PDMS/TPU; (d) CNTs:CNFs = 1:1/PDMS/TPU; (e) CNTs:CNFs = 2:3/PDMS/TPU; (f) CNTs:CNFs = 1:4/PDMS/TPU.
Figure 9. Resistance change in each sample within 20 tensile cycles: (a) CNTs/PDMS/TPU; (b) CNTs:CNFs = 4:1/PDMS/TPU; (c) CNTs:CNFs = 3:2/PDMS/TPU; (d) CNTs:CNFs = 1:1/PDMS/TPU; (e) CNTs:CNFs = 2:3/PDMS/TPU; (f) CNTs:CNFs = 1:4/PDMS/TPU.
Figure 10. Peak mean and fluctuation of the resistance change rate for each sample: (a) CNTs/PDMS/TPU; (b) CNTs:CNFs = 4:1/PDMS/TPU; (c) CNTs:CNFs = 3:2/PDMS/TPU; (d) CNTs:CNFs = 1:1/PDMS/TPU; (e) CNTs:CNFs = 2:3/PDMS/TPU; (f) CNTs:CNFs = 1:4/PDMS/TPU.
Figure 11. Resistance change of the CNTs:CNFs = 1:1/PDMS/TPU conductive film within 800 cycles at 20% tensile strain.
Figure 12. (a) Schematic diagram of the strain transducer; (b) wrist test; (c) elbow test; (d) chest breathing test.
Figure 13. CNT/CNF/PDMS/TPU nanofiber-based conductive films applied to human motion. Resistivity change rate curves: (a) wrist motion; (b) elbow joint motion; (c) chest heave detection during breathing.
16 pages, 4237 KiB  
Article
An Integrated LSTM-Rule-Based Fusion Method for the Localization of Intelligent Vehicles in a Complex Environment
by Quan Yuan, Fuwu Yan, Zhishuai Yin, Chen Lv, Jie Hu, Yue Li and Jinhai Wang
Sensors 2024, 24(12), 4025; https://doi.org/10.3390/s24124025 - 20 Jun 2024
Viewed by 734
Abstract
To improve the accuracy and robustness of autonomous vehicle localization in a complex environment, this paper proposes a multi-source fusion localization method that integrates GPS, laser SLAM, and an odometer model. Firstly, fuzzy rules are constructed to accurately analyze the in-vehicle localization deviation and a confidence factor to improve the initial fusion localization accuracy. Then, an odometer model for obtaining the projected localization trajectory is constructed. Considering the high accuracy of the odometer's projected trajectory over short distances, we used the shape of the projected localization trajectory to suppress the initial fusion localization noise and used trajectory matching to obtain an accurate localization. Finally, a Dual-LSTM network is constructed to predict the localization and build an electronic fence that guarantees the safety of the vehicle, while also keeping short-distance localization information updated when the above fusion localization is unreliable. Under the limited computing resources of the vehicle platform, accurate and reliable localization is realized in a complex environment. The proposed method was verified by long-duration operation on the real vehicle platform; compared with the EKF fusion localization method, the average root mean square error of localization was reduced by 66%, reaching centimeter-level localization accuracy.
(This article belongs to the Section Intelligent Sensors)
Figure 1. Localization system architecture.
Figure 2. GPS outputs in a complex environment.
Figure 3. Schematic diagram of the odometry calculation.
Figure 4. Schematic diagram of trajectory matching.
Figure 5. Architecture of the encode–decode mode.
Figure 6. Efficacy explanation of the data processing.
Figure 7. LSTM internal structure.
Figure 8. SV 1.0 plus vehicle platform.
Figure 9. Routing map.
Figure 10. The network prediction result.
Figure 11. The final test of the module.
Figure 12. The detailed comparison results of the method.
Figure 13. The test error of three localization methods.
Figure 14. The test error of two localization methods.
36 pages, 34495 KiB  
Article
A Novel 3D Reconstruction Sensor Using a Diving Lamp and a Camera for Underwater Cave Exploration
by Quentin Massone, Sébastien Druon and Jean Triboulet
Sensors 2024, 24(12), 4024; https://doi.org/10.3390/s24124024 - 20 Jun 2024
Viewed by 797
Abstract
Aquifer karstic structures, due to their complex nature, present significant challenges in accurately mapping their intricate features. Traditional methods often rely on invasive techniques or sophisticated equipment, limiting accessibility and feasibility. In this paper, a new approach is proposed for non-invasive, low-cost 3D reconstruction using a camera that observes the light projection of a simple diving lamp. The method capitalizes on the principles of structured light, leveraging the projection of light contours onto the karstic surfaces. By capturing the resultant light patterns with a camera, three-dimensional representations of the structures are reconstructed. The simplicity and portability of the required equipment make this method highly versatile, enabling deployment in diverse underwater environments. The approach is validated through extensive field experiments conducted in various aquifer karstic settings. The results demonstrate the efficacy of the method in accurately delineating intricate karstic features with remarkable detail and resolution. Furthermore, the non-destructive nature of the technique minimizes disturbance to delicate aquatic ecosystems while providing valuable insights into the subterranean landscape. This innovative methodology not only offers a cost-effective and non-invasive means of mapping aquifer karstic structures but also opens avenues for comprehensive environmental monitoring and resource management. Its potential applications span hydrogeological studies, environmental conservation efforts, and sustainable water resource management practices in karstic terrains worldwide.
Figure 1. Earth's water distribution.
Figure 2. Three-dimensional reconstruction methods in underwater environments.
Figure 3. Flowchart of the camera + projector method.
Figure 4. Diagram of the system consisting of the light projector, represented by a cone of revolution, and the camera observing the light projection on a plane.
Figure 5. Parameterization of the cone using its generatrices. With this parameterization, one can find the closest generatrix to an external point M and thus find the orthogonal projection H of this point on the cone.
Figure 6. Example showing the closest points on the upper part of the cone (in red) to points outside the surface of the cone (in blue).
Figure 7. Example of detecting the contour of a halo of light on a wall using a chessboard to estimate the plane pose. The contour is in green and the adjusted ellipse in blue.
Figure 8. Method for obtaining several elliptical sections of the cone.
Figure 9. Representation of the camera/cone pair in an orthogonal-projection 2D view where the projection axis is perpendicular to z_P and z_C. The relative orientation of the cone with the camera can be defined here by a single angle called μ. This is only true if one considers that the camera has an infinite field of view and is therefore symmetrical about its axis defined by z_C (the cone is basically symmetrical about its axis defined by z_P = d).
Figure 10. Representation of the plane Π_g when it is not tangent to the cone, passing through the optical center and the two generatrices g_1 and g_2.
Figure 11. Representation of Π_gA (a) and Π_gB (b), the only two planes tangent to the cone passing through the optical center.
Figure 12. Visualization of the vectors t and s and their angles.
Figure 13. Representation, in the image plane and on the cone, of the areas containing the first intersections (in cyan) and the second intersections (in magenta).
Figure 14. Representation, in the image plane and on the cone, of the areas containing the first intersections (in cyan) and the second intersections (in magenta) in the case where part of the cone (in brown) is behind the camera.
Figure 15. Illustration of how to obtain p_ref, the reference point in the image plane which indicates the relative position of the cone and which is used to obtain the curves C′_1 and C′_2.
Figure 16. The camera (represented by a pyramid) and the projector (represented by a cone) arranged inside the model of our gallery to simulate a contour of light in an image. This contour is obtained by projecting the intersections between the generatrices of the cone and the model into the image.
Figure 17. Determining the points A′ and B′ for our previous example contour.
Figure 18. Determining the curves C′_1 and C′_2 for the previous example contour.
Figure 19. The reconstructed 3D points added to the 3D scene of Figure 16, with the first intersections in cyan and the second intersections in magenta.
Figure 20. The diving lamp and its angle of aperture.
Figure 21. The system used, consisting of a camera and a conical-shaped projector, to which a black cylinder has been added.
Figure 22. Image of one of the eight chessboard patterns taken at different poses for camera calibration using Zhang's method.
Figure 23. Three-dimensional view of the chessboard in its eight poses in relation to the camera (X, Y, Z axes in red, green, blue).
Figure 24. Two images of light projected onto the wall, from the five where the axis of the projector is almost orthogonal to the wall (images 1 and 5).
Figure 25. Three-dimensional view of the elliptical sections, the camera, and the chessboard pattern. In the left image, the elements are expressed in the pattern frame; in the right image, in the camera frame (X, Y, Z axes in red, green, blue).
Figure 26. Representation of the estimated cone in relation to the elliptical sections, where the color of the points depends on the orthogonal distance to the cone (X, Y, Z axes in red, green, blue).
Figure 27. Two images in the aqueduct, at different distances, with light contours.
Figure 28. Intersection selection: (a) generatrix projections ℓ_gA (yellow) and ℓ_gB (orange) at 2.17 m; (b) separation of the curves (2.17 m); (c) intersections, left in cyan and right in magenta (2.17 m); (d) generatrix projections at 1.50 m; (e) separation of the curves (1.50 m); (f) intersections, left in cyan and right in magenta (1.50 m).
Figure 29. Three-dimensional reconstruction of the contours extracted from the six images (X, Y, Z axes in red, green, blue).
Figure 30. New sensor configuration with four cameras.
16 pages, 7468 KiB  
Article
A Low-Cost Sensing Solution for SHM, Exploiting a Dedicated Approach for Signal Recognition
by Bruno Andò, Danilo Greco, Giacomo Navarra and Francesco Lo Iacono
Sensors 2024, 24(12), 4023; https://doi.org/10.3390/s24124023 - 20 Jun 2024
Viewed by 661
Abstract
Health assessment and preventive maintenance of structures are mandatory to predict injuries and to schedule required interventions, especially in seismic areas. Structural health monitoring aims to provide a robust and effective approach to obtaining valuable information on the structural conditions of buildings and civil infrastructures, in conjunction with methodologies for the identification and, sometimes, localization of potential risks. In this paper, a low-cost solution for structural health monitoring is proposed, exploiting a customized embedded system for the acquisition and storage of measurement signals. Experimental surveys for the assessment of the sensing node have also been performed. The obtained results confirmed the expected performances, especially in terms of resolution in acceleration and tilt measurement, which are 0.55 mg and 0.020°, respectively. Moreover, we used a dedicated algorithm for the classification of recorded signals into the following three classes: noise floor (mainly related to the intrinsic noise of the sensing system), exogenous sources (not correlated with the dynamic behavior of the structure), and structural responses (the response of the structure to external stimuli, such as seismic events, artificially forced and/or environmental solicitations). The latter is of main interest for the investigation of a structure's health, while the other signals need to be recognized and filtered out. The algorithm, which has been tested against real data, demonstrates relevant features in performing the above-mentioned classification task.
(This article belongs to the Special Issue Eurosensors 2023 Selected Papers)
Figure 1. Schematization of the multi-sensor node [17].
Figure 2. The sensor node [16] installed in a real environment.
Figure 3. Time evolution of typical signals and their time–frequency representation: (a) NF; (b) ES; (c) SR.
Figure 4. Distribution of RMS values for the whole set of considered patterns. The detailed view aims to show the superposition of patterns belonging to different classes.
Figure 5. Sensitivity (Se) and specificity (Sp) values as a function of the considered threshold for the two cases discriminating (a) NF from other sources, (b) SR from ES.
Figure 6. Distribution of RMS values for the whole set of considered patterns. The detailed view aims to show an overlap of patterns belonging to different classes. (a) Low-pass filtered signals; (b) high-pass filtered signals.
Figure 7. Sensitivity (Se) and specificity (Sp) values as a function of the considered threshold for the following tasks: (a) separation of NF from other sources by LP data and (b) separation of ES from other sources by HP data.
Figure 8. The real-time classification algorithm.
Figure 9. Vibration exciter test. (a) Setup adopted during the test along the X-axis. (b) Time series of the concatenated time window of 10 periods for each frequency [17]. (c) Wavelet analysis of the corresponding time window [17]. (d) Discrete Fourier transform of the concatenated signal.
Figure 10. Vibration exciter test. Bar chart of index values under different operating conditions for each axis: (a) δ_A, (b) δ_f, (c) repeatability.
Figure 11. Setup employed during the tests with the vibrating platform, showing the axis orientation of the sensing system.
Figure 12. Time series along the X-axis (top), Y-axis (center), and Z-axis (bottom) in the frequency sweep test, recorded by (a) the reference instrumentation; (b) the sensing platform.
Figure 13. Wavelet analysis for the frequency sweep test: (a) wavelet power spectrum of the reference instrumentation signals; (b) wavelet power spectrum of the sensing platform signals.
Figure 14. Time series recorded during the tilt test, in the case of node rotation applied around the Y-axis. Top: 0.2 Hz; bottom: 0.5 Hz. (a) Reference signals; (b) sensing platform signals.
Figure 15. Time series of the seismic test along the X-axis (top), Y-axis (center), and Z-axis (bottom), recorded by (a) the reference instrumentation; (b) the sensing platform.
Figure 16. Wavelet power spectrum of the seismic test recorded by the reference instrumentation (top) and the sensing platform (bottom), along the (a) X-axis, (b) Y-axis, (c) Z-axis.
Figure 17. Validation test: expected data (blue symbols) and estimated data (red symbols). (a) Algorithm based on RMS values of the raw acceleration module; (b) algorithm shown in Figure 8 and exploiting rules (4).
18 pages, 19175 KiB  
Article
Ethereum Phishing Scam Detection Based on Data Augmentation Method and Hybrid Graph Neural Network Model
by Zhen Chen, Sheng-Zheng Liu, Jia Huang, Yu-Han Xiu, Hao Zhang and Hai-Xia Long
Sensors 2024, 24(12), 4022; https://doi.org/10.3390/s24124022 - 20 Jun 2024
Viewed by 1213
Abstract
The rapid advancement of blockchain technology has fueled the prosperity of the cryptocurrency market. Unfortunately, it has also facilitated certain criminal activities, particularly the increasing issue of phishing scams on blockchain platforms such as Ethereum. Consequently, developing an efficient phishing detection system is critical for ensuring the security and reliability of cryptocurrency transactions. However, existing methods have shortcomings in dealing with sample imbalance and effective feature extraction. To address these issues, this study proposes an Ethereum phishing scam detection method based on DA-HGNN (Data Augmentation Method and Hybrid Graph Neural Network Model), validated on real Ethereum datasets to prove its effectiveness. Initially, basic node features consisting of 11 attributes were designed. The study applied a sliding-window sampling method based on node transactions for data augmentation. Since phishing nodes often initiate numerous transactions, the augmented samples tended toward balance. Subsequently, the Temporal Features Extraction Module employed Conv1D (one-dimensional convolutional neural network) and GRU-MHA (GRU multi-head attention) models to uncover intrinsic relationships between features from the time sequences and to mine adequate local features, culminating in the extraction of temporal features. The GAE (graph autoencoder) concept was then leveraged, with SAGEConv (GraphSAGE convolution) as the encoder. In the SAGEConv reconstruction module, the structural features of the nodes were learned by reconstructing the relationships between transaction-graph nodes, yielding reconstructed node embedding representations. Ultimately, phishing fraud nodes were identified by integrating the temporal features, basic features, and embedding representations. A real Ethereum dataset was collected for evaluation, and the DA-HGNN model achieved an AUC-ROC (area under the receiver operating characteristic curve) of 0.994, a recall of 0.995, and an F1-score of 0.994, outperforming existing methods and baseline models.
(This article belongs to the Section Internet of Things)
Figure 1. The overall architecture of DA-HGNN.
Figure 2. Accuracy and loss curves of DA-HGNN on the D1 dataset: (a) accuracy curves; (b) loss curves.
Figure 3. Accuracy and loss curves of DA-HGNN on the D2 dataset: (a) accuracy curves; (b) loss curves.
Figure 4. Accuracy and loss curves of DA-HGNN on the D3 dataset: (a) accuracy curves; (b) loss curves.
Figure 5. Accuracy and loss curves of DA-HGNN and different models on the D1 test set: (a) accuracy curves; (b) loss curves.
Figure 6. Accuracy and loss curves of DA-HGNN and different models on the D2 test set: (a) accuracy curves; (b) loss curves.
Figure 7. Accuracy and loss curves of DA-HGNN and different models on the D3 test set: (a) accuracy curves; (b) loss curves.
Figure 8. Confusion matrices of DA-HGNN on the three datasets: (a) D1 test set; (b) D2 test set; (c) D3 test set.
Figure 9. ROC and PR curves of DA-HGNN and different models on the D3 dataset: (a) ROC curves; (b) PR curves.
Figure 10. Ablation experiment results for different features on the three datasets: (a) D1; (b) D2; (c) D3.
Figure 11. Sensitivity analysis of the number of multi-head attention heads in DA-HGNN: (a) Precision; (b) F1-score; (c) AUC-ROC; (d) AUC-PR.
Figure 12. Sensitivity analysis of the embedding dimension size in DA-HGNN: (a) Precision; (b) F1-score; (c) AUC-ROC; (d) AUC-PR.
Figure 13. Running time of DA-HGNN under different numbers of nodes.
12 pages, 3250 KiB  
Article
Self-Powered Acceleration Sensor for Distance Prediction via Triboelectrification
by Zhengbing Ding, Dinh Cong Nguyen, Hakjeong Kim, Xing Wang, Kyungwho Choi, Jihae Lee and Dukhyun Choi
Sensors 2024, 24(12), 4021; https://doi.org/10.3390/s24124021 - 20 Jun 2024
Cited by 1 | Viewed by 806
Abstract
Accurately predicting the distance an object will travel to its destination is very important in various sports. Acceleration sensors as a means of real-time monitoring are gaining increasing attention in sports. Due to the low energy output and power density of triboelectric nanogenerators (TENGs), recent efforts have focused on developing various acceleration sensors. However, these sensors suffer from significant drawbacks, including large size, high complexity, high power input requirements, and high cost. Here, we describe a portable and cost-effective, real-time refreshable design comprising a series of individually addressable and controllable units based on TENGs embedded in a flexible substrate. This results in a highly sensitive, low-cost, and self-powered acceleration sensor. Putting, which accounts for nearly half of all strokes played, is obviously an important component of the golf game. The developed acceleration sensor has an accuracy controlled to within 5%. The initial velocity and acceleration of the forward movement of a rolling golf ball after it is hit by a putter can be displayed, and the stopping distance is quickly calculated and predicted in about 7 s. This research demonstrates the application of the portable TENG-based acceleration sensor while paving the way for designing portable, cost-effective, scalable, and harmless ubiquitous self-powered acceleration sensors.
(This article belongs to the Special Issue Advances in Nanosensors and Nanogenerators - 2nd Edition)
Figure 1. Structural design and working principle of the acceleration sensors. (a) Structural diagram of the acceleration sensor. (b) When an object moves horizontally above the sensor, electrostatic induction occurs between the object and the three sensors, charge transfer takes place between the object and the sensor material, and the corresponding voltage signal is shown on the oscilloscope. (c) Simulation of the potential difference between the object and the sensor material during the contact electrification phase using COMSOL Multiphysics 5.6.
Figure 2. Measurements and performance analysis of the acceleration sensors. (a) Calculation of the acceleration and distance based on the voltage signals generated by TENGs. (b) The object passing over sensors of different widths w_e and spacings w_g. (c) Errors between the theoretical distance and the actual measured distance for different sensors at various speeds. (d) The maximum voltage generated by the object when passing over different sensors.
Figure 3. Experimental platform and response characteristics. (a) Schematic of the ball motion test under different initial kinetic energies. (b) The object acquires different initial kinetic energies by falling through different angles. (c-e) Voltage-time curves of the object rolling forward across the mat after gaining kinetic energy from the hammer falling from angles of 30, 60, and 90°, respectively, as shown in each inset.
Figure 4. Comparison of different performances of the TENG-based acceleration sensors. (a) Algorithmic block diagram of the acceleration sensors. (b,c) The acceleration and speed display, respectively, of the object passing through different types of sensors under different initial kinetic energies. (d-f) The predicted distance and actual distance that the object can roll under different initial kinetic energies, and the error involved, respectively.
Figure 5. Practical application of the TENG-based acceleration sensor in golf. (a) Schematic of an athlete hitting a golf ball. (b) Schematic showing that the data read by the acceleration sensor can be automatically transmitted to mobile phones and smartwatches. (c-e) The golf ball passes through the TENG sensor to obtain its actual acceleration, speed, and distance on mobile phones and smartwatches. Scale bar: 20 cm. (f,i) The fabricated 1 m sensor-containing mat with 60 mm width × 400 mm spacing. Scale bar: 10 cm. (ii,iii) The sensor in (i) can predict a ball rolling to about 5 m and 8 m, respectively. (iv) A sensor with a width of 30 mm and a spacing of 600 mm can predict the ball rolling to a distance of about 10 m.
16 pages, 2062 KiB  
Communication
Enhancing Hospital Efficiency and Patient Care: Real-Time Tracking and Data-Driven Dispatch in Patient Transport
by Su-Wen Huang, Shyue-Yow Chiou, Rung-Ching Chen and Chayanon Sub-r-pa
Sensors 2024, 24(12), 4020; https://doi.org/10.3390/s24124020 - 20 Jun 2024
Viewed by 1062
Abstract
Inefficient patient transport in hospitals often leads to delays, overworked staff, and suboptimal resource utilization, ultimately impacting patient care. Existing dispatch management algorithms are often evaluated in simulation environments, raising concerns about their real-world applicability. This study presents a real-world experiment that bridges the gap between theoretical dispatch algorithms and real-world implementation. It applies process capability analysis at Taichung Veterans General Hospital in Taichung, Taiwan, and utilizes the IoT for real-time tracking of staff and medical devices to address challenges associated with manual dispatch processes. Experimental data collected from the hospital underwent statistical evaluation between January 2021 and December 2021. Our experiment, which compared the traditional dispatch method with the Beacon dispatch method, found that traditional dispatch exceeded the allotted time in 41.0% of cases, compared with 26.5% for the Beacon dispatch method. These findings demonstrate the transformative potential of this solution not only for hospital operations but also for improving service quality across the healthcare industry in the context of smart hospitals. Full article
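As an illustration of the kind of statistical evaluation such a comparison calls for, the sketch below runs a two-proportion z-test on the two delay rates; the sample sizes are made up, not the hospital's actual counts, so this is a sketch of the method rather than a reproduction of the study's analysis.

```python
# Hedged sketch: a two-proportion z-test that could check whether the drop
# in delayed dispatches (41.0% -> 26.5%) is statistically significant.
# Counts below are illustrative only.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                              # pooled proportion
    z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

z, p = two_proportion_z(410, 1000, 265, 1000)              # 41.0% vs 26.5% delayed
print(f"z = {z:.2f}, p = {p:.2g}")
```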
(This article belongs to the Special Issue IoT-Based Smart Environments, Applications and Tools)
Figure 1: Nurses wait in line to access the facility during the patient dispatch process.
Figure 2: Software interface for managing patient transport in a hospital, with tasks assigned manually by the command center.
Figure 3: Overview of the Beacon dispatch system workflow.
Figure 4: Application used to receive and update the status of dispatch tasks.
Figure 5: Monthly dispatch performance (daytime shift), showing the number of on-time and delayed dispatches for the traditional vs. the Beacon dispatch system.
Figure 6: Monthly dispatch performance (nighttime shift), showing the number of on-time and delayed dispatches for the traditional vs. the Beacon dispatch system.
27 pages, 14228 KiB  
Article
High-Magnification Object Tracking with Ultra-Fast View Adjustment and Continuous Autofocus Based on Dynamic-Range Focal Sweep
by Tianyi Zhang, Kohei Shimasaki, Idaku Ishii and Akio Namiki
Sensors 2024, 24(12), 4019; https://doi.org/10.3390/s24124019 - 20 Jun 2024
Viewed by 854
Abstract
Active vision systems (AVSs) have been widely used to obtain high-resolution images of objects of interest. However, tracking small objects in high-magnification scenes is challenging due to the shallow depth of field (DoF) and narrow field of view (FoV). To address this, we introduce a novel high-speed AVS with a continuous autofocus (C-AF) approach based on dynamic-range focal sweep and a high-frame-rate (HFR) frame-by-frame tracking pipeline. Our AVS leverages an ultra-fast pan-tilt mechanism based on a Galvano mirror, enabling high-frequency view direction adjustment. Specifically, the proposed C-AF approach uses a 500 fps high-speed camera and a focus-tunable liquid lens driven by a sine wave, providing a 50 Hz focal sweep around the object's optimal focus. During each focal sweep, 10 images with varying focuses are captured, and the one with the highest focus value is selected, resulting in a stable output of well-focused images at 50 fps. Simultaneously, the object's depth is measured using the depth-from-focus (DFF) technique, allowing dynamic adjustment of the focal sweep range. Importantly, because the remaining images are only slightly less focused, all 500 fps images can be utilized for object tracking. The proposed tracking pipeline combines deep-learning-based object detection, K-means color clustering, and HFR tracking based on color filtering, achieving 500 fps frame-by-frame tracking. Experimental results demonstrate the effectiveness of the proposed C-AF approach and the advanced capabilities of the high-speed AVS for magnified object tracking. Full article
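To make the per-sweep frame selection concrete, here is a minimal sketch, assuming OpenCV, that scores a 10-frame burst with a variance-of-Laplacian focus measure and reads off a thin-lens depth from the sharpest frame's diopter; the focus measure, frames, and diopter values are illustrative stand-ins for the paper's pipeline.

```python
# Minimal sketch of the "pick the sharpest frame per focal sweep" idea,
# assuming a 10-frame burst captured during one 50 Hz sweep.
import cv2
import numpy as np

def sharpest_frame(frames, diopters):
    """frames: list of grayscale images; diopters: lens setting per frame."""
    scores = [cv2.Laplacian(f, cv2.CV_64F).var() for f in frames]
    best = int(np.argmax(scores))
    # Depth-from-focus: the diopter of the sharpest frame indicates the
    # object's distance (depth ~ 1 / diopter for a thin-lens model).
    return frames[best], diopters[best]

frames = [np.random.randint(0, 256, (64, 64), np.uint8) for _ in range(10)]
diopters = np.linspace(4.0, 6.0, 10)        # one sweep around best focus
img, d = sharpest_frame(frames, diopters)
print(f"sharpest frame at {d:.2f} dpt -> depth ~ {1.0 / d:.2f} m")
```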
(This article belongs to the Special Issue Advanced Optical and Optomechanical Sensors)
Figure 1: Overview of the proposed high-magnification object tracking system: the combination of (1) C-AF based on dynamic-range focal sweep and (2) 500 fps frame-by-frame object tracking achieves high-speed and precise high-magnification tracking of small objects moving in a wide scene.
Figure 2: From time t_0 to t_k, the focus of the camera system changes with the variation of the diopter of the liquid lens. During each period, the high-speed camera captures multiple images with different focuses. Using the focus measure algorithm, the image with the best focus can be extracted. Simultaneously, through the correlation between the focal length and the distance of the subject plane, the object's depth can be calculated.
Figure 3: Schematic of depth measurement with focal sweep.
Figure 4: Diagram of the adjustment of the dynamic-range focal sweep: the first forward focal sweep measures the object's depth. Then, at the backward focal sweep, the range of the focal sweep can be adjusted. At the second forward focal sweep, the system can finish the range adjustment and measure the object's depth again.
Figure 5: Diagram of the adjustment of the view direction: (a) two Galvano mirrors are used to adjust the horizontal and vertical viewpoints; (b) schematic representation of the horizontal viewpoint adjustment.
Figure 6: The pipeline of the object tracking method in our system: the pipeline consists of three main threads: (1) 500 fps focus measure, (2) object main-color updating, and (3) 500 fps frame-by-frame object tracking. The focus measure algorithm is implemented to extract 50 fps well-focused images and to determine the object's depth, adjusting the focal sweep range. The object detection algorithm operates at 25 fps, providing color updating at the same rate. Meanwhile, the object tracking algorithm achieves 500 fps frame-by-frame object tracking.
Figure 7: System configuration of the proposed high-magnification autofocus tracking system.
Figure 8: The environment of Experiment 1.
Figure 9: The focal sweep adjustment and the depth measurement results using the proposed C-AF with the butterfly model's movements.
Figure 10: The focal sweep adjustment and the depth measurement results using the proposed C-AF with the QR code's movements.
Figure 11: The focal sweep adjustment and the depth measurement results using the proposed C-AF with the screw's movements.
Figure 12: Variation of size with the butterfly model's movements.
Figure 13: Variation of size with the QR code's movements.
Figure 14: Variation of size with the screw's movements.
Figure 15: Output images' detection results with object movements at 3 m/s.
Figure 16: Results of the proposed HFR object tracking method for multiple objects at different distances: these figures were captured at 500 fps sequentially during one process of the focal sweep. The first, second, and third columns show the original images, the color-filtered maps, and the results.
Figure 17: Environment of Experiment 4.
Figure 18: The focal sweep adjustment and the depth measurement results using the proposed C-AF with the object's movements.
Figure 19: Viewpoint variation during the object movement.
Figure 20: Some results of HFR high-magnification object tracking.
19 pages, 5434 KiB  
Article
A Method of Precise Auto-Calibration in a Micro-Electro-Mechanical System Accelerometer
by Sergiusz Łuczak, Magdalena Ekwińska and Daniel Tomaszewski
Sensors 2024, 24(12), 4018; https://doi.org/10.3390/s24124018 - 20 Jun 2024
Viewed by 778
Abstract
A novel design of a MEMS (Micro-Electromechanical System) capacitive accelerometer fabricated by surface micromachining, with a structure enabling precise auto-calibration during operation, is presented. Precise auto-calibration was introduced to ensure more accurate acceleration measurements compared to standard designs. The standard mechanical structure of the accelerometer (a seismic mass integrated with an elastic suspension, and movable plates coupled with fixed plates forming a system of differential sensing capacitors) was equipped with three movable detection electrodes coupled with three fixed electrodes, thus creating three atypical tunneling displacement transducers that detect three specific positions of the seismic mass with high precision, enabling the auto-calibration of the accelerometer while it is being operated. Auto-calibration is carried out by recording the accelerometer indication while the seismic mass occupies a specific position, which corresponds to a known value of acting acceleration determined in a pre-calibration process. The diagram and the design of the mechanical structure of the accelerometer, the block diagram of the electronic circuits, and the mathematical relationships used for auto-calibration are presented. The results of the simulation studies related to mechanical and electric phenomena are discussed. Full article
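As a concrete illustration of the auto-calibration relationships, the sketch below shows the two-point arithmetic implied by the abstract: if the tunneling detectors flag positions of the seismic mass whose accelerations are known from pre-calibration, the scale factor and offset of a linear output model can be re-estimated in the field. The linear model U = S*a + O and all numeric values are assumptions for illustration, not the paper's actual relationships.

```python
# Sketch of two-point auto-calibration: detectors fire at positions whose
# accelerations a1, a2 were fixed in pre-calibration, and the recorded
# outputs u1, u2 let us re-estimate scale factor S and offset O.
def recalibrate(a1, u1, a2, u2):
    S = (u2 - u1) / (a2 - a1)   # scale factor (V per m/s^2)
    O = u1 - S * a1             # offset (V)
    return S, O

S, O = recalibrate(a1=-9.81, u1=0.52, a2=9.81, u2=2.48)
print(f"S = {S:.4f} V/(m/s^2), O = {O:.3f} V")
acc = (1.90 - O) / S            # convert a raw reading back to acceleration
print(f"reading 1.90 V -> {acc:.2f} m/s^2")
```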
(This article belongs to the Special Issue Feature Papers in Physical Sensors 2024)
Figure 1: A schematic diagram of the mechanical structure of an accelerometer (at the central position of the seismic mass, a = 0).
Figure 2: The seismic mass occupies the specific outer position under the action of acceleration: (a) the left position of the seismic mass, a > 0; (b) the right position of the seismic mass, a < 0.
Figure 3: A block diagram of an analog MEMS accelerometer with an auto-calibration feature (color code: blue = new components; green = modified components; white = standard components); a = linear acceleration; d = the linear displacement of the seismic mass; d_i = the linear position detected by the detector Di; C = the electric capacity; U = the raw voltage; U_out = the voltage output signal; U_i = the raw voltage signal generated by the electronic transducer corresponding to position d_i; i_imax = the maximal current signal generated by the detector of the specific position Di; O = the offset; S = the scale factor.
Figure 4: Signal conditioning circuit.
Figure 5: The standard structure of the sensor (without the detection electrodes): (a) a 3D model; (b) the basic dimensions of the designed structure.
Figure 6: The structure of the sensor with three pairs of detection electrodes.
Figure 7: Three pairs of tunneling current electrodes.
Figure 8: The geometric dependence of the distance between the moving tips of the electrodes: (a) a minimal distance of 1 nm; (b) a minimal distance of 0.1 nm.
Figure 9: The geometric dependence of the distance between the moving tips (the straight lines illustrate the idea of a linear approximation).
Figure 10: The modeled structure of the tunneling current transducer.
Figure 11: The absolute value of the anode current.
Figure 12: The electron concentration at a distance of 1 nm for the following voltages applied to the silicon anode: (a) 5 V; (b) 10 V; (c) −5 V; and (d) −10 V.
Figure 13: Potential at a distance of 1 nm for the following voltages applied to the silicon anode: (a) 5 V; (b) 10 V; (c) −5 V; and (d) −10 V.
15 pages, 5832 KiB  
Article
Detection of Multi-Layered Bond Delamination Defects Based on Full Waveform Inversion
by Jiawei Wen, Can Jiang and Hao Chen
Sensors 2024, 24(12), 4017; https://doi.org/10.3390/s24124017 - 20 Jun 2024
Viewed by 751
Abstract
This study aimed to address the challenges encountered in traditional bulk wave delamination detection methods, which are characterized by low detection efficiency. It also addressed the limitations of guided wave delamination detection methods, particularly those utilizing reflected waves, which are susceptible to edge reflections that complicate effective defect extraction. Leveraging the full waveform inversion algorithm, an innovative approach was established for detecting delamination defects in multi-layered structures using ultrasonic guided wave arrays. First, finite element modeling was employed to simulate guided wave data acquisition by a circular array within an aluminum–epoxy bilayer structure with embedded delamination defects. Subsequently, the full waveform inversion algorithm was applied to reconstruct both regular and irregular delamination defects. Analysis results indicated the efficacy of the proposed approach in accurately identifying delamination defects of varying shapes. Furthermore, an experimental platform for guided wave delamination defect detection was established, and experiments were conducted on a steel–cement bilayer structure containing an irregular delamination defect. The experimental results validated the exceptional imaging precision of the proposed technique for identifying delamination defects in multi-layered boards. In summary, the proposed method can accurately determine both the positions and sizes of defects with higher detection efficiency than traditional pulse-echo delamination detection methods. Full article
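For readers unfamiliar with full waveform inversion, the toy sketch below illustrates its core idea under heavy simplification: find the model (here a single wave speed) whose simulated waveform best matches the measured one in a least-squares sense. The one-parameter forward model and SciPy's bounded minimizer are stand-ins for the finite-element simulation and the gradient-based model updates used in the paper; all numbers are illustrative.

```python
# Toy waveform-inversion loop: minimize the misfit between a "simulated"
# and an "observed" waveform over a single wave-speed parameter.
import numpy as np
from scipy.optimize import minimize_scalar

t = np.linspace(0.0, 1.0e-3, 500)                  # time axis (s)

def forward(c):
    """'Simulated' guided-wave arrival: a pulse whose delay scales as 1/c."""
    return np.exp(-((t - 1.2e-3 / c) ** 2) / (2 * (5e-5) ** 2))

observed = forward(2.5)                            # synthetic "field" data
res = minimize_scalar(lambda c: np.sum((forward(c) - observed) ** 2),
                      bounds=(2.0, 4.0), method="bounded")
print(f"inverted wave speed: {res.x:.3f} (true 2.5)")
```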
(This article belongs to the Topic Advances in Non-Destructive Testing Methods, 2nd Volume)
Figure 1: Flow diagram of the GWT algorithm based on FWI.
Figure 2: Finite element simulation model for debonding detection of aluminum–epoxy layered plates. (a) Geometrical schematic for marking the transducer position. (b) Aluminum–epoxy adhesive bonding model.
Figure 3: Dispersion curve of the A0 mode phase velocity under different thickness ratios of aluminum–epoxy adhesive laminates. The position of the black dashed line indicates a frequency of 70 kHz.
Figure 4: Reconstruction results of aluminum and epoxy adhesive at different thickness ratios. (a) Original model; (b) 5:1; (c) 1:3.
Figure 5: Reconstruction results of different transducer installation positions. (a) Original model. (b) Transducer mounted on the aluminum plate surface. (c) Transducer mounted on the epoxy adhesive surface.
Figure 6: Displacement wave structure of the A0 mode at 80 kHz in the aluminum–epoxy adhesive laminate.
Figure 7: Guided wave detection experiment photo of a steel–cement bilayer plate. (a) Irregularly shaped PTFE film adhered to a 2 mm thick steel plate. (b) Twenty-four piezoelectric transducers mounted on the steel–cement bilayer plate.
Figure 8: Experimental equipment and measurement principles for pulse-echo debonding defect detection.
Figure 9: Steel–cement multiple wave measurement signals.
Figure 10: Reconstruction results of the pulse-echo method.
Figure 11: Dispersion curve of A0 mode waves in the steel–cement laminated plate. (a) Phase velocity. (b) Group velocity.
Figure 12: Displacement wave structure of the A0 mode at 50 kHz in a steel–cement laminate.
Figure 13: Experimental setup.
Figure 14: Waveform acquired from an emitter–receiver pair.
Figure 15: Reconstruction results of delamination defects in the steel–cement laminated plate. Green dots mark the positions of the sensors. The green dashed line represents the contour of the actual position of the PTFE film.
29 pages, 815 KiB  
Review
Literature Review of Deep-Learning-Based Detection of Violence in Video
by Pablo Negre, Ricardo S. Alonso, Alfonso González-Briones, Javier Prieto and Sara Rodríguez-González
Sensors 2024, 24(12), 4016; https://doi.org/10.3390/s24124016 - 20 Jun 2024
Cited by 1 | Viewed by 2597
Abstract
Physical aggression is a serious and widespread problem in society, affecting people worldwide and impacting nearly every aspect of life. While some studies explore the root causes of violent behavior, others focus on urban planning in high-crime areas. Real-time violence detection, powered by artificial intelligence, offers a direct and efficient solution, reducing the need for extensive human supervision and saving lives. This paper is a continuation of a systematic mapping study, and its objective is to provide a comprehensive and up-to-date review of AI-based video violence detection, specifically the detection of physical assaults. From the review of the selected papers, the following have been grouped and categorized: 21 challenges that remain to be solved, 28 datasets that have been created in recent years, 21 keyframe extraction methods, 16 types of algorithm inputs, as well as a wide variety of algorithm combinations and their corresponding accuracy results. Given the lack of recent reviews dealing with the detection of violence in video, this study is considered necessary and relevant. Full article
(This article belongs to the Special Issue Edge Computing in IoT Networks Based on Artificial Intelligence)
Figure 1: Basic steps of video violence detection algorithms.
Figure 2: Classification and count of the types of challenges in violence detection on video.
Figure 3: Count and categorization of the types of algorithm inputs used in the selected articles, grouped by category.
Figure 4: Count of the types of algorithms used in violence detection phase 1 in the selected articles, grouped by category.
Figure 5: Count of the types of algorithms used in violence detection phase 2 in the selected articles, grouped by category.
Figure 6: Count of the types of algorithm combinations used in the selected articles, grouped by subcategory. Colors: yellow corresponds to the use of CNNs, purple to skeleton-based algorithms, orange to manual techniques, red to those utilizing audio, gray to those employing transformers, and green to LSTM.
Figure 7: Accuracy obtained by the selected items in the Action Movies dataset. Colors as in Figure 6. Citations in descending order of accuracy: [29,35,39,43,45,49,50,53,56,57,58,59,60,62,63,64,66,71,74,76,77,86,87,88,92].
Figure 8: Accuracy obtained by the selected items in the Violent Flow dataset. Colors as in Figure 6. Citations in descending order of accuracy: [35,40,45,47,50,51,53,55,57,58,63,64,66,71,74,75,76,82,86,88,92].
Figure 9: Accuracy obtained by the selected items in the Real Life Violent Scenes dataset. Colors as in Figure 6. Citations in descending order of accuracy: [51,67,73,76,77,81,95].
18 pages, 1120 KiB  
Article
An Enhanced FGI-GSRx Software-Defined Receiver for the Execution of Long Datasets
by Muwahida Liaquat, Mohammad Zahidul H. Bhuiyan, Saiful Islam, Into Pääkkönen and Sanna Kaasalainen
Sensors 2024, 24(12), 4015; https://doi.org/10.3390/s24124015 - 20 Jun 2024
Viewed by 874
Abstract
The Global Navigation Satellite System (GNSS) software-defined receivers offer greater flexibility, cost-effectiveness, customization, and integration capabilities compared to traditional hardware-based receivers, making them essential for a wide range of applications. The continuous evolution of GNSS research and the availability of new features require these software-defined receivers to be upgraded continuously to meet the latest requirements. The Finnish Geospatial Research Institute (FGI) has been supporting the GNSS research community with its open-source implementations, such as the MATLAB-based GNSS software-defined receiver 'FGI-GSRx' and the Python-based implementation 'FGI-OSNMA' for utilizing Galileo's Open Service Navigation Message Authentication (OSNMA). In this context, longer datasets are crucial for GNSS software-defined receivers to support adaptation and optimization and to facilitate the testing needed to investigate and develop future-proof receiver capabilities. In this paper, we present an updated version of FGI-GSRx, namely FGI-GSRx-v2.0.0, which is also available as an open-source resource for the research community. FGI-GSRx-v2.0.0 offers improved performance compared to its previous version, especially for the execution of long datasets. This is achieved by optimizing the receiver's functionality and by offering a newly added parallel processing feature that speeds up the processing of raw GNSS data. This paper also presents an analysis of some key design aspects of the previous and current versions of FGI-GSRx for better insight into the receiver's functionality. The results show that FGI-GSRx-v2.0.0 offers about a 40% run-time improvement over FGI-GSRx-v1.0.0 in the sequential processing mode and about a 59% improvement in the parallel processing mode, with 17 GNSS satellites from GPS and Galileo. In addition, an attempt is made to execute v2.0.0 with MATLAB's own parallel computing toolbox. A detailed performance comparison reveals an improvement of about 43% in execution time over the v2.0.0 parallel processing mode for the same GNSS scenario. Full article
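As an illustration of the parallel processing idea (not FGI-GSRx's MATLAB code), the sketch below farms independent satellite tracking channels out to worker processes; the track_channel body is a placeholder for the actual correlator and loop-filter work.

```python
# Sketch: each satellite channel can be tracked independently, so channels
# are distributed across worker processes, loosely mirroring what the
# v2.0.0 parallel processing mode and MATLAB parfor-style parallelism do.
from multiprocessing import Pool

def track_channel(prn):
    # Placeholder for correlators, code/carrier loops, bit sync, etc.
    return prn, f"tracked PRN {prn}"

if __name__ == "__main__":
    prns = list(range(1, 18))               # e.g., 17 GPS + Galileo channels
    with Pool(processes=4) as pool:
        for prn, status in pool.map(track_channel, prns):
            print(status)
```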
(This article belongs to the Special Issue GNSS Software-Defined Radio Receivers: Status and Perspectives)
Figure 1: FGI-GSRx sequential architecture. The green parts indicate the option to use a pre-stored output from acquisition and tracking.
Figure 2: FGI-GSRx-v2.0.0 architecture. The green parts indicate the option to use a pre-stored output from acquisition and tracking.
Figure 3: FGI-GSRx-v2.0.0 parallel tracking mode workflow.
Figure 4: FGI-GSRx-v2.0.0 two-stage acquisition.
Figure 5: Sky plots for GPS and Galileo satellites at the beginning of data collection.
Figure 6: Processor usage for the entire simulation interval for the sequential processing mode of FGI-GSRx.
Figure 7: Galileo only: CPU usage for signal tracking in v2.0.0. (Left) Parallel processing mode. (Right) Sequential processing with the MATLAB parallel computing block.
Figure 8: GPS only: CPU usage for signal tracking in v2.0.0. (Left) Parallel processing mode. (Right) Sequential processing with the MATLAB parallel computing block.
Figure 9: GPS and Galileo: CPU usage for signal tracking in v2.0.0. (Left) Parallel processing mode. (Right) Sequential processing with the MATLAB parallel computing block.
Figure 10: Position deviation plots generated by FGI-GSRx.
12 pages, 1243 KiB  
Article
A Microvascular Segmentation Network Based on Pyramidal Attention Mechanism
by Hong Zhang, Wei Fang and Jiayun Li
Sensors 2024, 24(12), 4014; https://doi.org/10.3390/s24124014 - 20 Jun 2024
Cited by 1 | Viewed by 718
Abstract
The precise segmentation of retinal vasculature is crucial for the early screening of various eye diseases, such as diabetic retinopathy and hypertensive retinopathy. Given the complex and variable overall structure of retinal vessels and their delicate, minute local features, the accurate extraction of fine vessels and edge pixels remains a technical challenge in the current research. To enhance the ability to extract thin vessels, this paper incorporates a pyramid channel attention module into a U-shaped network. This allows for more effective capture of information at different levels and increased attention to vessel-related channels, thereby improving model performance. Simultaneously, to prevent overfitting, this paper optimizes the standard convolutional block in the U-Net with a pre-activated residual dropout convolution block, thus improving the model's generalization ability. The model is evaluated on three benchmark retinal datasets: DRIVE, CHASE_DB1, and STARE. Experimental results demonstrate that, compared to the baseline model, the proposed model achieves improvements in sensitivity (Sen) scores of 7.12%, 9.65%, and 5.36% on these three datasets, respectively, proving its strong ability to extract fine vessels. Full article
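A minimal sketch of a channel attention block applied at several pyramid scales is shown below, assuming PyTorch; it is an SE-style stand-in for illustration, not the paper's exact pyramid channel attention module, and the class name, scales, and reduction ratio are assumptions.

```python
# Generic pyramid channel attention sketch: pool the feature map at several
# spatial scales, fuse the pooled statistics, and re-weight the channels.
import torch
import torch.nn as nn

class PyramidChannelAttention(nn.Module):
    def __init__(self, channels, scales=(1, 2, 4), reduction=8):
        super().__init__()
        self.pools = nn.ModuleList(nn.AdaptiveAvgPool2d(s) for s in scales)
        in_feats = channels * sum(s * s for s in scales)
        self.fc = nn.Sequential(
            nn.Linear(in_feats, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        # Pool at each pyramid scale, flatten, and fuse into channel weights.
        feats = torch.cat([p(x).flatten(1) for p in self.pools], dim=1)
        w = self.fc(feats).view(b, c, 1, 1)
        return x * w                        # emphasize vessel-relevant channels

x = torch.randn(2, 64, 48, 48)
print(PyramidChannelAttention(64)(x).shape)  # torch.Size([2, 64, 48, 48])
```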
(This article belongs to the Section Biomedical Sensors)
Figure 1: Pyramid channel attention module and submodule.
Figure 2: Convolutional block.
Figure 3: Dropout (a) and Dropblock (b) function diagrams.
Figure 4: The proposed algorithm structure diagram.
Figure 5: Raw images and masks from the datasets.
Figure 6: Visualization of vessel segmentation results. Red and blue boxes highlight the details, which are zoomed in to make the results more intuitive.
57 pages, 2562 KiB  
Review
A Review of Predictive Analytics Models in the Oil and Gas Industries
by Putri Azmira R Azmi, Marina Yusoff and Mohamad Taufik Mohd Sallehud-din
Sensors 2024, 24(12), 4013; https://doi.org/10.3390/s24124013 - 20 Jun 2024
Cited by 2 | Viewed by 3194
Abstract
Enhancing the management and monitoring of oil and gas processes demands the development of precise predictive analytic techniques. Over the past two years, predictive modeling of oil and gas has advanced significantly using conventional and modern machine learning techniques. Several review articles detail the developments in predictive maintenance and the technical and non-technical aspects influencing the uptake of big data. However, the absence of a consolidated reference for machine learning techniques hampers the effective optimization of predictive analytics in the oil and gas sectors. This review paper offers readers thorough information on the latest machine learning methods utilized in this industry's predictive analytical modeling. It covers the different forms of machine learning techniques used in predictive analytical modeling from 2021 to 2023 (91 articles) and provides an overview of the reviewed papers, describing each model's category, the data's temporality, field, and name, the dataset's type, the predictive analytics task (classification, clustering, or prediction), the models' input and output parameters, the performance metrics, the optimal model, and the model's benefits and drawbacks. In addition, it suggests future research directions to provide insights into the potential applications of the associated knowledge. This review can serve as a guide to enhance the effectiveness of predictive analytics models in the oil and gas industries. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
Figure 1: Distribution of the predictive analytics model in the O&G field.
Figure 2: Total number of predictive analytics models in the O&G field by year.
Figure 3: Internal structure of the LSTM [62].
Figure 4: Preferred AI model types in the research articles about predictive analytics in the O&G field: (a) overview of the AI models used in the publications and (b) extended "others" section.
Figure 5: Types of O&G sectors in research articles from 2021 to 2023.
Figure 6: Performance metrics preferred by the researchers: (a) combination of performance metrics used in publications; (b) all additional performance metrics displayed.
Figure 7: Average accuracy of ML models in the O&G industry.
18 pages, 23457 KiB  
Article
An Improved YOLOv8 Network for Detecting Electric Pylons Based on Optical Satellite Image
by Xin Chi, Yu Sun, Yingjun Zhao, Donghua Lu, Yan Gao and Yiting Zhang
Sensors 2024, 24(12), 4012; https://doi.org/10.3390/s24124012 - 20 Jun 2024
Viewed by 1162
Abstract
Electric pylons are crucial components of power infrastructure, requiring accurate detection and identification for effective monitoring of transmission lines. This paper proposes an innovative model, the EP-YOLOv8 network, which incorporates new modules: the DSLSK-SPPF and the EMS-Head. The DSLSK-SPPF module is designed to capture the surrounding features of electric pylons more effectively, enhancing the model's adaptability to the complex shapes of these structures. The EMS-Head module enhances the model's ability to capture fine details of electric pylons while maintaining a lightweight design. The EP-YOLOv8 network optimizes the traditional YOLOv8n parameters, demonstrating a significant improvement in electric pylon detection accuracy with an average mAP@0.5 value of 95.5%. The effective detection of electric pylons by the EP-YOLOv8 demonstrates its ability to overcome the inefficiencies inherent in existing optical satellite image-based models, particularly those related to the unique characteristics of electric pylons. This improvement will significantly aid in monitoring the operational status and layout of power infrastructure, providing crucial insights for infrastructure management and maintenance. Full article
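For context, the sketch below shows the standard YOLOv8 SPPF block that DSLSK-SPPF builds on, assuming PyTorch; the large-selective-kernel and dynamic-snake-convolution modifications described in the paper are omitted, and the bare Conv2d layers stand in for YOLOv8's Conv-BN-SiLU blocks.

```python
# Minimal SPPF sketch: three chained 5x5 max-pools emulate 5/9/13 pooling
# receptive fields, and their outputs are concatenated and fused by 1x1 convs.
import torch
import torch.nn as nn

class SPPF(nn.Module):
    def __init__(self, c_in, c_out, k=5):
        super().__init__()
        c_mid = c_in // 2
        self.cv1 = nn.Conv2d(c_in, c_mid, 1)
        self.cv2 = nn.Conv2d(c_mid * 4, c_out, 1)
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)
        y2 = self.pool(y1)
        y3 = self.pool(y2)
        return self.cv2(torch.cat((x, y1, y2, y3), dim=1))

print(SPPF(256, 256)(torch.randn(1, 256, 20, 20)).shape)
```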
(This article belongs to the Section Sensing and Imaging)
Figure 1: The architecture of the YOLOv8 network.
Figure 2: Schematic diagram of the SPPF module structure.
Figure 3: Schematic diagram of the LSK block module.
Figure 4: Schematic diagram of the LSK module.
Figure 5: Schematic diagram of the DSLSK-SPPF module.
Figure 6: Sampling methods of regular convolution and deformable convolution. Blue dots represent standard convolutions, while green dots represent deformable convolutions (offset).
Figure 7: Schematic diagram illustrating the computation of dynamic snake convolution kernel coordinates and optional receptive fields.
Figure 8: Schematic diagram of the EMS-Conv module.
Figure 9: Comparison of the detector head: (a) YOLOv8 detection head structure; (b) reconstructed detection head structure.
Figure 10: Comparison of mAP@0.5 values between the EP-YOLOv8 model and the original model.
Figure 11: Comparison of detection performance: (a) input image; (b) object detection result of YOLOv8n; (c) object detection result of EP-YOLOv8.
19 pages, 2399 KiB  
Article
Accurate 3D LiDAR SLAM System Based on Hash Multi-Scale Map and Bidirectional Matching Algorithm
by Tingchen Ma, Lingxin Kong, Yongsheng Ou and Sheng Xu
Sensors 2024, 24(12), 4011; https://doi.org/10.3390/s24124011 - 20 Jun 2024
Viewed by 1063
Abstract
Simultaneous localization and mapping (SLAM) is an active research area that is widely required in many robotics applications. In SLAM technology, it is essential to explore an accurate and efficient map model to represent the environment and to develop the corresponding data association methods needed to achieve reliable matching from measurements to maps. These two key elements impact the working stability of the SLAM system, especially in complex scenarios. However, previous literature has not fully addressed the problems of efficient mapping and accurate data association. In this article, we propose a novel hash multi-scale (H-MS) map to ensure query efficiency with accurate modeling. In the proposed map, each inserted map point simultaneously participates in modeling voxels of different scales in a voxel group, enabling the map to effectively represent objects of different scales in the environment. Meanwhile, the root node of the voxel group is saved to a hash table for efficient access. Secondly, considering the one-to-many (on the order of 1 × 10^3) high-computational-cost data association problem caused by maintaining multi-scale voxel landmarks simultaneously in the H-MS map, we further propose a bidirectional matching algorithm (MSBM). This algorithm utilizes forward–reverse–forward projection to balance efficiency and accuracy. The proposed H-MS map and MSBM algorithm are integrated into a complete LiDAR SLAM (HMS-SLAM) system. Finally, we validated the proposed map model, matching algorithm, and integrated system on the public KITTI dataset. The experimental results show that, compared with the ikd tree map, the H-MS map model has higher insertion and deletion efficiency, both having O(1) time complexity. The computational efficiency and accuracy of the MSBM algorithm are better than those of the small-scale priority matching algorithm, and the computing speed of the MSBM reaches 49 ms per run under a single CPU thread. In addition, the HMS-SLAM system built in this article also achieves excellent performance in terms of mapping accuracy and memory usage. Full article
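To make the hash-map idea concrete, here is a minimal sketch of a hash-indexed voxel map in which point insertion and voxel deletion are O(1) dictionary operations; the multi-scale voxel groups and statistical features of the actual H-MS map are omitted, and the class name and voxel size are assumptions.

```python
# Minimal hash voxel map: points are bucketed by quantized coordinates, so
# insertion and voxel removal are expected O(1) hash-table operations.
from collections import defaultdict

class HashVoxelMap:
    def __init__(self, voxel_size=0.5):
        self.s = voxel_size
        self.voxels = defaultdict(list)       # hash table: voxel key -> points

    def key(self, p):
        return tuple(int(c // self.s) for c in p)

    def insert(self, p):                      # O(1) expected
        self.voxels[self.key(p)].append(p)

    def remove_voxel(self, p):                # O(1) expected
        self.voxels.pop(self.key(p), None)

m = HashVoxelMap()
m.insert((1.2, 0.3, -2.7))
m.insert((1.4, 0.1, -2.6))                    # falls in the same voxel
print(len(m.voxels[m.key((1.2, 0.3, -2.7))]))  # -> 2
```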
(This article belongs to the Special Issue Sensors and Algorithms for 3D Visual Analysis and SLAM)
Figure 1: An example of the H-MS map. Ø0 is a map voxel at scale s0; Ø1[0], Ø1[1], and Ø1[2] are map voxels at scale s1. Map point p participates in the calculation of statistical features for both Ø0 and Ø1[1] simultaneously.
Figure 2: The HMS-SLAM system flowchart.
Figure 3: Comparison of point insertion performance between the H-MS map and the ikd tree map.
Figure 4: Time consumption for removing voxels from the H-MS map at different scales.
Figure 5: The H-MS map created by the HMS-SLAM system in sequence 10 of the KITTI-360 dataset.
12 pages, 2757 KiB  
Article
Multi-Parameter Characterization of Liquid-to-Ice Phase Transition Using Bulk Acoustic Waves
by Andrey Smirnov, Vladimir Anisimkin, Natalia Voronova, Vadim Kashin and Iren Kuznetsova
Sensors 2024, 24(12), 4010; https://doi.org/10.3390/s24124010 - 20 Jun 2024
Viewed by 699
Abstract
The detection of the liquid-to-ice transition is an important challenge for many applications. In this paper, a method for the multi-parameter characterization of the liquid-to-ice phase transition is proposed and tested. The method is based on the fundamental properties of bulk acoustic waves (BAWs). BAWs with shear vertical (SV) or shear horizontal (SH) polarization cannot propagate in liquids, only in solids such as ice. BAWs with longitudinal (L) polarization, however, can propagate in both liquids and solids, but with different velocities and attenuations. Velocities and attenuations of L-BAWs and SV-BAWs are measured in ice using parameters such as time delay and wave amplitude over a frequency range of 1–37 MHz. Based on these measurements, the relevant parameters for Rayleigh surface acoustic waves and Poisson's ratio for ice are determined. The homogeneity of the ice sample is also assessed along its length. A dual sensor has been developed and tested to analyze two phase transitions in two liquids simultaneously. Distilled water and a 0.9% solution of NaCl in water were used as examples. Full article
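The elastic relations behind the multi-parameter characterization can be made concrete with a short worked example: velocities follow from path length and measured time delay, and Poisson's ratio follows from the two bulk-wave velocities. The path lengths and delays below are illustrative, not the paper's data.

```python
# Worked example: velocities from time delay, then Poisson's ratio of ice
# from the longitudinal (V_L) and shear (V_S) bulk-wave velocities.
def velocity(path_len_m, delay_s):
    return path_len_m / delay_s

v_l = velocity(30e-3, 7.7e-6)      # longitudinal BAW, ~3900 m/s in ice
v_s = velocity(30e-3, 15.4e-6)     # shear BAW, ~1950 m/s in ice

# Poisson's ratio from bulk-wave velocities:
nu = (v_l**2 - 2 * v_s**2) / (2 * (v_l**2 - v_s**2))
print(f"V_L = {v_l:.0f} m/s, V_S = {v_s:.0f} m/s, nu = {nu:.3f}")
```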
(This article belongs to the Special Issue Feature Papers in Physical Sensors 2024)
Figure 1: (a) Schematic view and (b) photo of the BAW sensor for the liquid–ice phase transition, with input transducer (1), output transducer (2), and cell (3).
Figure 2: (a) Schematic view and (b) photo of the LBAW sensor for detecting ice homogeneity: input LBAW transducers (1), output LBAW transducers (2), and the cell for the ice sample at h = 30 mm, h1 = 4 mm, h2 = 4 mm, f_L = 13 MHz, l = 17.8 mm, and d = 3.1 mm.
Figure 3: (a) Schematic view and (b) photo of the dual BAW sensor for simultaneous detection of two liquid-to-ice transitions: input LBAW transducer (1), bar quartz substrate (l_z = l_y = 32 mm, l_x = 16 mm) (2), two chambers of the Teflon cell for liquid 1 and liquid 2 or for ice (3, 4), and two output LBAW transducers (5, 6) at d = 1 mm, l = 7 mm, h = 15 mm.
Figure 4: The dependence of the insertion loss S12 vs. time delay τ measured with the LBAW in distilled water (a) before (T = +15 °C) and (b) after (T = −15 °C) the phase transition; f_L = 30 MHz, l = 17.5 mm, d = 2.5 mm.
Figure 5: The dependence of the insertion loss S12 vs. time delay τ measured for the SVBAW in distilled water (a) before (T = +15 °C) and (b) after (T = −15 °C) the phase transition at f_SV = 25 MHz, l = 3.3 mm, d = 2.7 mm.
Figure 6: Insertion loss S12 vs. time delay τ measured at the top (black line) and bottom (red line) ends of the ice sample at T = −20 °C, h = 30 mm, h1 = 4 mm, h2 = 4 mm, f_L = 13 MHz, l = 17.8 mm, d = 3.1 mm, and S12_L(SiO2) = 37.5 dB (top) and 42.5 dB (bottom).
Figure 7: Dependence of the insertion loss S12 vs. time delay τ measured in the dual BAW sensor for two liquids and two ices simultaneously: the liquids at T = +20 °C (red line) and the ice samples at T = −20 °C (black line). The arrows indicate shifts of the QL and QSV signals due to liquid-to-ice phase transitions at d = 1 mm, l = 7 mm, h = 15 mm, and f_L = 13 MHz.
19 pages, 7015 KiB  
Article
Pipeline Leak Detection: A Comprehensive Deep Learning Model Using CWT Image Analysis and an Optimized DBN-GA-LSSVM Framework
by Muhammad Farooq Siddique, Zahoor Ahmad, Niamat Ullah, Saif Ullah and Jong-Myon Kim
Sensors 2024, 24(12), 4009; https://doi.org/10.3390/s24124009 - 20 Jun 2024
Cited by 8 | Viewed by 2050
Abstract
Detecting pipeline leaks is an essential factor in maintaining the integrity of fluid transport systems. This paper introduces an advanced deep learning framework that uses continuous wavelet transform (CWT) images for precise detection of such leaks. Transforming acoustic signals from pipelines under various conditions into CWT scalograms, followed by signal processing by non-local means and adaptive histogram equalization, results in new enhanced leak-induced scalograms (ELIS) that capture detailed energy fluctuations across time-frequency scales. The fundamental approach takes advantage of a deep belief network (DBN) fine-tuned with a genetic algorithm (GA) and unified with a least squares support vector machine (LSSVM) to improve feature extraction and classification accuracy. The DBN-GA framework precisely extracts informative features, while the LSSVM classifier precisely distinguishes between leaky and non-leak conditions. By concentrating solely on the advanced capabilities of ELIS processed through an optimized DBN-GA-LSSVM model, this research achieves high detection accuracy and reliability, making a significant contribution to pipeline monitoring and maintenance. This innovative approach to capturing complex signal patterns can be applied to real-time leak detection and critical infrastructure safety in several industrial applications. Full article
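As an illustration of the scalogram step, the sketch below transforms a synthetic AE burst with a Morlet wavelet, assuming the PyWavelets package; the signal, scales, and sampling rate are illustrative, and the NLM/AHE enhancement and DBN-GA-LSSVM stages are omitted.

```python
# CWT scalogram sketch: turn a toy acoustic-emission burst into a
# time-frequency energy image of the kind fed to the classifier.
import numpy as np
import pywt

fs = 1_000_000                               # 1 MHz AE sampling (assumed)
t = np.arange(0, 0.01, 1 / fs)
signal = np.sin(2 * np.pi * 60_000 * t) * np.exp(-300 * t)  # toy AE burst

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                   # time-frequency energy image
print(scalogram.shape, f"{freqs.max():.0f}..{freqs.min():.0f} Hz")
```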
(This article belongs to the Special Issue Feature Papers in Fault Diagnosis & Sensors 2024)
Figure 1: Integration of acoustic emission, image processing, and deep learning for leak detection.
Figure 2: CWT scalograms (a) with NLM and AHE and (b) without NLM and AHE.
Figure 3: Architecture and training procedure of a deep belief network.
Figure 4: Flowchart of the genetic algorithm.
Figure 5: (a) Experimental setup overview for acoustic emission-based leak detection. (b) Schematic visualization for AE-based leak detection.
Figure 6: AE signal comparison for leak and non-leak conditions (a) at 18 bar pressure and (b) at 13 bar pressure.
Figure 7: AE signal attenuation in a 114.3 mm diameter steel pipe.
Figure 8: Confusion matrix comparison of the suggested model (a) with the Rahimi (b) and Ahmad (c) models (leak size = 1.0 mm).
Figure 9: Confusion matrix comparison of the suggested model (a) with the Rahimi (b) and Ahmad (c) models (leak size = 0.7 mm).
Figure 10: Confusion matrix comparison of the suggested model (a) with the Rahimi (b) and Ahmad (c) models (leak size = 0.5 mm).
Figure 11: t-SNE comparison of the suggested model (a) with the Rahimi (b) and Ahmad (c) models (leak size = 1 mm).
Figure 12: t-SNE comparison of the suggested model (a) with the Rahimi (b) and Ahmad (c) models (leak size = 0.7 mm).
Figure 13: t-SNE comparison of the suggested model (a) with the Rahimi (b) and Ahmad (c) models (leak size = 0.5 mm).
19 pages, 630 KiB  
Article
Efficiency and Security Evaluation of Lightweight Cryptographic Algorithms for Resource-Constrained IoT Devices
by Indu Radhakrishnan, Shruti Jadon and Prasad B. Honnavalli
Sensors 2024, 24(12), 4008; https://doi.org/10.3390/s24124008 - 20 Jun 2024
Viewed by 2591
Abstract
The IoT has become an integral part of the technological ecosystem that we all depend on. The increase in the number of IoT devices has also brought with it security concerns. Lightweight cryptography (LWC) has evolved to be a promising solution to improve the privacy and confidentiality aspect of IoT devices. The challenge is to choose the right algorithm from a plethora of choices. This work aims to compare three different LWC algorithms: AES-128, SPECK, and ASCON. The comparison is made by measuring various criteria such as execution time, memory utilization, latency, throughput, and security robustness of the algorithms in IoT boards with constrained computational capabilities and power. These metrics are crucial to determine the suitability and help in making informed decisions on choosing the right cryptographic algorithms to strike a balance between security and performance. Through the evaluation it is observed that SPECK exhibits better performance in resource-constrained IoT devices. Full article
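The sketch below illustrates the kind of latency/throughput measurement behind the comparison, assuming the pycryptodome package for AES-128; SPECK and ASCON are not available in mainstream Python libraries, so on the actual IoT boards they would be timed analogously with their reference implementations.

```python
# Hedged benchmark sketch: time one AES-128-CTR encryption of a test
# payload and report latency and throughput.
import time
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)                  # AES-128 key
data = get_random_bytes(64) * 1024          # 64 KiB test payload

cipher = AES.new(key, AES.MODE_CTR)
t0 = time.perf_counter()
ct = cipher.encrypt(data)
dt = time.perf_counter() - t0
print(f"latency {dt * 1e3:.2f} ms, throughput {len(data) / dt / 1e6:.1f} MB/s")
```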
(This article belongs to the Collection Cryptography and Security in IoT and Sensor Networks)
Figure 1: Challenges of using traditional cryptographic algorithms in IoT devices.
Figure 2: Test-bed setup.
Figure 3: Memory used in Arduino Nano and Micro.
Figure 4: Execution times of AES-128, SPECK, and ASCON.
Figure 5: Encryption and decryption throughput.
Figure 6: Encryption and decryption latency.
Figure 7: Key scheduling throughput and latency.
15 pages, 2752 KiB  
Article
A Novel Fast Iterative STAP Method with a Coprime Sampling Structure
by Mingfu Li and Hui Li
Sensors 2024, 24(12), 4007; https://doi.org/10.3390/s24124007 - 20 Jun 2024
Viewed by 661
Abstract
In space-time adaptive processing (STAP), the coprime sampling structure can obtain better clutter suppression at a lower hardware cost than the uniform linear sampling structure. In practical applications, however, the performance of the algorithm is often limited by the number of training samples. To solve this problem, this paper proposes a fast iterative coprime STAP algorithm based on truncated nuclear norm minimization (TNNM). This method establishes a virtual clutter covariance matrix (CCM), introduces truncated nuclear norm regularization to enforce the low-rank property of the CCM, and transforms the non-convex problem into a convex optimization problem. Finally, a fast iterative solution method based on the alternating direction method is presented. The effectiveness and accuracy of the proposed algorithm are verified through simulation experiments. Full article
(This article belongs to the Section Intelligent Sensors)
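The core operation behind truncated nuclear norm regularization is a partial singular-value shrinkage: the largest r singular values are kept intact while the rest are soft-thresholded. The sketch below shows only that step, which an alternating-direction (ADMM-style) solver would apply each iteration; the rank r and threshold tau are illustrative parameters, not values from the paper.

```python
# Minimal sketch of the truncated singular value thresholding (SVT) step
# used in truncated-nuclear-norm solvers; not the paper's full algorithm.
import numpy as np

def truncated_svt(M: np.ndarray, r: int, tau: float) -> np.ndarray:
    """Soft-threshold all but the r largest singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[r:] = np.maximum(s[r:] - tau, 0.0)  # top-r singular values untouched
    return (U * s) @ Vt

# Toy use: denoise a low-rank "covariance-like" matrix.
rng = np.random.default_rng(0)
L = rng.standard_normal((32, 4)) @ rng.standard_normal((4, 32))
noisy = L + 0.1 * rng.standard_normal((32, 32))
denoised = truncated_svt(noisy, r=4, tau=0.5)
print(np.linalg.matrix_rank(denoised, tol=1e-6))
```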
Show Figures
Figure 1: Coprime configuration: (a) coprime array; (b) coprime PRI.
Figure 2: Difference coarray and copulse: (a) difference array; (b) difference impulses.
Figure 3: Relationship between the RMSE and the number of samples.
Figure 4: The ratio of DOF.
Figure 5: Beampatterns in the angle-Doppler domain: (a) T-STAP; (b) C-STAP; (c) TNNM-FIC-STAP.
Figure 6: Beampatterns: (a) spatial domain; (b) Doppler domain.
Figure 7: SCNR (signal-to-clutter-plus-noise ratio).
25 pages, 3572 KiB  
Article
A Data Compression Method for Wellbore Stability Monitoring Based on Deep Autoencoder
by Shan Song, Xiaoyong Zhao, Zhengbing Zhang and Mingzhang Luo
Sensors 2024, 24(12), 4006; https://doi.org/10.3390/s24124006 - 20 Jun 2024
Viewed by 685
Abstract
The compression method for wellbore trajectory data is crucial for monitoring wellbore stability. However, classical methods, such as those based on Huffman coding, compressed sensing, and Differential Pulse Code Modulation (DPCM), suffer from low real-time performance, low compression ratios, and large errors between the reconstructed and source data. To address these issues, a new compression method is proposed that, for the first time, leverages a deep autoencoder to significantly improve the compression ratio. Additionally, the method reduces error by compressing and transmitting the residual data from the feature-extraction process using quantization coding and Huffman coding. Furthermore, a mean filter based on an optimal standard-deviation threshold is applied to further minimize error. Experimental results show that the proposed method achieves an average compression ratio of 4.05 for inclination and azimuth data, an improvement of 118.54% over the DPCM method. Meanwhile, the average mean square error of the proposed method is 76.88, a decrease of 82.46% compared with the DPCM method. Ablation studies confirm the effectiveness of the proposed improvements. These findings highlight the efficacy of the proposed method in enhancing wellbore stability monitoring. Full article
(This article belongs to the Section Communications)
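The residual path of the pipeline (quantization coding followed by Huffman coding) can be sketched in a few lines. This is an assumed reading of the method, showing only the residual coder; the deep autoencoder and the mean filter are omitted, and the quantization step size is illustrative.

```python
# Sketch of quantization + Huffman coding applied to an autoencoder's
# reconstruction residual; values and step size are toy examples.
import heapq
from collections import Counter

def huffman_table(symbols):
    """Build {symbol: bitstring} from a symbol sequence."""
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        n1, _, t1 = heapq.heappop(heap)   # two least-frequent subtrees
        n2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (n1 + n2, next_id, merged))
        next_id += 1
    return heap[0][2]

residual = [0.12, -0.05, 0.11, 0.12, -0.05, 0.12]  # toy residual values
step = 0.05
quantized = [round(v / step) for v in residual]    # quantization coding
table = huffman_table(quantized)
bitstream = "".join(table[q] for q in quantized)   # entropy-coded residual
print(table, bitstream)
```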
Show Figures
Figure 1: Block diagram of the compressed data transmission system for LWD.
Figure 2: Diagram of the data compression method based on the deep autoencoder.
Figure 3: Autoencoder structure.
Figure 4: Deep autoencoder structure.
Figure 5: Schematic diagram of LWD system device operation.
Figure 6: Compression results of the deep autoencoder on dataset X_inc_test: (a) original data of X_inc_test and X_azi_test; (b) features extracted by the deep autoencoder; (c) reconstructed data; (d) residual between the raw and reconstructed data.
Figure 7: Compression results of the deep autoencoder on dataset X_azi_test: (a) original data of X_inc_test and X_azi_test; (b) features extracted by the deep autoencoder; (c) reconstructed data; (d) residual between the raw and reconstructed data.
Figure 8: Comparison of compression performance between the proposed method, DPCM-I, and DPCM: (a) results for X_inc_test data; (b) results for X_azi_test data.
Figure 9: Comparison of the reconstructed data of the proposed method, deepAE, DPCM-I, and DPCM with the raw data: (a,b) portions of the original X_inc_test and X_azi_test curves; (c,e,g,i) reconstructions of (a) using the proposed method, DPCM-I, DPCM, and deepAE, respectively; (d,f,h,j) reconstructions of (b) using the same methods.
Figure 10: Comparison of the reconstructed data curves of deepAE, deepAE+QC+HC, and the proposed method with the original curves: (a,b) portions of the original X_inc_test and X_azi_test curves; (c,e,g) reconstructions of (a) using deepAE, deepAE+QC+HC, and the proposed method, respectively; (d,f,h) reconstructions of (b) using the same methods.
18 pages, 13734 KiB  
Article
Channel-Blind Joint Source–Channel Coding for Wireless Image Transmission
by Hongjie Yuan, Weizhang Xu, Yuhuan Wang and Xingxing Wang
Sensors 2024, 24(12), 4005; https://doi.org/10.3390/s24124005 - 20 Jun 2024
Viewed by 880
Abstract
Joint source–channel coding (JSCC) based on deep learning has shown significant advances in image transmission tasks. However, previous channel-adaptive JSCC methods often rely on the signal-to-noise ratio (SNR) of the current channel for encoding, overlooking the neural network's ability to adapt across varying SNRs on its own. This paper investigates the self-adaptive capability of deep learning-based JSCC models on dynamically changing channels and introduces a novel method named Channel-Blind JSCC (CBJSCC). CBJSCC leverages the intrinsic learning capability of neural networks to self-adapt to dynamic channels and diverse SNRs without relying on external SNR information. This approach is advantageous because it is unaffected by channel estimation errors and can be applied to one-to-many wireless communication scenarios. To enhance performance on JSCC tasks, the CBJSCC model employs a specially designed encoder–decoder. Experimental results show that CBJSCC outperforms existing channel-adaptive JSCC methods that depend on SNR estimation and feedback, both in additive white Gaussian noise environments and under slow Rayleigh fading channel conditions. A comprehensive analysis of the model's performance further validates the robustness and adaptability of this strategy across different application scenarios. Full article
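A hedged sketch of the training-time channel model this abstract implies: an AWGN layer whose SNR is drawn at random for each batch, so the encoder and decoder never receive explicit SNR side-information. The [−5, 20] dB range is taken from the training setup described for Figure 8 below; everything else is an assumption for illustration.

```python
# Channel-blind training idea: pass encoded symbols through AWGN at a
# randomly sampled SNR, with no SNR fed to the network itself.
import torch

def awgn_channel(x: torch.Tensor, snr_db: float) -> torch.Tensor:
    signal_power = x.pow(2).mean()
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return x + torch.sqrt(noise_power) * torch.randn_like(x)

z = torch.randn(8, 64)                            # toy encoded symbols
snr = torch.empty(1).uniform_(-5.0, 20.0).item()  # fresh SNR per batch
z_noisy = awgn_channel(z, snr)                    # decoder sees only z_noisy
```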
Show Figures
Figure 1: Illustration of the DL-based JSCC.
Figure 2: (a) Architecture of the proposed CBJSCC method. The blue and green dashed boxes represent the encoder and decoder, respectively; the numbers under each IRAB block indicate the expansion factor, and N and T represent the dimensions of the feature map. (b) In real-world deployment of the CBJSCC model, the encoded signal can be transmitted directly to the client through various channels at different ratios.
Figure 3: (a) The IRAB module, characterized by the convolution kernel sizes denoted by the numbers in the box; the variable X signifies the multiple of the current convolution kernel size relative to the input. (b) The ESA block. (c) The ACMix block.
Figure 4: Image reconstruction performance on the Kodak dataset under an AWGN channel. (a,b) show results for transmission rate ratios of 1/6 and 1/12, respectively; the solid red curve is the proposed CBJSCC method, and the dashed lines are ADJSCC under different SNR feedback conditions (perfect feedback and feedback SNRs of 0, 5, 10, and 20 dB). (c) extends the evaluation to slow Rayleigh fading channels, comparing CBJSCC with alternative methods (indicated in parentheses) at different transmission rates.
Figure 5: (a,b) Image reconstruction performance of CBJSCC and ADJSCC on the Kodak dataset under an AWGN channel with a 1 dB SNR at transmission rates of 1/6 and 1/12, respectively. (c) Image reconstruction under a slow Rayleigh fading channel at a transmission rate of 1/6.
Figure 6: (a) Architecture of the AF module in ADJSCC. (b) Modified CBJSCC with the AF module; the red dashed line represents the SNR input, and the yellow block the inserted AF module.
Figure 7: PSNR curves for reconstructed images under three different scenarios.
Figure 8: Performance comparison with different training SNR ranges: the red solid line represents the model trained on the wide SNR range [−5, 20] dB; the pink dashed line the same range with fine-tuning; the light blue dotted line the model trained on discrete SNR values {1, 4, 7, 10, 13} dB; the blue dash–dot line the model trained on the continuous range [1, 13] dB.
Figure 9: Clustered column chart of CBJSCC performance across multiple dataset domains; line color indicates the channel SNR.
15 pages, 8793 KiB  
Article
Optical Design of a Hyperspectral Remote-Sensing System Based on an Image-Slicer Integral Field Unit in the Short-Wave Infrared Band
by Yi Ding, Chunyu Liu, Guoxiu Zhang, Pengfei Hao, Shuai Liu, Yingming Zhao, Yuxin Zhang and Hongxin Liu
Sensors 2024, 24(12), 4004; https://doi.org/10.3390/s24124004 - 20 Jun 2024
Viewed by 819
Abstract
Grating-type spectral imaging systems are frequently employed for high-resolution remote-sensing observations of the Earth. However, the entrance of a grating-type spectral imaging system is a slit or a pinhole, a structure that relies on the push-broom method and therefore struggles to capture the spectral information of transiently changing targets. To address this issue, an integral field unit (IFU) is used to slice the focal plane of the telescope system, thereby expanding the instantaneous field of view (IFOV) of the grating-type spectral imaging system. The aberrations introduced by expanding the IFU's single-slice field of view (FOV) are corrected, and the conversion of the IFU's FOV from arcseconds to degrees is achieved. The design of a spectral imaging system based on an image-slicer IFU for remote sensing is thus completed. The system covers a wavelength range of 1400 nm to 2000 nm with a spectral resolution better than 3 nm. Compared with the traditional grating-type spectral imaging system, its IFOV is expanded by a factor of four, and it captures the complete spectral information of transiently changing targets in a single exposure. Simulation results demonstrate good performance at each sub-slit, validating the effectiveness and advantages of the proposed system for dynamic target capture in remote sensing. Full article
(This article belongs to the Section Remote Sensors)
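A quick back-of-envelope check implied by the stated specifications (this arithmetic is ours, not the paper's): covering 1400–2000 nm at a spectral resolution no worse than 3 nm requires

```latex
N_{\lambda} \;\geq\; \frac{2000\,\mathrm{nm} - 1400\,\mathrm{nm}}{3\,\mathrm{nm}} \;=\; 200
```

spectral channels per spatial sample, which the detector and grating dispersion must jointly support.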
Show Figures
Figure 1: Principle of the image-slicer IFS: (a) schematic of the optical path structure of the system; (b) schematic of 3D data-cube formation from the focal plane of the fore-optical system.
Figure 2: Schematic diagram of the field-of-view cut and its dispersion.
Figure 3: Schematic of a spectral imaging system based on an image-slicer IFU for remote sensing.
Figure 4: Optical path structure of the IFU: (a) schematic arrangement of the mirrors; (b) optical path of the system simulation.
Figure 5: Schematic of the tilt of sub-mirrors within a single-group IFU.
Figure 6: MTF curve of the fore-optical system.
Figure 7: Footprint diagrams of the common pupil plane at (a) −0.5 deg, (b) 0 deg, and (c) 0.5 deg.
Figure 8: Optical design of the spectral imaging system based on an image-slicer IFU for remote sensing.
Figure 9: MTF curves of the spectral imaging system based on an image-slicer IFU: (a–d) MTF curves in configurations 1–4, respectively, for the wavelengths of 1700 nm, 1400 nm, and 2000 nm.
Figure 10: Energy concentration curves of the spectral imaging system based on an image-slicer IFU: (a–d) energy concentration curves in configurations 1–4, respectively, for the wavelengths of 1700 nm, 1400 nm, and 2000 nm.
Figure 11: Footprint diagram in the image plane of the entire system and the spot diagram in configuration 1 for different wavelengths.
12 pages, 1321 KiB  
Article
Developing a Five-Minute Normative Database of Heart Rate Variability for Diagnosing Cardiac Autonomic Dysregulation for Patients with Major Depressive Disorder
by Li-Hsin Chang, Min-Han Huang and I-Mei Lin
Sensors 2024, 24(12), 4003; https://doi.org/10.3390/s24124003 - 20 Jun 2024
Viewed by 904
Abstract
Heart rate variability (HRV) is related to emotional regulation and serves as an index of cardiac vagal control and cardiac autonomic activity. This study aimed to develop a Taiwan HRV normative database covering individuals aged 20 to 70 years and to assess its diagnostic validity in patients with major depressive disorder (MDD). A total of 311 healthy participants formed the normative database and were divided into five 10-year age groups, and the means and standard deviations of the HRV indices were calculated for each group. We recruited 272 patients with MDD for cross-validation, compared their HRV indices with the normative database, and converted them to Z-scores to explore the deviation of HRV in MDD patients from the healthy groups. The results show a gradual decline in HRV indices with advancing age in the healthy control (HC) group, with females in the HC group exhibiting higher cardiac vagal control and parasympathetic activity than males. Conversely, patients in the MDD group demonstrate lower HRV indices than the HC group, and their symptoms of depression and anxiety correlate negatively with HRV indices. The Taiwan HRV normative database shows good psychometric characteristics of cross-validation. Full article
(This article belongs to the Special Issue Applications of Body Worn Sensors and Wearables)
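The cross-validation step described above reduces to a stratified Z-score lookup. A minimal sketch, with entirely hypothetical table values (the paper's normative means and SDs are not reproduced here):

```python
# Convert a patient's HRV index to a Z-score against the age- and
# sex-matched normative mean and SD. Table values are illustrative only.
norms = {  # (sex, age_group) -> (mean, sd) for one HRV index
    ("F", "20-29"): (48.0, 12.0),
    ("M", "20-29"): (44.0, 11.0),
}

def hrv_z_score(value: float, sex: str, age_group: str) -> float:
    mean, sd = norms[(sex, age_group)]
    return (value - mean) / sd

z = hrv_z_score(31.0, "F", "20-29")
print(f"Z = {z:.2f}")  # negative Z: HRV below the matched healthy norm
```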
Show Figures
Figure 1: The HRV index across different age groups for male, female, and all participants. Note: the green, red, and gray dashed lines represent the linear regression lines of the HRV index; the shaded green and orange areas depict the 95% confidence intervals for HRV in males and females, respectively. *** p < 0.001.
Figure 2: Scatterplots of HRV Z-scores in the MDD group.
19 pages, 11929 KiB  
Article
Improved Intrusion Detection Based on Hybrid Deep Learning Models and Federated Learning
by Jia Huang, Zhen Chen, Sheng-Zheng Liu, Hao Zhang and Hai-Xia Long
Sensors 2024, 24(12), 4002; https://doi.org/10.3390/s24124002 - 20 Jun 2024
Viewed by 981
Abstract
The security of the Industrial Internet of Things (IIoT) is of vital importance, and the Network Intrusion Detection System (NIDS) plays an indispensable role in it. Although there is an increasing number of studies on using deep learning to achieve network intrusion detection, the limited local data of a device may lead to poor model performance, because deep learning requires large-scale datasets for training. Some solutions propose centralizing the local datasets of devices for deep learning training, but this may raise user privacy issues. To address these challenges, this study proposes a novel federated learning (FL)-based approach aimed at improving the accuracy of network intrusion detection while ensuring data privacy. This research combines convolutional neural networks with attention mechanisms to develop a new deep learning intrusion detection model specifically designed for the IIoT. Additionally, variational autoencoders are incorporated to enhance data privacy protection. Furthermore, an FL framework enables multiple IIoT clients to jointly train a shared intrusion detection model without sharing their raw data. This strategy significantly improves the model's detection capability while effectively addressing data privacy and security issues. To validate the effectiveness of the proposed method, a series of experiments was conducted on a real-world Internet of Things (IoT) network intrusion dataset. The experimental results demonstrate that our model and FL approach significantly improve key performance metrics such as detection accuracy, precision, and false-positive rate (FPR) compared with traditional local training methods and existing models. Full article
(This article belongs to the Section Internet of Things)
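The FL framework the abstract describes typically reduces to a federated-averaging round: each client trains locally, and the server averages parameters weighted by local sample counts, so raw data never leave the devices. A minimal sketch under that assumption (model and weighting details are not taken from the paper):

```python
# FedAvg-style server aggregation over PyTorch state_dicts.
import torch

def fedavg(client_states, client_sizes):
    """Average a list of state_dicts, weighted by each client's sample count."""
    total = float(sum(client_sizes))
    avg = {}
    for key in client_states[0]:
        avg[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return avg

# Usage: states = [m.state_dict() for m in client_models]
#        global_model.load_state_dict(fedavg(states, sizes))
```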
Show Figures
Figure 1: DVACNN-Fed architecture.
Figure 2: Test accuracy under different learning rates (lr) on the TON-IoT dataset.
Figure 3: Performance of privacy protection on the BoT-IoT dataset.
Figure 4: Test loss curves of the considered intrusion detection models under four scenarios on the TON-IoT dataset.
Figure 5: Comparison of the considered intrusion detection models under four scenarios on the TON-IoT dataset.
Figure 6: FPR of the considered intrusion detection models under four scenarios on the TON-IoT dataset.
Figure 7: ROC and PR curves of the considered intrusion detection models for each class on the TON-IoT dataset.
Figure 8: Test loss curves of the considered intrusion detection models under four scenarios on the BoT-IoT dataset.
Figure 9: Comparison of the considered intrusion detection models under four scenarios on the BoT-IoT dataset.
Figure 10: Comparison of the local, ideal, and proposed models under four scenarios on the TON-IoT dataset.
Figure 11: Comparison of the local, ideal, and proposed models under four scenarios on the BoT-IoT dataset.
26 pages, 2613 KiB  
Systematic Review
Technologies for Evaluation of Pelvic Floor Functionality: A Systematic Review
by Nikolas Förstl, Ina Adler, Franz Süß and Sebastian Dendorfer
Sensors 2024, 24(12), 4001; https://doi.org/10.3390/s24124001 - 20 Jun 2024
Cited by 1 | Viewed by 1333
Abstract
Pelvic floor dysfunction is a common problem in women and has a negative impact on their quality of life. The aim of this review was to provide a general overview of the current state of technology used to assess pelvic floor functionality, and to survey the literature on physiological and anatomical factors that correlate with pelvic floor health. The systematic review was conducted according to the PRISMA guidelines. The PubMed, ScienceDirect, Cochrane Library, and IEEE databases were searched for publications on sensor technology for the assessment of pelvic floor functionality; anatomical and physiological parameters were identified through a manual search. In total, 114 publications were included in the systematic review, and twelve different sensor technologies were identified. Information on the obtained parameters, sensor position, test activities, and subject characteristics was tabulated from each publication. A total of 16 anatomical and physiological parameters influencing pelvic floor health were identified in 17 published studies and ranked by statistical significance. Taken together, this review could serve as a basis for the development of novel sensors enabling quantifiable prevention and diagnosis, as well as individualized documentation of rehabilitation processes related to pelvic floor dysfunction. Full article
Show Figures
Figure 1: PRISMA flow diagram of the study-selection process.
Figure 2: Number of publications included in this review, by sensor type.
Figure 3: Parameters obtained with the identified sensors, with the number of times each was acquired.
Figure 4: Activities performed during data assessment with the respective sensors, with their number of occurrences.