Sensors, Volume 24, Issue 13 (July-1 2024) – 395 articles

Cover Story (view full-size image): Nucleic acid amplification tests are key tools for the detection and diagnosis of many diseases. While digital amplification offers more precise quantification of target nucleic acids than bulk assays, developing point-of-care (POC) digital nucleic acid tests has been challenging. With the vibrating sharp-tip capillary, a simple and portable system for tunable, on-demand droplet generation over a large droplet size range is possible. Combining it with loop-mediated isothermal amplification (LAMP) also substantially reduces the requirement for heating elements. This work paves the way for achieving digital nucleic acid amplification tests in resource-limited settings. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
17 pages, 4885 KiB  
Article
Research on Gate Opening Control Based on Improved Beetle Antennae Search
by Lijun Wang, Yibo Wang, Yehao Kang, Jie Shen, Ruixue Cheng, Jianyong Zhang and Shuheng Shi
Sensors 2024, 24(13), 4425; https://doi.org/10.3390/s24134425 - 8 Jul 2024
Viewed by 853
Abstract
To address the issues of sluggish response and inadequate precision in traditional gate opening control systems, this study presents a novel approach for direct current (DC) motor control utilizing an enhanced beetle antennae search (BAS) algorithm to fine-tune the parameters of a fuzzy proportional integral derivative (PID) controller. Initially, the mathematical model of the DC motor drive system is formulated. Subsequently, employing a search algorithm, the three parameters of the PID controller are optimized in accordance with the control requirements. Next, software simulation is employed to analyze the system’s response time and overshoot. Furthermore, a comparative analysis is conducted between fuzzy PID control based on the improved beetle antennae search algorithm and conventional approaches such as the traditional beetle antennae search algorithm, the traditional particle swarm algorithm, and the enhanced particle swarm algorithm. The findings indicate the superior performance of the proposed method, characterized by reduced oscillations and accelerated convergence compared to the alternative methods. Full article
(This article belongs to the Section Physical Sensors)
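The abstract does not reproduce the paper's improved BAS variant or the fuzzy inference layer, but the baseline idea — a beetle probing left and right along a random direction and stepping toward the better side — is compact enough to sketch. Below is a minimal Python sketch that tunes fixed PID gains on a stand-in second-order plant; the plant parameters, cost function, and decay schedule are all assumptions for illustration, not the paper's model.

```python
import numpy as np

def step_response_cost(gains, dt=0.01, t_end=2.0):
    """ITAE-style cost of a PID loop on a toy second-order plant (an assumed
    stand-in for the paper's DC motor model, which is not given here)."""
    kp, ki, kd = gains
    x = v = integ = prev_err = 0.0
    cost = 0.0
    for k in range(int(t_end / dt)):
        err = 1.0 - x                      # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        # plant: x'' = -2*zeta*wn*x' - wn^2*x + wn^2*u  (wn = 5, zeta = 0.3)
        a = -2 * 0.3 * 5 * v - 25 * x + 25 * u
        v += a * dt
        x += v * dt
        cost += (k * dt) * abs(err) * dt   # time-weighted absolute error
    return cost

def beetle_antennae_search(f, x0, n_iter=200, d0=1.0, step0=1.0):
    """Basic BAS: probe left/right along a random direction, move away from
    the worse antenna; antenna length and step size decay each iteration."""
    x, d, step = np.array(x0, float), d0, step0
    best, best_x = f(x), x.copy()
    for _ in range(n_iter):
        b = np.random.randn(x.size)
        b /= np.linalg.norm(b) + 1e-12
        fl, fr = f(x + d * b), f(x - d * b)
        x = np.clip(x - step * np.sign(fl - fr) * b, 0, None)  # gains >= 0
        fx = f(x)
        if fx < best:
            best, best_x = fx, x.copy()
        d, step = 0.95 * d + 0.01, 0.95 * step
    return best_x, best

if __name__ == "__main__":
    gains, cost = beetle_antennae_search(step_response_cost, [1.0, 1.0, 0.1])
    print("PID gains (kp, ki, kd):", gains, "cost:", cost)
```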
Figures:
Figure 1. Schematic diagram of the brushless DC motor control system.
Figure 2. Load model of the DC motor drive mechanism.
Figure 3. Gate opening control system.
Figure 4. Schematic diagram of fuzzy PID control.
Figure 5. Flowchart of the improved beetle antennae search algorithm.
Figure 6. Flowchart of fuzzy PID parameter optimization by the beetle antennae search algorithm.
Figure 7. PID control simulation waveforms.
Figure 8. Simulated waveform of fuzzy PID control.
Figure 9. Optimal individual fitness curve.
Figure 10. Fuzzy PID simulation waveform of the improved beetle antennae search algorithm.
Figure 11. Convergence curve of the fitness function.
Figure 12. Comparison of step response curves.
Figure 13. Winch opener and PLC control cabinet.
Figure 14. Water conservancy sluice gate.
Figure 15. Comparison of motor speed response under different algorithmic controls.
27 pages, 31676 KiB  
Article
Visual-Aided Obstacle Climbing by Modular Snake Robot
by Carla Cavalcante Koike, Dianne Magalhães Viana, Jones Yudi, Filipe Aziz Batista, Arthur Costa, Vinícius Carvalho and Thiago Rocha
Sensors 2024, 24(13), 4424; https://doi.org/10.3390/s24134424 - 8 Jul 2024
Viewed by 960
Abstract
Snake robots, also known as apodal robots, are among the most common and versatile modular robots. Primarily due to their ability to move in different patterns, they can operate in scenarios with several constraints, some of them hardly accessible to other robot configurations. This paper deals with a specific environmental constraint in which the robot needs to climb a prismatic obstacle, similar to a step. The objective is to carry out simulations of this function before implementing it in the physical model. To this end, we propose two different algorithms, parameterized by the obstacle dimensions determined by image processing, and both are evaluated in simulated experiments. The results show that both algorithms are viable for testing in real robots, although more complex scenarios still need to be further studied. Full article
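The paper parameterizes its climbing algorithms with obstacle dimensions obtained from edge detection and a Hough transform (see Figure 5 below). As a rough illustration of that pipeline, here is a hedged OpenCV sketch; the Canny thresholds, Hough parameters, and the pixel-to-metre conversion are assumptions, and the paper's projection model (Figure 4) would be needed to recover real dimensions.

```python
import cv2
import numpy as np

def detect_step_edges(image_path, angle_tol_deg=10):
    """Find near-horizontal lines (candidate step edges) with Canny + the
    probabilistic Hough transform; returns them sorted top to bottom."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)
    horizontals = []
    for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        if abs(angle) < angle_tol_deg:
            horizontals.append((x1, y1, x2, y2))
    return sorted(horizontals, key=lambda l: (l[1] + l[3]) / 2)

def step_height_pixels(lines):
    """Pixel distance between the top step edge and the ground line; turning
    pixels into metres needs a projection model like the paper's Figure 4."""
    if len(lines) < 2:
        return None
    ys = [(y1 + y2) / 2 for _, y1, _, y2 in lines]
    return abs(ys[-1] - ys[0])
```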
Figures:
Figure 1. Modular snake robot employed in this work.
Figure 2. Module’s dimensions and overall geometry. (a) Module isometric view; (b) width and height of the module; (c) module’s length and distance to motor axis.
Figure 3. Module’s components and assembly.
Figure 4. Schematic view of the projected lengths according to the distance to the observer and size of the segment.
Figure 5. Source image and processed image after edge detection and Hough transform. (a) Source image; (b) detected lines in pre-processed image.
Figure 6. Resulting undirected unweighted graph.
Figure 7. Photos obtained from a camera simulation code. (a) Camera first photo—simulation code; (b) camera second photo—simulation code.
Figure 8. Analysis obtained through the image processing algorithm. (a) Camera first photo analysis; (b) camera second photo analysis.
Figure 9. Photos obtained from the vision sensor of CoppeliaSim. (a) Vision sensor first photo; (b) vision sensor second photo.
Figure 10. Analysis obtained through the image processing algorithm. (a) Vision sensor first photo analysis; (b) vision sensor second photo analysis.
Figure 11. Start of movement with the robot still far from the step.
Figure 12. Moments when the photos were taken. (a) First module parallel to the step taking the first photo; (b) first module parallel to the step taking the second photo.
Figure 13. Photos taken by the robot during the algorithm. (a) First photo taken; (b) second photo taken.
Figure 14. Photo analysis—image processing. (a) Edges found for the first photo; (b) edges found for the second photo.
Figure 15. Robot reaches the step.
Figure 16. First three movements to climb the step. (a) Lifting the first module; (b) robot goes forward and the first module touches the corner of the step; (c) lifting the second module and supporting the first module on top of the step.
Figure 17. Middle and end of climbing the step. (a) Robot halfway to completing the climb; (b) climbing completed.
Figure 18. Start of movement with the robot still far from the step.
Figure 19. Photos taken by the robot during the algorithm. (a) First photo taken; (b) second photo taken.
Figure 20. Photo analysis—image processing. (a) Edges found for the first photo; (b) edges found for the second photo.
Figure 21. Robot reaches the side of the step.
Figure 22. Robot aligns its joints to the step.
Figure 23. Robot makes a base.
Figure 24. First four movements to climb the step. (a) Lifting the first module; (b) lifting the second module and aligning the first; (c) first module yaws to be on top of the step; (d) third module lifts, second module descends while aligning, and the first module yaws, also aligning.
Figure 25. Robot makes the base at the top and undoes the base at the bottom.
Figure 26. End of climbing the step and aligning all the modules. (a) All of the robot is on top of the step, still with the base; (b) aligning all the modules on top of the step.
Figure 27. Aligned robot joints before movement starts.
Figure 28. Start of rotation movement.
Figure 29. Middle of the rotation process.
Figure 30. Rotation movement finished, and robot aligned to the step.
Figure 31. Robot joints aligned before starting a climbing algorithm.
23 pages, 3852 KiB  
Review
Automatic Monitoring Methods for Greenhouse and Hazardous Gases Emitted from Ruminant Production Systems: A Review
by Weihong Ma, Xintong Ji, Luyu Ding, Simon X. Yang, Kaijun Guo and Qifeng Li
Sensors 2024, 24(13), 4423; https://doi.org/10.3390/s24134423 - 8 Jul 2024
Viewed by 1324
Abstract
The research on automatic monitoring methods for greenhouse gas and hazardous gas emissions is currently a focal point in the fields of environmental science and climatology. As of 2023, greenhouse gases emitted by the livestock sector account for about 11–17% of total global emissions, with enteric fermentation in ruminants being the main source of these gases. With the escalating problem of global climate change, accurate and effective monitoring of gas emissions has become a top priority. Presently, the determination of gas emission indices relies on specialized instrumentation such as breathing chambers, greenfeed systems, methane laser detectors, etc., each characterized by distinct principles, applicability, and accuracy levels. This paper first explains the mechanisms and effects of gas production by ruminant production systems, focusing on the methods, principles, advantages, and disadvantages of monitoring gas concentrations; a summary of existing methods reveals their shortcomings, such as limited applicability, low accuracy, and high cost. In response to the current challenges in the field of equipment for monitoring greenhouse and hazardous gas emissions from ruminant production systems, this paper outlines future perspectives with the aim of developing more efficient, user-friendly, and cost-effective monitoring instruments. Full article
(This article belongs to the Section Biosensors)
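Several of the reviewed detectors (NDIR, FTIR) are absorption-based and reduce, at heart, to the Beer–Lambert law. As a hedged illustration of how an NDIR intensity reading maps to a gas concentration, the sketch below inverts that law; the absorptivity and cell length are made-up example numbers, not values from the review.

```python
import math

def ndir_concentration(I, I0, epsilon, path_length):
    """Invert the Beer-Lambert law I = I0 * exp(-epsilon * c * L) for the
    gas concentration c seen by an NDIR measurement cell."""
    return -math.log(I / I0) / (epsilon * path_length)

# Illustrative numbers only: a 10 cm cell, with an assumed effective
# absorptivity of 0.35 (atm^-1 cm^-1 scale chosen for the example).
c = ndir_concentration(I=0.92, I0=1.0, epsilon=0.35, path_length=10.0)
print(f"estimated concentration: {c:.4f} atm")
```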
Figures:
Figure 1. Constituents of gases emitted to the atmosphere by ruminants and their proportions [12].
Figure 2. Processes of gas production (mainly CO2, CH4, and NH3) from feed decomposition in ruminants (mainly in the rumen).
Figure 3. Pathways by which ruminants convert feed into gas for expulsion from the body [21].
Figure 4. Summary of methodologies for monitoring greenhouse and hazardous gases.
Figure 5. Electrochemical detector detection principle.
Figure 6. Detection principle of GC (gas chromatography).
Figure 7. Principles of greenhouse gas monitoring using FTIR detectors.
Figure 8. Schematic representation of the effect of satellite monitoring of CH4 images [64].
Figure 9. Principles of greenhouse gas monitoring using NDIR detectors.
Figure 10. Methane emission plume observed by a SWIR camera, where darker colors indicate higher methane concentrations.
Figure 11. Schematic diagram of the structure of the respiratory chamber and the life of the sheep in the respiratory chamber.
Figure 12. Mobile open-circuit indirect calorimetry equipment cart. (1) Head hood, (2) fan, (3) mass flowmeter, (4) gas cooler, (5) gas analyzer (oxygen, carbon dioxide, and methane), and (6) box for system control and data acquisition panel [94].
Figure 13. Greenfeed machine model diagram [86].
19 pages, 6221 KiB  
Article
Learning Temporal–Spatial Contextual Adaptation for Three-Dimensional Human Pose Estimation
by Hexin Wang, Wei Quan, Runjing Zhao, Miaomiao Zhang and Na Jiang
Sensors 2024, 24(13), 4422; https://doi.org/10.3390/s24134422 - 8 Jul 2024
Viewed by 1064
Abstract
Three-dimensional human pose estimation focuses on generating 3D pose sequences from 2D videos. It has enormous potential in the fields of human–robot interaction, remote sensing, virtual reality, and computer vision. Existing excellent methods primarily focus on exploring spatial or temporal encoding to achieve 3D pose inference. However, various architectures exploit the independent effects of spatial and temporal cues on 3D pose estimation, while neglecting their spatial–temporal synergistic influence. To address this issue, this paper proposes a novel 3D pose estimation method with a dual-adaptive spatial–temporal former (DASTFormer) and additional supervised training. The DASTFormer contains attention-adaptive (AtA) and pure-adaptive (PuA) modes, which enhance pose inference from 2D to 3D by adaptively learning spatial–temporal effects, considering both their cooperative and independent influences. In addition, an additional supervised training with batch variance loss is proposed in this work. Different from the common training strategy, a two-round parameter update is conducted on the same batch data. Not only can it better explore the potential relationship between spatial–temporal encoding and 3D poses, but it can also alleviate the batch size limitations imposed by graphics cards on transformer-based frameworks. Extensive experimental results show that the proposed method significantly outperforms most state-of-the-art approaches on the Human3.6M and HumanEva datasets. Full article
(This article belongs to the Special Issue Computer Vision and Virtual Reality: Technologies and Applications)
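The two-round update on the same batch can be sketched generically. The following PyTorch fragment is an assumption-laden toy: the model is a stand-in MLP rather than DASTFormer, and `batch_variance_loss` is one plausible reading of a loss that guides the second-round output ("Pose_After") to surpass the first ("Pose_Before"); the paper's exact BVLoss definition is not given here.

```python
import torch
from torch import nn

def batch_variance_loss(pose_after, pose_before, target):
    """Assumed form: penalize second-round predictions that fail to improve
    on the first-round output (the paper's BVLoss may differ)."""
    err_after = (pose_after - target).norm(dim=-1)
    err_before = (pose_before - target).norm(dim=-1)
    return torch.relu(err_after - err_before).mean()

def two_round_step(model, x2d, y3d, optimizer, mse=nn.MSELoss()):
    # Round 1: ordinary supervised update on the batch.
    pose_before = model(x2d)
    loss1 = mse(pose_before, y3d)
    optimizer.zero_grad(); loss1.backward(); optimizer.step()
    # Round 2: same batch again, adding the variance term that asks the
    # updated model ("Pose_After") to beat the round-1 output.
    pose_after = model(x2d)
    loss2 = mse(pose_after, y3d) + batch_variance_loss(
        pose_after, pose_before.detach(), y3d)
    optimizer.zero_grad(); loss2.backward(); optimizer.step()
    return loss1.item(), loss2.item()

# Toy stand-in: 17 joints, (x, y) in, (x, y, z) out.
model = nn.Sequential(nn.Flatten(), nn.Linear(17 * 2, 256), nn.ReLU(),
                      nn.Linear(256, 17 * 3), nn.Unflatten(1, (17, 3)))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 17, 2), torch.randn(8, 17, 3)
print(two_round_step(model, x, y, opt))
```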
Figures:
Figure 1. Some failed visualization examples of 3D pose estimation using a single image from the wild dataset 3DPW [19] as input. The first line refers to the raw inputs. The second line shows the 3D pose estimated by VideoPose3D [16]. (a) When the body is obstructed, the estimated pose of the right arm deviates. (b) When the body is in a complex posture, there is unexpected overlap in the 3D pose of the upper body. (c) When the background is cluttered, there is an incorrect association between the left and right legs.
Figure 2. Outline of the proposed method. LTS and DASTFormer are responsible for feature encoding. BVLoss is only applied for the second supervised training and guides the 3D Pose_After results to surpass the 3D Pose_Before. Best viewed in color.
Figure 3. DASTFormer. DASTFormer consists of N spatial–temporal blocks (in grey) with two adaptive modes. The green subgraph on the left represents the attention-adaptive mode (AtA), while the blue part on the right shows the pure-adaptive mode (PuA).
Figure 4. Visualizations of PuA in Block 3. The first row presents the real weights, while the second row depicts the normalized weights. Each column represents the attention weights α_S, α_T, and α_ST, respectively. The x-axis and y-axis represent frame number and keypoint id.
Figure 5. Qualitative comparison with PoseFormer [20] and GT. Our method is qualitatively compared with PoseFormer [20] on some actions in Human3.6M. The blue circles highlight positions where our method achieves superior results.
Figure 6. Visualization under challenging conditions in real-world videos. The green arrows indicate accurate pose estimation, while the red arrows signify deviations in the estimated pose. The labels (a) and (b) represent two different videos.
20 pages, 5255 KiB  
Article
Tackling Few-Shot Challenges in Automatic Modulation Recognition: A Multi-Level Comparative Relation Network Combining Class Reconstruction Strategy
by Zhao Ma, Shengliang Fang, Youchen Fan, Shunhu Hou and Zhaojing Xu
Sensors 2024, 24(13), 4421; https://doi.org/10.3390/s24134421 - 8 Jul 2024
Viewed by 759
Abstract
Automatic Modulation Recognition (AMR) is a key technology in the field of cognitive communication, playing a core role in many applications, especially in wireless security. Deep learning (DL)-based AMR technology has achieved many research results, greatly promoting the development of AMR technology. However, the few-shot dilemma faced by DL-based AMR methods greatly limits their application in practical scenarios. Therefore, this paper addresses the challenge of AMR with limited data and proposes a novel meta-learning method, the Multi-Level Comparison Relation Network with Class Reconstruction (MCRN-CR). First, the method designs a multi-level comparison relation network in which embedding functions output feature maps hierarchically, and the relation scores between query samples and support samples are computed comprehensively to determine the modulation category. Second, the embedding function integrates a reconstruction module, leveraging an autoencoder for support sample reconstruction, wherein the encoder serves dual purposes as the embedding mechanism. The training regimen incorporates a meta-learning paradigm, harmoniously combining classification and reconstruction losses to refine the model’s performance. The experimental results on the RadioML2018 dataset show that our designed method can greatly alleviate the small-sample problem in AMR and is superior to existing methods. Full article
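A relation network scores a (query, support) pair with a small learned comparator rather than a fixed metric; the multi-level variant repeats this at several embedding depths and combines the scores. The sketch below illustrates only that scoring scheme; the dimensions, number of levels, and score-summing rule are assumptions, and the class-reconstruction branch is omitted.

```python
import torch
from torch import nn

class RelationHead(nn.Module):
    """Score the similarity of a (query, support) feature pair; one head per
    embedding level, as a stand-in for multi-level comparison."""
    def __init__(self, feat_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, query, support):
        return self.net(torch.cat([query, support], dim=-1)).squeeze(-1)

def classify(query_feats, proto_feats, heads):
    """Sum relation scores over levels; predict the class whose prototypes
    (mean support embeddings) relate most strongly to the query."""
    scores = 0
    for q, p, head in zip(query_feats, proto_feats, heads):
        # q: (C, D) query repeated per class, p: (C, D) class prototypes
        scores = scores + head(q, p)
    return scores.argmax().item()

C, D = 5, 32                       # 5-way episode, 32-dim features per level
heads = [RelationHead(D) for _ in range(3)]
query = [torch.randn(1, D).repeat(C, 1) for _ in range(3)]
protos = [torch.randn(C, D) for _ in range(3)]
print("predicted class:", classify(query, protos, heads))
```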
Figures:
Figure 1. Schematic diagram of measurement space.
Figure 2. AMR method based on relational networks.
Figure 3. The overall framework diagram of the proposed MCRN-CR.
Figure 4. Structure of encoder and decoder.
Figure 5. The recognition accuracy of different models at all SNRs.
Figure 6. Confusion matrix diagram at the 12 dB SNR.
Figure 7. The recognition accuracy of the comparison model at all SNRs.
Figure 8. Confusion matrix diagram of the control model at the 12 dB SNR.
Figure 9. Recognition accuracy curves of models under different K values.
Figure 10. Recognition accuracy curves of models under different C values.
24 pages, 2167 KiB  
Article
Utilizing Deep Feature Fusion for Automatic Leukemia Classification: An Internet of Medical Things-Enabled Deep Learning Framework
by Md Manowarul Islam, Habibur Rahman Rifat, Md. Shamim Bin Shahid, Arnisha Akhter and Md Ashraf Uddin
Sensors 2024, 24(13), 4420; https://doi.org/10.3390/s24134420 - 8 Jul 2024
Viewed by 1124
Abstract
Acute lymphoblastic leukemia, commonly referred to as ALL, is a type of cancer that can affect both the blood and the bone marrow. The process of diagnosis is a difficult one since it often calls for specialist testing, such as blood tests, bone marrow aspiration, and biopsy, all of which are highly time-consuming and expensive. It is essential to obtain an early diagnosis of ALL in order to start therapy in a timely and suitable manner. In recent medical diagnostics, substantial progress has been achieved through the integration of artificial intelligence (AI) and Internet of Things (IoT) devices. Our proposal introduces a new AI-based Internet of Medical Things (IoMT) framework designed to automatically identify leukemia from peripheral blood smear (PBS) images. In this study, we present a novel deep learning-based fusion model to detect ALL types of leukemia. The system seamlessly delivers the diagnostic reports to the centralized database, inclusive of patient-specific devices. After collecting blood samples from the hospital, the PBS images are transmitted to the cloud server through a WiFi-enabled microscopic device. In the cloud server, a new fusion model that is capable of classifying ALL from PBS images is configured. The fusion model is trained using a dataset including 6512 original and segmented images from 89 individuals. Two input channels are used for the purpose of feature extraction in the fusion model. These channels include both the original and the segmented images. VGG16 is responsible for extracting features from the original images, whereas DenseNet-121 is responsible for extracting features from the segmented images. The two output features are merged together, and dense layers are used for the categorization of leukemia. The fusion model that has been suggested obtains an accuracy of 99.89%, a precision of 99.80%, and a recall of 99.72%, which places it in an excellent position for the categorization of leukemia. The proposed model outperformed several state-of-the-art Convolutional Neural Network (CNN) models in terms of performance. Consequently, this proposed model has the potential to save lives and effort. For a more comprehensive simulation of the entire methodology, a web application (Beta Version) has been developed in this study. This application is designed to determine the presence or absence of leukemia in individuals. The findings of this study hold significant potential for application in biomedical research, particularly in enhancing the accuracy of computer-aided leukemia detection. Full article
(This article belongs to the Special Issue Securing E-health Data across IoMT and Wearable Sensor Networks)
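The fusion architecture is specified closely enough in the abstract (two input channels, VGG16 on original images, DenseNet-121 on segmented ones, concatenated features, dense layers with 0.2 dropout, 128 × 128 × 3 inputs per Figure 4) to sketch in Keras. The class count, dense width, and training configuration below are assumptions; `weights=None` avoids a download where the study would plausibly use pretrained weights.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model, applications

def build_fusion_model(num_classes=4, shape=(128, 128, 3)):
    """Two-branch fusion as described in the abstract: VGG16 on original
    PBS images, DenseNet-121 on segmented ones, features concatenated."""
    inp_orig = layers.Input(shape, name="original")
    inp_seg = layers.Input(shape, name="segmented")
    vgg = applications.VGG16(include_top=False, weights=None,
                             input_tensor=inp_orig)
    dnet = applications.DenseNet121(include_top=False, weights=None,
                                    input_tensor=inp_seg)
    f1 = layers.GlobalAveragePooling2D()(vgg.output)
    f2 = layers.GlobalAveragePooling2D()(dnet.output)
    x = layers.Concatenate()([f1, f2])
    x = layers.Dense(256, activation="relu")(x)   # width assumed
    x = layers.Dropout(0.2)(x)                    # dropout rate from Figure 4
    out = layers.Dense(num_classes, activation="softmax")(x)
    return Model([inp_orig, inp_seg], out)

model = build_fusion_model()
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```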
Figures:
Figure 1. Workflow of the proposed framework, comprising six essential components: (1) image acquisition; (2) cloud-based feature fusion model; (3) image preprocessing; (4) extraction of features; (5) block for concatenating features and classifying them; (6) sending the outcome to the medical center and patient.
Figure 2. Original images.
Figure 3. Segmented images.
Figure 4. The structure of the suggested model. The input image shapes are 128 × 128 × 3, and feature extraction is performed using transfer learning models. In order to reduce the number of parameters and preserve spatial information, global average pooling 2D is utilized. To mitigate overfitting concerns, dropout layers with a value of 0.2 are implemented in the dense layer.
Figure 5. Leveraging spatial and morphological features: a squeeze-and-excitation enhanced deep learning architecture for leukemia classification.
Figure 6. Original images’ training and validation.
Figure 7. Segmented images’ training and validation.
Figure 8. Combined images’ training and validation.
Figure 9. Confusion matrix.
Figure 10. Comparison between traditional CNN models.
Figure 11. Comparison between Mohamed E. Karar et al. [44] and Mustafa Ghaderzadeh et al. [38].
Figure 12. Flow diagram of the classification process in the AWS cloud server. A patient or user can upload their sample test image to the cloud server; the server, trained with the deep learning model, performs image preprocessing and testing. Finally, it sends notification of the results to the user.
Figure 13. Leukemia classification web application. A user uploads the sample images for prediction, all the processing is then performed in the cloud server, and the results of the sample images are then sent back to the user.
17 pages, 4444 KiB  
Article
A Study on Graph Optimization Method for GNSS/IMU Integrated Navigation System Based on Virtual Constraints
by Haiyang Qiu, Yun Zhao, Hui Wang and Lei Wang
Sensors 2024, 24(13), 4419; https://doi.org/10.3390/s24134419 - 8 Jul 2024
Viewed by 1046
Abstract
In GNSS/IMU integrated navigation systems, factors like satellite occlusion and non-line-of-sight can degrade satellite positioning accuracy, thereby impacting overall navigation system results. To tackle this challenge and leverage historical pseudorange information effectively, this paper proposes a graph optimization-based GNSS/IMU model with virtual constraints. These virtual constraints in the graph model are derived from the satellite’s position from the previous time step, the rate of change of pseudoranges, and ephemeris data. This virtual constraint serves as an alternative solution for individual satellites in cases of signal anomalies, thereby ensuring the integrity and continuity of the graph optimization model. Additionally, this paper conducts an analysis of the graph optimization model based on these virtual constraints, comparing it with traditional graph models of GNSS/IMU and SLAM. The marginalization of the graph model involving virtual constraints is analyzed next. The experiment was conducted on a set of real-world data, and the results of the proposed method were compared with tightly coupled Kalman filtering and the original graph optimization method. In instantaneous performance testing, the method maintains an RMSE error within 5% compared with real pseudorange measurement, while in a continuous performance testing scenario with no available GNSS signal, the method shows approximately a 30% improvement in horizontal RMSE accuracy over the traditional graph optimization method during a 10-second period. This demonstrates the method’s potential for practical applications. Full article
(This article belongs to the Special Issue INS/GNSS Integrated Navigation Systems)
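The virtual constraint is built from the previous satellite position, the pseudorange rate, and ephemeris data. Below is a minimal sketch of that idea, assuming a first-order extrapolation; the paper's construction (its Figures 2 and 3) may differ, and the residual form is the generic pseudorange factor, not necessarily the paper's.

```python
import numpy as np

def virtual_pseudorange(sat_pos_prev, sat_vel, rho_prev, rho_rate, dt):
    """Propagate a satellite's pseudorange through a short GNSS outage from
    the last measurement and its rate (ephemeris supplies the motion)."""
    sat_pos = sat_pos_prev + sat_vel * dt    # ephemeris-based satellite motion
    rho_virtual = rho_prev + rho_rate * dt   # first-order extrapolation
    return sat_pos, rho_virtual

def pseudorange_residual(rx_pos, clock_bias, sat_pos, rho):
    """Residual usable as a graph-edge factor: geometric range plus receiver
    clock bias against the (virtual or measured) pseudorange."""
    return np.linalg.norm(sat_pos - rx_pos) + clock_bias - rho
```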
Figures:
Figure 1. Advantages of graph optimization for handling outliers.
Figure 2. Flowchart for creating virtual constraints.
Figure 3. Pseudorange prediction with previous satellite position and pseudorange rate.
Figure 4. Virtual constraints-based GP model diagram for GNSS/IMU.
Figure 5. Information matrix of graph model with virtual constraints.
Figure 6. Traditional GNSS graph model.
Figure 7. SLAM graph model.
Figure 8. Marginalization for graph-based optimization.
Figure 9. GNSS graph model with virtual constraint.
Figure 10. Trajectories of tightly coupled Kalman filter, GO, and reference ground truth.
Figure 11. The absolute errors for GO and Kalman filter compared with the reference trajectory.
Figure 12. The error between virtual pseudorange and real measured pseudorange.
Figure 13. Trajectories of GO and VC GO using test data.
Figure 14. The error between GO and VC GO in three-axis directions.
23 pages, 9534 KiB  
Article
Adaptive Disturbance Suppression Method for Servo Systems Based on State Equalizer
by Jinzhao Li, Yonggang Li, Xiantao Li, Dapeng Mao and Bao Zhang
Sensors 2024, 24(13), 4418; https://doi.org/10.3390/s24134418 - 8 Jul 2024
Viewed by 568
Abstract
Disturbances in the aviation environment can compromise the stability of the aviation optoelectronic stabilization platform. Traditional methods, such as the proportional integral adaptive robust (PI + ARC) control algorithm, face a challenge: once high-frequency disturbances are introduced, their effectiveness is constrained by the control system’s bandwidth, preventing further stability enhancement. A state equalizer speed closed-loop control algorithm is proposed that combines proportional integral adaptive robust control with a state equalizer (PI + ARC + state equalizer). This new control structure can suppress high-frequency disturbances caused by mechanical resonance, improve the bandwidth of the control system, and further achieve fast convergence and stability of the PI + ARC algorithm. Experimental results indicate that, in comparison to the PI + ARC control algorithm, the inclusion of a state equalizer speed closed-loop compensation in the model increases the closed-loop bandwidth by 47.6%, significantly enhances the control system’s resistance to disturbances, and exhibits robustness in the face of variations in the model parameters and feedback sensors of the control object. In summary, integrating a state equalizer speed closed loop with PI + ARC significantly enhances the suppression of high-frequency disturbances and the performance of control systems. Full article
(This article belongs to the Section Optical Sensors)
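The abstract does not define the state equalizer's internals, so the sketch below substitutes a conventional notch filter at the mechanical resonance — a common baseline with the same goal of keeping resonance out of the speed loop, not the paper's method. The sample rate, resonance frequency, and Q factor are assumptions.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

# Assumed values for illustration; a real system would identify the
# resonance from a frequency sweep of the platform.
fs = 2000.0            # control-loop sample rate (Hz)
f_res = 120.0          # mechanical resonance frequency (Hz)
b, a = iirnotch(w0=f_res, Q=8.0, fs=fs)

# Toy speed feedback: a 5 Hz motion component plus a resonance component.
t = np.arange(0, 1, 1 / fs)
speed_fb = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * f_res * t)
speed_filtered = lfilter(b, a, speed_fb)   # feedback fed to the PI + ARC loop
```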
Figures:
Figure 1. Several advanced aviation optoelectronic stabilization platforms. (a) AN/AAQ-30; (b) MX20; (c) EOTS.
Figure 2. Images of the same target area with different visual axis stability accuracies.
Figure 3. Overall scheme design diagram.
Figure 4. Elastic position distribution of the aviation optoelectronic stability platform.
Figure 5. Simplified system of the aviation optoelectronic stability platform.
Figure 6. Control system closed-loop resonant loop.
Figure 7. Scanning curve of the platform model.
Figure 8. Simplified model diagram of the aviation optoelectronic platform.
Figure 9. Control system with adaptive robust control.
Figure 10. Resonant balanced speed closed-loop system.
Figure 11. Curve plot of the influence of current response and motor speed on the closed loop.
Figure 12. Principle diagram of the control structure combining the adaptive robust control method and state equalizer.
Figure 13. Experimental platform.
Figure 14. Bode diagram before and after compensation.
Figure 15. Experimental equipment setup for testing the system’s anti-interference ability. (a) Installation diagram for the aviation optoelectronic stabilization platform for anti-interference testing. (b) Upper computer control interface for the experimental platform.
Figure 16. Comparison of speed stability experiments.
Figure 17. Under a 2 Hz disturbance, the Fourier transform of the internal frame rate before and after the state equalizer speed closed loop is adopted.
Figure 18. Image analysis system.
Figure 19. The shaking range of the LOS of the controller. (a) Shaking range of the LOS of the PI + ARC controller; (b) shaking range of the LOS of the PI + ARC + state equalizer controller.
Figure 20. Vibration test device.
Figure 21. Shaking range of the controller’s LOS. (a) Shaking range of the LOS of the PI + ARC controller; (b) shaking range of the LOS of the PI + ARC + state equalizer controller.
Figure 22. Gyroscopic static noise data.
22 pages, 2874 KiB  
Review
Leveraging Wearable Sensors in Virtual Reality Driving Simulators: A Review of Techniques and Applications
by Răzvan Gabriel Boboc, Eugen Valentin Butilă and Silviu Butnariu
Sensors 2024, 24(13), 4417; https://doi.org/10.3390/s24134417 - 8 Jul 2024
Viewed by 1168
Abstract
Virtual reality (VR) driving simulators are very promising tools for driver assessment since they provide a controlled and adaptable setting for behavior analysis. At the same time, wearable sensor technology provides a well-suited and valuable approach to evaluating the behavior of drivers and their physiological or psychological state. This review paper investigates the potential of wearable sensors in VR driving simulators. Methods: A literature search was performed on four databases (Scopus, Web of Science, Science Direct, and IEEE Xplore) using appropriate search terms to retrieve scientific articles from a period of eleven years, from 2013 to 2023. Results: After removing duplicates and irrelevant papers, 44 studies were selected for analysis. Some important aspects were extracted and presented: the number of publications per year, countries of publication, the source of publications, study aims, characteristics of the participants, and types of wearable sensors. Moreover, an analysis and discussion of different aspects are provided. To improve car simulators that use virtual reality technologies and boost the effectiveness of particular driver training programs, data from the studies included in this systematic review and those scheduled for the upcoming years may be of interest. Full article
(This article belongs to the Special Issue Virtual Reality and Sensing Techniques for Human)
Figures:
Figure 1. Quality assessment of the selected studies using the CASP quality study checklist. Note: Y—yes, N—no, U—unclear.
Figure 2. Flowchart of the literature screening process.
Figure 3. Evolution of annual publications and number of citations per year.
Figure 4. Geographic distribution of publications.
Figure 5. Number of papers based on the source of publication.
Figure 6. Mean age and standard deviation of the participants in the selected studies.
Figure 7. The gender of the participants in the selected studies.
Figure 8. Wearable devices and measurements in the selected studies.
Full article ">
25 pages, 13959 KiB  
Article
Trajectory Analysis of 6-DOF Industrial Robot Manipulators by Using Artificial Neural Networks
by Mehmet Bahadır Çetinkaya, Kürşat Yildirim and Şahin Yildirim
Sensors 2024, 24(13), 4416; https://doi.org/10.3390/s24134416 - 8 Jul 2024
Cited by 2 | Viewed by 1066
Abstract
Robot manipulators are robotic systems that are frequently used in automation systems and are able to provide increased speed, precision, and efficiency in industrial applications. Due to their nonlinear and complex nature, it is crucial to optimize robot manipulator systems in terms of trajectory control. In this study, positioning analyses based on artificial neural networks (ANNs) were performed for robot manipulator systems used in the textile industry, and an optimal ANN model for high-accuracy positioning was developed. The inverse kinematic analyses of a 6-degree-of-freedom (DOF) industrial denim robot manipulator were carried out via four different learning algorithms, delta-bar-delta (DBD), online back propagation (OBP), quick back propagation (QBP), and random back propagation (RBP), for the proposed neural network predictor. From the results obtained, it was observed that the QBP-based 3-10-6 type ANN structure produced the optimal results in terms of estimation and modeling of trajectory control. In addition, the 3-5-6 type ANN structure was also improved, and its root mean square error (RMSE) and statistical R2 performances were compared with those of the 3-10-6 ANN structure. Consequently, it can be concluded that the proposed neural predictors can successfully be employed in real-time industrial applications for robot manipulator trajectory analysis. Full article
(This article belongs to the Section Sensors and Robotics)
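Since the networks are compared via RMSE and statistical R2, a small sketch of those two metrics may help fix definitions; the joint-angle arrays below are synthetic stand-ins, not data from the study.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between measured and predicted values."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

# Toy joint-angle trajectory standing in for one predicted joint.
true_angles = np.linspace(0, 90, 100)
pred_angles = true_angles + np.random.normal(0, 0.5, 100)
print(f"RMSE = {rmse(true_angles, pred_angles):.3f} deg, "
      f"R^2 = {r_squared(true_angles, pred_angles):.4f}")
```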
Figures:
Figure 1. Classical human-based chemical spraying process on denim textile.
Figure 2. Representation of the proposed 6-DOF industrial robot and its rotation axes.
Figure 3. Robot manipulator and the axis sets of each joint.
Figure 4. The proposed artificial neural network representation.
Figure 5. ANN-based modeling of a robot manipulator.
Figure 6. DBD learning algorithm-based prediction results for the first joint angle.
Figure 7. DBD learning algorithm-based prediction results for the second joint angle.
Figure 8. DBD learning algorithm-based prediction results for the third joint angle.
Figure 9. DBD learning algorithm-based prediction results for the fourth joint angle.
Figure 10. DBD learning algorithm-based prediction results for the fifth joint angle.
Figure 11. DBD learning algorithm-based prediction results for the sixth joint angle.
Figure 12. OBP learning algorithm-based prediction results for the first joint angle.
Figure 13. OBP learning algorithm-based prediction results for the second joint angle.
Figure 14. OBP learning algorithm-based prediction results for the third joint angle.
Figure 15. OBP learning algorithm-based prediction results for the fourth joint angle.
Figure 16. OBP learning algorithm-based prediction results for the fifth joint angle.
Figure 17. OBP learning algorithm-based prediction results for the sixth joint angle.
Figure 18. QBP learning algorithm-based prediction results for the first joint angle.
Figure 19. QBP learning algorithm-based prediction results for the second joint angle.
Figure 20. QBP learning algorithm-based prediction results for the third joint angle.
Figure 21. QBP learning algorithm-based prediction results for the fourth joint angle.
Figure 22. QBP learning algorithm-based prediction results for the fifth joint angle.
Figure 23. QBP learning algorithm-based prediction results for the sixth joint angle.
Figure 24. RBP learning algorithm-based prediction results for the first joint angle.
Figure 25. RBP learning algorithm-based prediction results for the second joint angle.
Figure 26. RBP learning algorithm-based prediction results for the third joint angle.
Figure 27. RBP learning algorithm-based prediction results for the fourth joint angle.
Figure 28. RBP learning algorithm-based prediction results for the fifth joint angle.
Figure 29. RBP learning algorithm-based prediction results for the sixth joint angle.
22 pages, 4217 KiB  
Article
Graph Feature Refinement and Fusion in Transformer for Structural Damage Detection
by Tianjie Hu, Kejian Ma and Jianchun Xiao
Sensors 2024, 24(13), 4415; https://doi.org/10.3390/s24134415 - 8 Jul 2024
Viewed by 740
Abstract
Structural damage detection is significant for maintaining structural health. Currently, data-driven deep learning approaches have emerged as a highly promising research direction. However, little progress has been made in studying the relationship between the global and local information of structural response data. In this paper, we present an innovative Convolutional Enhancement and Graph Features Fusion in Transformer (CGsformer) network for structural damage detection. The proposed CGsformer network introduces a hierarchical learning approach from global to local information to extract acceleration response signal features for structural damage representation. The key advantage of this network is the integration of a graph convolutional network in the learning process, which enables the construction of a graph structure for global features. By incorporating node learning, the graph convolutional network filters out noise in the global features, thereby facilitating the extraction of more effective local features. In verification based on experimental data from a four-story steel frame model and simulated data from the IASC-ASCE benchmark structure, the CGsformer network achieved damage identification accuracies of 92.44% and 96.71%, respectively. It surpassed existing traditional damage detection methods based on deep learning. Notably, the model demonstrates good robustness under noisy conditions. Full article
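The graph network module applies graph convolution over global features to filter noise before local extraction. Below is a minimal sketch of one standard normalized graph-convolution pass; the node count, adjacency, and feature sizes are assumed for illustration, and the paper's adjacency construction over global features is not reproduced here.

```python
import torch

def gcn_layer(H, A, W):
    """One graph-convolution pass H' = ReLU(D^-1/2 (A+I) D^-1/2 H W),
    the standard normalized form."""
    A_hat = A + torch.eye(A.size(0))        # add self-loops
    d = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))    # symmetric degree normalization
    return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

n_nodes, d_in, d_out = 16, 64, 64          # sizes assumed for illustration
H = torch.randn(n_nodes, d_in)             # global feature tokens as nodes
A = (torch.randn(n_nodes, n_nodes) > 1.0).float()
A = ((A + A.T) > 0).float()                # symmetrize the random adjacency
W = torch.randn(d_in, d_out)
H_filtered = gcn_layer(H, A, W)            # denoised features for local extraction
```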
Figures:
Figure 1. Self-attention mechanism.
Figure 2. Multiheaded self-attention mechanism.
Figure 3. The convolution module.
Figure 4. The overall structure of the CGsformer. The proposed CGsformer is mainly composed of four parts: feedforward module, self-attention mechanism module, convolution module, and graph network module.
Figure 5. The feedforward module.
Figure 6. IASC-ASCE SHM benchmark structure model.
Figure 7. From left to right, the panels correspond to damage modes D.P.1, D.P.2, and D.P.4.
Figure 8. Distribution of IASC-ASCE SHM benchmark model measurement points.
Figure 9. Acceleration time response curve of the D.P.1 damage pattern, collected from sensor No. 1 (acc 1) on the first floor at the 0 noise level.
Figure 10. Acceleration time response curve of the D.P.1 damage pattern, collected from sensor No. 3 (acc 3) on the first floor at the 0 noise level.
Figure 11. Illustration of ablation experiments with the GCN placed at different positions in the CGsformer block.
Figure 12. Diagram of the best-performing confusion matrix for the three noise levels.
Figure 13. The four-story steel frame structure experimental model.
Figure 14. Three types of replacement columns.
Figure 15. The acceleration response curves of the D.P.0 damage pattern on the south side of the first floor.
Figure 16. The acceleration response curves of the D.P.0 damage pattern on the north side of the first floor.
16 pages, 765 KiB  
Article
Progressive Inter-Path Interference Cancellation Algorithm for Channel Estimation Using Orthogonal Time–Frequency Space
by Mauro Marchese, Henk Wymeersch, Paolo Spallaccini and Pietro Savazzi
Sensors 2024, 24(13), 4414; https://doi.org/10.3390/s24134414 - 8 Jul 2024
Viewed by 755
Abstract
Fractional delay-Doppler (DD) channel estimation in orthogonal time–frequency space (OTFS) systems poses a significant challenge considering the severe effects of inter-path interference (IPI). To this end, several algorithms have been extensively explored in the literature for accurate low-complexity channel estimation in both integer and fractional DD scenarios. In this work, we develop a variant of the state-of-the-art delay-Doppler inter-path interference cancellation (DDIPIC) algorithm that progressively cancels the IPI as estimates are obtained. The key advantage of the proposed approach is that it requires only a final refinement procedure reducing the complexity of the algorithm. Specifically, the time difference in latency between the proposed approach and the DDIPIC algorithm is almost proportional to the square of the number of estimated paths. Numerical results show that the proposed algorithm outperforms the other channel estimation schemes achieving lower normalized mean square error (NMSE) and bit error rate (BER). Full article
(This article belongs to the Section Communications)
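Progressive cancellation — estimate the strongest path, subtract its contribution, and repeat on the residual — can be sketched in a matching-pursuit style. The fragment below is a generic stand-in, not the DDIPIC variant: the dictionary, the stopping rule, and the absence of the paper's final refinement step are all assumptions.

```python
import numpy as np

def progressive_ipi_cancellation(y, dictionary, n_paths, tol=0.05):
    """Pick the strongest candidate path response, subtract it from the
    received signal, and repeat; a sketch of progressive IPI cancellation."""
    residual = y.astype(complex).copy()
    est = []
    for _ in range(n_paths):
        corr = dictionary.conj().T @ residual        # match every candidate
        k = int(np.argmax(np.abs(corr)))
        gain = corr[k] / np.linalg.norm(dictionary[:, k]) ** 2
        if np.abs(gain) < tol:                       # nothing significant left
            break
        residual -= gain * dictionary[:, k]          # cancel this path's IPI
        est.append((k, gain))
    return est, residual

# Toy dictionary of 32 candidate delay-Doppler responses of length 64.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 32)) + 1j * rng.standard_normal((64, 32))
y = 0.9 * D[:, 3] - 0.4 * D[:, 17] + 0.01 * rng.standard_normal(64)
paths, _ = progressive_ipi_cancellation(y, D, n_paths=4)
print(paths)
```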
Figures:
Figure 1. Received DD domain pilot signal.
Figure 2. NMSE as a function of pilot SNR.
Figure 3. NRMSE of channel parameters as a function of pilot SNR.
Figure 4. Average number of estimated paths as a function of pilot SNR.
Figure 5. Probability mass function of the number of detected paths.
Figure 6. Bit error rate as a function of SNR/bit.
16 pages, 5811 KiB  
Article
Efficient Vibration Measurement and Modal Shape Visualization Based on Dynamic Deviations of Structural Edge Profiles
by Andong Zhu, Xinlong Gong, Jie Zhou, Xiaolong Zhang and Dashan Zhang
Sensors 2024, 24(13), 4413; https://doi.org/10.3390/s24134413 - 8 Jul 2024
Viewed by 801
Abstract
As a non-contact method, vision-based measurement for vibration extraction and modal parameter identification has attracted much attention. In most cases, artificial textures are crucial elements for visual tracking, and this feature limits the application of vision-based vibration measurement on textureless targets. As a computation technique for visualizing subtle variations in videos, the video magnification technique can analyze modal responses and visualize modal shapes, but the efficiency is low, and the processing results contain clipping artifacts. This paper proposes a novel method for the application of a modal test. In contrast to the deviation magnification that exaggerates subtle geometric deviations from only a single image, the proposed method extracts vibration signals with sub-pixel accuracy on edge positions by changing the perspective of deviations from space to timeline. Then, modal shapes are visualized by decoupling all spatial vibrations following the vibration theory of continuous linear systems. Without relying on artificial textures and motion magnification, the proposed method achieves high operating efficiency and avoids clipping artifacts. Finally, the effectiveness and practical value of the proposed method are validated by two laboratory experiments on a cantilever beam and an arch dam model. Full article
(This article belongs to the Special Issue Structural Health Monitoring Based on Sensing Technology)
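Extracting vibration with sub-pixel accuracy from edge positions can be illustrated with a common gradient-peak-plus-parabolic-fit refinement. Whether the paper uses this particular interpolation is not stated, so treat the sketch below as an assumption-based illustration of the idea of reading dynamic deviations off a spatial-temporal slice.

```python
import numpy as np

def subpixel_edge_position(profile):
    """Locate an edge with sub-pixel accuracy in a 1-D intensity profile:
    find the gradient-magnitude peak, then refine with a 3-point parabola."""
    g = np.abs(np.gradient(profile.astype(float)))
    k = int(np.argmax(g[1:-1])) + 1           # integer-pixel gradient peak
    denom = g[k - 1] - 2 * g[k] + g[k + 1]
    delta = 0.5 * (g[k - 1] - g[k + 1]) / denom if denom != 0 else 0.0
    return k + delta

def edge_vibration_signal(slice_xt):
    """Per-frame edge positions along a spatial-temporal slice (rows = time);
    subtracting the mean leaves the dynamic deviation used as vibration."""
    pos = np.array([subpixel_edge_position(row) for row in slice_xt])
    return pos - pos.mean()
```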
Figures:
Figure 1. A simulation for understanding how to extract the dynamic deviations through the edge profile.
Figure 2. Comparison of the vibration signals f_{x_n}(t) and f̂_{x_n}(t).
Figure 3. The representation of spatial motion based on the theory of linear vibration systems in the simulation test.
Figure 4. The experimental setup for the beam experiment.
Figure 5. (a) The location of the spatial-temporal slice in the beam test; (b) the spatial-temporal slices of bilateral edges; (c) the vibration signal from bilateral edges and frequency spectra; (d) the vibration signal from the accelerometer and frequency spectra.
Figure 6. (a,b) First three orders of the modal responses and frequency spectra in the beam test.
Figure 7. (a) The grayscale image of the beam; (b) the vibration signal extracted before sharpening and frequency spectra; (c) the vibration signal extracted after sharpening and frequency spectra.
Figure 8. (a) The grayscale image of the beam; (b–d) comparison of the first three orders of the spatial weights before and after sharpening in the beam test.
Figure 9. (a) The modal shapes obtained by finite element simulation in the beam test; (b) comparison of the normalized modal shapes obtained by finite element simulation and the proposed method in the beam test.
Figure 10. The experimental setup for the arch dam model test.
Figure 11. (a) The extraction results of the dam body pixels and the location of the spatial-temporal slice in the arch dam model test; (b) the spatial-temporal slices from bilateral edges; (c) the vibration signal from bilateral edges and frequency spectra; (d) the vibration signal from the accelerometer and frequency spectra.
Figure 12. (a,b) The first three orders of modal responses and frequency spectra in the arch dam model test.
Figure 13. (a,b) Comparison results of modal shapes obtained by finite element simulation and the proposed method in the arch dam model test.
19 pages, 6828 KiB  
Article
Feature Extraction Methods for Underwater Acoustic Target Recognition of Divers
by Yuchen Sun, Weiyi Chen, Changgeng Shuai, Zhiqiang Zhang, Pingbo Wang, Guo Cheng and Wenjing Yu
Sensors 2024, 24(13), 4412; https://doi.org/10.3390/s24134412 - 8 Jul 2024
Viewed by 1143
Abstract
The extraction of typical features of underwater target signals and excellent recognition algorithms are the keys to achieving underwater acoustic target recognition of divers. This paper proposes a feature extraction method for diver signals: frequency-domain multi-sub-band energy (FMSE), aiming to achieve accurate recognition of diver underwater acoustic targets by passive sonar. The impact of the presence or absence of targets, different numbers of targets, different signal-to-noise ratios, and different detection distances on this method was studied based on experimental data under different conditions, such as water pools and lakes. It was found that the FMSE method has the best robustness and performance compared with two other signal feature extraction methods: mel frequency cepstral coefficient filtering and gammatone frequency cepstral coefficient filtering. Combined with the commonly used recognition algorithm of support vector machines, the FMSE method can achieve a comprehensive recognition accuracy of over 94% for frogman underwater acoustic targets. This indicates that the FMSE method is suitable for underwater acoustic recognition of diver targets. Full article
(This article belongs to the Special Issue Advanced Acoustic Sensing Technology)
Show Figures

Figure 1
<p>Calculation flow chart of the frequency-domain multi-sub-band energy feature extraction method.</p>
Full article ">Figure 2
<p>Calculation flow chart of mel frequency cepstral coefficient feature extraction method.</p>
Full article ">Figure 3
<p>Amplitude–frequency response curves of mel triangular filter bank.</p>
Full article ">Figure 4
<p>Calculation flow chart of gammatone frequency cepstral coefficient feature extraction method.</p>
Full article ">Figure 5
<p>Amplitude–frequency response curves of gammatone filter bank.</p>
Full article ">Figure 6
<p>Sub-band energy sequences with the: (<b>A</b>) presence or (<b>B</b>) absence of divers.</p>
Full article ">Figure 7
<p>Sub-band energy sequences with the presence of: (<b>A</b>) one or (<b>B</b>) two to four divers.</p>
Full article ">Figure 8
<p>Sub-band energy sequences of one diver with: (<b>A</b>) the original signal-to-noise ratio (SNR), (<b>B</b>) 40 dB SNR, (<b>C</b>) 0 dB SNR, and (<b>D</b>) −40 dB SNR.</p>
Full article ">Figure 8 Cont.
<p>Sub-band energy sequences of one diver with: (<b>A</b>) the original signal-to-noise ratio (SNR), (<b>B</b>) 40 dB SNR, (<b>C</b>) 0 dB SNR, and (<b>D</b>) −40 dB SNR.</p>
Full article ">Figure 9
<p>Sub-band energy sequences of one diver: (<b>A</b>) ~2 m and (<b>B</b>) ~20 m from the hydrophone.</p>
Full article ">Figure 10
<p>Mel frequency cepstral coefficient (MFCC) feature with the: (<b>A</b>) presence or (<b>B</b>) absence of divers.</p>
Full article ">Figure 11
<p>Mel frequency cepstral coefficient (MFCC) feature with the presence of: (<b>A</b>) one or (<b>B</b>) two to four divers.</p>
Full article ">Figure 12
<p>Mel frequency cepstral coefficient (MFCC) feature of one diver with: (<b>A</b>) the original signal-to-noise ratio (SNR), (<b>B</b>) 40 dB SNR, (<b>C</b>) 0 dB SNR, and (<b>D</b>) −40 dB SNR.</p>
Full article ">Figure 13
<p>Mel frequency cepstral coefficient (MFCC) feature of one diver: (<b>A</b>) ~2 m and (<b>B</b>) ~20 m from the hydrophone.</p>
Full article ">Figure 14
<p>Gammatone frequency cepstral coefficient (GFCC) feature with the: (<b>A</b>) presence or (<b>B</b>) absence of divers.</p>
Full article ">Figure 15
<p>Gammatone frequency cepstral coefficient (GFCC) feature with the presence of: (<b>A</b>) one or (<b>B</b>) two to four divers.</p>
Full article ">Figure 16
<p>Gammatone frequency cepstral coefficient (GFCC) feature of one diver with: (<b>A</b>) the original signal-to-noise ratio (SNR), (<b>B</b>) 40 dB SNR, (<b>C</b>) 0 dB SNR, and (<b>D</b>) −40 dB SNR.</p>
Full article ">Figure 17
<p>Gammatone frequency cepstral coefficient (GFCC) feature of one diver: (<b>A</b>) ~2 m and (<b>B</b>) ~20 m from the hydrophone.</p>
Full article ">Figure 18
<p>Comparison of three feature recognition effects. Note: MFCC, mel frequency cepstral coefficient; GFCC, gammatone frequency cepstral coefficient; Acc, recognition accuracy, indicating the proportion of recognition results that were consistent with the actual situation; Sen, recognition sensitivity, indicating the accuracy rate of recognizing the “presence of divers”; Spe, recognition specificity, indicating the accuracy rate of recognizing the “absence of divers”.</p>
Full article ">
38 pages, 2585 KiB  
Review
A Comprehensive Survey on Deep Learning-Based LoRa Radio Frequency Fingerprinting Identification
by Aqeel Ahmed, Bruno Quoitin, Alexander Gros and Veronique Moeyaert
Sensors 2024, 24(13), 4411; https://doi.org/10.3390/s24134411 - 8 Jul 2024
Viewed by 1802
Abstract
LoRa enables long-range communication for Internet of Things (IoT) devices, especially those with limited resources and low power requirements. Consequently, LoRa has emerged as a popular choice for numerous IoT applications. However, the security of LoRa devices is one of the major concerns that requires attention. Existing device identification mechanisms use cryptography which has two major issues: (1) cryptography is hard on the device resources and (2) physical attacks might prevent them from being effective. Deep learning-based radio frequency fingerprinting identification (RFFI) is emerging as a key candidate for device identification using hardware-intrinsic features. In this paper, we present a comprehensive survey of the state of the art in the area of deep learning-based radio frequency fingerprinting identification for LoRa devices. We discuss various categories of radio frequency fingerprinting techniques along with hardware imperfections that can be exploited to identify an emitter. Furthermore, we describe different deep learning algorithms implemented for the task of LoRa device classification and summarize the main approaches and results. We discuss several representations of the LoRa signal used as input to deep learning models. Additionally, we provide a thorough review of all the LoRa RF signal datasets used in the literature and summarize details about the hardware used, the type of signals collected, the features provided, availability, and size. Finally, we conclude this paper by discussing the existing challenges in deep learning-based LoRa device identification and also envisage future research directions and opportunities. Full article
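Of the signal representations the survey discusses, the spectrogram is straightforward to reproduce. The minimal sketch below (window sizes are illustrative assumptions) turns complex-baseband IQ samples into a log-magnitude time-frequency image of the kind commonly fed to CNN-based RFFI models.

```python
import numpy as np
from scipy.signal import spectrogram

def iq_to_logspectrogram(iq, fs, nperseg=256, noverlap=128):
    """Two-sided spectrogram of complex IQ samples, shifted so that
    zero frequency sits in the middle, returned on a dB scale."""
    _, _, sxx = spectrogram(iq, fs=fs, nperseg=nperseg,
                            noverlap=noverlap, return_onesided=False)
    sxx = np.fft.fftshift(sxx, axes=0)   # center DC
    return 10.0 * np.log10(sxx + 1e-12)  # avoid log(0)
```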
(This article belongs to the Special Issue Data Protection and Privacy in Industry 4.0 Era)
Show Figures

Figure 1
<p>Scope (<b>a</b>) and organization (<b>b</b>) of the survey.</p>
Full article ">Figure 2
<p>LoRaWAN architecture and impersonation attack.</p>
Full article ">Figure 3
<p>Spectrogram and structure of a LoRa frame. The color intensity in the spectrogram shows the power in that particular instance of time and frequency, with yellow color representing higher power than pink color. The preamble at the beginning of the frame is made up of a sequence of unmodulated up-chirps terminated by two and a quarter unmodulated down-chirps.</p>
Full article ">Figure 4
<p>A typical LoRa transceiver chain (adapted from [<a href="#B31-sensors-24-04411" class="html-bibr">31</a>]).</p>
Full article ">Figure 5
<p>Example CSS Modulated Symbols (<math display="inline"><semantics> <mrow> <mi>BW</mi> <mo>=</mo> <mn>32</mn> </mrow> </semantics></math> Hz and <math display="inline"><semantics> <mrow> <mi>SF</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 6
<p>Transient and steady-states in a LoRa signal.</p>
Full article ">Figure 7
<p>Direct conversion transmitter architecture (adapted from [<a href="#B47-sensors-24-04411" class="html-bibr">47</a>]).</p>
Full article ">Figure 8
<p>Possible transmitter architecture of Semtech’s SX127x (inspired from [<a href="#B56-sensors-24-04411" class="html-bibr">56</a>]).</p>
Full article ">Figure 9
<p>Block diagram of a typical deep learning-based LoRa radio-frequency fingerprinting identification architecture.</p>
Full article ">Figure 10
<p>Four different representations of the LoRa IQ signal. (<b>a</b>) Time-domain (IQ); (<b>b</b>) Frequency-domain (FFT); (<b>c</b>) Time-frequency domain (spectrogram); (<b>d</b>) Differential Constellation Trace Figure.</p>
Full article ">Figure 11
<p>Deep learning models, signal representations, and features.</p>
Full article ">Figure 12
<p>Dataset information: data collection environment, availability, and the type of data used to train models. (<b>a</b>) Existing LoRa datasets; (<b>b</b>) data type used to train the model.</p>
Full article ">Figure 13
<p>Frequency of LoRa parameters.</p>
Full article ">Figure 14
<p>Challenges addressed.</p>
Full article ">
16 pages, 8893 KiB  
Article
A Method for Real-Time Recognition of Safflower Filaments in Unstructured Environments Using the YOLO-SaFi Model
by Bangbang Chen, Feng Ding, Baojian Ma, Liqiang Wang and Shanping Ning
Sensors 2024, 24(13), 4410; https://doi.org/10.3390/s24134410 - 8 Jul 2024
Viewed by 1189
Abstract
The identification of safflower filament targets and the precise localization of picking points are fundamental prerequisites for achieving automated filament retrieval. In light of challenges such as severe occlusion of targets, low recognition accuracy, and the considerable size of models in unstructured environments, this paper introduces a novel lightweight YOLO-SaFi model. The architectural design of this model features a Backbone layer incorporating the StarNet network; a Neck layer introducing a novel ELC convolution module to refine the C2f module; and a Head layer implementing a new lightweight shared convolution detection head, Detect_EL. Furthermore, the loss function is enhanced by upgrading CIoU to PIoUv2. These enhancements significantly augment the model’s capability to perceive spatial information and facilitate multi-feature fusion, consequently enhancing detection performance and rendering the model more lightweight. Performance evaluations conducted via comparative experiments with the baseline model reveal that YOLO-SaFi achieved a reduction of parameters, computational load, and weight files by 50.0%, 40.7%, and 48.2%, respectively, compared to the YOLOv8 baseline model. Moreover, YOLO-SaFi demonstrated improvements in recall, mean average precision, and detection speed by 1.9%, 0.3%, and 88.4 frames per second, respectively. Finally, the deployment of the YOLO-SaFi model on the Jetson Orin Nano device corroborates the superior performance of the enhanced model, thereby establishing a robust visual detection framework for the advancement of intelligent safflower filament retrieval robots in unstructured environments. Full article
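The paper's Detect_EL head is not reproduced here, but the general idea behind a lightweight shared-convolution detection head can be sketched as below: one convolution stack is reused across all pyramid levels, so parameters do not grow with the number of levels. Channel counts and layer choices are assumptions for illustration only, not the paper's design.

```python
import torch
import torch.nn as nn

class SharedConvHead(nn.Module):
    """Illustrative shared detection head: the same conv stack serves
    every feature level (all levels assumed to share a channel count)."""
    def __init__(self, in_ch, num_classes, mid_ch=64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(mid_ch, mid_ch, 3, padding=1), nn.SiLU())
        self.cls = nn.Conv2d(mid_ch, num_classes, 1)  # class logits
        self.box = nn.Conv2d(mid_ch, 4, 1)            # box regression

    def forward(self, feats):
        # feats: list of feature maps; identical weights at each level
        return [(self.cls(h), self.box(h))
                for h in (self.stem(f) for f in feats)]
```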
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1
<p>Images of safflower filament in different scenarios.</p>
Full article ">Figure 2
<p>Example of partial data augmentation.</p>
Full article ">Figure 3
<p>Network structure diagram of YOLO-SaFi.</p>
Full article ">Figure 4
<p>Structure diagram of StarNet.</p>
Full article ">Figure 5
<p>ELC convolution module.</p>
Full article ">Figure 6
<p>C2f_ELC module.</p>
Full article ">Figure 7
<p>Bottleneck_ELC convolution module.</p>
Full article ">Figure 8
<p>Detect_EL detection module.</p>
Full article ">Figure 9
<p>Mosaic data augmentation strategy.</p>
Full article ">Figure 10
<p>Curves of different model training. (<b>a</b>) Loss curve. (<b>b</b>) P-R curve.</p>
Full article ">Figure 11
<p>Detection effect of different scenarios.</p>
Full article ">Figure 12
<p>Heatmaps of different models.</p>
Full article ">Figure 13
<p>Edge device deployment.</p>
Full article ">Figure 14
<p>Detection effects of different model deployments. (<b>a</b>) YOLOv8. (<b>b</b>) YOLOv10. (<b>c</b>) YOLO-SaFi.</p>
Full article ">
17 pages, 8850 KiB  
Article
Deep Learning-Based Simultaneous Temperature- and Curvature-Sensitive Scatterplot Recognition
by Jianli Liu, Yuxin Ke, Dong Yang, Qiao Deng, Chuang Hei, Hu Han, Daicheng Peng, Fangqing Wen, Ankang Feng and Xueran Zhao
Sensors 2024, 24(13), 4409; https://doi.org/10.3390/s24134409 - 7 Jul 2024
Viewed by 1280
Abstract
Since light propagation in a multimode fiber (MMF) exhibits visually random and complex scattering patterns due to external interference, this study numerically models temperature and curvature through the finite element method in order to understand the complex interactions between the inputs and outputs of an optical fiber under conditions of temperature and curvature interference. The systematic analysis of the fiber’s refractive index and bending loss characteristics determined its critical bending radius to be 15 mm. The temperature speckle atlas is plotted to reflect varying bending radii. An optimal end-to-end residual neural network model capable of automatically extracting highly similar scattering features is proposed and validated for the purpose of identifying temperature and curvature scattering maps of MMFs. The viability of the proposed scheme is tested through numerical simulations and experiments, the results of which demonstrate the effectiveness and robustness of the optimized network model. Full article
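Figure 7 combines residual blocks with squeeze-and-excitation (SE) modules; the SE recalibration itself is standard and can be sketched as follows (the reduction ratio is an assumed, typical value).

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global-average 'squeeze', bottleneck
    'excitation', then channel-wise rescaling of the input map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid())

    def forward(self, x):                 # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))            # squeeze -> (N, C)
        w = self.fc(w).view(x.size(0), x.size(1), 1, 1)
        return x * w                      # recalibrated features
```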
(This article belongs to the Section Optical Sensors)
Show Figures

Figure 1
<p>FEM cell mesh and node description of the fiber end surface.</p>
Full article ">Figure 2
<p>FEM-based scatterplot simulation flow chart.</p>
Full article ">Figure 3
<p>Electric field distribution and electric field modes in straight fibers and fibers with different bending radii and directions.</p>
Full article ">Figure 4
<p>Numerical results of fiber bending loss at different bending radii.</p>
Full article ">Figure 5
<p>Experimental results of fiber bending loss at different bending radii.</p>
Full article ">Figure 6
<p>Schematic diagram of the temperature and curvature experiment platform. By altering the moving distance x of the CMS (controlled moving scale), different bending radii R can be achieved. (OBJ: objective lens; ATB: adjustable temperature box).</p>
Full article ">Figure 7
<p>SE-ResNet combining residual block and SE module: (<b>a</b>) SE basic structure; (<b>b</b>) basic structure of SE-ResNet.</p>
Full article ">Figure 8
<p>SE-ResNet model structure.</p>
Full article ">Figure 9
<p>Training and validation curves using different algorithms and hyperparameters. (<b>a</b>) Training loss using different gradient optimization algorithms. (<b>b</b>) Validation loss using different gradient optimization algorithms. (<b>c</b>) Training loss using different initial learning rates. (<b>d</b>) Validation loss using different initial learning rates. (<b>e</b>) Training loss using different batch sizes. (<b>f</b>) Validation loss using different batch sizes.</p>
Full article ">Figure 10
<p>SE-ResNet training process.</p>
Full article ">Figure 11
<p>SE-ResNet prediction results.</p>
Full article ">Figure 12
<p>Visualization of SE-ResNet-extracted convolutional layer features.</p>
Full article ">Figure 13
<p>Visualization of SE-ResNet model features.</p>
Full article ">Figure 14
<p>Prediction results of GoogleNet, AlexNet, VGG-16, and VGG-19: (<b>a</b>) GoogleNet; (<b>b</b>) AlexNet; (<b>c</b>) VGG-16; (<b>d</b>) VGG-19.</p>
Full article ">Figure 14 Cont.
<p>Prediction results of GoogleNet, AlexNet, VGG-16, and VGG-19: (<b>a</b>) GoogleNet; (<b>b</b>) AlexNet; (<b>c</b>) VGG-16; (<b>d</b>) VGG-19.</p>
Full article ">Figure 15
<p>Visualization of SE-ResNet features with 20 mm bending radii.</p>
Full article ">Figure 16
<p>Visualization of SE-ResNet features with 10 mm bending radii.</p>
Full article ">
21 pages, 10239 KiB  
Article
A Fusion Positioning System Based on Camera and LiDAR for Unmanned Rollers in Tunnel Construction
by Hao Huang, Yongbiao Hu and Xuebin Wang
Sensors 2024, 24(13), 4408; https://doi.org/10.3390/s24134408 - 7 Jul 2024
Viewed by 1104
Abstract
As an important vehicle in road construction, the unmanned roller is rapidly advancing in its autonomous compaction capabilities. To overcome the challenges of GNSS positioning failure during tunnel construction and diminished visual positioning accuracy under different illumination levels, we propose a feature-layer fusion positioning system based on a camera and LiDAR. This system integrates loop closure detection and LiDAR odometry into the visual odometry framework. Furthermore, recognizing the prevalence of similar scenes in tunnels, we innovatively combine loop closure detection with the compaction process of rollers in fixed areas, proposing a selection method for loop closure candidate frames based on the compaction process. Through on-site experiments, it is shown that this method not only enhances the accuracy of loop closure detection in similar environments but also reduces the runtime. Compared with visual systems, in static positioning tests, the longitudinal and lateral accuracy of the fusion system are improved by 12 mm and 11 mm, respectively. In straight-line compaction tests under different illumination levels, the average lateral error increases by 34.1% and 32.8%, respectively. In lane-changing compaction tests, this system enhances the positioning accuracy by 33% in dim environments, demonstrating the superior positioning accuracy of the fusion positioning system amid illumination changes in tunnels. Full article
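The paper restricts loop-closure candidates to the roller's fixed compaction area. A minimal sketch of such gating is shown below, assuming hypothetical (x, y, yaw) pose tuples and illustrative thresholds; area_radius and min_gap are not the paper's values.

```python
import numpy as np

def loop_closure_candidates(poses, cur_idx, area_radius=3.0, min_gap=200):
    """Keep keyframes that lie within the current compaction strip and
    are old enough (index gap) to represent a genuine revisit."""
    cur_xy = np.asarray(poses[cur_idx][:2])
    old = poses[:max(0, cur_idx - min_gap)]  # exclude recent frames
    return [i for i, p in enumerate(old)
            if np.linalg.norm(np.asarray(p[:2]) - cur_xy) < area_radius]
```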
Show Figures

Figure 1
<p>Tunnel compaction scenes.</p>
Full article ">Figure 2
<p>System framework diagram.</p>
Full article ">Figure 3
<p>An example of visual odometry.</p>
Full article ">Figure 4
<p>Error loop closure frames in similar scenes.</p>
Full article ">Figure 5
<p>Schematic diagram of loop detection based on compaction process: (<b>a</b>) roller construction process; (<b>b</b>) loop closure detection process.</p>
Full article ">Figure 6
<p>Fusion model based on graph optimization.</p>
Full article ">Figure 7
<p>Distribution of sensors.</p>
Full article ">Figure 8
<p>Precision–recall curve under different illumination levels.</p>
Full article ">Figure 9
<p>Running time for loop closure detection.</p>
Full article ">Figure 10
<p>Positioning experiment with different loop closure detection methods: (<b>a</b>) positioning trajectory; (<b>b</b>) positioning error.</p>
Full article ">Figure 11
<p>Static positioning error diagram.</p>
Full article ">Figure 12
<p>Bright environment, illumination intensity = 96 lux: (<b>a</b>) test site; (<b>b</b>) positioning data.</p>
Full article ">Figure 13
<p>Dim environment, illumination intensity = 18 lux: (<b>a</b>) test site; (<b>b</b>) positioning data.</p>
Full article ">Figure 14
<p>Lateral error under different illumination levels: (<b>a</b>) real-time lateral error; (<b>b</b>) average and maximum lateral error.</p>
Full article ">Figure 15
<p>Forward and backward trajectory at illumination intensity of 95 lux.</p>
Full article ">Figure 16
<p>Forward and backward trajectory at illumination intensity of 20 lux.</p>
Full article ">Figure 17
<p>Lateral error under different illumination levels: (<b>a</b>) real-time lateral error; (<b>b</b>) average and maximum lateral error.</p>
Full article ">Figure 18
<p>Graph of computational time variations for short straight-line positioning experiment.</p>
Full article ">Figure 19
<p>Lane-changing positioning data under different illumination.</p>
Full article ">Figure 20
<p>Lane-changing positioning error under different illumination.</p>
Full article ">
18 pages, 8769 KiB  
Article
An Obstacle Detection Method Based on Longitudinal Active Vision
by Shuyue Shi, Juan Ni, Xiangcun Kong, Huajian Zhu, Jiaze Zhan, Qintao Sun and Yi Xu
Sensors 2024, 24(13), 4407; https://doi.org/10.3390/s24134407 - 7 Jul 2024
Viewed by 807
Abstract
The types of obstacles encountered in the road environment are complex and diverse, and accurate and reliable detection of obstacles is the key to improving traffic safety. Traditional obstacle detection methods are limited by the types of samples they are trained on and therefore cannot comprehensively detect unknown obstacles. Therefore, this paper proposes an obstacle detection method based on longitudinal active vision. The obstacles are recognized according to the height difference characteristics between the obstacle imaging points and the ground points in the image, and the obstacle detection in the target area is realized without accurately distinguishing the obstacle categories, which reduces the spatial and temporal complexity of the road environment perception. The method of this paper is compared and analyzed with the obstacle detection methods based on VIDAR (vision-IMU based detection and range method), VIDAR + MSER, and YOLOv8s. The experimental results show that the method in this paper has high detection accuracy and verifies the feasibility of obstacle detection in road environments where unknown obstacles exist. Full article
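Figures 9 and 10 rely on MSER feature regions. The snippet below shows one plausible way to extract them with OpenCV and take each region's lowest image point as a candidate obstacle-ground contact, as the height-difference criterion requires; the helper name and the choice of the lowest point are illustrative assumptions.

```python
import cv2

def mser_lowest_points(gray):
    """Detect MSER regions in a grayscale frame and return each region's
    lowest pixel (largest y), a candidate ground-contact point."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)   # arrays of (x, y) points
    lowest = [tuple(r[r[:, 1].argmax()]) for r in regions]
    return regions, lowest

# usage sketch: gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
```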
(This article belongs to the Section Vehicular Sensing)
Show Figures

Figure 1
<p>An obstacle detection based on longitudinal active vision.</p>
Full article ">Figure 2
<p>Obstacle ranging model.</p>
Full article ">Figure 3
<p>Schematic diagram of the static obstacle imaging.</p>
Full article ">Figure 4
<p>Schematic diagram of dynamic obstacle imaging.</p>
Full article ">Figure 5
<p>Schematic diagram of the camera rotation.</p>
Full article ">Figure 6
<p>Architecture of the longitudinal active camera obstacle detection system.</p>
Full article ">Figure 7
<p>Steering angles corresponding to different radii of rotation.</p>
Full article ">Figure 8
<p>Distance measurements corresponding to the different steering angles.</p>
Full article ">Figure 9
<p>Two-frame image acquisition before and after camera rotation; (<b>a</b>) is the obstacle image at the initial moment, (<b>b</b>) is the feature region extraction based on MSER, and (<b>c</b>) is the feature point extraction, where red * is the lowest point of each extreme region and blue + is the intersection point of the obstacle and the road plane. (<b>d</b>) The second frame image acquired after camera rotation.</p>
Full article ">Figure 10
<p>MSERs feature region extraction; (<b>a</b>) is the obstacle image at the initial moment, (<b>b</b>) the obstacle image at the next moment, and (<b>c</b>) is the region matching image for the two moments, where the red regions and + marks are the MSERs and their centers of mass at the initial moment, and the cyan regions and o marks are the MSERs and their centers of mass at the next moment.</p>
Full article ">Figure 11
<p>Feature point location; (<b>a</b>) shows the location of the feature point located in the image at the initial moment and (<b>b</b>) shows the location of the feature point located in the image at the next moment.</p>
Full article ">Figure 12
<p>Obstacle area division (where the yellow box is the detected obstacle area and the upper number is the distance from the obstacle to the camera).</p>
Full article ">Figure 13
<p>Experimental equipment for real vehicle.</p>
Full article ">Figure 14
<p>Real vehicle experiment route.</p>
Full article ">Figure 15
<p>Detection results.</p>
Full article ">
18 pages, 2242 KiB  
Article
Clustered Routing Using Chaotic Genetic Algorithm with Grey Wolf Optimization to Enhance Energy Efficiency in Sensor Networks
by Halimjon Khujamatov, Mohaideen Pitchai, Alibek Shamsiev, Abdinabi Mukhamadiyev and Jinsoo Cho
Sensors 2024, 24(13), 4406; https://doi.org/10.3390/s24134406 - 7 Jul 2024
Cited by 2 | Viewed by 951
Abstract
As an alternative to flat architectures, clustering architectures are designed to minimize the total energy consumption of sensor networks. Nonetheless, sensor nodes experience increased energy consumption during data transmission, leading to a rapid depletion of energy levels as data are routed towards the base station. Although numerous strategies have been developed to address these challenges and enhance the energy efficiency of networks, the formulation of a clustering-based routing algorithm that achieves both high energy efficiency and increased packet transmission rate for large-scale sensor networks remains an NP-hard problem. Accordingly, the proposed work formulated an energy-efficient clustering mechanism using a chaotic genetic algorithm, and subsequently developed an energy-saving routing system using a bio-inspired grey wolf optimizer algorithm. The proposed chaotic genetic algorithm–grey wolf optimization (CGA-GWO) method is designed to minimize overall energy consumption by selecting energy-aware cluster heads and creating an optimal routing path to reach the base station. The simulation results demonstrate the enhanced functionality of the proposed system compared with three other relevant systems, considering metrics such as the number of live nodes, average remaining energy level, packet delivery ratio, and overhead associated with cluster formation and routing. Full article
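The abstract does not state which chaotic map drives the CGA; as an assumed but common choice, the logistic map can seed the GA population, e.g.:

```python
import numpy as np

def chaotic_population(pop_size, dim, lo, hi, x0=0.37):
    """Seed a GA population with the logistic map x <- 4x(1-x),
    scaled into [lo, hi]; x0 is chosen to avoid the map's fixed points."""
    pop = np.empty((pop_size, dim))
    x = x0
    for i in range(pop_size):
        for j in range(dim):
            x = 4.0 * x * (1.0 - x)           # chaotic iteration
            pop[i, j] = lo + (hi - lo) * x    # map to the search space
    return pop
```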
(This article belongs to the Section Sensor Networks)
Show Figures

Figure 1
<p>Cluster-based WSN.</p>
Full article ">Figure 2
<p>Chaotic genetic algorithm.</p>
Full article ">Figure 3
<p>WSN clustering.</p>
Full article ">Figure 4
<p>Real-number coding of chromosome for cluster head selection.</p>
Full article ">Figure 5
<p>Number of live nodes in each network with respect to number of rounds.</p>
Full article ">Figure 6
<p>Average remaining energy with respect to number of rounds.</p>
Full article ">Figure 7
<p>Number of packets received by increasing nodes.</p>
Full article ">Figure 8
<p>Clustering overhead.</p>
Full article ">Figure 9
<p>Routing overhead.</p>
Full article ">
32 pages, 2023 KiB  
Systematic Review
Smart Buildings: A Comprehensive Systematic Literature Review on Data-Driven Building Management Systems
by Adrian Taboada-Orozco, Kokou Yetongnon and Christophe Nicolle
Sensors 2024, 24(13), 4405; https://doi.org/10.3390/s24134405 - 7 Jul 2024
Cited by 1 | Viewed by 2575
Abstract
Buildings are complex structures composed of heterogeneous elements; these require building management systems (BMSs) to dynamically adapt them to occupants’ needs and leverage building resources. The fast growth of information and communication technologies (ICTs) has transformed the BMS field into a multidisciplinary one. Consequently, this has caused several research papers on data-driven solutions to require examination and classification. This paper provides a broad overview of BMS by conducting a systematic literature review (SLR) summarizing current trends in this field. Unlike similar reviews, this SLR provides a rigorous methodology to review current research from a computer science perspective. Therefore, our goal is four-fold: (i) Identify the main topics in the field of building; (ii) Identify the recent data-driven methods; (iii) Understand the BMS’s underlying computing architecture (iv) Understand the features of BMS that contribute to the smartization of buildings. The result synthesizes our findings and provides research directions for further research. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensors in Smart Buildings)
Show Figures

Figure 1
<p>General diagram of the steps of the SLR conducted in this paper.</p>
Full article ">Figure 2
<p>The flow diagram shows the steps followed in the SLR. In the collection process, dark blue marks the per-keyword collection and the nested light blue marks collection by principal source. The collection process ends in the second process.</p>
Full article ">Figure 3
<p>Trends between 2004 and 2020 of some technologies, including the Internet of Things, machine learning, cyber-physical systems, and edge computing. Note the rapid growth in the popularity of these technologies between 2010 and 2021.</p>
Full article ">Figure 4
<p>Number of collected papers per principal source.</p>
Full article ">Figure 5
<p>A total number of papers after the application of each inclusion and exclusion criteria.</p>
Full article ">Figure 6
<p>Impact of each EC in the selection process.</p>
Full article ">Figure 7
<p>Primary studies in a world map.</p>
Full article ">Figure 8
<p>Number of papers by country.</p>
Full article ">Figure 9
<p>The average score of quality criteria for the primary studies.</p>
Full article ">Figure 10
<p>The relation between BMS architecture and BMS services expresses an enhancement when data sources are increased, abstracted, and represented.</p>
Full article ">Figure 11
<p>Number of papers employing each identified type of BMS.</p>
Full article ">Figure 12
<p>The different configurations of computing layers identified in the reviewed papers. The first column depicts the percentage of papers on cloud computing (blue), fog (green), and edge (yellow). The second column shows four blocks describing papers that focus on cloud and include the other layers (2.3%), cloud only (16.3%), fog computing (58.1%), and fog and edge (23.3%).</p>
Full article ">Figure 13
<p>Conceptual framework of <a href="#sec3dot5-sensors-24-04405" class="html-sec">Section 3.5</a>. The blocks are features of BMS that authors attribute as contributors to the smartness of a building. At the bottom, there is only BMS equipment, and on top, the accumulation of features (equipment + data processing + decision-making + adaptability).</p>
Full article ">
15 pages, 4404 KiB  
Case Report
Sensor-Assisted Analysis of Autonomic and Cerebrovascular Dysregulation following Concussion in an Individual with a History of Ten Concussions: A Case Study
by Courtney M. Kennedy, Joel S. Burma and Jonathan D. Smirl
Sensors 2024, 24(13), 4404; https://doi.org/10.3390/s24134404 - 7 Jul 2024
Viewed by 1315
Abstract
Introduction: Concussion is known to cause transient autonomic and cerebrovascular dysregulation that generally recovers; however, few studies have focused on individuals with an extensive concussion history. Method: The case was a 26-year-old male with a history of 10 concussions, diagnosed with bipolar type II disorder, mild attention-deficit hyperactivity disorder, and a history of migraines/headaches. The case was medicated with Valproic Acid and Escitalopram. Sensor-based baseline data were collected within six months of his injury and on days 1–5, 10, and 14 post-injury. Symptom reporting, heart rate variability (HRV), neurovascular coupling (NVC), and dynamic cerebral autoregulation (dCA) assessments were completed using numerous biomedical devices (i.e., transcranial Doppler ultrasound, 3-lead electrocardiography, finger photoplethysmography). Results: Total symptom and symptom severity scores were higher for the first week post-injury, with physical and emotional symptoms being the most impacted. The NVC response showed lowered activation in the first three days post-injury, while autonomic (HRV) and autoregulation (dCA) were impaired across all testing visits occurring in the first 14 days following his concussion. Conclusions: Despite symptom resolution, the case demonstrated ongoing autonomic and autoregulatory dysfunction. Larger samples examining individuals with an extensive history of concussion are warranted to understand the chronic physiological changes that occur following cumulative concussions through biosensing devices. Full article
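The time-domain HRV metrics reported here (SDNN, RMSSD, pNN50) follow standard definitions and can be computed from RR intervals as in this sketch:

```python
import numpy as np

def time_domain_hrv(rr_ms):
    """SDNN, RMSSD, and pNN50 (in ms, ms, %) from RR intervals in ms."""
    rr = np.asarray(rr_ms, dtype=float)
    d = np.diff(rr)
    sdnn = rr.std(ddof=1)                       # overall variability
    rmssd = np.sqrt(np.mean(d ** 2))            # beat-to-beat variability
    pnn50 = 100.0 * np.mean(np.abs(d) > 50.0)   # successive diffs > 50 ms
    return sdnn, rmssd, pnn50
```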
(This article belongs to the Special Issue Biomedical Sensors for Cardiology)
Show Figures

Figure 1
<p>Neurovascular coupling results during a complex visual scene search (“Where’s Waldo?”). Centimetres per second (cm/s); percent (%); area under the curve (AUC). Units for all measures are displayed in the facet titles, in the posterior (<b>left</b> panel) and middle (<b>right</b> panel) cerebral arteries (PCA and MCA, respectively). Negligible change in CBFv observed throughout testing within the PCA at baseline, quantified during 5 s of eyes closed (panel (<b>A</b>)) and peak task engagement (panel (<b>C</b>)); and within the MCA at baseline (panel (<b>B</b>)) and peak (panel (<b>D</b>)). Note the substantial reductions across days 1–3 in AUC30 within the PCA, the primary region associated with the initial processing of the raw visual information associated with the scene-search task (panel (<b>G</b>)), but not the MCA (panel (<b>H</b>)), associated with motor and somatosensory function. The blunted relative increase from baseline to peak observed during systole in the posterior aspect did not return to pre-injury levels until day 14 (panel (<b>E</b>)); however, no changes were observed within the MCA (panel (<b>F</b>)).</p>
Full article ">Figure 2
<p>Time-domain heart rate variability (HRV) metrics collected during quiet sitting and standing periods. Standard deviation of average normal–normal intervals (SDNN); root mean square of successive intervals (RMSSD); proportion of consecutive RR intervals that differ by more than 50 ms (pNN50); beats per minute (bpm); milliseconds (ms); percent (%). Units for all measures are displayed in the facet titles. Note the substantial elevations in heart rate during both seated and standing periods that extended for the entire 14-day duration of testing (panel (<b>A</b>)), as well as the reductions in SDNN (panel (<b>B</b>)), RMSSD (panel (<b>C</b>)), and pNN50 (panel (<b>D</b>)) under both seated and standing conditions that were also present immediately following the injury, along with the consistent return towards baseline levels for these autonomic regulation metrics across the 14-day testing period.</p>
Full article ">Figure 3
<p>Frequency-domain heart rate variability (HRV) metrics collected during quiet sitting and standing periods. Low frequency (LF); high frequency (HF); normalized units (n.u.); percent (%). Units for all measures are displayed in the facet titles. Note the elevations in the relative LF (panel (<b>A</b>)) and LF/HF ratio (panel (<b>C</b>)) and the reductions in HF (panel (<b>B</b>)) across the 14-day testing period in this case.</p>
Full article ">Figure 4
<p>Dynamic cerebral autoregulation assessments via squat–stand maneuvers at 0.05 Hz in the middle (<b>left</b> panel) and posterior (<b>right</b> panel) cerebral arteries (MCA and PCA, respectively). Normalized gain (nGain); centimetres (cm); seconds (s); millimetres of mercury (mmHg); percent (%). Units for all measures are displayed in the facet titles. Negligible change observed in MCA coherence (panel (<b>A</b>)) and gain (panel <b>(E</b>)); and in PCA coherence (panel (<b>B</b>)) and phase (panel (<b>D</b>)). Note the slight reductions in diastolic middle cerebral artery (MCA) phase (panel (<b>C</b>)); elevations in nGain (panel (<b>G</b>)); and elevations in diastolic posterior cerebral artery (PCA) gain (panel (<b>F</b>)) and nGain (panel (<b>H</b>)).</p>
Full article ">Figure 5
<p>Dynamic cerebral autoregulation assessments via squat-stand maneuvers at 0.10 Hz in the middle (<b>left</b> panel) and posterior (<b>right</b> panel) cerebral arteries (MCA and PCA, respectively). Normalized gain (nGain); centimetres (cm); seconds (s); millimetres of mercury (mmHg); percent (%). Units for all measures are displayed in the facet titles. Negligible change observed in MCA coherence (panel (<b>A</b>)) and gain (panel (<b>E</b>)); and in PCA coherence (panel (<b>B</b>)) and phase (panel (<b>D</b>)). Note the slight reductions in diastolic and mean MCA phase (panel (<b>C</b>)) and elevations in nGain (panel (<b>G</b>)) and the elevations in diastolic PCA gain (panel (<b>F</b>)) and nGain (panel (<b>H</b>)).</p>
Full article ">
19 pages, 11934 KiB  
Article
The Characteristics of Long-Wave Irregularities in High-Speed Railway Vertical Curves and Method for Mitigation
by Laiwei Jiang, Yangtenglong Li, Yuyuan Zhao and Minyi Cen
Sensors 2024, 24(13), 4403; https://doi.org/10.3390/s24134403 - 7 Jul 2024
Viewed by 806
Abstract
Track geometry measurements (TGMs) are a critical methodology for assessing the quality of track regularities and, thus, are essential for ensuring the safety and comfort of high-speed railway (HSR) operations. TGMs also serve as foundational datasets for engineering departments to devise daily maintenance and repair strategies. During routine maintenance, S-shaped long-wave irregularities (SLIs) were found to be present in the vertical direction from track geometry cars (TGCs) at the beginning and end of a vertical curve (VC). In this paper, we conduct a comprehensive analysis and comparison of the characteristics of these SLIs and design a long-wave filter for simulating inertial measurement systems (IMSs). This simulation experiment conclusively demonstrates that SLIs are not attributed to track geometric deformation from the design reference. Instead, imperfections in the longitudinal profile’s design are what cause abrupt changes in the vehicle’s acceleration, resulting in the measurement output of SLIs. Expanding upon this foundation, an additional investigation concerning the quantitative relationship between SLIs and longitudinal profiles is pursued. Finally, a method that involves the addition of a third-degree parabolic transition curve (TDPTC) or a full-wave sinusoidal transition curve (FSTC) is proposed for a smooth transition between the slope and the circular curve, designed to eliminate the abrupt changes in vertical acceleration and to mitigate SLIs. The correctness and effectiveness of this method are validated through filtering simulation experiments. These experiments indicate that the proposed method not only eliminates abrupt changes in vertical acceleration, but also significantly mitigates SLIs. Full article
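A third-degree parabolic transition is commonly written as a cubic whose curvature grows linearly from zero to 1/R over the transition length; under that assumption the vertical offset is y = x³/(6RL), as sketched below. The example radius and length are illustrative, not the paper's design values.

```python
def tdptc_offset(x, R, L):
    """Vertical offset of a third-degree parabolic transition curve:
    curvature ramps linearly from 0 to 1/R over length L."""
    return x ** 3 / (6.0 * R * L)

# Example: sampled offsets over a 60 m transition into R = 25,000 m
offsets = [tdptc_offset(x, 25000.0, 60.0) for x in range(0, 61, 10)]
```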
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1
<p>Schematic representation of a railway track VC and TGCs based on an IMS.</p>
Full article ">Figure 2
<p>Long-wavelength deformation in VC section.</p>
Full article ">Figure 3
<p>SLIs with different subgrade foundations and locations: (<b>a</b>) examples of SLIs with different subgrade foundations and locations; (<b>b</b>) peak values and response range of SLIs according to different subgrade foundations and locations using box-plots.</p>
Full article ">Figure 4
<p>SLIs with VC lengths ranging from 25 m to 412 m: (<b>a</b>) examples of SLIs with different VC lengths; (<b>b</b>) peak values and response range of SLIs according to different VC lengths using box-plots.</p>
Full article ">Figure 5
<p>SLIs at different inspection times.</p>
Full article ">Figure 6
<p>Track vertical irregularities derived from different wavebands: (<b>a</b>) examples of S-shaped irregularities in different wavebands; (<b>b</b>) peak values and response range of SLIs according to different wavebands using box-plots.</p>
Full article ">Figure 7
<p>Amplitude–frequency characteristics of system function.</p>
Full article ">Figure 8
<p>Vertical irregularities simulated individually with four distinct simulation models: (<b>a</b>) comparison and deviation between simulated and actual waveforms; (<b>b</b>) frequency histogram of deviations based on the length of the VCs.</p>
Full article ">Figure 8 Cont.
<p>Vertical irregularities simulated individually with four distinct simulation models: (<b>a</b>) comparison and deviation between simulated and actual waveforms; (<b>b</b>) frequency histogram of deviations based on the length of the VCs.</p>
Full article ">Figure 9
<p>Example of further validation by applying <span class="html-italic">H</span><sub>Ham</sub>(<span class="html-italic">z</span>).</p>
Full article ">Figure 10
<p>The characteristic parameters of SLIs.</p>
Full article ">Figure 11
<p>The characteristic parameters of the SLIs’ regression relationships between the peak value <span class="html-italic">η</span> and the gradient difference Δ<span class="html-italic">i</span>, as well as the radius <span class="html-italic">R</span>.</p>
Full article ">Figure 12
<p>The relationship of the change in <span class="html-italic">ζ</span> and <span class="html-italic">λ</span> with the gradient difference Δ<span class="html-italic">i</span> and the radius <span class="html-italic">R</span>.</p>
Full article ">Figure 13
<p>Optimized longitudinal profile and simulation of vertical acceleration: (<b>a</b>) schematic representation of a VC with an added TC; (<b>b</b>) the variation in simulated <span class="html-italic">a<sub>v</sub></span> (<span class="html-italic">V</span> = 350 km/h, <span class="html-italic">R</span> = 25,000 m).</p>
Full article ">Figure 14
<p>Comparison of SLIs after the addition of TDPTCs and FSTCs: (<b>a</b>) comparison of SLI waveforms after the addition of TDPTCs and FSTCs (<span class="html-italic">R</span> = 25,000 m); (<b>b</b>) changes in the peak values and SD of SLIs after the addition of TDPTCs and FSTCs (<span class="html-italic">R</span> = 20,000 m, <span class="html-italic">R</span> = 25,000 m).</p>
Full article ">
17 pages, 1779 KiB  
Article
A Semi-Supervised Adaptive Matrix Machine Approach for Fault Diagnosis in Railway Switch Machine
by Wenqing Li, Zhongwei Xu, Meng Mei, Meng Lan, Chuanzhen Liu and Xiao Gao
Sensors 2024, 24(13), 4402; https://doi.org/10.3390/s24134402 - 7 Jul 2024
Cited by 1 | Viewed by 786
Abstract
The switch machine, an essential element of railway infrastructure, is crucial in maintaining the safety of railway operations. Traditional methods for fault diagnosis are constrained by their dependence on extensive labeled datasets. Semi-supervised learning (SSL), although a promising solution to the scarcity of samples, faces challenges such as the imbalance of pseudo-labels and inadequate data representation. In response, this paper presents the Semi-Supervised Adaptive Matrix Machine (SAMM) model, designed for the fault diagnosis of switch machines. SAMM amalgamates semi-supervised learning with adaptive technologies, leveraging an adaptive low-rank regularizer to discern the fundamental links between the rows and columns of matrix data and applying adaptive penalty items to correct imbalances across sample categories. This model methodically enlarges its labeled dataset using probabilistic outputs and semi-supervised learning, automatically adjusting parameters to accommodate diverse data distributions and structural nuances. The SAMM model’s optimization process employs the alternating direction method of multipliers (ADMM) to identify solutions efficiently. Experimental evidence from a dataset containing current signals from switch machines indicates that SAMM outperforms existing baseline models, demonstrating its exceptional status diagnostic capabilities in situations where labeled samples are scarce. Consequently, SAMM offers an innovative and effective approach to semi-supervised classification tasks involving matrix data. Full article
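SAMM's ADMM solver is beyond a short sketch, but the labeled-set expansion it performs via probabilistic outputs resembles generic self-training, illustrated below with an SVM stand-in; the confidence threshold and round count are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC

def self_training(X_lab, y_lab, X_unlab, rounds=5, thresh=0.9):
    """Fit, pseudo-label the most confident unlabeled samples, and move
    them into the labeled pool; repeat for a few rounds."""
    X_u = np.asarray(X_unlab).copy()
    for _ in range(rounds):
        clf = SVC(probability=True).fit(X_lab, y_lab)
        if len(X_u) == 0:
            break
        proba = clf.predict_proba(X_u)
        conf, pred = proba.max(axis=1), proba.argmax(axis=1)
        keep = conf >= thresh
        if not keep.any():
            break
        X_lab = np.vstack([X_lab, X_u[keep]])
        y_lab = np.concatenate([y_lab, clf.classes_[pred[keep]]])
        X_u = X_u[~keep]
    return clf
```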
Show Figures

Figure 1
<p>Turnout structural schematic.</p>
Full article ">Figure 2
<p>Classification principle of SMM.</p>
Full article ">Figure 3
<p>SAMM model.</p>
Full article ">Figure 4
<p>Entire framework of the proposed fault diagnosis approach.</p>
Full article ">Figure 5
<p>Fault status current curves of ZDJ9 switch machine.</p>
Full article ">Figure 6
<p>Repeat 10 times with 5 labels.</p>
Full article ">Figure 7
<p>Confusion matrix of the optimal results for each model.</p>
Full article ">Figure 8
<p>Fault diagnosis precision under different labeled samples.</p>
Full article ">
18 pages, 21372 KiB  
Article
Underwater Single-Photon 3D Reconstruction Algorithm Based on K-Nearest Neighbor
by Hui Wang, Su Qiu, Taoran Lu, Yanjin Kuang and Weiqi Jin
Sensors 2024, 24(13), 4401; https://doi.org/10.3390/s24134401 - 7 Jul 2024
Viewed by 875
Abstract
The high sensitivity and picosecond time resolution of single-photon avalanche diodes (SPADs) can improve the operational range and imaging accuracy of underwater detection systems. When an underwater SPAD imaging system is used to detect targets, backward-scattering caused by particles in water often results in the poor quality of the reconstructed underwater image. Although methods such as simple pixel accumulation have been proven to be effective for time–photon histogram reconstruction, they perform unsatisfactorily in a highly scattering environment. Therefore, new reconstruction methods are necessary for underwater SPAD detection to obtain high-resolution images. In this paper, we propose an algorithm that reconstructs high-resolution depth profiles of underwater targets from a time–photon histogram by employing the K-nearest neighbor (KNN) to classify multiple targets and the background. The results contribute to the performance of pixel accumulation and depth estimation algorithms such as pixel cross-correlation and ManiPoP. We use public experimental data sets and underwater simulation data to verify the effectiveness of the proposed algorithm. The results of our algorithm show that the root mean square errors (RMSEs) of land targets and simulated underwater targets are reduced by 57.12% and 23.45%, respectively, achieving high-resolution single-photon depth profile reconstruction. Full article
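Figure 4 describes a kd-tree search; a minimal version of KNN labeling over time-photon histogram pixels might look like the sketch below, where the (row, col, depth) feature space, the value of k, and the use of non-negative integer labels are all illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_relabel(pixel_rc, depths, labels, k=5):
    """Relabel each pixel by majority vote of its k nearest neighbors
    in (row, col, depth) space; labels must be non-negative ints."""
    pts = np.column_stack([pixel_rc, depths])
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k + 1)   # first neighbor is the point itself
    votes = labels[idx[:, 1:]]
    return np.array([np.bincount(v).argmax() for v in votes])
```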
(This article belongs to the Section Sensing and Imaging)
Show Figures

Figure 1
<p>Simple pixel accumulation depth estimation error.</p>
Full article ">Figure 2
<p>SPAD underwater optical imaging experimental system schematic diagram.</p>
Full article ">Figure 3
<p>Schematic diagram of PF32 SPAD photon time measurement.</p>
Full article ">Figure 4
<p>Data search process based on kd-tree.</p>
Full article ">Figure 5
<p>Diagram of KNN-based single-photon 3D reconstruction algorithm.</p>
Full article ">Figure 6
<p>Image of the detection target.</p>
Full article ">Figure 7
<p>K-value cross-validation results of real land-based target.</p>
Full article ">Figure 8
<p>(<b>a</b>) Depth ground truth; (<b>b</b>) depth profile reconstructed via pixel cross-correlation algorithm; (<b>c</b>) depth profile reconstructed via the ManiPoP algorithm; (<b>d</b>) depth profile reconstructed via the ManiPoP algorithm based on the KNN algorithm; (<b>e</b>) depth profile reconstructed via the pixel cross-correlation algorithm after simple pixel accumulation; and (<b>f</b>) depth profile reconstructed via the pixel cross-correlation algorithm based on the KNN algorithm.</p>
Full article ">Figure 9
<p>The area selected for error calculation.</p>
Full article ">Figure 10
<p>Comparison of the depth profiles reconstructed from simulated and real underwater single-photon data. (<b>a</b>) Simulation target; (<b>b</b>) depth profile reconstructed from real underwater single-photon data; and (<b>c</b>) depth profile reconstructed from simulated underwater single-photon data.</p>
Full article ">Figure 11
<p>Image of simulation target.</p>
Full article ">Figure 12
<p>(<b>a</b>) Simulated underwater target echo signal; (<b>b</b>) real underwater target echo signal.</p>
Full article ">Figure 13
<p>K-value cross-validation results of underwater simulation target.</p>
Full article ">Figure 14
<p>(<b>a</b>) Depth ground truth; (<b>b</b>) depth profile reconstructed via pixel cross-correlation algorithm; (<b>c</b>) depth profile reconstructed via ManiPoP algorithm; (<b>d</b>) depth profile reconstructed via ManiPoP algorithm based on KNN algorithm; (<b>e</b>) depth profile reconstructed via pixel cross-correlation algorithm after simple pixel accumulation; and (<b>f</b>) depth profile reconstructed via pixel cross-correlation algorithm based on KNN algorithm.</p>
Full article ">
18 pages, 1918 KiB  
Article
Acoustic Comfort Prediction: Integrating Sound Event Detection and Noise Levels from a Wireless Acoustic Sensor Network
by Daniel Bonet-Solà, Ester Vidaña-Vila and Rosa Ma Alsina-Pagès
Sensors 2024, 24(13), 4400; https://doi.org/10.3390/s24134400 - 7 Jul 2024
Viewed by 1441
Abstract
There is an increasing interest in accurately evaluating urban soundscapes to reflect citizens’ subjective perceptions of acoustic comfort. Various indices have been proposed in the literature to achieve this purpose. However, many of these methods necessitate specialized equipment or extensive data collection. This study introduces an enhanced predictor for dwelling acoustic comfort, utilizing cost-effective data consisting of a 30-s audio clip and location information. The proposed predictor incorporates two rating systems: a binary evaluation and an acoustic comfort index called ACI. The training and evaluation data are obtained from the “Sons al Balcó” citizen science project. To characterize the sound events, gammatone cepstral coefficients are used for automatic sound event detection with a convolutional neural network. To enhance the predictor’s performance, this study proposes incorporating objective noise levels from public IoT-based wireless acoustic sensor networks, particularly in densely populated areas like Barcelona. The results indicate that adding noise levels from a public network successfully enhances the accuracy of the acoustic comfort prediction for both rating systems, reaching up to 85% accuracy. Full article
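The paper's full pipeline uses a CNN for sound event detection; as a deliberately simplified stand-in, the fusion step of combining per-clip event features with the nearest public sensor's LAeq into a binary comfort model could look like this (feature layout and classifier choice are assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_comfort_model(event_feats, laeq_nearest, comfort_labels):
    """Concatenate sound-event features with nearest-sensor noise
    levels and fit a simple binary comfort classifier."""
    X = np.column_stack([event_feats, laeq_nearest])
    return LogisticRegression(max_iter=1000).fit(X, comfort_labels)
```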
Show Figures

Figure 1
<p>Locations of the Barcelona sound sensor network and the videos collected during the 2021 <span class="html-italic">Sons al Balcó</span> campaign.</p>
Full article ">Figure 2
<p>Distribution of the Barcelona sound sensors deployed in 2021 by the predominant noise sources in their locations.</p>
Full article ">Figure 3
<p>Subjective Assessment of the Soundscapes in Barcelona (Likert scale).</p>
Full article ">Figure 4
<p>Enhanced Dwelling’s Soundscape Quality Estimator (Binary).</p>
Full article ">Figure 5
<p>Enhanced Dwelling’s Acoustic Comfort Index Estimator.</p>
Full article ">Figure 6
<p>Performance of the nearest sensor-based prediction depending on the maximum accepted distance between the sensor and the studied location.</p>
Full article ">Figure 7
<p>Error distance for the ACI assessment for the ASED, nearest-based and combined approaches.</p>
Full article ">Figure 8
<p>Quarterly evolution of the <math display="inline"><semantics> <msub> <mi>L</mi> <mrow> <mi>A</mi> <mi>e</mi> <mi>q</mi> </mrow> </msub> </semantics></math> measured by the subset of the BCN sound meter network used in the present study that continuously collected data from 2017 to 2022.</p>
Full article ">
14 pages, 2720 KiB  
Article
Impacts of Wearable Resistance Placement on Running Efficiency Assessed by Wearable Sensors: A Pilot Study
by Arunee Promsri, Siriyakorn Deedphimai, Petradda Promthep and Chonthicha Champamuang
Sensors 2024, 24(13), 4399; https://doi.org/10.3390/s24134399 - 7 Jul 2024
Viewed by 1039
Abstract
Wearable resistance training is widely applied to enhance running performance, but how different placements of wearable resistance across various body parts influence running efficiency remains unclear. This study aimed to explore the impacts of wearable resistance placement on running efficiency by comparing five running conditions: no load, and an additional 10% load of individual body mass on the trunk, forearms, lower legs, and a combination of these areas. Running efficiency was assessed through biomechanical (spatiotemporal, kinematic, and kinetic) variables using acceleration-based wearable sensors placed on the shoes of 15 recreational male runners (20.3 ± 1.23 years) during treadmill running in a randomized order. The main findings indicate distinct effects of different load distributions on specific spatiotemporal variables (contact time, flight time, and flight ratio, p ≤ 0.001) and kinematic variables (footstrike type, p < 0.001). Specifically, adding loads to the lower legs produces effects similar to running with no load: shorter contact time, longer flight time, and a higher flight ratio compared to other load conditions. Moreover, lower leg loads result in a forefoot strike, unlike the midfoot strike seen in other conditions. These findings suggest that lower leg loads enhance running efficiency more than loads on other parts of the body. Full article
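The spatiotemporal variables compared across load conditions reduce to simple per-stride arithmetic; for instance, treating the flight ratio as the flight share of the full stride (an assumed, common definition):

```python
def flight_ratio(contact_ms, flight_ms):
    """Flight ratio in percent: flight time over total stride time."""
    return 100.0 * flight_ms / (contact_ms + flight_ms)

# e.g., 220 ms contact and 120 ms flight -> ~35.3% flight ratio
```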
Show Figures

Figure 1
<p>Illustration of (<b>A</b>) wearable equipment utilized in the study ((<b>a</b>) weighted vest, (<b>b</b>) forearm cuffs, (<b>c</b>) lower leg cuffs, and (<b>d</b>) detachable metal plates); (<b>B</b>) RunScribe™ wearable sensors; and (<b>C</b>) the experimental protocol.</p>
Full article ">Figure 2
<p>Post hoc comparisons of (<b>A</b>) contact time, (<b>B</b>) flight time, (<b>C</b>) flight ratio, and (<b>D</b>) footstrike type between five running conditions: running with no load (None) and running with added loads on the forearms (Arm), lower legs (Leg), trunk (Trunk), and combined segments (All). Significant differences are indicated as * <span class="html-italic">p</span> &lt; 0.005 and ** <span class="html-italic">p</span> &lt; 0.001.</p>
Full article ">
12 pages, 780 KiB  
Article
Predicting the Arousal and Valence Values of Emotional States Using Learned, Predesigned, and Deep Visual Features
by Itaf Omar Joudeh, Ana-Maria Cretu and Stéphane Bouchard
Sensors 2024, 24(13), 4398; https://doi.org/10.3390/s24134398 - 7 Jul 2024
Cited by 1 | Viewed by 1177
Abstract
The cognitive state of a person can be categorized using the circumplex model of emotional states, a continuous model of two dimensions: arousal and valence. The purpose of this research is to select a machine learning model(s) to be integrated into a virtual reality (VR) system that runs cognitive remediation exercises for people with mental health disorders. As such, the prediction of emotional states is essential to customize treatments for those individuals. We exploit the Remote Collaborative and Affective Interactions (RECOLA) database to predict arousal and valence values using machine learning techniques. RECOLA includes audio, video, and physiological recordings of interactions between human participants. To allow learners to focus on the most relevant data, features are extracted from raw data. Such features can be predesigned, learned, or extracted implicitly using deep learners. Our previous work on video recordings focused on predesigned and learned visual features. In this paper, we extend our work onto deep visual features. Our deep visual features are extracted using the MobileNet-v2 convolutional neural network (CNN) that we previously trained on RECOLA’s video frames of full/half faces. As the final purpose of our work is to integrate our solution into a practical VR application using head-mounted displays, we experimented with half faces as a proof of concept. The extracted deep features were then used to predict arousal and valence values via optimizable ensemble regression. We also fused the extracted visual features with the predesigned visual features and predicted arousal and valence values using the combined feature set. In an attempt to enhance our prediction performance, we further fused the predictions of the optimizable ensemble model with the predictions of the MobileNet-v2 model. After decision fusion, we achieved a root mean squared error (RMSE) of 0.1140, a Pearson’s correlation coefficient (PCC) of 0.8000, and a concordance correlation coefficient (CCC) of 0.7868 on arousal predictions. We achieved an RMSE of 0.0790, a PCC of 0.7904, and a CCC of 0.7645 on valence predictions. Full article
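The reported CCC follows the standard concordance definition, which can be computed directly:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x = np.asarray(y_true, dtype=float)
    y = np.asarray(y_pred, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```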
Show Figures

Figure 1. Overview of our visual data methodology.
Figure 2. Predicted versus actual plots of fused (a) arousal and (b) valence predictions from an optimizable ensemble trained on combined visual features and MobileNet-v2 trained on video frames of full faces (green) or half faces (blue). The red dashed line represents perfect predictions.
16 pages, 7866 KiB  
Article
Acoustic Signal-Based Defect Identification for Directed Energy Deposition-Arc Using Wavelet Time–Frequency Diagrams
by Hui Zhang, Qianru Wu, Wenlai Tang and Jiquan Yang
Sensors 2024, 24(13), 4397; https://doi.org/10.3390/s24134397 - 7 Jul 2024
Cited by 1 | Viewed by 1059
Abstract
Directed energy deposition-arc (DED-arc) has garnered considerable research attention owing to several advantages, including high deposition rates and low costs. However, defects such as discontinuities and pores may occur during the manufacturing process, so defect identification is key to monitoring and quality assessment of the additive manufacturing process. This study proposes a novel acoustic signal-based defect identification method for DED-arc via wavelet time–frequency diagrams. Using the continuous wavelet transform, one-dimensional (1D) acoustic signals acquired in situ during manufacturing are converted into two-dimensional (2D) time–frequency diagrams to train, validate, and test convolutional neural network (CNN) models. Several CNN models were examined and compared, including AlexNet, ResNet-18, VGG-16, and MobileNetV3, achieving accuracies of 96.35%, 97.92%, 97.01%, and 98.31%, respectively. The findings demonstrate that the energy distributions of normal and abnormal acoustic signals differ significantly in both the time and frequency domains. The proposed method is verified to identify defects effectively during manufacturing and to detect them earlier in the process. Full article
(This article belongs to the Special Issue Sensing in Intelligent and Unmanned Additive Manufacturing)
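As a concrete illustration of the 1D-to-2D conversion step this abstract describes, the sketch below computes a wavelet scalogram with PyWavelets. The Morlet mother wavelet, scale range, sampling rate, and test signal are assumptions for illustration; the abstract does not specify the authors' choices.

```python
# A minimal sketch of converting a 1D acoustic signal into a 2D
# time-frequency diagram via the continuous wavelet transform (CWT),
# assuming PyWavelets (pywt) and a Morlet mother wavelet. All parameter
# values below are placeholders, not the authors' settings.
import numpy as np
import pywt

fs = 44_100                      # assumed microphone sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)    # 0.5 s window of the acoustic signal
# Stand-in signal: a 3 kHz tone buried in noise.
signal = np.sin(2 * np.pi * 3_000 * t) + 0.3 * np.random.randn(t.size)

scales = np.arange(1, 128)       # wavelet scales map to a frequency axis
coeffs, freqs = pywt.cwt(signal, scales, 'morl', sampling_period=1 / fs)

# The coefficient magnitudes form the 2D scalogram; rendered as an image,
# this is the time-frequency diagram fed to the CNN classifiers.
scalogram = np.abs(coeffs)       # shape: (len(scales), len(signal))
print(scalogram.shape, freqs.min(), freqs.max())
```

Rendering `scalogram` as an image (one per signal window) yields the training inputs; normal signals and defect signatures such as pores then differ visibly in where their energy concentrates across the two axes.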
Show Figures

Figure 1. Overall workflow for defect identification.
Figure 2. Experimental system.
Figure 3. Weld morphology under different process parameters.
Figure 4. Conversion method of the three types of acoustic signals from 1D signals to 2D time–frequency diagrams.
Figure 5. Conventional CNN architecture for classification.
Figure 6. Time–frequency diagrams of acoustic signals: (a) normal, (b) discontinuity, and (c) pore.
Figure 7. Training curves of the CNNs: (a) training loss, (b) validation loss, (c) training accuracy, and (d) validation accuracy.
Figure 8. Confusion matrices: (a) AlexNet, (b) ResNet-18, (c) VGG-16, and (d) MobileNetV3.
Figure 9. Accuracy of the four models for the three categories.
Figure 10. Visualization of the CNN models using t-SNE: (a) AlexNet, (b) ResNet-18, (c) VGG-16, and (d) MobileNetV3.
18 pages, 16925 KiB  
Article
Dynamic Modeling and Analysis of Flexible-Joint Robots with Clearance
by Jing Wang, Shisheng Zhou, Jimei Wu, Jiajuan Qing, Tuo Kang and Mingyue Shao
Sensors 2024, 24(13), 4396; https://doi.org/10.3390/s24134396 - 6 Jul 2024
Viewed by 1290
Abstract
The coupling effects of flexible joints and clearance on the dynamics of a robotic system were investigated through numerical analysis. The nonlinear spring-damping model and the Coulomb model were applied to characterize the contact behavior within the clearance, and a model of the flexible joint was formulated using equivalent spring theory. An accurate robot model was established from these clearance and joint-flexibility characterizations, and the dynamic equations of the robot were derived using the Newton–Euler method. A comparative analysis was performed to assess the impact of the combined action of clearance and joint flexibility, as well as of varying clearance sizes, on robot performance. The results showed that the coupling effects of flexible joints and clearance degrade the system's dynamic performance: joint flexibility attenuates the amplitudes of the clearance-induced dynamic responses but introduces a lag in the system response. This study provides a theoretical foundation for exploring precise control techniques in robotics research. Full article
(This article belongs to the Section Sensors and Robotics)
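To make the contact characterization in this abstract concrete, the sketch below implements one widely used nonlinear spring-damping law (the Lankarani–Nikravesh form) together with a smoothed Coulomb friction force. The abstract does not name the exact model variant or any parameter values, so the formulation, stiffness, restitution coefficient, and friction coefficient below are all assumptions for illustration.

```python
# A sketch of clearance-joint contact forces: a nonlinear spring-damping
# normal force plus a Coulomb friction tangential force. The widely used
# Lankarani-Nikravesh form is assumed here (the paper's exact variant is
# not stated in the abstract); K, n, ce, and mu are placeholder values.
import numpy as np

def normal_force(delta, delta_dot, delta_dot_impact, K=2.0e8, n=1.5, ce=0.9):
    # Contact occurs only when the penetration depth delta is positive.
    if delta <= 0.0:
        return 0.0
    # Hertzian stiffness term K * delta^n, scaled by a hysteresis damping
    # factor that depends on the restitution coefficient ce and the ratio
    # of the current to the initial-impact penetration velocity.
    damping = 1.0 + 3.0 * (1.0 - ce**2) / 4.0 * (delta_dot / delta_dot_impact)
    return K * delta**n * damping

def tangential_force(fn, vt, mu=0.1, v_eps=1e-4):
    # Coulomb friction opposing the relative tangential velocity vt; the
    # clipping smooths the sign discontinuity near vt = 0, which otherwise
    # destabilizes numerical integration.
    return -mu * fn * np.clip(vt / v_eps, -1.0, 1.0)

# Example: journal penetrates the bearing by 5 um, closing at 0.01 m/s,
# after an initial impact velocity of 0.05 m/s.
fn = normal_force(delta=5e-6, delta_dot=0.01, delta_dot_impact=0.05)
ft = tangential_force(fn, vt=0.02)
print(fn, ft)
```

Evaluating these two forces at every integration step of the Newton–Euler dynamics is what couples the clearance behavior into the joint equations; the flexible joint then filters the resulting force spikes, which is consistent with the attenuated-but-lagged responses the abstract reports.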
Show Figures

Figure 1. The mathematical model of clearance: (a) non-contact mode, (b) critical contact mode, (c) contact mode.
Figure 2. The tangential force and normal force during the collision process.
Figure 3. Mechanical model of flexible-joint robots with clearances.
Figure 4. Simplified model of a flexible-joint robot with clearance.
Figure 5. Flow chart of the dynamic simulation.
Figure 6. Locus of the journal center of the rigid joint: (a) locus of the center, (b) locus in the X direction with time, (c) locus in the Y direction with time.
Figure 7. Locus of the journal center of the flexible joint: (a) locus of the center, (b) locus in the X direction with time, (c) locus in the Y direction with time.
Figure 8. Contact force in the joints.
Figure 9. Comparative analysis of the angular displacement at joint 1.
Figure 10. Deviation in angular displacement at joint 1.
Figure 11. Comparative analysis of the angular displacement at joint 2.
Figure 12. Deviation in angular displacement at joint 2.
Figure 13. Comparative analysis of the angular velocity at joint 1.
Figure 14. Deviation in angular velocity at joint 1.
Figure 15. Comparative analysis of the angular acceleration at joint 1.
Figure 16. Deviation in angular acceleration at joint 1.
Figure 17. Comparative analysis of acceleration at the end-effector in the X direction.
Figure 18. Comparative analysis of acceleration at the end-effector in the Y direction.
Figure 19. Comparison of contact force with different clearance sizes.
Figure 20. Angular displacement of joint 1 with different clearance sizes.
Figure 21. Angular velocity of joint 1 with different clearance sizes.
Figure 22. Angular acceleration of joint 1 with different clearance sizes.
Figure 23. Acceleration of the end-effector with different clearance sizes in the X direction.
Figure 24. Acceleration of the end-effector with different clearance sizes in the Y direction.