Sensors, Volume 23, Issue 16 (August-2 2023) – 295 articles

Cover Story (view full-size image): With the boom in devices that use transformers to operate at different voltages, and the requirement for their safe, fault-free operation, soft sensors for monitoring are coming to the fore. We introduce two extended Kalman filters (EKFs) for galvanically decoupled soft sensing and fault detection. They are investigated in power lines where only the primary electrical quantities, the input voltage and current of the transformer, are measured. Faults can occur in both the primary and secondary windings. The first EKF estimates the voltage, current, and load resistance of the secondary winding. The second EKF performs harmonic detection and estimates the amplitude and frequency of the primary winding voltage. Moreover, the EKFs are employed in sensor fusion, merging multiple data sources and reconciling them with the measurements. View this paper
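The load-resistance estimation performed by the first EKF can be illustrated with a minimal scalar filter. The sketch below assumes a deliberately simplified circuit model, i = v / (R + r_primary), rather than the full transformer model used in the paper; the function name, noise levels, and all parameter values are illustrative only.

```python
import numpy as np

def ekf_load_resistance(v_in, i_meas, r_primary=1.0, q=1e-6, r_noise=1e-4):
    """Scalar EKF estimating an unknown load resistance R from primary
    voltage/current samples, using the toy model i = v / (R + r_primary).
    (Hypothetical simplified model, not the paper's transformer model.)"""
    R, P = 50.0, 1.0                 # initial guess and variance
    for v, i in zip(v_in, i_meas):
        P += q                       # random-walk prediction for R
        h = v / (R + r_primary)      # predicted current
        H = -v / (R + r_primary) ** 2  # Jacobian of h w.r.t. R
        S = H * P * H + r_noise      # innovation variance
        K = P * H / S                # Kalman gain
        R += K * (i - h)             # measurement update
        P *= (1.0 - K * H)
    return R

# simulate noisy primary measurements with a true load of 20 ohms
rng = np.random.default_rng(1)
true_R = 20.0
v = 10.0 + rng.normal(0.0, 0.1, 500)
i = v / (true_R + 1.0) + rng.normal(0.0, 0.01, 500)
R_hat = ekf_load_resistance(v, i)
```

The filter converges toward the true load resistance within a few tens of samples; the same predict/update structure extends to the multi-state EKFs described above.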
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view papers in PDF format, click the "PDF Full-text" link and open them with the free Adobe Reader.
18 pages, 2060 KiB  
Article
A Multiscale Method for Infrared Ship Detection Based on Morphological Reconstruction and Two-Branch Compensation Strategy
by Xintao Chen, Changzhen Qiu and Zhiyong Zhang
Sensors 2023, 23(16), 7309; https://doi.org/10.3390/s23167309 - 21 Aug 2023
Cited by 3 | Viewed by 1251
Abstract
Infrared ship target detection is a crucial technology in marine scenarios. Ship targets vary in scale throughout navigation because the distance between the ship and the infrared camera is constantly changing. Furthermore, complex backgrounds, such as sea clutter, can cause significant interference during detection tasks. In this paper, a multiscale morphological reconstruction-based saliency mapping algorithm combined with a two-branch compensation strategy (MMRSM-TBC) is proposed for the detection of ship targets of various sizes against complex backgrounds. First, a multiscale morphological reconstruction method is proposed to enhance the ship targets in the infrared image and suppress irrelevant background. Then, by introducing a structure tensor with two feature-based filter templates, we utilize the contour information of the ship targets and further improve their intensities in the saliency map. After that, a two-branch compensation strategy is proposed to handle the uneven distribution of image grayscale. Finally, the target is extracted using an adaptive threshold. The experimental results show that our proposed algorithm achieves strong performance in the detection of different-sized ship targets and higher accuracy than existing methods. Full article
(This article belongs to the Section Physical Sensors)
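The morphological reconstruction at the heart of MMRSM-TBC can be sketched in miniature. The snippet below implements plain grayscale reconstruction by dilation with NumPy only, and uses it for an opening-by-reconstruction whose residual keeps small bright targets while large structures are restored exactly. The toy image and the fixed 3×3 structuring element are illustrative; they are not the paper's multiscale scheme.

```python
import numpy as np

def max_filter3(a):
    """3x3 grayscale dilation (moving maximum) with edge padding."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return np.max([p[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)

def min_filter3(a):
    """3x3 grayscale erosion (moving minimum) with edge padding."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return np.min([p[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)

def reconstruct_by_dilation(marker, mask):
    """Grayscale morphological reconstruction: dilate the marker and clip it
    under the mask until the result stops changing."""
    marker = np.minimum(marker, mask)
    while True:
        nxt = np.minimum(max_filter3(marker), mask)
        if np.array_equal(nxt, marker):
            return nxt
        marker = nxt

# toy scene: two small bright "targets" and one large bright structure
img = np.zeros((9, 9))
img[4, 4] = 1.0          # small target
img[1, 1] = 0.3          # dim clutter point
img[6:9, 0:3] = 0.8      # large structure (e.g. land or a big ship)

# opening-by-reconstruction: erosion removes small structures, reconstruction
# restores large ones exactly; the residual keeps only the small targets
recon = reconstruct_by_dilation(min_filter3(img), img)
residual = img - recon
```

The residual is zero over the large structure but preserves the isolated bright points, which is why reconstruction-based saliency suppresses background while keeping small targets.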
Show Figures

Figure 1. The overall procedure of the proposed method.
Figure 2. Results of morphological reconstruction in two scenes. (a) Original grayscale image of scene 1. (b) Dilation-based reconstruction result of scene 1. (c) Erosion-based reconstruction result of scene 1. (d) Original grayscale image of scene 2. (e) Dilation-based reconstruction result of scene 2. (f) Erosion-based reconstruction result of scene 2.
Figure 3. Results of reconstruction with different scales of the structuring element. (a1,b1,c1) Original grayscale images. (a2,b2,c2) Binarized ER results with a scale-5 structuring element. (a3,b3,c3) Binarized ER results with a scale-20 structuring element. (a4,b4,c4) BIMFM results.
Figure 4. Examples of the common features of ships.
Figure 5. The two designed feature-based filter templates. (a) Filter template 1, which highlights the upper-left direction edge feature. (b) Filter template 2, which highlights the lower-right direction edge feature.
Figure 6. An example of the performance of the FCM. (a) The original grayscale images. (b) The FCM of the original images. (c) Binarization result of (b). (d) Contour map without the feature-based template. (e) Binarization result of (d).
Figure 7. An example of the uneven distribution of a ship. (a) The grayscale image. (b) The IMFM of (a). (c) The segmentation result.
Figure 8. Overview of the two-branch compensation strategy.
Figure 9. Detection results of different methods. (a–i) The original images (Seq1–Seq9) and the corresponding detection results of the compared methods.
21 pages, 4593 KiB  
Article
A Two-Turn Shielded-Loop Magnetic Near-Field PCB Probe for Frequencies up to 3 GHz
by Mario Filipašić and Martin Dadić
Sensors 2023, 23(16), 7308; https://doi.org/10.3390/s23167308 - 21 Aug 2023
Cited by 2 | Viewed by 1479
Abstract
This paper proposes a novel design of a shielded two-turn near-field probe with a focus on high sensitivity and high electric field suppression. A comparison of different two-turn loop topologies and their influence on the probe sensitivity in the frequency range up to 3 GHz is presented. Furthermore, a single-loop probe is compared with a two-turn probe, and different topologies of the two-turn probe are analyzed and evaluated. The proposed probes were simulated using Ansys HFSS and manufactured on a standard FR4-substrate four-layer printed circuit board (PCB). A measurement setup for determining probe sensitivity and electric field suppression ratio is presented, using an in-house-made PCB probe stand, a vector network analyzer, a microstrip line (MSL), and the manufactured probe. It is shown that, with a two-turn probe design, it is possible to increase the probe sensitivity while minimizing the influence on the probe's spatial resolution. The average sensitivity of the proposed two-turn probe is increased by 10.1 dB compared to the conventional design in the frequency range from 10 MHz up to 1 GHz. Full article
(This article belongs to the Collection Magnetic Sensors)
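A figure like the "10.1 dB average increase" comes from comparing two probes' transmission measurements over a band. A minimal sketch, assuming |S21| magnitudes sampled at the same frequency points (the function name and the toy data are assumptions, not the paper's measurement chain):

```python
import numpy as np

def avg_improvement_db(s21_ref, s21_new):
    """Average sensitivity improvement of a new probe over a reference probe,
    in dB, from |S21| magnitudes measured at the same frequency points."""
    gain_db = 20.0 * np.log10(np.abs(s21_new) / np.abs(s21_ref))
    return float(np.mean(gain_db))

# toy data: the new probe couples ~3.16x more strongly at every point,
# which corresponds to a 10 dB improvement
s21_ref = np.array([1e-3, 2e-3, 5e-3])
s21_new = 3.1623 * s21_ref
imp = avg_improvement_db(s21_ref, s21_new)
```

Averaging the per-frequency gain in dB (rather than averaging linear magnitudes) is the convention that yields a single band-average improvement figure.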
Show Figures

Figure 1. (a) Square loop antenna. (b) Zero- and first-phase-sequence currents induced by magnetic fields. (c) First-phase-sequence currents Ix(1) and Iy(1) induced by the electric field.
Figure 2. (a) Equivalent circuit for the magnetic coupling mechanism from the MSL (microstrip line) to the loop antenna. (b) Equivalent circuit for the electric field coupling mechanism from the MSL to the loop antenna.
Figure 3. Conventional probe dimensions, shown per layer: (a) top layer (L1), (b) inner layer 1 (L2), (c) bottom layer (L4), and (d) inner layer 1 (L2) together with the bottom layer (L4).
Figure 4. Partial inductance and parasitic capacitance of (a) the planar loop and (b) the basic two-layer loop topology.
Figure 5. 3D simulation model of the probe and MSL in Ansys HFSS, used to determine the S-parameters of the probe.
Figure 6. Probe and MSL configuration for (a) measuring the maximum magnetic field and (b) measuring the minimum magnetic field. In (a), the spatial sweep direction for the spatial resolution measurement is shown.
Figure 7. 3D model of the loop structure of each designed probe in Ansys HFSS: (a) the conventional probe loop, (b) a probe with a folded line, (c,d) two-layer two-turn probes, and (e) a two-turn planar loop probe.
Figure 8. Measurement setup for evaluating probe sensitivity, consisting of the probe, MSL, and VNA (vector network analyzer).
Figure 9. Measurement setup for evaluating (a) probe sensitivity and (b) electric field suppression ratio, showing the probe fixed with screws and the MSL fixed with standoffs in both setups.
Figure 10. Manufactured near-field probes: top row, probes without edge plating; bottom row, edge-plated probes.
Figure 11. Comparison of simulated and measured sensitivity of the probes.
Figure 12. Measured sensitivity of the NFP1 (conventional), NFP2, and NFP3 probes.
Figure 13. Spatial resolution of the conventional (NFP1), NFP2, and NFP3 probes (with and without edge plating) at 1 GHz.
Figure 14. Influence of edge plating on the measured electric field suppression ratio of the analyzed probes.
17 pages, 926 KiB  
Article
Force-Position Hybrid Compensation Control for Path Deviation in Robot-Assisted Bone Drilling
by Shibo Li, Xin Zhong, Yuanyuan Yang, Xiaozhi Qi, Ying Hu and Xiaojun Yang
Sensors 2023, 23(16), 7307; https://doi.org/10.3390/s23167307 - 21 Aug 2023
Viewed by 1423
Abstract
Bone drilling is a common procedure in orthopedic surgery and is frequently attempted using robot-assisted techniques. However, drilling on rigid, slippery, and steep cortical surfaces, which are frequently encountered in robot-assisted operations due to limited workspace, can lead to tool path deviation. Path deviation can have significant impacts on positioning accuracy, hole quality, and surgical safety. In this paper, we consider the deformation of the tool and the robot as the main factors contributing to path deviation. To address this issue, we establish a multi-stage mechanistic model of tool–bone interaction and develop a stiffness model of the robot. Additionally, a joint stiffness identification method is proposed. To compensate for path deviation in robot-assisted bone drilling, a force-position hybrid compensation control framework is proposed based on the derived models and a compensation strategy of path prediction. Our experimental results validate the effectiveness of the proposed compensation control method. Specifically, the path deviation is significantly reduced by 56.6%, the force of the tool is reduced by 38.5%, and the hole quality is substantially improved. The proposed compensation control method based on a multi-stage mechanistic model and joint stiffness identification method can significantly improve the accuracy and safety of robot-assisted bone drilling. Full article
(This article belongs to the Section Sensors and Robotics)
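The cantilever beam model of the tool gives a closed-form deflection that a compensation scheme can offset. A minimal sketch with illustrative numbers (drill size, overhang, force, and modulus are assumptions; the paper additionally identifies robot joint stiffness, which is not modeled here):

```python
import math

def tool_deflection(force_n, length_m, diameter_m, e_pa=200e9):
    """Lateral tip deflection of the drill modeled as a cantilever beam
    loaded at its tip: delta = F * L^3 / (3 * E * I), with the second
    moment of area of a round shaft I = pi * d^4 / 64."""
    inertia = math.pi * diameter_m ** 4 / 64.0
    return force_n * length_m ** 3 / (3.0 * e_pa * inertia)

# a 3 mm steel drill with 40 mm overhang under 5 N lateral force
delta = tool_deflection(5.0, 0.040, 0.003)   # deflection in meters
```

With these numbers the tip deflects by roughly a tenth of a millimeter; compensating the commanded path by the predicted deflection, in the opposite direction, is the basic idea behind deformation-based path correction.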
Show Figures

Figure 1. Example of drill deflection resulting in drilling path deviation, poor hole quality, and drill bit damage.
Figure 2. Geometric structure and unit force of the twist drill. (a) Side view and coordinates. (b) Bottom view and coordinates. (c) Force analysis of a micro-element on the cutting edge; the tangential, radial, and axial shear forces are denoted dF_t, dF_r, and dF_f, respectively.
Figure 3. Four stages of inclined drilling: (a) local loading of the left cutting edge; (b) local loading of both the left and right cutting edges; (c) full loading of the left cutting edge and local loading of the right cutting edge; (d) full loading of both cutting edges. Within one cycle, the green area denotes the left force area and the red area the right force area. The feed rate is denoted c, and β is the interaction angle.
Figure 4. Deviation in robot-assisted bone drilling: robot positioning error and tool deformation.
Figure 5. The cantilever beam model of the drilling tool.
Figure 6. Block diagram of the robot force and position control system, including offline control based on the force model and tool stiffness, and online control based on the robot stiffness and tool stiffness. ^B x_d is the pose in the base coordinate frame, and q denotes the joint coordinates of the robot arm.
Figure 7. Schematic diagram of the path prediction strategy.
Figure 8. Finite element simulation of the initial deflection process. (a) FEA model of the tool moving in the feeding direction. (b) Horizontal reaction force versus tool displacement.
Figure 9. Force-time curves at different stages: (a) predicted by the model; (b) obtained by experiment.
Figure 10. Analysis of mechanical model parameters: (a) offset force vs. interaction angle; (b) offset force vs. feed rate.
Figure 11. Experimental setup.
Figure 12. Measured deviation under different proportional compensation schemes.
Figure 13. Deflection force when drilling with and without compensation, shown over four stages: (a) preparation, (b) interaction, (c) stable drilling, and (d) end.
Figure 14. Comparison of hole quality (a) without and (b) with compensation. o_exp is the expected hole center; o_real is the actual hole center. Unit: mm.
16 pages, 3210 KiB  
Article
A Zero-Shot Low Light Image Enhancement Method Integrating Gating Mechanism
by Junhao Tian and Jianwei Zhang
Sensors 2023, 23(16), 7306; https://doi.org/10.3390/s23167306 - 21 Aug 2023
Cited by 1 | Viewed by 1390
Abstract
Photographs taken under harsh ambient lighting can suffer from a number of image quality degradations due to insufficient exposure, including reduced brightness, loss of transfer information, noise, and color distortion. To solve these problems, researchers have proposed many deep learning-based methods to improve the illumination of images. However, most existing methods face the difficulty of obtaining paired training data. In this context, a zero-reference image enhancement network for low-light conditions is proposed in this paper. First, an improved Encoder-Decoder structure is used to extract image features, generate feature maps, and produce the parameter matrix of the enhancement factor from the feature maps. The enhancement curve is then constructed from the parameter matrix, and the image is iteratively enhanced using the enhancement curve and the enhancement parameters. Second, because the algorithm is unsupervised, no-reference image loss functions must be designed for training; four no-reference loss functions are introduced to train the parameter estimation network. Experiments on several datasets with only low-light images show that the proposed network improves on other methods in the NIQE, PIQE, and BRISQUE no-reference evaluation indices, and ablation experiments on the key components prove the effectiveness of the method. The performance of the method on PC and mobile devices is also investigated and analyzed, demonstrating its feasibility in practical applications. Full article
(This article belongs to the Special Issue AI-Driven Sensing for Image Processing and Recognition)
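The iterative curve enhancement described above follows the zero-reference curve family (as in Zero-DCE, on which this line of work builds): a quadratic curve LE(x) = x + a·x·(1 − x) applied repeatedly with a per-pixel parameter map. The sketch below uses fixed parameter maps instead of network-predicted ones, so it only illustrates the curve iteration, not the learning part.

```python
import numpy as np

def iterative_enhance(img, alpha_maps):
    """Apply the quadratic zero-reference enhancement curve
    LE(x) = x + a * x * (1 - x) once per iteration, with a separate
    per-pixel parameter map a in [-1, 1] for each iteration.
    For such a, values stay inside [0, 1] at every step."""
    out = img.astype(float)
    for a in alpha_maps:
        out = out + a * out * (1.0 - out)
    return out

img = np.full((2, 2), 0.2)                 # a uniformly dark image
alphas = [np.full((2, 2), 0.8)] * 4        # 4 iterations with a = 0.8
out = iterative_enhance(img, alphas)       # brightened, still in [0, 1]
```

Each pass lifts dark values more than bright ones, which is why a few iterations brighten underexposed regions without clipping highlights.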
Show Figures

Figure 1. Example comparing the enhancement effect of the proposed method with two existing methods on the same image; the proposed method better enhances the overall brightness.
Figure 2. The structure of the proposed network, comprising an image feature extraction net and an iterative enhancement net.
Figure 3. The structure of MDTA.
Figure 4. The structure of HDFN.
Figure 5. Comparison of the two enhancement schemes: one uses a single parameter matrix, while the other divides the parameter matrix into blocks to enhance different image regions in multiple stages.
Figure 6. Visual comparison with other existing methods; the proposed method achieves the best enhancement of the overall image brightness.
Figure 7. Comparison of details with other methods; the proposed method performs well in brightness and detail preservation.
Figure 8. Influence of the loss functions on the visual results; the designed losses improve image quality.
19 pages, 11811 KiB  
Article
A Novel Vectorized Curved Road Representation Based Aerial Guided Unmanned Vehicle Trajectory Planning
by Sujie Zhang, Qianru Hou, Xiaoyang Zhang, Xu Wu and Hongpeng Wang
Sensors 2023, 23(16), 7305; https://doi.org/10.3390/s23167305 - 21 Aug 2023
Cited by 1 | Viewed by 1026
Abstract
Unmanned vehicles frequently face the challenge of navigating complex mountainous terrain characterized by numerous unknown continuous curves. Drones, with their wide field of view and ability to change altitude, offer a potential way to compensate for the limited field of view of ground vehicles. However, conventional path extraction provides only pixel-level positional information, so when drones visually guide ground unmanned vehicles, road fitting accuracy is compromised and speed is reduced. Addressing these limitations with existing methods has proven to be a formidable task. In this study, we propose an air–ground collaborative vectorized curved road representation and trajectory planning method for visually guiding unmanned ground vehicles. Our method offers several advantages over traditional road fitting techniques. First, it incorporates a road star point ordering method based on the K-Means clustering algorithm, which simplifies the complex process of road fitting. Additionally, we introduce a road vectorization model based on the piecewise GA-Bézier algorithm, enabling identification of the optimal frame from the initial frame to the current frame in the video stream; this significantly improves the road fitting effect (EV) and reduces the model running time (Tmodel). Furthermore, we employ smooth trajectory planning along the "route-plane" to maximize speed at turning points, thereby minimizing travel time (Ttravel). To validate the efficiency and accuracy of the proposed method, we conducted extensive simulation experiments and real-world comparison experiments. The results demonstrate the superior performance of our approach in terms of both efficiency and accuracy. Full article
(This article belongs to the Section Vehicular Sensing)
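The piecewise Bézier road units can be sketched with de Casteljau evaluation plus the junction constraint the method relies on: tangent continuity requires the first control leg of a unit to continue the direction of the last control leg of the previous unit. The control points below are illustrative, not the GA-optimized ones from the paper.

```python
import numpy as np

def bezier(ctrl, t):
    """Evaluate a Bezier curve of any degree at parameter t in [0, 1]
    using de Casteljau's algorithm (repeated linear interpolation)."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# two cubic road units joined at (3, 2); the control point after the
# junction, (4, 3), mirrors (2, 1) through (3, 2), so the tangent
# direction is continuous across the junction
u1 = [(0, 0), (1, 0), (2, 1), (3, 2)]
u2 = [(3, 2), (4, 3), (5, 3), (6, 3)]
p_end = bezier(u1, 1.0)     # end of unit 1
p_start = bezier(u2, 0.0)   # start of unit 2 (same point)
p_mid = bezier(u1, 0.5)     # a sample on unit 1
```

Positional continuity comes from sharing the junction point; tangent continuity from collinear control legs on either side of it. Curvature continuity adds one more constraint on the second control points, which the paper's model also enforces.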
Show Figures

Figure 1. Using aerial vehicles to visually guide the trajectory of unmanned ground vehicles on unknown curved roads. Roads are extracted and modeled from aerial images, then used to plan trajectories satisfying continuous-tangent and continuous-curvature constraints.
Figure 2. Extraction of star points from a road image: brightness thresholding, morphological opening for filtering, ROI contour extraction, ROI area extraction, Laplacian texture sharpening, binarization of the sharpened image, and star point extraction.
Figure 3. Star point order. (a,b) The default order. (c,d) The sorted order.
Figure 4. Road extraction results. Each column shows the original image, brightness thresholding result, morphological processing result, ROI contour, ROI area, texture sharpening map, binarization of the sharpened image, and road star point sorting result.
Figure 5. Route unit model.
Figure 6. Sichuan road route extraction effect map.
Figure 7. Piecewise road vectorization model: road units are added under continuous-tangent and continuous-curvature constraints at junctions to obtain the overall road.
Figure 8. Fitting scheme for the first road unit. Since only the start and end point positions are constrained, the chord lengths and angles between adjacent control points serve as optimization variables, from which the control point coordinates are computed.
Figure 9. Road line fitting results. The thick solid line is the road centerline; different colors distinguish road units, and the large colored circles are the control points of each unit.
Figure 10. (a) Nankai road. (b) Discrete point data correction. (c) Discrete points in the world coordinate system. (d) Route fitting with the traditional method. (e) Route fitting with the proposed method. (f) Route extraction effect map.
Figure 11. (a) Route-plane trajectory planning results. (b) Planning results in velocity space. (c) Planning results in acceleration space. (d) Waypoints to be reached at each timestamp.
15 pages, 1243 KiB  
Article
Posture Classification with a Bed-Monitoring System Using Radio Frequency Identification
by Yu Yamauchi and Nobuhiro Shimoi
Sensors 2023, 23(16), 7304; https://doi.org/10.3390/s23167304 - 21 Aug 2023
Cited by 4 | Viewed by 1069
Abstract
Aging of the population and the declining birthrate in Japan have produced severe human resource shortages in the medical and long-term care industries. Reportedly, falls account for more than 50% of all accidents in nursing homes. Recently, various bed-release sensors have become commercially available. In fact, clip sensors, mat sensors, and infrared sensors are used widely in hospitals and nursing care facilities. We propose a simple and inexpensive monitoring system for elderly people as a technology capable of detecting bed activity, aimed particularly at preventing accidents involving falls. Based on findings obtained using that system, we aim at realizing a simple and inexpensive bed-monitoring system that improves quality of life. For this study, we developed a bed-monitoring system for detecting bed activity. It can predict bed release using RFID, which can achieve contactless measurements. The proposed bed-monitoring system incorporates an RFID antenna and tags, with a method for classifying postures based on the RFID communication status. Experimentation confirmed that three postures can be classified with two tags, seven postures with four tags, and nine postures with six tags. The detection rates were 90% for two tags, 75% for four tags, and more than 50% for six tags. Full article
(This article belongs to the Section Biomedical Sensors)
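Classification from RFID communication status reduces to a lookup from the set of readable tags to a posture: a tag covered by the body stops answering the reader. The sketch below uses a hypothetical two-tag layout and posture labels; the paper's actual tag placements and posture classes differ.

```python
# Hypothetical mapping for two tags placed under the left and right
# sides of the mattress: a tag blocked by the body fails to respond.
POSTURES = {
    (True, True): "bed empty",       # both tags readable, nothing blocks them
    (False, True): "lying left",     # left tag blocked by the body
    (True, False): "lying right",    # right tag blocked by the body
    (False, False): "lying center",  # both tags blocked
}

def classify(tag_reads):
    """tag_reads: per-tag booleans, True if the tag answered the reader."""
    return POSTURES.get(tuple(tag_reads), "unknown")

print(classify((False, True)))
```

Adding tags enlarges the lookup table, which is how four tags distinguish seven postures and six tags nine in the experiments above.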
Show Figures

Graphical abstract
Figure 1. Configuration of the proposed bed-monitoring system.
Figure 2. Communication ranges and strengths of sent signals for different antenna directions: (a) vertical and (b) horizontal.
Figure 3. Communication ranges and strengths of received signals for different antenna directions: (a) vertical and (b) horizontal.
Figure 4. Communication range and strength of the received signal for different tag orientations: (a) vertical and (b) horizontal.
Figure 5. Communication range and strength of sent signal waves with a wooden board placed between the bed frame and the tag.
Figure 6. Communication range and strength of received signal waves with a wooden board placed between the bed frame and the tag: (a) vertical and (b) horizontal.
Figure 7. Communication range and strength of the received signal with a tag placed under the mattress: (a) vertical and (b) horizontal tag orientations.
Figure 8. Communication range and strength of each signal with the antenna moved 0.6 m from the center of the bed toward the head: (a) sent signal and (b) received signal.
Figure 9. Tag positions on the bed.
Figure 10. Posture classification by detectable tag position.
Figure 11. Time response of the tags for the respective behavior patterns.
25 pages, 3539 KiB  
Article
Image Preprocessing with Enhanced Feature Matching for Map Merging in the Presence of Sensing Error
by Yu-Lin Chen and Kuei-Yuan Chan
Sensors 2023, 23(16), 7303; https://doi.org/10.3390/s23167303 - 21 Aug 2023
Viewed by 1244
Abstract
Autonomous robots heavily rely on simultaneous localization and mapping (SLAM) techniques and sensor data to create accurate maps of their surroundings. When multiple robots are employed to expedite exploration, the resulting maps often have varying coordinates and scales. To achieve a comprehensive global view, the utilization of map merging techniques becomes necessary. Previous studies have typically depended on extracting image features from maps to establish connections. However, it is important to note that maps of the same location can exhibit inconsistencies due to sensing errors. Additionally, robot-generated maps are commonly represented in an occupancy grid format, which limits the availability of features for extraction and matching. Therefore, feature extraction and matching play crucial roles in map merging, particularly when dealing with uncertain sensing data. In this study, we introduce a novel method that addresses image noise resulting from sensing errors and applies additional corrections before performing feature extraction. This approach allows for the collection of features from corresponding locations in different maps, facilitating the establishment of connections between different coordinate systems and enabling effective map merging. Evaluation results demonstrate the significant reduction of sensing errors during the image stitching process, thanks to the proposed image pre-processing technique. Full article
(This article belongs to the Section Vehicular Sensing)
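Once features from two maps have been matched, merging reduces to estimating the similarity transform (scale, rotation, translation) between the two coordinate frames. A minimal sketch using Umeyama's SVD-based least-squares solution on matched 2-D points (feature extraction, matching, and the paper's pre-processing are not shown; the matched points below are synthetic):

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping matched points src -> dst, via SVD (Umeyama's method)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    cs, cd = src - mu_s, dst - mu_d            # centered point sets
    U, S, Vt = np.linalg.svd(cd.T @ cs)        # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))         # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / (cs ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# map 2 is map 1 rotated 90 degrees, scaled 2x, and shifted by (5, -1)
R_true = np.array([[0.0, -1.0], [1.0, 0.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 2.0]])
dst = 2.0 * src @ R_true.T + np.array([5.0, -1.0])
s, R, t = estimate_similarity(src, dst)
```

With exact correspondences the transform is recovered exactly; with noisy occupancy-grid features, the same formulation gives the least-squares alignment between the two maps' frames.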
Show Figures

Figure 1

Figure 1
<p>Basic concept of map merging.</p>
Full article ">Figure 2
<p>Diagram of overlapping regions of maps <math display="inline"><semantics> <msub> <mi>M</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>M</mi> <mn>2</mn> </msub> </semantics></math>. Original coordinates are expressed by blue solid lines, and transformed coordinates are expressed by red dashed lines.</p>
Full article ">Figure 3
<p>Flowchart of the image stitching technique.</p>
Full article ">Figure 4
<p>Flowchart of proposed method. Blocks with an asterisk “*” indicate added or slightly modified steps.</p>
Full article ">Figure 5
<p>Binarization.</p>
Full article ">Figure 6
<p>Rotation of map images. (<b>a</b>) Binarized map. Stair shapes of linear features are marked with a red circle. (<b>b</b>) The zoomed part of (<b>a</b>). (<b>c</b>) The rotated image of (<b>b</b>). Stair shapes are roughly eliminated, but some pixel values of grids are unequal to 0 or 255 due to interpolation. (<b>d</b>) Implement binarization to classify pixel values. Stair shapes are still eliminated but some uneven burrs happen instead. These defects will be eliminated by image closing.</p>
Full article ">Figure 7
<p>Radon transform. (<b>a</b>) Map image. The longest line is marked with a red rectangle. (<b>b</b>) Two-dimensional space linear function <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>f</mi> </mrow> </semantics></math>. The largest value is marked with a red circle.</p>
Full article ">Figure 8
<p>Closing operation.</p>
Full article ">Figure 9
<p>Extraction of interest points. (<b>a</b>) Image before the extraction of interest points. (<b>b</b>) Image after the extraction of interest points.</p>
Full article ">Figure 10
<p>Correction of line segments of map images with extracted interest points. Certain geometric shapes that are sensitive to orientation might be influenced by the correction, as indicated by the red boxes and circles. Nevertheless, we can still extract comparable features and subsequently match them during the image stitching process. This is possible because the majority of extracted features are derived from the corrected line segments, while only a small portion originates from these orientation-sensitive shapes.</p>
Full article ">Figure 11
<p>Simulation environment of Scenario 1.</p>
Full article ">Figure 12
<p>Local maps of the environment in Scenario 1.</p>
Full article ">Figure 13
<p>The impact of sensing errors on mapping accuracy. As the standard deviation of LiDAR measurements increases, the resulting occupancy grid maps become more distorted and less accurate. (<b>a</b>) Simulation environment for LiDAR sensing errors. (<b>b</b>) Standard deviation of LiDAR measurement = 0.005. (<b>c</b>) Standard deviation = 0.005. The line of occupied grids is straight. (<b>d</b>) Standard deviation of LiDAR measurement = 0.05. (<b>e</b>) Standard deviation = 0.05. The line of occupied grids is distorted.</p>
Figure 13 Cont.">
Full article ">Figure 14
<p>Merging results with the existing method in test case 1. In (<b>b</b>–<b>d</b>), the green areas represent correct pairings of grids, whereas the red areas represent incorrect pairings. Among these results, only the result of Set 1 corresponds to the environment (<b>a</b>).</p>
Full article ">Figure 15
<p>Error effect for SIFT features in occupancy grid maps. Uncertainty can lead to the presence of redundant features (indicated by red circles), while distortion in occupied areas can also result in the same issue (highlighted by green circles). (<b>a</b>) Error effect for SIFT features in Local map 1. (<b>b</b>) Error effect for SIFT features in Local map 4.</p>
Full article ">Figure 16
<p>Evaluation of SIFT feature extraction performance with image pre-processing in occupancy grid maps. The extracted features are strategically located at the corners of occupied areas, effectively mitigating the influence of uncertainty and sensing errors. (<b>a</b>) Extraction of SIFT features in corrected Local map 1. (<b>b</b>) Extraction of SIFT features in corrected Local map 3.</p>
Full article ">Figure 17
<p>Merging results with our proposed method in test case 1. All the results of the three sets correspond to the environment [<a href="#sensors-23-07303-f014" class="html-fig">Figure 14</a>a].</p>
Full article ">Figure 18
<p>Environment of test case 2. (<b>a</b>) Floor plan of Yonglin biomedical engineering hall. (<b>b</b>) Simulation environment of Yonglin biomedical engineering hall.</p>
Full article ">Figure 19
<p>Local maps of Yonglin biomedical engineering hall.</p>
Full article ">Figure 20
<p>Merging results with our proposed method in test case 2. The visualization indicates that the results conform with the simulation environment (<a href="#sensors-23-07303-f018" class="html-fig">Figure 18</a>).</p>
Full article ">Figure 21
<p>Global maps of Yonglin biomedical engineering hall by merging three local maps in <a href="#sensors-23-07303-f019" class="html-fig">Figure 19</a>.</p>
Full article ">
17 pages, 4605 KiB  
Article
Federated Transfer Learning Strategy: A Novel Cross-Device Fault Diagnosis Method Based on Repaired Data
by Zhenhao Yan, Jiachen Sun, Yixiang Zhang, Lilan Liu, Zenggui Gao and Yuxing Chang
Sensors 2023, 23(16), 7302; https://doi.org/10.3390/s23167302 - 21 Aug 2023
Cited by 4 | Viewed by 1545
Abstract
Federated learning has attracted much attention in fault diagnosis since it can effectively protect data privacy. However, efficient fault diagnosis performance relies on the uninterrupted training of model parameters with massive amounts of perfect data. To solve the problems of model training difficulty and negative parameter transfer caused by data corruption, a novel cross-device fault diagnosis method based on repaired data is proposed. Specifically, during local model training, each source client performs random forest regression fitting on the fault samples with missing fragments, and the repaired data are then used for network training. To prevent the inpainted fragments from producing incorrect characteristics of faulty samples, a joint domain discrepancy loss is introduced to correct the parameter bias that arises during local model training. Considering the randomness of the overall performance change brought about by each local model update, an adaptive update is proposed for each round of global model download and local model update. Finally, experimental verification was carried out in various industrial scenarios established with three bearing data sets, and the effectiveness of the proposed method in terms of fault diagnosis performance and data privacy protection was verified by comparison with several currently popular federated transfer learning methods. Full article
(This article belongs to the Special Issue Advanced Sensing for Mechanical Vibration and Fault Diagnosis)
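The adaptive global-model update mentioned in the abstract can be sketched in a hedged form. The paper's exact adaptive rule is not reproduced here; this hypothetical stand-in weights each client's parameters by a softmax over its validation score instead of the uniform FedAvg average, so clients whose repaired data trained a better local model contribute more.

```python
import numpy as np

def adaptive_federated_update(client_params, client_scores):
    """Aggregate per-client parameter lists into a global model.

    Each client's contribution is weighted by a softmax over its
    validation score (a stand-in for the paper's adaptive rule).
    With equal scores this reduces to plain federated averaging.
    """
    scores = np.asarray(client_scores, dtype=float)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return [
        sum(w * layer for w, layer in zip(weights, layers))
        for layers in zip(*client_params)
    ]

# Three clients, each holding two parameter arrays.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
global_model = adaptive_federated_update(clients, client_scores=[0.9, 0.7, 0.5])
```

Because the weights are convex, the aggregated parameters always stay inside the hull of the client models, which keeps a single poorly trained client from dominating a round.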
Figure 1
<p>Schematic diagram of source client network architecture and training process.</p>
Full article ">Figure 2
<p>Flowchart of federated transfer dynamic interaction. The red dotted line represents the flow process of the global model, the yellow dotted line indicates the flow process of uploading the local model to the central server, the black dotted line represents the flow process of local model training feedback, and the black solid line represents the local model training process.</p>
Full article ">Figure 3
<p>The display diagram of each bearing fault simulation test bench.</p>
Full article ">Figure 4
<p>Comparison of forecast data and real data in case 2: the blue curve represents the predicted data; the red curve represents the real data.</p>
Figure 4 Cont.">
Full article ">Figure 5
<p>The comparison display of forecast data and real data in case 3 (GPTFS-ORF3-2000 rpm): Figure (<b>a</b>) is the forecast curve during model training. Figure (<b>b</b>) is the forecast curve during model testing.</p>
Full article ">Figure 6
<p>The diagnostic accuracy fluctuation display of each comparison method for the target task. Figure (<b>a</b>) is the diagnosis details of the target task in case 1. Figure (<b>b</b>) is the diagnosis details of the target task in case 2. Figure (<b>c</b>) is the diagnosis details of the target task in case 3. Figure (<b>d</b>) is the diagnosis details of the target task in case 4.</p>
Full article ">Figure 7
<p>Feature visualization of the target task during testing. The black particles denote the normal condition. The blue particles denote inner fault 1. The purple particles denote inner fault 2. The red particles denote inner fault 3. The yellow particles denote outer fault 1. The blue-green particles denote outer fault 2. The green particles denote outer fault 3.</p>
Full article ">
12 pages, 4301 KiB  
Article
Fourier Ptychographic Microscopic Reconstruction Method Based on Residual Hybrid Attention Network
by Jie Li, Jingzi Hao, Xiaoli Wang, Yongshan Wang, Yan Wang, Hao Wang and Xinbo Wang
Sensors 2023, 23(16), 7301; https://doi.org/10.3390/s23167301 - 21 Aug 2023
Cited by 5 | Viewed by 1205
Abstract
Fourier ptychographic microscopy (FPM) is a novel computational microimaging technique that allows imaging of samples such as pathology sections. However, due to the influence of systematic errors and noise, the quality of images reconstructed using FPM is often poor, and the reconstruction efficiency is low. In this paper, a hybrid attention network that combines spatial attention mechanisms with channel attention mechanisms is introduced into FPM reconstruction. Spatial attention extracts fine spatial features and reduces redundant features, while residual channel attention adaptively readjusts the hierarchical features to convert low-resolution complex amplitude images into high-resolution ones. The high-resolution images generated by this method can be applied to medical cell recognition, segmentation, classification, and other related studies, providing a better foundation for relevant research. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies)
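The residual channel attention described above follows the familiar squeeze-and-excitation pattern. A minimal numpy forward pass is sketched below; the weights `w1` and `w2` are random placeholders, not trained parameters, and the block is a generic illustration rather than the paper's exact architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Residual channel attention, squeeze-and-excitation style.

    feat: (C, H, W) feature map. Global average pooling ("squeeze")
    summarizes each channel; two small linear layers with a sigmoid
    produce per-channel gates in (0, 1); channels are rescaled and
    added back to the input (residual connection).
    """
    squeeze = feat.mean(axis=(1, 2))                    # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # (C,) gates in (0, 1)
    return feat + feat * gate[:, None, None]

rng = np.random.default_rng(1)
feat = rng.normal(size=(8, 16, 16))
w1 = rng.normal(size=(2, 8)) * 0.1   # bottleneck with reduction ratio 4
w2 = rng.normal(size=(8, 2)) * 0.1
out = channel_attention(feat, w1, w2)
```

The residual connection guarantees that attention can only rescale each channel between 1× and 2× of its input here, so useful low-level features are never suppressed entirely.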
Figure 1
<p>RHAN network architecture diagram.</p>
Full article ">Figure 2
<p>RHAB network architecture diagram.</p>
Full article ">Figure 3
<p>SA network architecture diagram.</p>
Full article ">Figure 4
<p>CA network architecture diagram.</p>
Full article ">Figure 5
<p>Comparison of the loss function between Adagrad and AdamW optimizers.</p>
Full article ">Figure 6
<p>Comparison of the reconstructed results using Adagrad and AdamW optimizers.</p>
Full article ">Figure 7
<p>Comparison of the loss curves of RCN, RSN, and RHAN.</p>
Full article ">Figure 8
<p>Comparison of the reconstruction results of RCN, RSN, and RHAN.</p>
Full article ">Figure 9
<p>Comparison of the reconstruction results of different images using different methods.</p>
Full article ">Figure 10
<p>Comparison of reconstruction results of different methods under different noise levels.</p>
Full article ">Figure 11
<p>Comparison of reconstruction results of real acquired images.</p>
Full article ">
15 pages, 6566 KiB  
Article
Effective Mean Square Differences: A Matching Algorithm for Highly Similar Sheet Metal Parts
by Hui Zhang, Zhen Guan, Joe Eastwood, Hongji Zhang and Xiaoyang Zhu
Sensors 2023, 23(16), 7300; https://doi.org/10.3390/s23167300 - 21 Aug 2023
Viewed by 1005
Abstract
The accurate identification of highly similar sheet metal parts remains a challenging issue in sheet metal production. To solve this problem, this paper proposes an effective mean square differences (EMSD) algorithm that can distinguish highly similar parts with high accuracy. First, multi-level downsampling and rotation searching are adopted to construct an image pyramid. Then, non-maximum suppression is utilised to determine the optimal rotation for each layer. In the matching stage, the contribution of the difference between corresponding pixels is re-evaluated: a matching weight is assigned according to the correlation between the grey values of the matched pixels, from which the effective matching coefficient is derived. Finally, the proposed effective matching coefficient is adopted to obtain the final matching result. The results illustrate that this algorithm exhibits a strong discriminative ability for highly similar parts, with an accuracy of 97.1%, which is 11.5% higher than that of the traditional methods. It has excellent potential for application and can significantly improve sheet metal production efficiency. Full article
(This article belongs to the Section Intelligent Sensors)
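The core of mean-square-difference template matching can be sketched as follows. The per-pixel `weights` hook stands in for the paper's effective matching coefficient; the exact grey-value-correlation weighting is not reproduced here, and the test image is synthetic.

```python
import numpy as np

def msd_match(image, template, weights=None):
    """Slide `template` over `image` and return the (row, col) offset
    with the smallest (optionally weighted) mean square difference.

    `weights` is a per-pixel weight map the same shape as the template;
    a uniform map recovers plain MSD matching.
    """
    th, tw = template.shape
    if weights is None:
        weights = np.ones_like(template, dtype=float)
    best, best_pos = np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            msd = np.mean(weights * (patch - template) ** 2)
            if msd < best:
                best, best_pos = msd, (r, c)
    return best_pos

rng = np.random.default_rng(2)
img = rng.uniform(0, 255, size=(40, 40))
tpl = img[12:20, 25:33].copy()   # template cut from a known location
```

Down-weighting unreliable pixels in `weights` is what lets the full EMSD method separate near-identical parts that plain MSD confuses.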
Figure 1
<p>Image matching of similar sheet metal parts.</p>
Full article ">Figure 2
<p>Similar features of similar sheet metal parts.</p>
Full article ">Figure 3
<p>Rotating target search method.</p>
Full article ">Figure 4
<p>Rotated rectangle target images.</p>
Full article ">Figure 5
<p>Optimizing the matching process.</p>
Full article ">Figure 6
<p>Calculation principle of the effective matching coefficient.</p>
Full article ">Figure 7
<p>Experiment layout.</p>
Full article ">Figure 8
<p>Part of the test image.</p>
Full article ">Figure 9
<p>Image noise and interference.</p>
Full article ">Figure 10
<p>Part of the sheet metal image.</p>
Full article ">Figure 11
<p>Comparison of implementations of various algorithms.</p>
Full article ">Figure 12
<p>Sheet metal part pixel difference mapping.</p>
Full article ">Figure 13
<p>Total of properly detected objects.</p>
Full article ">Figure 14
<p>Misidentified cases with further analysis.</p>
Full article ">
18 pages, 4616 KiB  
Article
Advancements in Buoy Wave Data Processing through the Application of the Sage–Husa Adaptive Kalman Filtering Algorithm
by Sha Jiang, Yonghua Chen and Qingkui Liu
Sensors 2023, 23(16), 7298; https://doi.org/10.3390/s23167298 - 21 Aug 2023
Cited by 1 | Viewed by 1182
Abstract
In this paper, we propose a combined filtering method rooted in the application of Sage–Husa Adaptive Kalman filtering, designed specifically to process wave sensor data. This methodology aims to boost the measurement precision and real-time performance of wave parameters. (1) This study delineates the basic principles of the Kalman filter. (2) We discuss in detail the methodology for analyzing wave parameters from the collected wave acceleration data, and closely examine the key issues that may arise during this process. (3) To evaluate the efficacy of the Kalman filter, we designed a simulation comparison encompassing various filtering algorithms. The results show that the Sage–Husa Adaptive Kalman Composite filter demonstrates superior performance in processing wave sensor data. (4) Additionally, in Section 5, we designed a turntable experiment capable of simulating the sinusoidal motion of waves and carried out a detailed error analysis of the Kalman filter, to facilitate a deep understanding of potential problems that may be encountered in practical application, and their solutions. (5) Finally, the results reveal that the Sage–Husa Adaptive Kalman Composite filter improved the accuracy of effective wave height by 48.72% and the precision of effective wave period by 23.33% compared to traditional bandpass filter results. Full article
(This article belongs to the Section Remote Sensors)
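The heart of the method — a Kalman filter whose measurement-noise covariance is re-estimated online with a Sage–Husa forgetting factor — can be sketched in scalar form. This is a minimal illustration on a constant signal, not the buoy's full acceleration state model; the process noise `q`, initial guess `r0`, and forgetting factor `b` are illustrative assumptions.

```python
import numpy as np

def sage_husa_filter(z, q=1e-4, r0=1.0, b=0.95):
    """Scalar Sage-Husa adaptive Kalman filter (random-walk state model).

    The measurement-noise covariance R is re-estimated online from the
    innovation sequence with forgetting factor b, so the filter adapts
    when the true sensor noise differs from the initial guess r0.
    """
    x, p, r = z[0], 1.0, r0
    out = np.empty_like(z, dtype=float)
    for k, zk in enumerate(z):
        p_pred = p + q                       # predict (state nearly constant)
        e = zk - x                           # innovation
        d = (1 - b) / (1 - b ** (k + 1))     # Sage-Husa adaptive weight
        r = max((1 - d) * r + d * (e * e - p_pred), 1e-8)  # clamp R > 0
        kgain = p_pred / (p_pred + r)
        x = x + kgain * e
        p = (1 - kgain) * p_pred
        out[k] = x
    return out

rng = np.random.default_rng(3)
truth = 2.0
z = truth + rng.normal(scale=0.5, size=500)   # noisy measurements
xhat = sage_husa_filter(z)
```

Even when `r0` is a poor guess, the innovation-driven update pulls `r` toward the true measurement variance, which is the property that makes the filter robust to sensor-noise drift at sea.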
Figure 1
<p>Lab-developed acceleration wave buoy based on micro-electro-mechanical systems (MEMS) attitude sensors.</p>
Full article ">Figure 2
<p>Direction rose diagram. The radius of each blue sector represents the number of occurrences within that angular range.</p>
Full article ">Figure 3
<p>Wave power spectral density diagram.</p>
Full article ">Figure 4
<p>Comparison of original displacement signal and noise-added displacement signal. The blue line represents the original displacement signal calculated from the PM spectrum, and the red line represents the displacement signal after adding noise. The <span class="html-italic">x</span>-axis represents the number of sampling points, which total 512 displacement data points with a time interval of 0.1 s between each point; the <span class="html-italic">y</span>-axis represents the displacement value (unit: m).</p>
Full article ">Figure 5
<p>Comparison before and after bandpass filtering.</p>
Full article ">Figure 6
<p>Comparison before and after Sage−Husa Adaptive Kalman filtering.</p>
Full article ">Figure 7
<p>Graph after Sage−Husa Adaptive Kalman filtering. The red circle denotes the adaptive learning process of the Kalman filter.</p>
Full article ">Figure 8
<p>Training process of the Sage−Husa Adaptive Kalman filter.</p>
Full article ">Figure 9
<p>Comparison of spectra before and after combined filtering. The red circle denotes the zero−bias error.</p>
Full article ">Figure 10
<p>Comparison of waveforms before and after combined filtering.</p>
Full article ">Figure 11
<p>It depicts the actual scene of the turntable experiment simulating sinusoidal wave motion. There are support frames on both sides of the picture, with a rotating shaft mounted in the middle. The rotating shaft operates via the drive mechanism, which is connected to a computer on the shelf to the right of the picture through a data cable to control the rotation speed. The rotating shaft drives the upper and lower sets of mechanical arms to simulate the sinusoidal motion of waves. The mechanical arm at the bottom of the picture has four crossbeams, capable of simulating wave motion with four different motion radii of 2.00 m, 1.50 m, 1.05 m, and 0.55 m. A black insulation box is hung on one of the crossbeams, housing a MEMS nine-axis accelerometer. Simultaneously, corresponding unload buckles are hung on the mechanical arm at the top of the picture to balance the mechanical arms, ensuring uniform rotation of the two sets of mechanical arms.</p>
Full article ">Figure 12
<p>(<b>a</b>) It displays the comparison of the relative error in significant wave height obtained from 36 sets of simulated sine wave turntable experiments. The <span class="html-italic">x</span>-axis represents the experiment groups, while the <span class="html-italic">y</span>-axis represents the percentage of relative error in significant wave height (unit: %). In the figure, the error percentages calculated using the Sage–Husa adaptive Kalman combined filter are represented by blue bar graphs, while the corresponding results obtained using the Bandpass filter are illustrated with red bar graphs. (<b>b</b>) It showcases the comparison of absolute error in significant wave period. The <span class="html-italic">x</span>-axis still represents the experiment groups, and the <span class="html-italic">y</span>-axis represents the absolute error in significant wave period (unit: s). Blue bar graphs represent the absolute error in significant wave period calculated using the Sage–Husa adaptive Kalman combined filter, and red bar graphs represent the corresponding results obtained using the Bandpass filter.</p>
Full article ">Figure 13
<p>(<b>a</b>) It displays a sea wave direction spectrum obtained through the Sage–Husa adaptive Kalman combination filter, depicting the energy distribution of sea waves at different frequencies and directions. In this figure, the <span class="html-italic">x</span>-axis represents the direction angle of the sea wave (unit: radian), the <span class="html-italic">y</span>-axis represents the sea wave frequency (unit: Hz), and the <span class="html-italic">z</span>−axis represents the energy density (unit: m<sup>2</sup>/Hz). (<b>b</b>) It presents the sea wave direction spectrum obtained from the same set of data after Bandpass filtering.</p>
Full article ">
15 pages, 6638 KiB  
Article
Touchless Heart Rate Monitoring from an Unmanned Aerial Vehicle Using Videoplethysmography
by Anna Pająk, Jaromir Przybyło and Piotr Augustyniak
Sensors 2023, 23(16), 7297; https://doi.org/10.3390/s23167297 - 21 Aug 2023
Cited by 2 | Viewed by 1412
Abstract
Motivation: The advancement of preventive medicine and, subsequently, telemedicine drives the need for noninvasive and remote measurements in patients’ natural environments. Heart rate (HR) measurements are particularly promising and extensively researched due to their quick assessment and comprehensive representation of patients’ conditions. However, in scenarios such as endurance training or emergencies, where HR measurement was not anticipated and direct access to victims is limited, no method enables obtaining HR results that are suitable even for triage. Methods: This paper presents the possibility of remotely measuring human HR from a series of in-flight videos using videoplethysmography (VPG) along with skin detection, human pose estimation and image stabilization methods. An unmanned aerial vehicle (UAV) equipped with a camera captured ten segments of video footage featuring volunteers engaged in free walking and running activities in natural sunlight. The human pose was determined using the OpenPose algorithm, and subsequently, skin areas on the face and forearms were identified and tracked in consecutive frames. Ultimately, HR was estimated using several VPG methods: the green channel (G), green-red difference (GR), excess green (ExG), independent component analysis (ICA), and a plane orthogonal to the skin (POS). Results: When compared to simultaneous readings from a reference ECG-based wearable recorder, the root-mean-square error ranged from 17.7 (G) to 27.7 (POS), with errors of less than 3.5 bpm achieved for the G and GR methods. Conclusions: These results demonstrate the acceptable accuracy of touchless human pulse measurement with the accompanying UAV-mounted camera. The method bridges the gap between HR-transmitting wearables and emergency HR recorders, and it has the potential to be advantageous in training or rescue scenarios in mountain, water, disaster, or battlefield settings. Full article
(This article belongs to the Special Issue Advanced Imaging and Sensing Technologies of Cardiovascular Disease)
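The final step of the G method — recovering HR from a detrended, band-limited green-channel trace — can be sketched with a simple FFT peak pick. The synthetic 90 bpm trace and the 0.7–3 Hz pulse band are illustrative assumptions; the paper's full pipeline additionally involves skin detection, pose tracking, and stabilization before this stage.

```python
import numpy as np

def estimate_hr(green_trace, fs, band=(0.7, 3.0)):
    """Estimate heart rate (bpm) from a mean green-channel trace.

    Detrend by removing a linear fit, take the FFT, and pick the
    spectral peak inside the plausible pulse band (0.7-3 Hz,
    i.e. 42-180 bpm).
    """
    t = np.arange(len(green_trace))
    trend = np.polyval(np.polyfit(t, green_trace, 1), t)
    sig = green_trace - trend
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    spec = np.abs(np.fft.rfft(sig))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[mask][np.argmax(spec[mask])]

# Synthetic 20 s trace at 30 fps: 90 bpm pulse + slow trend + noise.
fs, dur = 30.0, 20.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(4)
trace = (0.02 * np.sin(2 * np.pi * 1.5 * t)   # 1.5 Hz pulse = 90 bpm
         + 0.05 * t                            # illumination drift
         + 0.005 * rng.normal(size=t.size))
hr = estimate_hr(trace, fs)
```

With a 20 s window the frequency resolution is fs/N = 0.05 Hz, i.e. 3 bpm, which is why longer stabilized segments give finer HR estimates.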
Figure 1
<p>An overview of the architecture of the proposed method.</p>
Full article ">Figure 2
<p>The silhouette points of the model BODY_25 used to determine whether it was accurately detected (points 0, 1, 4, 7, 8) and used to determine the center of the silhouette (mean of points 1 and 8) in the tracking algorithm.</p>
Full article ">Figure 3
<p>An Example of detected volunteer’s face and forearms combined with segmentation markers calculated by OpenPose algorithm in a video frame.</p>
Full article ">Figure 4
<p>An example of results from each step of video processing pipeline: (<b>a</b>) raw signal (G channel); (<b>b</b>) detrended signal; (<b>c</b>) filtered signal; (<b>d</b>) VPG signal (POS).</p>
Full article ">Figure 5
<p>Example scenes from recordings. Two volunteers in the foreground are simultaneously free running; the person behind in the yellow vest is the UAV operator.</p>
Full article ">Figure 6
<p>Bland–Altman plots of difference between heart rate measurements obtained by the reference method and heart rates estimated using G and GR methods.</p>
Full article ">Figure 7
<p>Bland–Altman plots of difference between heart rate measurements obtained by the reference method and heart rates estimated using ExG and ICA methods.</p>
Full article ">Figure 8
<p>Bland–Altman plots of difference between heart rate measurements obtained by the reference method and heart rates estimated using POS method.</p>
Full article ">
16 pages, 1319 KiB  
Article
Improved Generative Adversarial Network for Super-Resolution Reconstruction of Coal Photomicrographs
by Liang Zou, Shifan Xu, Weiming Zhu, Xiu Huang, Zihui Lei and Kun He
Sensors 2023, 23(16), 7296; https://doi.org/10.3390/s23167296 - 21 Aug 2023
Cited by 1 | Viewed by 1498
Abstract
Analyzing the photomicrographs of coal and conducting maceral analysis are essential steps in understanding the coal’s characteristics, quality, and potential uses. However, due to limitations of equipment and technology, the obtained coal photomicrographs may have low resolution, failing to show clear details. In this study, we introduce a novel Generative Adversarial Network (GAN) to restore high-definition coal photomicrographs. Compared to traditional image restoration methods, the lightweight GAN-based network generates more explicit and realistic results. In particular, we employ the Wide Residual Block to eliminate the influence of artifacts and improve non-linear fitting ability. Moreover, we adopt a multi-scale attention block embedded in the generator network to capture long-range feature correlations across multiple scales. Experimental results on 468 photomicrographs demonstrate that the proposed method achieves a peak signal-to-noise ratio of 31.12 dB and a structural similarity index of 0.906, significantly higher than state-of-the-art super-resolution reconstruction approaches. Full article
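The peak signal-to-noise ratio quoted above follows the standard definition, which is easy to check directly; the toy images below are illustrative, not drawn from the paper's photomicrograph dataset.

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images.

    PSNR = 10 * log10(MAX^2 / MSE); higher means the reconstruction
    is closer to the reference (infinite for identical images).
    """
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((64, 64), 128, dtype=np.uint8)
rec = ref + 16            # uniform error of 16 grey levels -> MSE = 256
value = psnr(ref, rec)
```

Because PSNR depends only on pixel-wise MSE, it is usually reported alongside SSIM (as in this paper), which is more sensitive to the structural detail that matters for maceral analysis.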
Figure 1
<p>The detailed architecture of the proposed generator and discriminator network.</p>
Full article ">Figure 2
<p>The comparison between (<b>a</b>) traditional residual block with batch normalization and (<b>b</b>) wide residual block, which replaces BN with a subresidual module.</p>
Full article ">Figure 3
<p>Pyramid attention captures multi-scale feature correspondences by employing a series of Scale Agnostic attention.</p>
Full article ">Figure 4
<p>The SA attention structure.</p>
Full article ">Figure 5
<p>Training process for our modified GAN, which completely demonstrates how the network works during each iteration.</p>
Full article ">Figure 6
<p>Super-resolution coal photomicrographs produced by 4× upscaling with our proposed method at the 0th, 25th, 50th, 100th, and 300th epochs. The last image is the ground truth.</p>
Full article ">Figure 7
<p>Line chart of loss during training and validating process, represented by blue and yellow curves, respectively.</p>
Full article ">Figure 8
<p>The reconstruction results of Bicubic interpolation, SRCNN, SRGAN, EDSR, ESRGAN, RFB-ESRGAN and our method, and the corresponding reference HR image.</p>
Full article ">Figure 9
<p>Comparison of ablation experiment results of a sample image from the test dataset.</p>
Full article ">
17 pages, 4163 KiB  
Article
Enhanced Deep Learning Approach for Accurate Eczema and Psoriasis Skin Detection
by Mohamed Hammad, Paweł Pławiak, Mohammed ElAffendi, Ahmed A. Abd El-Latif and Asmaa A. Abdel Latif
Sensors 2023, 23(16), 7295; https://doi.org/10.3390/s23167295 - 21 Aug 2023
Cited by 12 | Viewed by 40134
Abstract
This study presents an enhanced deep learning approach for the accurate detection of eczema and psoriasis skin conditions. Eczema and psoriasis are significant public health concerns that profoundly impact individuals’ quality of life. Early detection and diagnosis play a crucial role in improving treatment outcomes and reducing healthcare costs. Leveraging the potential of deep learning techniques, our proposed model, named “Derma Care,” addresses challenges faced by previous methods, including limited datasets and the need for the simultaneous detection of multiple skin diseases. We extensively evaluated “Derma Care” using a large and diverse dataset of skin images. Our approach achieves remarkable results with an accuracy of 96.20%, precision of 96%, recall of 95.70%, and F1-score of 95.80%. These outcomes outperform existing state-of-the-art methods, underscoring the effectiveness of our novel deep learning approach. Furthermore, our model demonstrates the capability to detect multiple skin diseases simultaneously, enhancing the efficiency and accuracy of dermatological diagnosis. To facilitate practical usage, we present a user-friendly mobile phone application based on our model. The findings of this study hold significant implications for dermatological diagnosis and the early detection of skin diseases, contributing to improved healthcare outcomes for individuals affected by eczema and psoriasis. Full article
(This article belongs to the Special Issue Biomedical Signal Processing in Health Monitoring)
Figure 1
<p>Visual examples from the database.</p>
Full article ">Figure 2
<p>Block diagram for the steps of our method.</p>
Full article ">Figure 3
<p>(<b>a</b>) Structure of the proposed model, (<b>b</b>) the output after scaling and one round of using convolutional layer with maxpooling layer, (<b>c</b>) the final deep feature vector with size (256) and the final output after SoftMax layer.</p>
Full article ">Figure 4
<p>Confusion matrix of our model, where 0 refers to the eczema class and 1 refers to the psoriasis class.</p>
Full article ">Figure 5
<p>The ROC curves of our method over several epochs.</p>
Full article ">Figure 6
<p>Loss and accuracy results of our model in each epoch.</p>
Full article ">Figure 7
<p>Validation and training accuracy (<b>a</b>) and loss (<b>b</b>) curves.</p>
Figure 7 Cont.">
Full article ">Figure 8
<p>A user-friendly mobile phone application based on the “Derma Care” model.</p>
Full article ">
13 pages, 3700 KiB  
Article
Hybrid Beamforming in Massive MIMO for Next-Generation Communication Technology
by Shahid Hamid, Shakti Raj Chopra, Akhil Gupta, Sudeep Tanwar, Bogdan Cristian Florea, Dragos Daniel Taralunga, Osama Alfarraj and Ahmed M. Shehata
Sensors 2023, 23(16), 7294; https://doi.org/10.3390/s23167294 - 21 Aug 2023
Cited by 12 | Viewed by 4899
Abstract
Hybrid beamforming is a viable method for lowering the complexity and expense of massive multiple-input multiple-output systems while achieving high data rates on par with digital beamforming. To this end, this paper assesses the effectiveness of the three beamforming architectures (analog, digital, and hybrid) in massive multiple-input multiple-output systems, with particular emphasis on hybrid beamforming. In hybrid beamforming, the antennas are connected to a single radio frequency chain, unlike digital beamforming, where each antenna has a separate radio frequency chain. The beam formation toward a particular angle depends on the channel state information. Further, massive multiple-input multiple-output is discussed in detail along with performance parameters such as bit error rate, signal-to-noise ratio, achievable sum rate, power consumption, and energy efficiency. Finally, a comparison is established among the three beamforming techniques. Full article
(This article belongs to the Special Issue Massive MIMO Systems for 5G and beyond 5G Communication Networks)
Show Figures
Figure 1
<p>Analog beamforming.</p>
Full article ">Figure 2
<p>On–Off analog beamforming.</p>
Full article ">Figure 3
<p>Digital beamforming.</p>
Full article ">Figure 4
<p>MIMO.</p>
Full article ">Figure 5
<p>Massive MIMO with AWGN.</p>
Full article ">Figure 6
<p>System model for hybrid beamforming in massive MIMO.</p>
Full article ">Figure 7
<p>Performance with and without hybrid beamforming with different numbers of antennae.</p>
Full article ">Figure 8
<p>Achievable rate vs. Average SNR at ξ<sub>t</sub>, ξ<sub>r</sub> = 4.</p>
Full article ">Figure 9
<p>Achievable rate vs. Average SNR at ξ<sub>t</sub>, ξ<sub>r</sub> = 16.</p>
Full article ">Figure 10
<p>Achievable rate vs. Average SNR at ξ<sub>t</sub>, ξ<sub>r</sub> = 64.</p>
Full article ">
26 pages, 5455 KiB  
Article
Modified Nonlinear Hysteresis Approach for a Tactile Sensor
by Gasak Abdul-Hussain, William Holderbaum, Theodoros Theodoridis and Guowu Wei
Sensors 2023, 23(16), 7293; https://doi.org/10.3390/s23167293 - 21 Aug 2023
Cited by 2 | Viewed by 1623
Abstract
Soft tactile sensors based on piezoresistive materials have large-area sensing applications. However, their accuracy is often degraded by hysteresis, which poses a significant challenge during operation. This paper introduces a novel approach that employs a backpropagation (BP) neural network to address the hysteresis nonlinearity in conductive fiber-based tactile sensors. To assess the effectiveness of the proposed method, four sensor units were designed. These sensor units were subjected to force sequences while the corresponding output resistances were collected. A backpropagation network was trained on these sequences to correct the resistance values. The training process exhibited excellent convergence, adjusting the network’s parameters to minimize the error between predicted and actual resistance values. As a result, the trained BP network accurately predicted the output resistances. Several validation experiments were conducted to highlight the primary contribution of this research. The proposed method reduced the maximum hysteresis error from 24.2% of the sensor’s full-scale output to 13.5%. This improvement establishes the approach as a promising solution for enhancing the accuracy of soft tactile sensors based on piezoresistive materials. By effectively mitigating hysteresis nonlinearity, the capabilities of soft tactile sensors in various applications can be enhanced. These sensors become more reliable and more efficient tools for the measurement and control of force, particularly in the fields of soft robotics and wearable technology. Consequently, their applications extend to robotics, medical devices, consumer electronics, and gaming. Though the complete elimination of hysteresis in tactile sensors may not be feasible, the proposed method effectively compensates for the hysteresis nonlinearity, leading to improved sensor output accuracy. Full article
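The core idea above, training a backpropagation network to undo a hysteretic sensor mapping, can be sketched on synthetic data. The loading/unloading resistance curves below are invented stand-ins (not the paper's measurements), and a one-hidden-layer network trained with plain full-batch gradient descent learns to recover the applied force from (resistance, branch flag).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hysteretic sensor: for the same force, resistance differs between
# the loading and unloading branches (illustrative curves, not measured data).
force = np.linspace(0.0, 10.0, 200)
r_load = 100.0 - 6.0 * force + 0.15 * force**2
r_unload = 100.0 - 6.5 * force + 0.05 * force**2
X = np.concatenate([np.stack([r_load, np.ones_like(force)], 1),
                    np.stack([r_unload, np.zeros_like(force)], 1)])
y = np.concatenate([force, force])[:, None] / 10.0     # force, normalized to [0, 1]

# One-hidden-layer BP network, trained with full-batch gradient descent.
mu, sd = X.mean(0), X.std(0)
Xn = (X - mu) / sd
W1 = 0.5 * rng.standard_normal((2, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(8000):
    h = np.tanh(Xn @ W1 + b1)                  # forward pass
    pred = h @ W2 + b2
    err = pred - y                             # MSE gradient w.r.t. prediction
    gW2 = h.T @ err / len(y); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)           # backprop through tanh
    gW1 = Xn.T @ dh / len(y); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

pred = np.tanh(Xn @ W1 + b1) @ W2 + b2
max_err = 10.0 * np.abs(pred - y).max()        # back to newtons
print(f"max force error after correction: {max_err:.3f} N")
```

Because the network sees which branch of the hysteresis loop the sample comes from, it can invert each branch separately, which is what makes the residual error small compared with a single shared calibration curve.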
(This article belongs to the Special Issue Feature Papers in Physical Sensors 2023)
Show Figures
Figure 1
<p>Flow Diagram of the Experiment Setup and Hysteresis Error Reduction.</p>
Full article ">Figure 2
<p>A generic flow chart showing the integration of the hysteresis model, curve-fitting model, and NN.</p>
Full article ">Figure 3
<p>Materials used for sensor development: (<b>a</b>) conductive stretchable fabric, (<b>b</b>) silver-plated conductive thread, (<b>c</b>) designed sensor.</p>
Full article ">Figure 4
<p>The soft tactile sensor.</p>
Full article ">Figure 5
<p>Tactile sensors with different layers.</p>
Full article ">Figure 6
<p>(<b>a</b>) The experimental setup, (<b>b</b>) Force being applied to the sensor, (<b>c</b>) Functional diagram.</p>
Full article ">Figure 7
<p>The hysteresis phenomenon in the one-layer sensor.</p>
Full article ">Figure 8
<p>(<b>a</b>) The hysteresis phenomenon in the three-layer sensor; (<b>b</b>) The hysteresis phenomenon in the six-layer sensor; (<b>c</b>) The hysteresis phenomenon in the twelve-layer sensor.</p>
Figure 8 Cont.">
Full article ">Figure 9
<p>The Approximated Curve Plotted for the One-Layer Sensor.</p>
Full article ">Figure 10
<p>Approximated curves of all different sensors (with different numbers of layers).</p>
Full article ">Figure 11
<p>The normal distribution graph of the one-layer sensor model.</p>
Full article ">Figure 12
<p>Graphs showing the validation of the system of sensors with different numbers of layers.</p>
Full article ">Figure 13
<p>Validation of the system with new experimental results.</p>
Full article ">Figure 14
<p>One-Layer Sensor: (<b>a</b>) Neural network performance; (<b>b</b>) Neural network regression; (<b>c</b>) Neural network training.</p>
Full article ">
18 pages, 794 KiB  
Article
Human Activity Recognition via Score Level Fusion of Wi-Fi CSI Signals
by Gunsik Lim, Beomseok Oh, Donghyun Kim and Kar-Ann Toh
Sensors 2023, 23(16), 7292; https://doi.org/10.3390/s23167292 - 21 Aug 2023
Cited by 2 | Viewed by 1934
Abstract
Wi-Fi signals are ubiquitous and provide a convenient, covert, and non-invasive means of recognizing human activity, which is particularly useful for healthcare monitoring. In this study, we investigate a score-level fusion structure for human activity recognition using Wi-Fi channel state information (CSI) signals. The raw CSI signals undergo an essential preprocessing stage before being classified by conventional classifiers at the first level. The output scores of two conventional classifiers are then fused via an analytic network that does not require an iterative search for learning. Our experimental results show that the fusion provides good generalization and a shorter learning time compared with state-of-the-art networks. Full article
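The "analytic network" idea above, learning fusion weights in closed form rather than by iterative search, can be sketched with a ridge-regularized least-squares solve over the concatenated score vectors of two base classifiers. The toy scores below are noisy one-hot vectors; the data, noise levels, and regularizer are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the two first-level classifiers: each outputs a score
# vector per sample (noisy one-hot targets of differing quality).
n, n_cls = 300, 4
labels = rng.integers(0, n_cls, n)
onehot = np.eye(n_cls)[labels]
s1 = onehot + 0.8 * rng.standard_normal((n, n_cls))
s2 = onehot + 0.9 * rng.standard_normal((n, n_cls))

# Analytic score-level fusion: concatenate the two score vectors (plus a
# bias), then solve a ridge-regularized least-squares problem against the
# one-hot targets -- a single closed-form solve, no iterative training.
Z = np.hstack([s1, s2, np.ones((n, 1))])
lam = 1e-3
W = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ onehot)
fused = Z @ W

acc1 = (s1.argmax(1) == labels).mean()
acc2 = (s2.argmax(1) == labels).mean()
accf = (fused.argmax(1) == labels).mean()
print(f"classifier 1: {acc1:.2f}, classifier 2: {acc2:.2f}, fused: {accf:.2f}")
```

The single linear solve is what makes this kind of fusion fast to train compared with networks that need gradient-based iteration.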
(This article belongs to the Special Issue Innovations in Wireless Sensor-Based Human Activity Recognition)
Show Figures
Figure 1
<p>Pipeline of the fusion system.</p>
Full article ">Figure 2
<p>(<b>Left</b>) CSI samples before preprocessing. (<b>Right</b>) CSI samples after cropping, resizing, and filtering. Each colored line represents a sample sequence.</p>
Full article ">Figure 3
<p>Pipeline of preprocessing steps.</p>
Full article ">Figure 4
<p>Comparison of our fusion method with methods in <a href="#sensors-23-07292-t001" class="html-table">Table 1</a> for the HAR-RP database.</p>
Full article ">Figure 5
<p>Comparison of our fusion method with methods in <a href="#sensors-23-07292-t001" class="html-table">Table 1</a> for the HAR-RT database.</p>
Full article ">Figure 6
<p>Comparison of our fusion method with methods in <a href="#sensors-23-07292-t001" class="html-table">Table 1</a> for the HAR-RT database.</p>
Full article ">
16 pages, 8411 KiB  
Article
Accurate Detection for Zirconium Sheet Surface Scratches Based on Visible Light Images
by Bin Xu, Yuanhaoji Sun, Jinhua Li, Zhiyong Deng, Hongyu Li, Bo Zhang and Kai Liu
Sensors 2023, 23(16), 7291; https://doi.org/10.3390/s23167291 - 21 Aug 2023
Cited by 1 | Viewed by 1205
Abstract
Zirconium sheet has been widely used in various fields, e.g., chemistry and aerospace. Surface scratches on zirconium sheets, caused by the complex processing environment, negatively affect performance, e.g., working life and fatigue fracture resistance. Therefore, it is necessary to detect defects in zirconium sheets. However, such scratch images are difficult to process due to dense scattered additive noise and complex interlaced structural texture. Hence, we propose a framework for adaptively detecting scratches on surface images of zirconium sheets, comprising noise removal and texture suppression. First, the noise removal algorithm, i.e., an optimized threshold function based on the dual-tree complex wavelet transform, uses selected parameters to remove the dense scattered noise. Second, the texture suppression algorithm, i.e., an optimized relative total variation enhancement model, employs selected parameters to suppress the interlaced texture. Finally, by connecting broken scratch segments with two types of connection algorithms and replacing the Gaussian filter in the standard Canny edge detection algorithm with our proposed framework, we can detect the scratches more robustly. The experimental results show that the proposed framework achieves higher accuracy. Full article
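The paper's optimized threshold function is not reproduced here, but the classic hard/soft/semisoft wavelet thresholding functions it builds on (compare Figure 7) can be sketched directly. The demo applies them to a sparse synthetic coefficient vector with Gaussian noise using the universal threshold; all values are illustrative.

```python
import numpy as np

def hard_thresh(w, t):
    """Keep coefficients above the threshold, zero the rest."""
    return np.where(np.abs(w) > t, w, 0.0)

def soft_thresh(w, t):
    """Shrink every coefficient toward zero by t (biases large ones)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def semisoft_thresh(w, t, alpha=0.5):
    """Compromise: zero small coefficients like hard thresholding, but
    shrink survivors by only alpha * t, reducing the bias of soft."""
    shrunk = np.sign(w) * np.maximum(np.abs(w) - alpha * t, 0.0)
    return np.where(np.abs(w) > t, shrunk, 0.0)

# Demo on a sparse "coefficient" vector with additive Gaussian noise,
# using the universal threshold t = sigma * sqrt(2 ln N) (sigma known here).
rng = np.random.default_rng(3)
clean = np.zeros(1024)
clean[rng.integers(0, 1024, 20)] = rng.uniform(2.0, 5.0, 20)
noisy = clean + 0.3 * rng.standard_normal(1024)
t = 0.3 * np.sqrt(2.0 * np.log(1024))
for name, fn in [("hard", hard_thresh), ("soft", soft_thresh), ("semisoft", semisoft_thresh)]:
    mse = np.mean((fn(noisy, t) - clean) ** 2)
    print(f"{name:8s} MSE: {mse:.4f}")
```

In practice these functions are applied to the dual-tree complex wavelet coefficients of the image rather than to a raw vector, but the shrinkage behavior is the same.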
(This article belongs to the Collection Computational Imaging and Sensing)
Show Figures
Figure 1
<p>Scheme of algorithm (Six images represented by numbers are the results using different parameters in Texture suppression and the area of red box is scratch region).</p>
Full article ">Figure 2
<p>Images of three functions. (<b>a</b>) Prototype I (The red line represents the asymptote, and the blue curve represents Equation (<a href="#FD1-sensors-23-07291" class="html-disp-formula">1</a>)). (<b>b</b>) Prototype II (The red line represents the asymptote, and the blue curve represents Equation (<a href="#FD2-sensors-23-07291" class="html-disp-formula">2</a>)). (<b>c</b>) Threshold functions.</p>
Full article ">Figure 3
<p>Two connection methods. (<b>a</b>) Convex hull of scratches (Points represented by numbers are different pixels). (<b>b</b>) Original image (The area of red box indicates the location of the fracture). (<b>c</b>) Binary image. (<b>d</b>) Expansion of scratches. (<b>e</b>) Corrosion of scratches.</p>
Full article ">Figure 4
<p>System structure and test piece. (<b>a</b>) System structure. (<b>b</b>) Test piece.</p>
Full article ">Figure 5
<p>Different types of scratches. (<b>a</b>) Three types of single scratch. (<b>b</b>) Three types of multiple scratches. (<b>c</b>) Three types of cross scratch. (<b>d</b>) Three types of other scratches.</p>
Full article ">Figure 6
<p>Selection of decomposition level (cropped). (<b>a</b>,<b>b</b>) The wavelet coefficient of the first level at 45° and its 3D grayscale. (<b>c</b>,<b>d</b>) The wavelet coefficient of the second level at 45° and its 3D grayscale. (<b>e</b>,<b>f</b>) The wavelet coefficient of the third level at 45° and its 3D grayscale. (<b>g</b>,<b>h</b>) The wavelet coefficient of the fourth level at 45° and its 3D grayscale.</p>
Full article ">Figure 7
<p>Comparison of denoising methods. (<b>a</b>) Noise added. (<b>b</b>) Hard threshold processing. (<b>c</b>) Soft threshold processing. (<b>d</b>) Semisoft threshold processing.</p>
Full article ">Figure 8
<p>Texture size. (<b>a</b>) Single scratch I. (<b>b</b>) Multiple scratches I. (<b>c</b>) Cross scratch I. (<b>d</b>) Other scratches I.</p>
Full article ">Figure 9
<p>Comparison of texture suppression methods (The black image is an enlargement of the area in the red circle). (<b>a</b>,<b>b</b>) Single scratch I. (<b>c</b>,<b>d</b>) Multiple scratches III. (<b>e</b>,<b>f</b>) Cross scratch I. (<b>g</b>,<b>h</b>) Other scratches II.</p>
Full article ">Figure 10
<p>Comparison of different scratches.</p>
Full article ">Figure 11
<p>The contrast of extracted scratches (From left to right, they are ROI, real value, and actual value; The area of red box represents the scratch region). (<b>a</b>) Single scratch. (<b>b</b>) Multiple scratches. (<b>c</b>) Cross scratch. (<b>d</b>) Other scratches.</p>
Full article ">
20 pages, 5207 KiB  
Systematic Review
The Role of Ultrasound in Cancer and Cancer-Related Pain—A Bibliometric Analysis and Future Perspectives
by Badrinathan Sridharan, Alok Kumar Sharma and Hae Gyun Lim
Sensors 2023, 23(16), 7290; https://doi.org/10.3390/s23167290 - 21 Aug 2023
Cited by 6 | Viewed by 2555
Abstract
Ultrasound has a deep penetrating ability with minimal or no tissue injury, while cancer-mediated complications during diagnosis, therapy, and surgery have become a serious challenge for clinicians and lead to the severity of the primary condition (cancer). The current study highlights the importance of ultrasound imaging and focused ultrasound therapy during cancer diagnosis, pain reduction, guidance for surgical resection of cancer, and the effectiveness of chemotherapy. We performed the bibliometric analysis on research domains involving ultrasound, cancer management, pain, and other challenges (chemotherapy, surgical guidance, and postoperative care), to observe the trend by which the research field has grown over the years and propose a possible future trend. The data was obtained from the Web of Science, processed, and exported as plain text files for analysis in the Bibliometrix R web interface using the Biblioshiny package. A total of 3248 documents were identified from 1100 journal sources. A total of 390 articles were published in 2022, with almost a 100% growth rate from previous years. Based on the various network analysis, we conclude that the outcome of the constant research in this domain will result in better patient care during the management of various diseases, including cancer and other co-morbidities. Full article
Show Figures
Figure 1
<p>Annual scientific production and its growth rate.</p>
Full article ">Figure 2
<p>Top 20 countries with highest publications (<b>a</b>) and citations (<b>b</b>).</p>
Full article ">Figure 3
<p>Top 20 institutes based on article count.</p>
Full article ">Figure 4
<p>Top 20 relevant sources.</p>
Full article ">Figure 5
<p>Three-field plot (<b>a</b>) and Word cloud (<b>b</b>).</p>
Full article ">Figure 6
<p>Co-occurrence of keywords.</p>
Full article ">Figure 7
<p>Co-citation network.</p>
Full article ">Figure 8
<p>Historiographical direct citation network.</p>
Full article ">Figure 9
<p>Collaboration network (<b>a</b>) and collaboration map (<b>b</b>).</p>
Full article ">
14 pages, 12696 KiB  
Communication
Explainable Automated TI-RADS Evaluation of Thyroid Nodules
by Alisa Kunapinun, Dittapong Songsaeng, Sittaya Buathong, Matthew N. Dailey, Chadaporn Keatmanee and Mongkol Ekpanyapong
Sensors 2023, 23(16), 7289; https://doi.org/10.3390/s23167289 - 21 Aug 2023
Cited by 1 | Viewed by 8557
Abstract
A thyroid nodule, a common abnormal growth within the thyroid gland, is often identified through ultrasound imaging of the neck. These growths may be solid- or fluid-filled, and their treatment is influenced by factors such as size and location. The Thyroid Imaging Reporting and Data System (TI-RADS) is a classification method that categorizes thyroid nodules into risk levels based on features such as size, echogenicity, margin, shape, and calcification. It guides clinicians in deciding whether a biopsy or other further evaluation is needed. Machine learning (ML) can complement TI-RADS classification, thereby improving the detection of malignant tumors. When combined with expert rules (TI-RADS) and explanations, ML models may uncover elements that TI-RADS misses, especially when TI-RADS training data are scarce. In this paper, we present an automated system for classifying thyroid nodules according to TI-RADS and assessing malignancy effectively. We use ResNet-101 and DenseNet-201 models to classify thyroid nodules according to TI-RADS and malignancy. By analyzing the models’ last layer using the Grad-CAM algorithm, we demonstrate that these models can identify risk areas and detect nodule features relevant to the TI-RADS score. By integrating Grad-CAM results with feature probability calculations, we provide a precise heat map, visualizing specific features within the nodule and potentially assisting doctors in their assessments. Our experiments show that the utilization of ResNet-101 and DenseNet-201 models, in conjunction with Grad-CAM visualization analysis, improves TI-RADS classification accuracy by up to 10%. This enhancement, achieved through iterative analysis and re-training, underscores the potential of machine learning in advancing thyroid nodule diagnosis, offering a promising direction for further exploration and clinical application. Full article
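The Grad-CAM step described above has a simple core computation: channel weights are the spatial average of the class-score gradients, and the heat map is the ReLU of the weighted sum of feature maps. A minimal numpy sketch (framework-free, on toy activations rather than a real network):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heat map from the last conv layer's activations and the
    gradients of the class score w.r.t. those activations.
    feature_maps, gradients: arrays of shape (K, H, W)."""
    weights = gradients.mean(axis=(1, 2))                 # GAP over space
    cam = np.einsum("k,khw->hw", weights, feature_maps)   # weighted sum
    cam = np.maximum(cam, 0.0)                            # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                             # normalize to [0, 1]
    return cam

# Toy check: channel 0 "fires" on one region and positively drives the class
# score; channel 1 fires elsewhere and pushes the score down.
A = np.zeros((2, 8, 8)); A[0, 2:5, 2:5] = 1.0; A[1, 6:, 6:] = 1.0
G = np.zeros((2, 8, 8)); G[0] = 1.0; G[1] = -1.0
heat = grad_cam(A, G)
print(heat[3, 3], heat[7, 7])   # hot inside the positive region, cold outside
```

In the paper's pipeline the resulting map is upsampled to the ultrasound image size and combined with feature probabilities; that integration is specific to their system and is not sketched here.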
(This article belongs to the Special Issue AI for Biomedical Sensing and Imaging)
Show Figures
Figure 1
<p>ACR TI-RADS 2017: A risk stratification system for thyroid nodules derived from ultrasound findings. This system assigns scores ranging from 1 to 5, where higher scores correspond to an increased probability of malignancy. It plays an essential role in guiding the decision-making process for further evaluation and management of thyroid nodules. Adapted from Tessler [<a href="#B13-sensors-23-07289" class="html-bibr">13</a>].</p>
Full article ">Figure 2
<p>Preparation of a nodule image. The nodule, segmented using StableSeg GANs, was cropped with a 5% margin and then resized to 512 × 512 pixels.</p>
Full article ">Figure 3
<p>ResNet structure (<b>top image</b>). The ResNet model contains multiple residual blocks (<b>bottom image</b>), each with skip connections that allow the input to bypass one or more layers, focusing on more recent features.</p>
Full article ">Figure 4
<p>DenseNet structure. DenseNet promotes dense connectivity between layers, allowing for efficient feature reuse and improved gradient flow, ultimately enhancing model performance. Adapted from Huang et al. [<a href="#B20-sensors-23-07289" class="html-bibr">20</a>].</p>
Full article ">Figure 5
<p>StableSeg GANs. This GANs-based segmentation model utilizes DeepLabV3+ as a generator and ResNet16 as a discriminator. Reprinted from Kunapinun et al. [<a href="#B24-sensors-23-07289" class="html-bibr">24</a>].</p>
Full article ">Figure 6
<p>Overall input and output of the model, illustrating the integration of different image tensors and the multiple classification outputs for TI-RADS.</p>
Full article ">Figure 7
<p>Samples of accurate results. (<b>a</b>,<b>b</b>) Benign lesion exhibiting well-defined mixed solid–cystic composition, classified under TI-RADS-2, thereby denoting a low suspicion for malignancy (1.5%). Both ResNet50 (<b>a</b>) and DenseNet201 (<b>b</b>) models accurately identified mixed solid–cystic components, the hyperechoic solid region, and the smooth border, all of which contributed to a high likelihood of a benign diagnosis.</p>
Full article ">Figure 8
<p>Sample accurate results. (<b>a</b>,<b>b</b>) TI-RADS-5 malignant nodule characterized by hypoechoic composition, lobulated margins, internal micro-calcification, and a taller-than-wide appearance, all indicative of a high suspicion for malignancy (35%), which was subsequently confirmed pathologically. Both the ResNet50 (<b>a</b>) and DenseNet201 (<b>b</b>) models accurately identified the regions of micro-calcification and irregular borders. However, slight variations can be observed in the echogenicity and margin details between the two models.</p>
Full article ">Figure 9
<p>Sample incorrect result. (<b>a</b>,<b>b</b>) Malignant lesion characterized by a well-defined iso to very hypoechoic solid composition and internal micro-calcifications, classified as TI-RADS-5, indicating a high suspicion for malignancy (35%). The ResNet50 model (<b>a</b>) incorrectly suggested a higher likelihood of benignity, failing to detect the very hypoechoic component, in contrast to the DenseNet201 model (<b>b</b>), which indicated a greater possibility of malignancy by successfully identifying this component. Nevertheless, both models were unable to detect the internal punctate echogenic foci.</p>
Full article ">
24 pages, 3528 KiB  
Article
Towards Feasible Solutions for Load Monitoring in Quebec Residences
by Sayed Saeed Hosseini, Benoit Delcroix, Nilson Henao, Kodjo Agbossou and Sousso Kelouwani
Sensors 2023, 23(16), 7288; https://doi.org/10.3390/s23167288 - 21 Aug 2023
Cited by 1 | Viewed by 1190
Abstract
For many years, energy monitoring at the most disaggregate level has been mainly sought through the idea of Non-Intrusive Load Monitoring (NILM). Developing a practical application of this concept in the residential sector can be impeded by the technical characteristics of case studies. Accordingly, several databases, mainly from Europe and the US, have been publicly released to enable basic research to address NILM issues raised by their challenging features. Nevertheless, the resultant enhancements are limited to the properties of these datasets. Such a restriction has caused NILM studies to overlook residential scenarios related to geographically-specific regions and existent practices to face unexplored situations. This paper presents applied research on NILM in Quebec residences to reveal its barriers to feasible implementations. It commences with a concise discussion about a successful NILM idea to highlight its essential requirements. Afterward, it provides a comparative statistical analysis to represent the specificity of the case study by exploiting real data. Subsequently, this study proposes a combinatory approach to load identification that utilizes the promise of sub-meter smart technologies and integrates the intrusive aspect of load monitoring with the non-intrusive one to alleviate NILM difficulties in Quebec residences. A load disaggregation technique is suggested to manifest these complications based on supervised and unsupervised machine learning designs. The former is aimed at extracting overall heating demand from the aggregate one while the latter is designed for disaggregating the residual load. The results demonstrate that geographically-dependent cases create electricity consumption scenarios that can deteriorate the performance of existing NILM methods. From a realistic standpoint, this research elaborates on critical remarks to realize viable NILM systems, particularly in Quebec houses. Full article
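The supervised first stage described above, extracting the overall heating demand from the aggregate load before disaggregating the residual, can be sketched with a simple temperature regression. The synthetic day below (baseboard heating proportional to heating degrees plus bursty domestic load) and the linear model are illustrative assumptions, not the paper's learning design.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic Quebec-like day at 15 min resolution (96 samples).
t_out = -10.0 + 8.0 * np.sin(np.linspace(0, 2 * np.pi, 96))   # outdoor temp, C
hdd = np.maximum(18.0 - t_out, 0.0)                           # heating degrees
heating = 0.4 * hdd + 0.2 * rng.standard_normal(96)           # kW
domestic = 0.5 + rng.choice([0.0, 1.5], 96, p=[0.8, 0.2])     # kW, appliance bursts
aggregate = heating + domestic

# Supervised step: fit aggregate = a * hdd + b. The intercept b absorbs the
# mean domestic load, the fitted a * hdd is the heating estimate, and the
# residual is what remains for the unsupervised disaggregation stage.
Xd = np.stack([hdd, np.ones_like(hdd)], 1)
(a, b), *_ = np.linalg.lstsq(Xd, aggregate, rcond=None)
heat_est = a * hdd
residual = aggregate - heat_est
print(f"fitted slope a = {a:.2f} kW/degree (true value 0.4)")
```

The point of the split is the one the abstract makes: once the temperature-driven heating component is removed, the residual load is closer to the scenarios that existing NILM methods were designed for.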
Show Figures
Figure 1
<p>A simple representation of intrusive and non-intrusive approaches to household load monitoring and their technical means [<a href="#B8-sensors-23-07288" class="html-bibr">8</a>].</p>
Full article ">Figure 2
<p>NILM procedure along with its common choice of learning methods practiced by the fundamental research [<a href="#B6-sensors-23-07288" class="html-bibr">6</a>].</p>
Full article ">Figure 3
<p>Operation modes of a common NILM system regarding its learning phase [<a href="#B6-sensors-23-07288" class="html-bibr">6</a>].</p>
Full article ">Figure 4
<p>An exemplification of the household power consumption profile in Quebec residences within two days in warm seasons at 1 min sampling intervals.</p>
Full article ">Figure 5
<p>An example of the household heating consumption profile in Quebec residences within two days in cold seasons at 1 min sampling intervals.</p>
Full article ">Figure 6
<p>The distribution of public and Quebec data in targeted houses along with the range of the domestic and TH loads from the former.</p>
Full article ">Figure 7
<p>The frequency histogram of public and Quebec data along with relevant domestic and TH shares of the latter.</p>
Full article ">Figure 8
<p>The diurnal behavior of energy consumption in public databases according to data availability and time interval similarity.</p>
Full article ">Figure 9
<p>The diurnal behavior of energy consumption in eight Quebec houses according to data available from the main reading.</p>
Full article ">Figure 10
<p>The seasonal decomposition of public and Quebec data for two fine examples based on the multiplicative model.</p>
Full article ">Figure 11
<p>Correlation between available instances in public and Quebec databases along with similar information for domestic and TH loads of the latter.</p>
Full article ">Figure 12
<p>Exemplification of seasonal behavior of power consumption profiles in a Quebec house at a 15 min sampling rate.</p>
Full article ">Figure 13
<p>An example of the proposed approach in a house with four thermal zones.</p>
Full article ">Figure 14
<p>The block diagram of the NILM practice proposed to tackle the Quebec case [<a href="#B47-sensors-23-07288" class="html-bibr">47</a>].</p>
Full article ">Figure 15
<p>The EWH power profile from Quebec House 1 in (<b>a</b>) 1 and (<b>b</b>) 15 min sampling time.</p>
Full article ">Figure 16
<p>The k-nearest neighbor analysis for one-day data from House 1 with MinPts equal to 4.</p>
Full article ">Figure 17
<p>The reachability plot for one-day data from House 1 based on the OPTICS algorithm.</p>
Full article ">
18 pages, 5323 KiB  
Article
To Bag or Not to Bag? How AudioMoth-Based Passive Acoustic Monitoring Is Impacted by Protective Coverings
by Patrick E. Osborne, Tatiana Alvares-Sanches and Paul R. White
Sensors 2023, 23(16), 7287; https://doi.org/10.3390/s23167287 - 20 Aug 2023
Cited by 2 | Viewed by 2548
Abstract
Bare board AudioMoth recorders offer a low-cost, open-source solution to passive acoustic monitoring (PAM) but need protecting in an enclosure. We were concerned that the choice of enclosure may alter the spectral characteristics of recordings. We focus on polythene bags as the simplest enclosure and assess how their use affects acoustic metrics. Using an anechoic chamber, a series of pure sinusoidal tones from 100 Hz to 20 kHz were recorded on 10 AudioMoth devices and a calibrated Class 1 sound level meter. The recordings were made on bare board AudioMoth devices, as well as after covering them with different bags. Linear phase finite impulse response filters were designed to replicate the frequency response functions between the incident pressure wave and the recorded signals. We applied these filters to ~1000 sound recordings to assess the effects of the AudioMoth and the bags on 19 acoustic metrics. While bare board AudioMoth showed very consistent spectral responses with accentuation in the higher frequencies, bag enclosures led to significant and erratic attenuation inconsistent between frequencies. Few acoustic metrics were insensitive to this uncertainty, rendering index comparisons unreliable. Biases due to enclosures on PAM devices may need to be considered when choosing appropriate acoustic indices for ecological studies. Archived recordings without adequate metadata may potentially produce biased acoustic index values and should be treated cautiously. Full article
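The "linear phase finite impulse response filters designed to replicate the frequency response functions" step can be sketched with a frequency-sampling design: interpolate a measured dB curve onto a dense grid, inverse-FFT the zero-phase response, and window it. The correction curve, tap count, and sample rate below are hypothetical, not the paper's measured FRFs.

```python
import numpy as np

def fir_from_frf(freqs_hz, gain_db, numtaps, fs):
    """Linear-phase FIR filter via frequency sampling: interpolate the
    measured response (in dB) onto a dense grid, inverse-FFT the zero-phase
    spectrum, shift it to be causal, and apply a Hamming window."""
    nfft = 512
    f_grid = np.linspace(0.0, fs / 2.0, nfft // 2 + 1)
    mag = 10.0 ** (np.interp(f_grid, freqs_hz, gain_db) / 20.0)
    h = np.fft.irfft(mag, n=nfft)              # zero-phase impulse response
    h = np.roll(h, numtaps // 2)[:numtaps]     # shift to make it causal
    return h * np.hamming(numtaps)             # taper truncation ripple

# Hypothetical correction: flat up to 8 kHz, then 6 dB of attenuation above
# 12 kHz (e.g., to undo high-frequency accentuation of a bare-board recorder).
fs = 48000
freqs = np.array([0.0, 1000.0, 8000.0, 12000.0, 24000.0])
corr_db = np.array([0.0, 0.0, 0.0, -6.0, -6.0])
h = fir_from_frf(freqs, corr_db, numtaps=101, fs=fs)

# Check the realized response at 2 kHz (passband) and 20 kHz (attenuated).
w = np.fft.rfftfreq(4096, 1.0 / fs)
Hmag = np.abs(np.fft.rfft(h, 4096))
lo_db = 20.0 * np.log10(Hmag[np.argmin(np.abs(w - 2000.0))])
hi_db = 20.0 * np.log10(Hmag[np.argmin(np.abs(w - 20000.0))])
print(f"gain at 2 kHz: {lo_db:.2f} dB, at 20 kHz: {hi_db:.2f} dB")
```

The symmetric impulse response guarantees exactly linear phase, so the filter reshapes the spectrum of a recording without introducing phase distortion, which matters when acoustic indices are computed downstream.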
Show Figures
Figure 1
<p>FRFs of AudioMoth with and without the bag treatments in <a href="#sensors-23-07287-t001" class="html-table">Table 1</a>. The correction factor is the amount that needs to be subtracted from the recording on the AudioMoth to recover the source signal. The AudioMoth FRFs are consistently coded, AM01 to AM10, to allow comparisons between graphs. Note that AM04 has been omitted from experiment B3 due to a recording error.</p>
Full article ">Figure 2
<p>Mean and 95% prediction intervals (PI) for the FRFs for the four experimental treatments. The prediction intervals (PI) show the limits within which 95% of all responses are likely to lie.</p>
Full article ">Figure 3
<p>The left-hand figure shows the difference between dB levels recorded at each frequency tested for experiments B1 and B2, i.e., repeat bagging of each AudioMoth. Ideally, all values should be zero. As there is no logical ordering between trials B1 and B2, the right-hand figure shows the mean and 95% prediction intervals (PI) for the absolute difference between repeat bagging. The dotted lines have been added for visualisation only and cover frequencies for which data were lacking.</p>
Full article ">Figure 4
<p>Comparison of acoustic metric values derived from 998 source signals and as recorded on 10 AudioMoths with no bag. The dashed line is the line of agreement. The metrics are described and colour-coded in <a href="#sensors-23-07287-t002" class="html-table">Table 2</a>.</p>
Full article ">Figure 5
<p>Comparison of acoustic metric values derived from 998 source signals and as recorded on 10 AudioMoths with a B1 bag. The dashed line is the line of agreement. The metrics are described and colour-coded in <a href="#sensors-23-07287-t002" class="html-table">Table 2</a>.</p>
Full article ">Figure 6
<p>Comparison of acoustic metric values derived from 998 source signals and as recorded on 9 AudioMoths (one failed during the experiment) with a B3 bag. The dashed line is the line of agreement. The metrics are described and colour-coded in <a href="#sensors-23-07287-t002" class="html-table">Table 2</a>.</p>
Full article ">Figure 7
<p>Comparison of acoustic metric values derived from 998 source signals and as recorded on 10 AudioMoths with a B4 bag. The dashed line is the line of agreement. The metrics are described and colour-coded in <a href="#sensors-23-07287-t002" class="html-table">Table 2</a>.</p>
Full article ">
24 pages, 6577 KiB  
Article
Enhancing Energy Efficiency and Fast Decision Making for Medical Sensors in Healthcare Systems: An Overview and Novel Proposal
by Ziyad Almudayni, Ben Soh and Alice Li
Sensors 2023, 23(16), 7286; https://doi.org/10.3390/s23167286 - 20 Aug 2023
Cited by 1 | Viewed by 1200
Abstract
In the realm of the Internet of Things (IoT), a network of sensors and actuators collaborates to fulfill specific tasks. As the demand for IoT networks continues to rise, it becomes crucial to ensure the stability of this technology and adapt it for further expansion. Through an analysis of related works, including the feedback-based optimized fuzzy scheduling approach (FOFSA) algorithm, the adaptive task allocation technique (ATAT), and the osmosis load balancing algorithm (OLB), we identify their limitations in achieving optimal energy efficiency and fast decision making. To address these limitations, this research introduces a novel approach to reduce the processing time and improve the energy efficiency of IoT networks. The proposed approach achieves this by efficiently allocating IoT data resources in the Mist layer during the early stages. We apply the approach to our proposed system known as the Mist-based fuzzy healthcare system (MFHS) that demonstrates promising potential to overcome the existing challenges and pave the way for the efficient industrial Internet of healthcare things (IIoHT) of the future. Full article
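The fuzzy-logic core of a system like this (compare Figures 2 to 7) can be sketched as a tiny Mamdani-style inference: triangular memberships on the inputs, min for rule firing, max for aggregation, and centroid defuzzification into a health score. The membership breakpoints and rules below are illustrative, not the paper's.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def health_score(bt, hr):
    """Tiny Mamdani FLS on two vitals: body temperature (C), heart rate (bpm).
    Returns a 0-100 health score (100 = good). Breakpoints are illustrative."""
    # Input memberships.
    bt_normal = tri(bt, 35.5, 36.8, 38.0)
    bt_high   = tri(bt, 37.5, 39.5, 42.0)
    hr_normal = tri(hr, 50.0, 75.0, 100.0)
    hr_high   = tri(hr, 90.0, 140.0, 200.0)
    # Rules: (normal BT AND normal HR) -> good; (high BT OR high HR) -> poor.
    fire_good = min(bt_normal, hr_normal)
    fire_poor = max(bt_high, hr_high)
    # Output memberships on a 0-100 universe, clipped by firing strengths.
    u = np.linspace(0.0, 100.0, 201)
    good = tri(u, 50.0, 100.0, 150.0)    # peaks at 100
    poor = tri(u, -50.0, 0.0, 50.0)      # peaks at 0
    agg = np.maximum(np.minimum(good, fire_good), np.minimum(poor, fire_poor))
    if agg.sum() == 0:
        return 50.0                      # no rule fired: neutral score
    return float((u * agg).sum() / agg.sum())   # centroid defuzzification

print(health_score(36.8, 75.0), health_score(40.0, 150.0))
```

A second FLS of the same shape can then map (data priority, Mist capacity) to a server-allocation decision, which is the structure the abstract describes.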
(This article belongs to the Special Issue Internet of Health Things)
Show Figures

Figure 1

Figure 1
<p>Proposal design.</p>
Full article ">Figure 2
<p>Fuzzy logic system structure.</p>
Full article ">Figure 3
<p>Membership function of BT.</p>
Full article ">Figure 4
<p>Membership function of HR.</p>
Full article ">Figure 5
<p>Membership function of GL.</p>
Full article ">Figure 6
<p>FLS for health score.</p>
Full article ">Figure 7
<p>Membership function of health score.</p>
Full article ">Figure 8
<p>Fuzzy logic system for server allocation.</p>
Full article ">Figure 9
<p>Membership function for data priority.</p>
Full article ">Figure 10
<p>Membership function for Mist capacity.</p>
Full article ">Figure 11
<p>Membership function for server allocation.</p>
Full article ">Figure 12
<p>Fog node clusters.</p>
Full article ">Figure 13
<p>Work sequence.</p>
Full article ">Figure 14
<p>A comparative analysis of energy consumption.</p>
Full article ">Figure 15
<p>A comparative analysis of processing time.</p>
Full article ">
14 pages, 40696 KiB  
Article
Object Detection for Agricultural Vehicles: Ensemble Method Based on Hierarchy of Classes
by Esma Mujkic, Martin P. Christiansen and Ole Ravn
Sensors 2023, 23(16), 7285; https://doi.org/10.3390/s23167285 - 20 Aug 2023
Cited by 2 | Viewed by 1509
Abstract
Vision-based object detection is essential for safe and efficient field operation for autonomous agricultural vehicles. However, one of the challenges in transferring state-of-the-art object detectors to the agricultural domain is the limited availability of labeled datasets. This paper seeks to address this challenge [...] Read more.
Vision-based object detection is essential for safe and efficient field operation for autonomous agricultural vehicles. However, one of the challenges in transferring state-of-the-art object detectors to the agricultural domain is the limited availability of labeled datasets. This paper seeks to address this challenge by utilizing two object detection models based on YOLOv5, one pre-trained on a large-scale dataset for detecting general classes of objects and one trained to detect a smaller number of agriculture-specific classes. To combine the detections of the models at inference, we propose an ensemble module based on a hierarchical structure of classes. Results show that applying the proposed ensemble module increases mAP@0.5 from 0.575 to 0.65 on the test dataset and reduces the misclassification of similar classes detected by different models. Furthermore, by translating detections from base classes to a higher level in the class hierarchy, we can increase the overall mAP@0.5 to 0.701 at the cost of reducing class granularity. Full article
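The hierarchy-based ensemble idea can be sketched as follows: detections from the two models are merged, and overlapping boxes whose classes map to the same parent in the hierarchy are deduplicated, keeping the higher-confidence one. The class names, the hierarchy, and the merge rule below are illustrative assumptions, not the paper's exact module:

```python
# Hypothetical class hierarchy; the paper's Figure 1 defines the real one.
PARENT = {
    "tractor": "vehicle", "car": "vehicle",
    "person": "human", "pedestrian": "human",
}

def iou(b1, b2):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2, y2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter) if inter else 0.0

def merge_detections(dets_a, dets_b, iou_thr=0.5):
    """Combine detections from two models; overlapping boxes that share a
    parent class are treated as duplicates, keeping the more confident one."""
    merged = list(dets_a)
    for d in dets_b:
        duplicate = False
        for i, m in enumerate(merged):
            if (PARENT.get(m["cls"]) == PARENT.get(d["cls"])
                    and iou(m["box"], d["box"]) >= iou_thr):
                duplicate = True
                if d["conf"] > m["conf"]:
                    merged[i] = d
                break
        if not duplicate:
            merged.append(d)
    return merged
```

Translating every kept detection to its parent class afterwards gives the coarser, higher-mAP variant the abstract mentions.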
(This article belongs to the Special Issue Machine Learning and Sensors Technology in Agriculture)
Show Figures

Figure 1

Figure 1
<p>Hierarchy of classes in the dataset.</p>
Full article ">Figure 2
<p>Diagram of the ensemble module.</p>
Full article ">Figure 3
<p>Confusion matrix. (<b>a</b>) Combined detections without ensemble module. (<b>b</b>) Combined detections with ensemble module.</p>
Full article ">Figure 4
<p>Detection examples. (<b>a</b>) Combined models without ensemble module. (<b>b</b>) Combined models with ensemble module.</p>
Full article ">Figure 5
<p>Confusion matrix for subcategory detection with ensemble module.</p>
Full article ">
23 pages, 4908 KiB  
Article
Time-Distributed Framework for 3D Reconstruction Integrating Fringe Projection with Deep Learning
by Andrew-Hieu Nguyen and Zhaoyang Wang
Sensors 2023, 23(16), 7284; https://doi.org/10.3390/s23167284 - 20 Aug 2023
Cited by 2 | Viewed by 1816
Abstract
In recent years, integrating structured light with deep learning has gained considerable attention in three-dimensional (3D) shape reconstruction due to its high precision and suitability for dynamic applications. While previous techniques primarily focus on processing in the spatial domain, this paper proposes a [...] Read more.
In recent years, integrating structured light with deep learning has gained considerable attention in three-dimensional (3D) shape reconstruction due to its high precision and suitability for dynamic applications. While previous techniques primarily focus on processing in the spatial domain, this paper proposes a novel time-distributed approach for temporal structured-light 3D shape reconstruction using deep learning. The proposed approach utilizes an autoencoder network and time-distributed wrapper to convert multiple temporal fringe patterns into their corresponding numerators and denominators of the arctangent functions. Fringe projection profilometry (FPP), a well-known temporal structured-light technique, is employed to prepare high-quality ground truth and depict the 3D reconstruction process. Our experimental findings show that the time-distributed 3D reconstruction technique achieves comparable outcomes with the dual-frequency dataset (p = 0.014) and higher accuracy than the triple-frequency dataset (p = 1.029 × 10<sup>−9</sup>), according to non-parametric statistical tests. Moreover, the proposed approach’s straightforward implementation of a single training network for multiple converters makes it more practical for scientific research and industrial applications. Full article
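In FPP, once the network has predicted the numerator and denominator of the arctangent function, the wrapped phase map is recovered with a quadrant-aware arctangent. A minimal numpy sketch of that standard step (the toy data below merely exercises the recovery, it is not the paper's fringe set):

```python
import numpy as np

def wrapped_phase(numerator, denominator):
    """Recover the wrapped phase map from the predicted numerator and
    denominator of the arctangent function (standard FPP step).
    Values lie in (-pi, pi]."""
    return np.arctan2(numerator, denominator)

# Toy check: for a phase-shifted fringe set the network outputs play the
# roles of N = sum(I_k * sin(...)) and D = sum(I_k * cos(...)); here we
# synthesize N and D directly from a known phase.
phi_true = np.linspace(-3.0, 3.0, 100)
phi = wrapped_phase(np.sin(phi_true), np.cos(phi_true))
```

Phase unwrapping (e.g., via the dual- or triple-frequency schemes of Figure 2) then converts the wrapped phase into absolute phase and depth.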
(This article belongs to the Special Issue Intelligent Sensing and Automatic Device for Industrial Process)
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) Illustration of a 3D reconstruction system and process; (<b>b</b>) an RVBUST RVC 3D Camera employed in this work.</p>
Full article ">Figure 2
<p>Flowchart of the FPP 3D imaging technique with DFFS (<b>a</b>) and TFFS (<b>b</b>) phase-shifting schemes.</p>
Full article ">Figure 3
<p>Exemplars of input–output pairs in (<b>a</b>) DFFS datasets and (<b>b</b>) TFFS datasets.</p>
Full article ">Figure 4
<p>(<b>a</b>,<b>b</b>) Time-distributed concept for DFFS phase-shifting scheme, and (<b>c</b>) the comparable spatial F2ND approach.</p>
Full article ">Figure 5
<p>(<b>a</b>) Time-distributed concept for TFFS phase-shifting scheme, and (<b>b</b>) the comparable spatial F2ND approach.</p>
Full article ">Figure 6
<p>Evaluation of image quality metrics (SSIM and PSNR) for predicted numerators and denominators.</p>
Full article ">Figure 7
<p>3D shape reconstruction of a single-object scene using DFFS datasets.</p>
Full article ">Figure 8
<p>3D shape reconstruction of a scene with multiple objects using DFFS datasets.</p>
Full article ">Figure 9
<p>3D shape reconstruction of a single-object scene using TFFS datasets.</p>
Full article ">Figure 10
<p>3D shape reconstruction of a scene with multiple objects using TFFS datasets.</p>
Full article ">Figure 11
<p>Potential application of TD framework with different output formats in FPP technique.</p>
Full article ">
11 pages, 2184 KiB  
Article
Research on Speech Synthesis Based on Mixture Alignment Mechanism
by Yan Deng, Ning Wu, Chengjun Qiu, Yan Chen and Xueshan Gao
Sensors 2023, 23(16), 7283; https://doi.org/10.3390/s23167283 - 20 Aug 2023
Viewed by 2275
Abstract
In recent years, deep learning-based speech synthesis has attracted a lot of attention from the machine learning and speech communities. In this paper, we propose Mixture-TTS, a non-autoregressive speech synthesis model based on mixture alignment mechanism. Mixture-TTS aims to optimize the alignment information [...] Read more.
In recent years, deep learning-based speech synthesis has attracted a lot of attention from the machine learning and speech communities. In this paper, we propose Mixture-TTS, a non-autoregressive speech synthesis model based on a mixture alignment mechanism. Mixture-TTS aims to optimize the alignment information between text sequences and the mel-spectrogram. Mixture-TTS uses a linguistic encoder based on soft phoneme-level and hard word-level alignment approaches, which explicitly extracts word-level semantic information, and introduces pitch and energy predictors to better predict the rhythmic information of the audio. Specifically, Mixture-TTS introduces a post-net based on a five-layer 1D convolution network to improve the reconstruction capability of the mel-spectrogram. We connect the output of the decoder to the post-net through a residual network. The mel-spectrogram is converted into the final audio by the HiFi-GAN vocoder. We evaluate the performance of Mixture-TTS on the AISHELL3 and LJSpeech datasets. Experimental results show that Mixture-TTS achieves somewhat better alignment between the text sequences and mel-spectrogram, and is able to produce high-quality audio. The ablation studies demonstrate that the structure of Mixture-TTS is effective. Full article
(This article belongs to the Special Issue VOICE Sensors with Deep Learning)
Show Figures

Figure 1

Figure 1
<p>The overall architecture for FastSpeech2.</p>
Full article ">Figure 2
<p>Linguistic encoder architecture for PortaSpeech.</p>
Full article ">Figure 3
<p>The overall architecture for Mixture-TTS.</p>
Full article ">Figure 4
<p>Structure of FFT module and post-net.</p>
Full article ">Figure 5
<p>Comparison of feature predictions from different TTS models.</p>
Full article ">Figure 6
<p>The mel-spectrogram comparison for ablation studies of post-net.</p>
Full article ">
19 pages, 2517 KiB  
Article
Cross-Domain Sentiment Analysis Based on Feature Projection and Multi-Source Attention in IoT
by Yeqiu Kong, Zhongwei Xu and Meng Mei
Sensors 2023, 23(16), 7282; https://doi.org/10.3390/s23167282 - 20 Aug 2023
Cited by 4 | Viewed by 1767
Abstract
Social media is a real-time social sensor to sense and collect diverse information, which can be combined with sentiment analysis to help IoT sensors provide user-demanded favorable data in smart systems. In the case of insufficient data labels, cross-domain sentiment analysis aims to [...] Read more.
Social media is a real-time social sensor to sense and collect diverse information, which can be combined with sentiment analysis to help IoT sensors provide user-demanded favorable data in smart systems. In the case of insufficient data labels, cross-domain sentiment analysis aims to transfer knowledge from the source domain with rich labels to the target domain that lacks labels. Most domain adaptation sentiment analysis methods achieve transfer learning by reducing the domain differences between the source and target domains, but little attention is paid to the negative transfer problem caused by invalid source domains. To address these problems, this paper proposes a cross-domain sentiment analysis method based on feature projection and multi-source attention (FPMA), which not only alleviates the effect of negative transfer through a multi-source selection strategy but also improves the classification performance in terms of feature representation. Specifically, two feature extractors and a domain discriminator are employed to extract shared and private features through adversarial training. The extracted features are optimized by orthogonal projection to help train the classifiers in the multi-source domains. Finally, each text in the target domain is fed into the trained module, and its sentiment tendency is predicted as an attention-weighted combination of the classification results from the multi-source domains. The experimental results on two commonly used datasets showed that FPMA outperformed baseline models. Full article
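One common reading of the orthogonal-projection step (Figure 3) is to strip from each private feature vector its component along the corresponding shared feature vector, leaving the orthogonal residual. A row-wise numpy sketch under that reading; the paper's exact projection may differ:

```python
import numpy as np

def orthogonal_project(private, shared, eps=1e-8):
    """For each row, subtract the projection of the private feature onto
    the shared feature: residual = p - (p.s / s.s) s. The eps term guards
    against all-zero shared vectors. Illustrative, not the paper's code."""
    coef = np.sum(private * shared, axis=1, keepdims=True) / (
        np.sum(shared * shared, axis=1, keepdims=True) + eps)
    return private - coef * shared
```

After projection, each private feature is (numerically) orthogonal to its shared counterpart, so the two carry complementary information into the per-domain classifiers.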
(This article belongs to the Special Issue Innovations in Wireless Sensor-Based Human Activity Recognition)
Show Figures

Figure 1

Figure 1
<p>Diagram of the model structure.</p>
Full article ">Figure 2
<p>Network structure: (<b>a</b>) feature extractor, (<b>b</b>) domain discriminator, and (<b>c</b>) sentiment classifier.</p>
Full article ">Figure 2 Cont.
<p>Network structure: (<b>a</b>) feature extractor, (<b>b</b>) domain discriminator, and (<b>c</b>) sentiment classifier.</p>
Full article ">Figure 3
<p>Orthogonal projection.</p>
Full article ">Figure 4
<p>Refusion process.</p>
Full article ">Figure 5
<p>Domain filtering.</p>
Full article ">Figure 6
<p>Calculation of attention weights.</p>
Full article ">Figure 7
<p>Comparison results of different methods on the Amazon review dataset: (<b>a</b>) Book, (<b>b</b>) DVD, (<b>c</b>) Electronics, and (<b>d</b>) Kitchen.</p>
Full article ">Figure 8
<p>Comparison results of different methods on the online_shopping_10_cats dataset.</p>
Full article ">Figure 9
<p>Distribution comparison chart: (<b>a</b>) probability estimation of the domain discriminator, and (<b>b</b>) weights of the source domains.</p>
Full article ">
23 pages, 24055 KiB  
Article
Automatic Modulation Classification Based on CNN-Transformer Graph Neural Network
by Dong Wang, Meiyan Lin, Xiaoxu Zhang, Yonghui Huang and Yan Zhu
Sensors 2023, 23(16), 7281; https://doi.org/10.3390/s23167281 - 20 Aug 2023
Cited by 7 | Viewed by 2963
Abstract
In recent years, neural network algorithms have demonstrated tremendous potential for modulation classification. Deep learning methods typically take raw signals or convert signals into time–frequency images as inputs to convolutional neural networks (CNNs) or recurrent neural networks (RNNs). However, with the advancement of [...] Read more.
In recent years, neural network algorithms have demonstrated tremendous potential for modulation classification. Deep learning methods typically take raw signals or convert signals into time–frequency images as inputs to convolutional neural networks (CNNs) or recurrent neural networks (RNNs). However, with the advancement of graph neural networks (GNNs), a new approach has been introduced involving transforming time series data into graph structures. In this study, we propose a CNN-transformer graph neural network (CTGNet) for modulation classification, to uncover complex representations in signal data. First, we apply sliding window processing to the original signals, obtaining signal subsequences and reorganizing them into a signal subsequence matrix. Subsequently, we employ CTGNet, which adaptively maps the preprocessed signal matrices into graph structures, and utilize a graph neural network based on GraphSAGE and DMoNPool for classification. Extensive experiments demonstrated that our method outperformed advanced deep learning techniques, achieving the highest recognition accuracy. This underscores CTGNet’s significant advantage in capturing key features in signal data and providing an effective solution for modulation classification tasks. Full article
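The sliding-window preprocessing the abstract describes, reorganizing a 1D signal into a subsequence matrix, can be sketched in a few lines; the window length w and step s follow the notation of the paper's Figure 1, while the signal length below is an arbitrary example:

```python
import numpy as np

def subsequence_matrix(signal, w, s):
    """Slice a 1D signal into overlapping subsequences of length w taken
    every s samples, stacked as rows. This matrix is what the graph-mapping
    network (CTN) would consume; the mapping itself is not shown here."""
    n = (len(signal) - w) // s + 1
    return np.stack([signal[i * s : i * s + w] for i in range(n)])

# e.g. a 128-sample I (or Q) channel with w = 16, s = 8
m = subsequence_matrix(np.arange(128.0), w=16, s=8)
```

Each row then becomes a candidate graph node, with edges learned adaptively by the CNN-transformer module.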
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1

Figure 1
<p>Data preprocessing flow of the <span class="html-italic">I</span> signal and <span class="html-italic">Q</span> signal. <span class="html-italic">w</span> is the length of the sliding window, and <span class="html-italic">s</span> is the step size of the sliding window.</p>
Full article ">Figure 2
<p>The architecture of CTN. This illustration demonstrates the architecture, using the I signal as an example. <math display="inline"><semantics> <msub> <mi>G</mi> <mi>I</mi> </msub> </semantics></math> is the graph structure of the <span class="html-italic">I</span> signal obtained through CTN. Correspondingly, the graph structure of the <span class="html-italic">Q</span> signal is <math display="inline"><semantics> <msub> <mi>G</mi> <mi>Q</mi> </msub> </semantics></math>.</p>
Full article ">Figure 3
<p>The architecture of the multi-head attention. <math display="inline"><semantics> <mi mathvariant="bold">V</mi> </semantics></math>, <math display="inline"><semantics> <mi mathvariant="bold">K</mi> </semantics></math>, and <math display="inline"><semantics> <mi mathvariant="bold">Q</mi> </semantics></math> are the value matrix, key matrix, and query matrix obtained through linear transformation, respectively, and <math display="inline"><semantics> <msub> <mi>n</mi> <mi>h</mi> </msub> </semantics></math> is the number of heads of multi-head attention.</p>
Full article ">Figure 4
<p>The architecture of the CTGNet. <math display="inline"><semantics> <msub> <mi mathvariant="bold">A</mi> <mi>I</mi> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi mathvariant="bold">A</mi> <mi>Q</mi> </msub> </semantics></math> are the adjacency matrices of the graph structure of the <span class="html-italic">I</span> signal and <span class="html-italic">Q</span> signal, respectively. <math display="inline"><semantics> <msub> <mi>F</mi> <mi>j</mi> </msub> </semantics></math> is the node feature. <math display="inline"><semantics> <msub> <mi>T</mi> <mi>I</mi> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>T</mi> <mi>Q</mi> </msub> </semantics></math> are the average feature vectors.</p>
Full article ">Figure 5
<p>Recognition accuracy for the two datasets with different sliding window sizes and step sizes. Here, the sliding window sizes were 8 and 16, and the corresponding step sizes were 4, 6, and 8, and 8, 12, and 16, respectively.</p>
Full article ">Figure 6
<p>Recognition accuracy of ten methods with a change in SNR, where SNR ranged from −20 dB to 18 dB. (<b>a</b>) RML2016.10a; (<b>b</b>) RML2016.10b.</p>
Full article ">Figure 7
<p>Confusion matrices of the different models on RML2016.10a. Each row of the confusion matrix corresponds to the ground truth class, while each column corresponds to the class predicted by the respective models. (<b>a</b>) AvgNet; (<b>b</b>) MCLDNN; (<b>c</b>) Resnet1d; (<b>d</b>) VGG; (<b>e</b>) CNN2d; (<b>f</b>) GRU; (<b>g</b>) LSTM; (<b>h</b>) GAF; (<b>i</b>) ConsCNN; (<b>j</b>) CTGNet.</p>
Full article ">Figure 7 Cont.
<p>Confusion matrices of the different models on RML2016.10a. Each row of the confusion matrix corresponds to the ground truth class, while each column corresponds to the class predicted by the respective models. (<b>a</b>) AvgNet; (<b>b</b>) MCLDNN; (<b>c</b>) Resnet1d; (<b>d</b>) VGG; (<b>e</b>) CNN2d; (<b>f</b>) GRU; (<b>g</b>) LSTM; (<b>h</b>) GAF; (<b>i</b>) ConsCNN; (<b>j</b>) CTGNet.</p>
Full article ">Figure 8
<p>Confusion matrices of the different models on RML2016.10b. Each row of the confusion matrix corresponds to the ground truth class, while each column corresponds to the class predicted by the respective models. (<b>a</b>) AvgNet; (<b>b</b>) MCLDNN; (<b>c</b>) Resnet1d; (<b>d</b>) VGG; (<b>e</b>) CNN2d; (<b>f</b>) GRU; (<b>g</b>) LSTM; (<b>h</b>) GAF; (<b>i</b>) ConsCNN; (<b>j</b>) CTGNet.</p>
Full article ">Figure 8 Cont.
<p>Confusion matrices of the different models on RML2016.10b. Each row of the confusion matrix corresponds to the ground truth class, while each column corresponds to the class predicted by the respective models. (<b>a</b>) AvgNet; (<b>b</b>) MCLDNN; (<b>c</b>) Resnet1d; (<b>d</b>) VGG; (<b>e</b>) CNN2d; (<b>f</b>) GRU; (<b>g</b>) LSTM; (<b>h</b>) GAF; (<b>i</b>) ConsCNN; (<b>j</b>) CTGNet.</p>
Full article ">
10 pages, 1577 KiB  
Communication
A Pre-Training Framework Based on Multi-Order Acoustic Simulation for Replay Voice Spoofing Detection
by Changhwan Go, Nam In Park, Oc-Yeub Jeon and Chanjun Chun
Sensors 2023, 23(16), 7280; https://doi.org/10.3390/s23167280 - 20 Aug 2023
Cited by 1 | Viewed by 1101
Abstract
Voice spoofing attempts to break into a specific automatic speaker verification (ASV) system by forging the user’s voice and can be used through methods such as text-to-speech (TTS), voice conversion (VC), and replay attacks. Recently, deep learning-based voice spoofing countermeasures have been developed. [...] Read more.
Voice spoofing attempts to break into a specific automatic speaker verification (ASV) system by forging the user’s voice and can be used through methods such as text-to-speech (TTS), voice conversion (VC), and replay attacks. Recently, deep learning-based voice spoofing countermeasures have been developed. However, the problem with replay is that it is difficult to construct large datasets because replay requires a physical recording process. To overcome these problems, this study proposes a pre-training framework based on multi-order acoustic simulation for replay voice spoofing detection. Multi-order acoustic simulation utilizes existing clean signal and room impulse response (RIR) datasets to generate audios, which simulate the various acoustic configurations of the original and replayed audios. The acoustic configuration refers to factors such as the microphone type, reverberation, time delay, and noise that may occur between a speaker and microphone during the recording process. We assume that a deep learning model trained on audio that simulates the various acoustic configurations of the original and replayed audios can classify the acoustic configurations of the original and replay audios well. To validate this, we performed pre-training to classify the audio generated by the multi-order acoustic simulation into three classes: clean signal, audio simulating the acoustic configuration of the original audio, and audio simulating the acoustic configuration of the replay audio. We then used the pre-trained weights to initialize the replay voice spoofing detection model and performed fine-tuning on an existing replay voice spoofing dataset. To validate the effectiveness of the proposed method, we evaluated the performance of the conventional method without pre-training and the proposed method using objective metrics, i.e., accuracy and F1-score. As a result, the conventional method achieved an accuracy of 92.94% and an F1-score of 86.92%, while the proposed method achieved an accuracy of 98.16% and an F1-score of 95.08%. Full article
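The multi-order simulation can be sketched as repeated convolution with room impulse responses: one pass approximates the acoustic configuration of an original recording, and a second pass, applied to that result, approximates a replay. The short RIRs below are stand-ins for measured responses, not data from the paper:

```python
import numpy as np

def simulate_orders(clean, rir1, rir2):
    """First-order audio models the original recording (one room/device
    response applied to the clean signal); second-order audio models a
    replay (the first-order audio passed through a second response)."""
    first = np.convolve(clean, rir1)    # clean * rir1  -> "original" class
    second = np.convolve(first, rir2)   # (clean * rir1) * rir2 -> "replay"
    return first, second
```

Pre-training then classifies {clean, first-order, second-order} audio, and the learned weights initialize the replay-detection model before fine-tuning.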
(This article belongs to the Special Issue Sensors in Multimedia Forensics)
Show Figures

Figure 1

Figure 1
<p>Definition of multi-order acoustic simulation for replay voice spoofing detection.</p>
Full article ">Figure 2
<p>Multi-order acoustic simulation-based pre-training framework for replay voice spoofing detection.</p>
Full article ">Figure 3
<p>Architecture of pre-training and replay voice spoofing detection model.</p>
Full article ">Figure 4
<p>(<b>Left</b>) Accuracy of pre-training model on training and validation dataset, (<b>right</b>) losses of pre-training model on training and validation dataset.</p>
Full article ">
9 pages, 2452 KiB  
Communication
Dispersion Turning Attenuation Microfiber for Flowrate Sensing
by Yaqi Tang, Chao Wang, Xuefeng Wang, Meng Jiang, Junda Lao and Dongning Wang
Sensors 2023, 23(16), 7279; https://doi.org/10.3390/s23167279 - 20 Aug 2023
Cited by 3 | Viewed by 1207
Abstract
We demonstrated a new optical fiber modal interferometer (MI) for airflow sensing; the novelty of the proposed structure is that an MI is fabricated from a piece of high attenuation fiber (HAF), which makes the sensitive MI itself also a hotwire. The interferometer is made [...] Read more.
We demonstrated a new optical fiber modal interferometer (MI) for airflow sensing; the novelty of the proposed structure is that an MI is fabricated from a piece of high attenuation fiber (HAF), which makes the sensitive MI itself also a hotwire. The interferometer is made by applying arc-discharge tapering and then flame tapering on a 10 mm length of HAF (2 dB/cm) with both ends spliced to a normal single-mode fiber. When the diameter of the fiber in the processing region is reduced to about 2 μm, the near-infrared dispersion turning point (DTP) can be observed in the interferometer’s transmission spectrum. Due to the absorption of the HAF, the interferometer experiences a large temperature increase under the action of a pump laser. At the same time, the spectrum of the interferometer with a DTP is very sensitive to changes in ambient temperature. Since airflow significantly affects the temperature around the fiber, this thermosensitive interferometer with an integrated heat source is suitable for airflow sensing. An airflow sensor sample with a total length of 31.2 mm was made and pumped by a 980 nm laser with power up to 200 mW. In a comparative experiment with an electrical anemometer, this sensor exhibits a very high airflow sensitivity of −2.69 nm/(m/s) at a flowrate of about 1.0 m/s. The sensitivity can be further improved by enlarging the waist length, increasing the pump power, etc. The optical anemometer, with its extremely high sensitivity and compact size, has the potential to measure low flowrates in constrained microfluidic channels. Full article
(This article belongs to the Topic Advances in Optical Sensors)
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) Schematic diagram of the dispersion turning attenuation microfiber, (<b>b</b>) simulation results of dispersion turning attenuation microfiber for flowrate measurement.</p>
Full article ">Figure 2
<p>(<b>a</b>) Micrographs of fabricated attenuation microfiber and its transmission spectra, (<b>b</b>) fabrication process of attenuation microfiber, and (<b>c</b>) diagram of the sensing system.</p>
Full article ">Figure 3
<p>(<b>a</b>) Wavelength shift in different dips with different pump power, and Si means the sensitivity of the interference dips. (<b>b</b>) Wavelength shifts in interference dips A and A’ of attenuation microfiber and micro SMF with different temperatures.</p>
Full article ">Figure 4
<p>(<b>a</b>) Variation in the transmission spectrum of an attenuation microfiber with different flowrates. (<b>b</b>) Wavelength shifts in interference dips with different flowrates.</p>
Full article ">Figure 5
<p>(<b>a</b>) Variation in the transmission spectrum of an attenuation microfiber with different flowrates. (<b>b</b>) Wavelength shifts in interference dips with different total lengths of an attenuation microfiber.</p>
Full article ">