Sensors, Volume 23, Issue 18 (September-2 2023) – 352 articles

Cover Story (view full-size image): Single-molecule Total Internal Reflection Fluorescence (TIRF) microscopy combined with Lab-on-a-Chip technology offers a powerful approach to the study of biomolecules. Single-molecule TIRF allows for the direct observation of individual molecules in real time. When integrated with Lab-on-a-Chip systems, which manage fluid flow and sample handling, this approach facilitates high-throughput experiments within a tightly controlled microenvironment. In our review, we detail recent implementations of single-molecule TIRF imaging for biological applications. We further explore the combination of Lab-on-a-Chip systems and TIRF imaging. Our analysis concludes with an assessment of the present challenges and potential of fluorescence-based single-molecule imaging techniques, hinting at the promising future of this rapidly advancing field.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive tables of contents of newly released issues.
  • PDF is the official format for published papers, which appear in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 546 KiB  
Article
Generative Adversarial Network (GAN)-Based Autonomous Penetration Testing for Web Applications
by Ankur Chowdhary, Kritshekhar Jha and Ming Zhao
Sensors 2023, 23(18), 8014; https://doi.org/10.3390/s23188014 - 21 Sep 2023
Cited by 3 | Viewed by 4205
Abstract
The web application market has shown rapid growth in recent years. The expansion of Wireless Sensor Networks (WSNs) and the Internet of Things (IoT) has created new web-based communication and sensing frameworks. Current security research relies on source code analysis and manual exploitation of web applications to identify security vulnerabilities, such as Cross-Site Scripting (XSS) and SQL Injection, in these emerging fields. The attack samples generated as part of web application penetration testing on sensor networks can be easily blocked using Web Application Firewalls (WAFs). In this research work, we propose an autonomous penetration testing framework that utilizes Generative Adversarial Networks (GANs). We overcome the limitations of vanilla GANs by using conditional sequence generation, which helps identify key features of XSS attacks. We trained a generative model on attack labels and attack features: the attack features were identified using semantic tokenization, and the attack payloads were generated using a conditional sequence GAN. The generated attack samples can be used to target WAF-protected web applications in an automated manner. The model scales well to large web application platforms and saves the significant effort invested in manual penetration testing.
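The semantic tokenization step mentioned in the abstract can be sketched as follows: label substrings of an attack payload with coarse semantic categories, which can then condition the sequence generator. The token categories, regex patterns, and example payload below are illustrative assumptions, not the authors' implementation.

```python
import re

# Illustrative semantic token categories for XSS payloads (assumed, not the
# paper's exact scheme): HTML tags, attributes, event handlers, JS calls.
TOKEN_PATTERNS = [
    ("TAG_OPEN",  re.compile(r"<\s*\w+")),
    ("TAG_CLOSE", re.compile(r"<\s*/\s*\w+\s*>|/?>")),
    ("EVENT",     re.compile(r"on\w+\s*=")),          # onerror=, onload=, ...
    ("ATTR",      re.compile(r"\w+\s*=\s*\w*")),      # src=x, href=...
    ("JS_CALL",   re.compile(r"\w+\s*\(")),           # alert(, eval(, ...
    ("LITERAL",   re.compile(r"['\"][^'\"]*['\"]")),
    ("OTHER",     re.compile(r"\S")),                 # fallback: single char
]

def tokenize(payload: str):
    """Greedily label substrings of an attack payload with semantic tokens."""
    tokens, pos = [], 0
    while pos < len(payload):
        if payload[pos].isspace():
            pos += 1
            continue
        for name, pat in TOKEN_PATTERNS:
            m = pat.match(payload, pos)
            if m:
                tokens.append((name, m.group()))
                pos = m.end()
                break
    return tokens

toks = tokenize('<img src=x onerror=alert(1)>')
print([t[0] for t in toks])
```

Each payload thus becomes a sequence of semantic labels, which is the kind of conditioning signal a conditional sequence GAN can be trained on.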
Figure 1: Cross-Site Scripting (XSS) vulnerability present in an application exploited by a remote attacker.
Figure 2: Expressions blocked by WAFs and corresponding bypass techniques.
Figure 3: Example of conditional sequences generated from semantic tokens.
Figure 4: GAN-based approach for generating attack payloads that bypass web application firewall (WAF) filters.
Figure 5: Conditional attack sequence generation by semantic tokenization and attack payload validation.
Figure 6: Conditional attack sequence generation.
Figure 7: GAN training loss over 250 epochs.
13 pages, 3297 KiB  
Article
A New Dual-Input Deep Anomaly Detection Method for Early Faults Warning of Rolling Bearings
by Yuxiang Kang, Guo Chen, Hao Wang, Wenping Pan and Xunkai Wei
Sensors 2023, 23(18), 8013; https://doi.org/10.3390/s23188013 - 21 Sep 2023
Cited by 2 | Viewed by 1325
Abstract
To address the problem of low fault diagnosis accuracy caused by insufficient fault samples of rolling bearings, a dual-input deep anomaly detection method requiring zero fault samples is proposed for early fault warning of rolling bearings. First, the main framework for dual-input feature extraction based on a convolutional neural network (CNN) is established, and the two outputs of the main framework are fed into an autoencoder structure for secondary feature extraction. An experience pool structure is introduced to improve the feature learning ability of the network, and a new objective loss function is proposed to learn the network parameters. Then, the vibration acceleration signal is preprocessed with wavelets to obtain multiple signals in different frequency bands, and the two signals in the high-frequency band are two-dimensionally encoded and used as the network input. Finally, unsupervised learning of the model is completed on five actual full-life rolling bearing fault data sets, relying only on samples in a normal state. The verification results show that the proposed method provides fault warnings earlier than RMS, kurtosis, and other features, and its accuracy rate of more than 98% demonstrates a strong capability for early fault warning and anomaly detection.
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)
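The zero-fault-sample idea above can be sketched minimally: learn a reconstruction on normal-state data only, then flag any sample whose reconstruction error exceeds a threshold fitted to the normal errors. The linear (PCA-style) reconstruction and the 3-sigma threshold below are stand-ins for the paper's dual-input CNN autoencoder and custom loss; all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 8))       # healthy-bearing features

# Fit a rank-2 linear reconstruction on normal data (stand-in for the AE).
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:2]                                      # top-2 principal axes

def recon_error(x):
    """Norm of the residual after projecting onto the learned subspace."""
    centered = x - mean
    recon = centered @ basis.T @ basis
    return np.linalg.norm(centered - recon, axis=-1)

errs = recon_error(normal)
threshold = errs.mean() + 3.0 * errs.std()          # 3-sigma warning limit

# Shifted samples stand in for an incipient fault; they exceed the limit.
faulty = rng.normal(0.0, 1.0, size=(5, 8)) + 6.0
print(recon_error(faulty) > threshold)
```

Because the threshold is estimated from normal samples alone, no labeled fault data are needed, which is the essence of the zero-fault-sample setting.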
Figure 1: Dual-input deep anomaly detection.
Figure 2: Rolling bearing vibration data preprocessing process.
Figure 3: ABLT-1A bearing tester and fault bearings. (a) ABLT-1A test rig; (b) No. 2 bearing peel fault; (c) No. 3 bearing cage fault.
Figure 4: Multiple feature comparison results in the IDES dataset. (a–c) No. 1 bearing, (d–f) No. 2 bearing, (g–i) No. 3 bearing, (j–l) No. 4 bearing; within each group, S4 is compared against RMS1, kurtosis, and RMS_d1/RMS_d2, respectively.
Figure 5: Rolling bearing life test bench of IMS.
Figure 6: Multiple feature comparison results in the IMS datasets. (a) S4 and RMS1; (b) S4 and kurtosis; (c) S4 and RMS_d1, RMS_d2.
24 pages, 9447 KiB  
Article
Identification, Taxonomy and Performance Assessment of Type 1 and Type 2 Spin Bowling Deliveries with a Smart Cricket Ball
by René E. D. Ferdinands, Batdelger Doljin and Franz Konstantin Fuss
Sensors 2023, 23(18), 8012; https://doi.org/10.3390/s23188012 - 21 Sep 2023
Viewed by 2612
Abstract
Spin bowling deliveries in cricket, finger spin and wrist spin, are usually (Type 1, T1) performed with forearm supination and pronation, respectively, but can also be executed with the opposite movements (Type 2, T2), specifically forearm pronation and supination, respectively. The aim of this study is to identify the differences between T1 and T2 using an advanced smart cricket ball, as well as to assess the dynamics of T1 and T2. With the hand aligned to the ball’s coordinate system, the angular velocity vector, specifically the x-, y- and z-components of its unit vector and its yaw and pitch angles, was used to identify time windows where T1 and T2 deliveries were clearly separated. Such a window was found 0.44 s before the peak torque, and maximum separation was achieved when plotting the y-component against the z-component of the unit vector, or the yaw angle against the pitch angle. In terms of physical performance, T1 deliveries are easier to bowl than T2; in terms of skill performance, wrist spin deliveries are easier to bowl than finger spin. Because the smart ball allows differentiation between T1 and T2 deliveries, it is an ideal tool for talent identification and for improving performance through more efficient training.
(This article belongs to the Special Issue Wearable Sensors for Human Movement)
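The identification step described in the abstract reduces the spin axis to two angles: normalize the angular velocity vector, then take its yaw and pitch. A minimal sketch with an assumed axis convention (the paper defines its own ball-fixed coordinate system):

```python
import numpy as np

def yaw_pitch(omega):
    """Return (yaw, pitch) in degrees for an angular velocity vector."""
    u = np.asarray(omega, dtype=float)
    u = u / np.linalg.norm(u)                    # unit vector of omega
    yaw = np.degrees(np.arctan2(u[1], u[0]))     # angle in the x-y plane
    pitch = np.degrees(np.arcsin(u[2]))          # elevation from the x-y plane
    return yaw, pitch

# Two hypothetical deliveries with mirrored spin axes separate cleanly
# in the (yaw, pitch) plane, which is the basis of the T1/T2 clustering:
t1 = yaw_pitch([0.5, 0.5, 0.707])
t2 = yaw_pitch([0.5, -0.5, -0.707])
print(t1, t2)
```

Plotting (yaw, pitch) pairs sampled at a fixed time before peak torque is what produces the separated clusters the abstract refers to.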
Figure 1: Directions of forearm and finger rotations when executing finger (F) spin and wrist (W) spin Type 1 and Type 2 deliveries; W1 = wrist spin type 1; F1 = finger spin type 1; W2 = wrist spin type 2; F2 = finger spin type 2; U = ulnar abduction of fingers; R = radial abduction of fingers; S = supination of forearm; P = pronation of forearm (cf. Table 1 and Figure 2).
Figure 2: Schematic of the kinematics of forearm and finger movements when executing finger (F) spin and wrist (W) spin Type 1 (T1) and Type 2 (T2) deliveries; ulnar: left hand palmar view or right hand dorsal view, the fingers rotate the ball clockwise in the ulnar direction; radial: left hand palmar view or right hand dorsal view, the fingers rotate the ball counterclockwise in the radial direction; the terms "same direction" and "opposite direction" refer to the directions in which forearm and fingers move (cf. Table 1 and Figure 1).
Figure 3: x-component of the unit vector of the angular velocity (ω) vs. time; (a) raw data (6 datasets per delivery); (b) average ± 1 standard deviation; W1 = wrist spin type 1, W2 = wrist spin type 2.
Figure 4: (a,b) 4D vector diagrams of the angular velocity (ω) up to the release of the ball; the length of the ω-vectors corresponds to the magnitude of ω, and time is colour-coded (4th dimension); (c,d) plate carrée map projection of the pitch angle against the yaw angle of the ω-vectors; the bubble size denotes the magnitude of the torque imparted on the ball.
Figure 5: Torques against time (for comparison, deliveries with approximately the same TR magnitude are shown); TR = resultant torque; Ts = spin torque; Tp = precession torque.
Figure 6: Angle between the angular velocity (ω) vectors of Type 1 (T1) and Type 2 (T2) plotted against time, for finger spin (F) and wrist spin (W) separately.
Figure 7: Kinematic parameters against time for finger spin and wrist spin Type 1 and 2 deliveries (average ± 1 standard deviation): x-, y-, and z-components of the unit vector of the angular velocity (ω), and Euler angles (yaw, pitch) of the unit vector; the yellow zones indicate time zones unsuitable for distinguishing between Type 1 and Type 2 deliveries.
Figure 8: (a–e) Kinematic parameters (components and Euler angles of the unit vector of ω) against time, for identification of common and uncommon features of the averages of finger (F) spin and wrist (W) spin Type 1 (T1) and Type 2 (T2) deliveries; P = pronation, S = supination; (f) differentiation between T1 and T2 from critical time stamps at which T1 and T2 are maximally separated (8 parameters: ω_y, ω_z, yaw, and pitch of finger spin and wrist spin); p = p-value; log10 = decadic logarithm; s (green) = significant: all 8 p-values < 0.05; n (purple) = not significant: at least one p-value > 0.05.
Figure 9: Pitch angle vs. yaw angle of the angular velocity vector (of the smart ball) and z-component vs. y-component of the unit vector of the angular velocity (ω); the 4 data clusters (taken at 0.44 s before the peak of the torque spike) are identified by 4 ellipses; P = pronation, S = supination.
Figure 10: 3D vector diagram of the unit vectors identified as clusters in Figure 9; each line represents the 3D unit angular velocity vector of a single delivery; there are 6 lines per cluster, demarcating the territories of the spin delivery types; the angular velocity vectors are shown on the cricket ball in pole view (a), with the same view projected on a real cricket ball held by a left hand (b), in seam view (c), and in isometric projection (d); the grip in subfigure (b) is a neutral one, applicable to all 4 deliveries (F1, F2, W1, W2), and not a grip at 0.44 s before the peak torque (approximately 0.6 s before release), as those grips differ (cf. Figure 1).
24 pages, 7551 KiB  
Article
Best Practices for Body Temperature Measurement with Infrared Thermography: External Factors Affecting Accuracy
by Siavash Mazdeyasna, Pejman Ghassemi and Quanzeng Wang
Sensors 2023, 23(18), 8011; https://doi.org/10.3390/s23188011 - 21 Sep 2023
Cited by 6 | Viewed by 4132
Abstract
Infrared thermographs (IRTs) are commonly used during disease pandemics to screen individuals with elevated body temperature (EBT). To address the limited research on external factors affecting IRT accuracy, we conducted benchtop measurements and computer simulations with two IRTs, with or without an external temperature reference source (ETRS) for temperature compensation; the combination of an IRT and an ETRS forms a screening thermograph (ST). We investigated the effects of viewing angle (θ, 0–75°), ETRS set temperature (T_ETRS, 30–40 °C), ambient temperature (T_atm, 18–32 °C), relative humidity (RH, 15–80%), and working distance (d, 0.4–2.8 m). We found that STs exhibited higher accuracy than IRTs alone. Across the tested ranges of T_atm and RH, both IRTs exhibited absolute measurement errors of less than 0.97 °C, while both STs maintained absolute measurement errors of less than 0.12 °C. The optimal T_ETRS for EBT detection was 36–37 °C. When θ was below 30°, the two STs underestimated the calibration source (CS) temperature (T_CS) by less than 0.05 °C. The computer simulations showed absolute temperature differences of up to 0.28 °C and 0.04 °C between estimated and theoretical temperatures for IRTs and STs, respectively, over d of 0.2–3.0 m, T_atm of 15–35 °C, and RH of 5–95%. The results highlight the importance of precise calibration and environmental control for reliable temperature readings and suggest proper ranges for these factors, aiming to enhance current standard documents and best practice guidelines. These insights deepen our understanding of IRT performance and its sensitivity to various factors, thereby facilitating the development of best practices for accurate EBT measurement.
(This article belongs to the Special Issue Human Health and Performance Monitoring Sensors)
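The quantities in the Figure 1 caption (σ, ε, τ, T_refl, T_atm) enter the standard single-band radiometric model: the flux an IRT receives combines the object's emission, reflected surroundings, and atmospheric emission. A sketch of the model and its inversion, with illustrative emissivity and transmittance values (assumptions, not the paper's settings):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def received_radiation(T_obj, T_refl, T_atm, emissivity, tau):
    """Total radiant flux density reaching the thermograph (W/m^2)."""
    return (emissivity * tau * SIGMA * T_obj**4          # object emission
            + (1 - emissivity) * tau * SIGMA * T_refl**4  # reflected term
            + (1 - tau) * SIGMA * T_atm**4)               # atmospheric term

def object_temperature(E_total, T_refl, T_atm, emissivity, tau):
    """Invert the model to recover the object temperature from E_total."""
    emitted = (E_total
               - (1 - emissivity) * tau * SIGMA * T_refl**4
               - (1 - tau) * SIGMA * T_atm**4)
    return (emitted / (emissivity * tau * SIGMA)) ** 0.25

# Round trip: a 37 C (310.15 K) source viewed through tau = 0.99 air at 24 C.
E = received_radiation(310.15, 297.15, 297.15, emissivity=0.98, tau=0.99)
T = object_temperature(E, 297.15, 297.15, emissivity=0.98, tau=0.99)
print(round(T - 273.15, 2))
```

Assuming τ = 1 in the inversion while the true τ is lower is exactly the kind of mismatch the simulations in this paper quantify.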
Figure 1: Principle of the total radiation received by an IRT. [σ: Stefan–Boltzmann constant; ε: emissivity; τ: atmospheric transmittance; T_refl: reflected temperature; T: object temperature; T_atm: atmosphere temperature.]
Figure 2: Schematic of the experimental setup. The red dash-dotted box shows the ST systems.
Figure 3: Accuracy of (a) IRT-1 and (b) IRT-2 without ETRS compensation. Horizontal dashed lines represent the recommended laboratory accuracy, and the rectangular gray area denotes the required evaluation range of 34 °C to 39 °C.
Figure 4: Effects of temperature compensation and T_ETRS on the accuracy of (a) ST-1 and (b) ST-2. Horizontal dashed and solid lines represent the recommended laboratory accuracy and offset errors, respectively; the rectangular gray area denotes the required evaluation range of 34 °C to 39 °C.
Figure 5: Effect of viewing angle on temperature accuracy for the two STs. The error bars represent the standard deviation.
Figure 6: Effects of ambient RH and temperature: T_IRT − T_CS and T_ST − T_CS versus (a) ambient RH with ambient temperature at 24 °C and (b) ambient temperature with ambient RH at 35%. The working distance was kept at 0.8 m.
Figure 7: Effect of the working distance: T_IRT − T_CS and T_ST − T_CS versus working distance with (a) T_ETRS = 37 °C and (b) T_ETRS = 35 °C.
Figure 8: Computer simulation results with T_CS = 37 °C. (a) Estimated τ based on environmental factors; (b) E_e,total received by the IRT based on the estimated τ; (c) calculated T_CS measured by an IRT assuming τ = 1; (d) calculated T_CS measured by an ST assuming τ = 1 and T_ETRS = 35 °C.
Figure 9: Computer simulation of the T_ETRS effect: offset error (T_ST − T_CS) of an ST measuring a CS at 37 °C with different T_ETRS values, assuming τ = 1. (a) T_ETRS = 20 °C; (b) 30 °C; (c) 36 °C; (d) 38 °C; (e) 40 °C; (f) 50 °C.
Figure 10: Computer simulations depicting the effects of (a) ambient RH, (b) ambient temperature, and (c) working distance, with both T_CS and T_ETRS set at 37 °C. If not specified, the default values for ambient temperature and distance were 24 °C and 0.8 m, respectively. For comparison, the ambient RH values were set at 35% and 50% in cases (b,c), mirroring the experimental conditions in Figure 6b and Figure 7a.
18 pages, 10090 KiB  
Article
Remote Sensing Image Scene Classification in Hybrid Classical–Quantum Transferring CNN with Small Samples
by Zhouwei Zhang, Xiaofei Mi, Jian Yang, Xiangqin Wei, Yan Liu, Jian Yan, Peizhuo Liu, Xingfa Gu and Tao Yu
Sensors 2023, 23(18), 8010; https://doi.org/10.3390/s23188010 - 21 Sep 2023
Cited by 4 | Viewed by 2120
Abstract
The scope of this research lies in the combination of pre-trained Convolutional Neural Networks (CNNs) and Quantum Convolutional Neural Networks (QCNNs) applied to Remote Sensing Image Scene Classification (RSISC). Deep learning (DL) is advancing Remote Sensing Image (RSI) analysis by leaps and bounds, and pre-trained CNNs have shown remarkable performance in RSISC. Nonetheless, training CNNs requires massive amounts of annotated sample data. When labeled samples are insufficient, the most common solution is to use CNNs pre-trained on large natural image datasets (e.g., ImageNet). However, these pre-trained CNNs still require a large quantity of labelled data for training, which is often not feasible in RSISC, especially when the target RSIs have different imaging mechanisms from RGB natural images. In this paper, we propose an improved hybrid classical–quantum transfer learning CNN, composed of classical and quantum elements, to classify open-source RSI datasets. The classical part of the model is a ResNet network that extracts useful features from the RSI datasets. To further refine the network performance, a tensor quantum circuit is subsequently employed by tuning parameters on near-term quantum processors. We tested our models on open-source RSI datasets. In our comparative study, the hybrid classical–quantum transfer CNN achieved better performance than other pre-trained-CNN-based RSISC methods with small training samples. Moreover, the proposed algorithm improves classification accuracy while greatly decreasing the number of model parameters and the amount of training data.
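The "tensor quantum circuit as a classification head" idea can be illustrated with a tiny state-vector simulation: classical features are angle-encoded with Ry gates, entangled with a CNOT, rotated by trainable Ry parameters, and read out as Pauli-Z expectations. Two qubits and this specific gate layout are assumptions for illustration; the paper's 4-qubit circuit and gate arrangement differ.

```python
import numpy as np

def ry(theta):
    """Single-qubit Ry rotation gate (real-valued, so states stay real)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, qubit 1 as target (basis order |00>,|01>,|10>,|11>).
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)

def circuit(features, weights):
    """Z expectations of a 2-qubit encode-entangle-rotate circuit."""
    state = np.zeros(4); state[0] = 1.0                          # start in |00>
    state = np.kron(ry(features[0]), ry(features[1])) @ state    # angle encoding
    state = CNOT @ state                                         # entangler
    state = np.kron(ry(weights[0]), ry(weights[1])) @ state      # trainable layer
    probs = state ** 2
    z0 = probs[0] + probs[1] - probs[2] - probs[3]               # <Z> on qubit 0
    z1 = probs[0] - probs[1] + probs[2] - probs[3]               # <Z> on qubit 1
    return np.array([z0, z1])

out = circuit(features=[0.3, 1.1], weights=[0.0, 0.0])
print(out)
```

In a hybrid model, the `features` would come from the frozen ResNet extractor and the `weights` would be trained by gradient descent, just like a small fully connected layer.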
Figure 1: Limitations of RSISC: (a) low type separability, (b) complex variance of scene scales, (c) coexistence of multiple objects. Source images are from the EuroSAT dataset [17].
Figure 2: An illustration of tensor quantum circuits.
Figure 3: A hybrid classical–quantum transferring CNN for RSISC. A ResNet network is used for feature extraction, and tensor quantum circuits are applied as a fully connected layer.
Figure 4: (Top) General scheme for the hybrid classical–quantum transferring CNN for RSISC. (Middle) Detailed scheme for classifying the scene image dataset. (Bottom) Architecture of the tensor quantum circuit with inputs of 4 qubits. Rx(·), Ry(·) and Rz(·) denote Pauli rotation X, Y, Z gates, respectively.
Figure 5: Sample images of all 10 categories in the EuroSAT dataset. Each image is 64 × 64 pixels; each category contains 2000 to 3000 images, for a total of 27,000 geo-referenced images [17].
Figure 6: Sample images of all 30 categories in the AID dataset. Each image is 600 × 600 pixels; each category contains 220 to 420 images, for a total of 10,000 geo-referenced images [34].
Figure 7: Confusion matrix of the proposed method on the EuroSAT dataset with a 10/90 training/test split, using RSIs in the RGB color space.
Figure 8: Relation between the loss function and training steps of the proposed method.
Figure 9: Misclassified images: (a,b) highways misclassified as rivers; (c,d) rivers misclassified as highways.
33 pages, 13213 KiB  
Article
Fault-Tolerant Trust-Based Task Scheduling Algorithm Using Harris Hawks Optimization in Cloud Computing
by Sudheer Mangalampalli, Ganesh Reddy Karri, Amit Gupta, Tulika Chakrabarti, Sri Hari Nallamala, Prasun Chakrabarti, Bhuvan Unhelkar and Martin Margala
Sensors 2023, 23(18), 8009; https://doi.org/10.3390/s23188009 - 21 Sep 2023
Cited by 1 | Viewed by 1521
Abstract
Cloud computing is a distributed computing model that renders services to cloud users around the world. These services must be delivered with high availability and fault tolerance, yet single-point failures remain possible in the cloud paradigm, and one challenge for cloud providers is to schedule tasks effectively so as to avoid failures and earn users' trust in their cloud services. This research proposes a fault-tolerant trust-based task scheduling algorithm that schedules tasks onto suitable virtual machines by calculating priorities for both tasks and VMs. Harris hawks optimization was used to design our scheduler, and CloudSim was used as the simulation tool for the entire experiment. The simulations used synthetic data with different distributions as well as real-time supercomputer worklogs. Finally, we evaluated the proposed approach (FTTATS) against state-of-the-art approaches, i.e., ACO, PSO, and GA. The simulation results show that FTTATS reduces makespan compared with the ACO, PSO, and GA algorithms by 24.3%, 33.31%, and 29.03%, respectively. The rate of failures was reduced compared with ACO, PSO, and GA by 65.31%, 65.4%, and 60.44%, respectively. Trust-based SLA parameters also improved: availability improved over ACO, PSO, and GA by 33.38%, 35.71%, and 28.24%, respectively; the success rate improved over ACO, PSO, and GA by 52.69%, 39.41%, and 38.45%, respectively; and turnaround efficiency improved over ACO, PSO, and GA by 51.8%, 47.2%, and 33.6%, respectively. Full article
(This article belongs to the Section Internet of Things)
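As a rough illustration of the idea sketched in this abstract, a Harris-hawks-style search over task-to-VM assignments can be expressed in a few lines. This is a deliberately simplified sketch (task lengths, VM speeds, and all parameter values are invented for illustration), not the authors' FTTATS algorithm, which additionally incorporates trust and priority calculations:

```python
import random

def makespan(assignment, task_len, vm_mips):
    # Completion time of the most loaded VM under this task-to-VM mapping.
    load = [0.0] * len(vm_mips)
    for task, vm in enumerate(assignment):
        load[vm] += task_len[task] / vm_mips[vm]
    return max(load)

def hho_schedule(task_len, vm_mips, hawks=20, iters=60, seed=1):
    # Hawks are candidate schedules; they drift toward the best-known
    # schedule (the "rabbit") while a decaying "escaping energy" keeps
    # some random exploration alive in early iterations.
    rng = random.Random(seed)
    n_vm = len(vm_mips)
    pop = [[rng.randrange(n_vm) for _ in task_len] for _ in range(hawks)]
    best = min(pop, key=lambda a: makespan(a, task_len, vm_mips))
    for t in range(iters):
        energy = 2.0 * (1.0 - t / iters)      # decays from 2 toward 0
        for hawk in pop:
            for i in range(len(hawk)):
                if rng.random() < energy / 2.0:
                    hawk[i] = rng.randrange(n_vm)   # explore
                else:
                    hawk[i] = best[i]               # exploit the rabbit
            if makespan(hawk, task_len, vm_mips) < makespan(best, task_len, vm_mips):
                best = list(hawk)
    return best
```

The full HHO update rules distinguish several soft/hard besiege phases driven by the escaping energy; this sketch collapses them into a single explore-or-exploit step.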
Show Figures

Figure 1
<p>Proposed System Architecture.</p>
Full article ">Figure 2
<p>Flow of Proposed FTTA task scheduler.</p>
Full article ">Figure 3
<p>Makespan calculation by D01.</p>
Full article ">Figure 4
<p>Makespan calculation by D02.</p>
Full article ">Figure 5
<p>Makespan calculation by D03.</p>
Full article ">Figure 6
<p>Makespan calculation by D04.</p>
Full article ">Figure 7
<p>Makespan calculation by D05.</p>
Full article ">Figure 8
<p>Makespan calculation by D06.</p>
Full article ">Figure 9
<p>Rate of Failures calculation by D01.</p>
Full article ">Figure 10
<p>Rate of Failures calculation by D02.</p>
Full article ">Figure 11
<p>Rate of Failures calculation by D03.</p>
Full article ">Figure 12
<p>Rate of Failures calculation by D04.</p>
Full article ">Figure 13
<p>Rate of Failures calculation by D05.</p>
Full article ">Figure 14
<p>Rate of Failures calculation by D06.</p>
Full article ">Figure 15
<p>Availability of VMs calculation by D01.</p>
Full article ">Figure 16
<p>Availability of VMs calculation by D02.</p>
Full article ">Figure 17
<p>Availability of VMs calculation by D03.</p>
Full article ">Figure 18
<p>Availability of VMs calculation by D04.</p>
Full article ">Figure 19
<p>Availability of VMs calculation by D05.</p>
Full article ">Figure 20
<p>Availability of VMs calculation by D06.</p>
Full article ">Figure 21
<p>Success rate of VMs calculated by D01.</p>
Full article ">Figure 22
<p>Success rate of VMs calculated by D02.</p>
Full article ">Figure 23
<p>Success rate of VMs calculated by D03.</p>
Full article ">Figure 24
<p>Success rate of VMs calculated by D04.</p>
Full article ">Figure 25
<p>Success rate of VMs calculated by D05.</p>
Full article ">Figure 26
<p>Success rate of VMs calculated by D06.</p>
Full article ">Figure 27
<p>Turnaround efficiency of VMs calculated by D01.</p>
Full article ">Figure 28
<p>Turnaround efficiency of VMs calculated by D02.</p>
Full article ">Figure 29
<p>Turnaround efficiency of VMs calculated by D03.</p>
Full article ">Figure 30
<p>Turnaround efficiency of VMs calculated by D04.</p>
Full article ">Figure 31
<p>Turnaround efficiency of VMs calculated by D05.</p>
Full article ">Figure 32
<p>Turnaround efficiency of VMs calculated by D06.</p>
Full article ">
15 pages, 2040 KiB  
Article
Design of a Sensor-Technology-Augmented Gait and Balance Monitoring System for Community-Dwelling Older Adults in Hong Kong: A Pilot Feasibility Study
by Yang Zhao, Lisha Yu, Xiaomao Fan, Marco Y. C. Pang, Kwok-Leung Tsui and Hailiang Wang
Sensors 2023, 23(18), 8008; https://doi.org/10.3390/s23188008 - 21 Sep 2023
Cited by 2 | Viewed by 1650
Abstract
Routine assessments of gait and balance have been recognized as an effective approach for preventing falls by issuing early warnings and implementing appropriate interventions. However, current limited public healthcare resources cannot meet the demand for continuous monitoring of deteriorations in gait and balance. The objective of this study was to develop and evaluate the feasibility of a prototype surrogate system driven by sensor technology and multi-sourced heterogeneous data analytics, for gait and balance assessment and monitoring. The system was designed to analyze users’ multi-mode data streams collected via inertial sensors and a depth camera while performing a 3-m timed up and go test, a five-times-sit-to-stand test, and a Romberg test, for predicting scores on clinical measurements by physiotherapists. Generalized regression of sensor data was conducted to build prediction models for gait and balance estimations. Demographic correlations with user acceptance behaviors were analyzed using ordinal logistic regression. Forty-four older adults (38 females) were recruited in this pilot study (mean age = 78.5 years, standard deviation [SD] = 6.2 years). The participants perceived that using the system for their gait and balance monitoring was a good idea (mean = 5.45, SD = 0.76) and easy (mean = 4.95, SD = 1.09), and that the system is useful in improving their health (mean = 5.32, SD = 0.83), is trustworthy (mean = 5.04, SD = 0.88), and has a good fit between task and technology (mean = 4.97, SD = 0.84). In general, the participants showed a positive intention to use the proposed system in their gait and balance management (mean = 5.22, SD = 1.10). Demographic correlations with user acceptance are discussed. This study provides preliminary evidence supporting the feasibility of using a sensor-technology-augmented system to manage the gait and balance of community-dwelling older adults. The intervention is validated as being acceptable, viable, and valuable. 
Full article
Show Figures

Figure 1
<p>Schematic diagram of the proposed system architecture.</p>
Full article ">Figure 2
<p>Boxplots of the participants’ perceptions of the proposed system in terms of a positive attitude (ATT), perceived usefulness (PU), perceived ease of use (PEOU), intention to use (ITU), trust (T), and the task–technology fit (TTF).</p>
Full article ">Figure 3
<p>Segmentation of a 3M-TUG task using inertial sensor data and the measurements of step width and step length obtained from Kinect data (using the left and right ankles as two skeleton points).</p>
Full article ">Figure 4
<p>Comparison of the inertial sensor data and the Kinect data (using the left and right ankles as two skeleton points) for a 360-degree turning task.</p>
Full article ">Figure 5
<p>Comparison of the inertial sensor data and the Kinect data (using the left and right ankles as two skeleton points) for the FTSTS task.</p>
Full article ">
19 pages, 39258 KiB  
Article
Simulation and Experimental Verification of Magnetic Field Diffusion at the Launch Load during Electromagnetic Launch
by Yuxin Yang, Qiang Yin, Changsheng Li, Haojie Li and He Zhang
Sensors 2023, 23(18), 8007; https://doi.org/10.3390/s23188007 - 21 Sep 2023
Cited by 1 | Viewed by 1321
Abstract
The unique magnetic field environment during electromagnetic launch imposes higher requirements on the design and protection of the internal electronic system within the launch load. This low-frequency, Tesla-level extreme magnetic field environment is fundamentally distinct from the Earth’s geomagnetic field. The excessive change rate of magnetic flux can readily induce voltage within the circuit, thus disrupting the normal operation of intelligent microchips. Existing simulation methods primarily focus on the physical environments of rails and armatures, making it challenging to precisely compute the magnetic field environment at the load’s location. In this paper, we propose a computational rail model based on the magneto–mechanical coupling model of a railgun. This model accounts for the dynamic current distribution during the launch process and simulates the magnetic flux density distribution at the load location. To validate the model’s accuracy, three-axis magnetic sensors were placed in front of the armature, and the dynamic magnetic field distribution during the launch process was obtained using the projectile-borne-storage testing method. The results indicate that, compared with methods reported in the previous literature, the approach proposed in this paper achieves higher accuracy and agrees more closely with experimental results, providing valuable support for the design and optimization of the launch load. Full article
(This article belongs to the Special Issue Sensors and Extreme Environments)
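The abstract's point that a high rate of change of magnetic flux induces voltage in the load circuitry follows directly from Faraday's law. A tiny numerical sketch, with illustrative values rather than the paper's measured data:

```python
def induced_emf(b_samples, dt, area_m2, turns=1):
    # Faraday's law, V = -N * A * dB/dt, with dB/dt estimated by a
    # forward difference between consecutive flux-density samples.
    return [-turns * area_m2 * (b2 - b1) / dt
            for b1, b2 in zip(b_samples, b_samples[1:])]

# Illustrative values: flux density rising linearly from 0 T to 2 T over
# 10 ms through a 1 cm^2 single-turn loop in the load electronics.
dt = 1e-3                                  # 1 ms sampling interval
b_trace = [0.2 * k for k in range(11)]     # 0.0 ... 2.0 T
emf = induced_emf(b_trace, dt, area_m2=1e-4)
# dB/dt = 200 T/s, so each sample gives |EMF| = 0.02 V per turn
```

Even a centimetre-scale loop sees tens of millivolts per turn under Tesla-level fields changing over milliseconds, which is why the magnetic environment at the load location matters for microchip operation.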
Show Figures

Figure 1
<p>Schematic diagram of in-bore electromagnetic railgun during launch.</p>
Full article ">Figure 2
<p>Equivalent circuit of pulse-forming network.</p>
Full article ">Figure 3
<p>Magneto–mechanical coupling calculation model.</p>
Full article ">Figure 4
<p>Current.</p>
Full article ">Figure 5
<p>Velocity and displacement.</p>
Full article ">Figure 6
<p>Comparison of current line distribution between two moments during launch: (<b>a</b>) 1 ms and (<b>b</b>) 8.1 ms.</p>
Full article ">Figure 7
<p>Sub-regional calculation model of the A&amp;R.</p>
Full article ">Figure 8
<p>Equivalent model using computational method: (<b>a</b>) 1 ms and (<b>b</b>) 8.1 ms.</p>
Full article ">Figure 9
<p>Numerical calculation flow of ‘calculated rail’.</p>
Full article ">Figure 10
<p>Equivalence model at 8.1 ms.</p>
Full article ">Figure 11
<p>Cloud map of magnetic flux density distribution in front of the armature: (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>x</mi> <mi>o</mi> <mi>y</mi> </mrow> </semantics></math>-plane and (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>x</mi> <mi>o</mi> <mi>z</mi> </mrow> </semantics></math>-plane.</p>
Full article ">Figure 12
<p>Magnetic flux density at different points in front of the armature: (<b>a</b>) P1∼P3 and (<b>b</b>) P4∼P9.</p>
Full article ">Figure 13
<p>The change in magnetic flux density at different positions at 9 ms.</p>
Full article ">Figure 14
<p>The change rate of magnetic flux density with distance.</p>
Full article ">Figure 15
<p>Different current inputs: (<b>a</b>) Bi-exponential current and (<b>b</b>) three-segment current.</p>
Full article ">Figure 16
<p>Magnetic flux density calculated by different methods: (<b>a</b>) Bi-exponential current and (<b>b</b>) three-segment current.</p>
Full article ">Figure 17
<p>Methods of the previous literature: (<b>a</b>) surface current and (<b>b</b>) mean rail.</p>
Full article ">Figure 18
<p>Comparison of simulation methods: (<b>a</b>) surface current and (<b>b</b>) mean rail.</p>
Full article ">Figure 19
<p>Comparison of the magnetic flux change rate: (<b>a</b>) different input currents and (<b>b</b>) different equivalent models.</p>
Full article ">Figure 20
<p>The magnetic field environment testing plan for the electromagnetic launch.</p>
Full article ">Figure 21
<p>Magnetic field test system: (<b>a</b>) assembly circuit and (<b>b</b>) after encapsulation.</p>
Full article ">Figure 22
<p>Triaxial magnetic flux density obtained from the experiment.</p>
Full article ">Figure 23
<p>Triaxial simulation values: (<b>a</b>) <span class="html-italic">z</span>-axis, (<b>b</b>) <span class="html-italic">y</span>-axis, and (<b>c</b>) <span class="html-italic">x</span>-axis; (<b>d</b>) comparison of <span class="html-italic">z</span>-axis.</p>
Full article ">
17 pages, 2812 KiB  
Article
A Novel Efficient Dynamic Throttling Strategy for Blockchain-Based Intrusion Detection Systems in 6G-Enabled VSNs
by Lampis Alevizos, Vinh Thong Ta and Max Hashem Eiza
Sensors 2023, 23(18), 8006; https://doi.org/10.3390/s23188006 - 21 Sep 2023
Cited by 2 | Viewed by 1389
Abstract
Vehicular Social Networks (VSNs) have emerged as a new social interaction paradigm, where vehicles can form social networks on the roads to improve the convenience/safety of passengers. VSNs are part of Vehicle to Everything (V2X) services, which is one of the industrial verticals in the coming sixth generation (6G) networks. The lower latency, higher connection density, and near-100% coverage envisaged in 6G will enable more efficient implementation of VSN applications. The purpose of this study is to address the problem of lateral movements by attackers who, having compromised one of the many connected devices and services in a VSN, could go on to attack other devices and vehicles. This challenge is addressed via our proposed Blockchain-based Collaborative Distributed Intrusion Detection (BCDID) system with a novel Dynamic Throttling Strategy (DTS) to detect and prevent attackers’ lateral movements in VSNs. Our experiments showed that the proposed DTS improves the effectiveness of the BCDID system in terms of detection capabilities and handles queries three times faster than the default strategy, with 350k queries tested. We concluded that our DTS can increase transaction processing capacity in the BCDID system and improve its performance while maintaining the integrity of data on-chain. Full article
(This article belongs to the Special Issue Security, Privacy and Trust in 6G Communication Networks)
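To make the throttling idea concrete, here is a generic dynamic-throttling sketch in which the per-window query budget adapts to recently observed demand instead of staying at a fixed default. This is an illustration of the general technique only, not the paper's DTS algorithm, and all limits and window sizes are invented:

```python
import collections

class DynamicThrottle:
    """Generic dynamic throttling sketch: the per-window query budget
    adapts to recent demand rather than staying at a static default."""

    def __init__(self, base_limit=100, window=5):
        self.base_limit = base_limit
        self.history = collections.deque(maxlen=window)

    def current_limit(self):
        # Fall back to the static default until demand has been observed.
        if not self.history:
            return self.base_limit
        avg = sum(self.history) / len(self.history)
        # Grow the budget under sustained load; never drop below the default.
        return max(self.base_limit, int(1.5 * avg))

    def process_window(self, incoming_queries):
        allowed = min(incoming_queries, self.current_limit())
        self.history.append(incoming_queries)
        return allowed
```

A static throttle would keep rejecting queries above the default budget; an adaptive one raises throughput under sustained load, which is the behaviour the paper's experiments quantify against the default strategy.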
Show Figures

Figure 1
<p>Operation processes blueprint.</p>
Full article ">Figure 2
<p>(<b>a</b>) CPU and memory performance; (<b>b</b>) Time to complete and TPS per user group.</p>
Full article ">Figure 3
<p>Peer environment indexing and monitoring.</p>
Full article ">Figure 4
<p>Dynamic throttling algorithm flowchart.</p>
Full article ">Figure 5
<p>PREFER_MSPID_SCOPE_ROUND_ROBIN drawback.</p>
Full article ">Figure 6
<p>(<b>a</b>) CPU and memory performance using D_THROTTLE, with utilization percentages ranging from 0 to 60; (<b>b</b>) Time to complete and TPS per user group using D_THROTTLE. We considered user numbers from 100 to 1000, TPS from 0 to 2200, and transaction counts from 0 to 400 K.</p>
Full article ">Figure 7
<p>(<b>a</b>) Overall time to completion—seconds vs. transactions split; (<b>b</b>) Time to completion per transaction group—seconds vs. transactions split. The horizontal axis shows the number of transactions run in each case, and the vertical axis shows the time in seconds.</p>
Full article ">
18 pages, 15324 KiB  
Article
Intelligent Tapping Machine: Tap Geometry Inspection
by En-Yu Lin, Ju-Chin Chen and Jenn-Jier James Lien
Sensors 2023, 23(18), 8005; https://doi.org/10.3390/s23188005 - 21 Sep 2023
Cited by 1 | Viewed by 1666
Abstract
Currently, the majority of industrial metal processing involves the use of taps for cutting. However, existing tap machines require relocation to specialized inspection stations and only assess the condition of the cutting edges for defects. They do not evaluate the quality of the cutting angles or the amount of removed material. Machine vision, a key component of smart manufacturing, is commonly used for visual inspection. Taps are employed for processing various materials. Traditional tap replacement relies on the technician’s accumulated empirical experience to determine the service life of the tap. Therefore, we propose the use of visual inspection of the tap’s external features to determine whether replacement or regrinding is needed. We examined the bearing surface of the tap and utilized single images to identify the cutting angle, clearance angle, and cone angles. By inspecting the side of the tap, we calculated the wear of each cusp. This inspection process can facilitate the development of a tap life system, allowing for the estimation of the durability and wear of taps and nuts made of different materials. Statistical analysis can be employed to predict the lifespan of taps in production lines. The experimental error is 16 μm. Wear from tapping 60 times is equivalent to 8 s of electric grinding. We introduce a parameter, the thread removal quantity, which, to our knowledge, has not been proposed previously. Full article
(This article belongs to the Section Physical Sensors)
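The angle measurements described above (cutting angle, clearance angle, cone angles) ultimately reduce to computing the angle between two rays through detected pixel coordinates, such as the center, acute-angle, and obtuse-angle points found during the surface inspection. A minimal sketch of that geometric step, with made-up points rather than the paper's detection pipeline:

```python
import math

def angle_deg(p_center, p_a, p_b):
    # Angle at p_center between the rays toward p_a and p_b,
    # computed from pixel coordinates via the dot product.
    v1 = (p_a[0] - p_center[0], p_a[1] - p_center[1])
    v2 = (p_b[0] - p_center[0], p_b[1] - p_center[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))
```

In practice the three points would come from the edge-detection and corner-identification stages shown in Figures 7 and 8; the conversion from pixel angle to physical angle is direct because angles are invariant under uniform image scaling.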
Show Figures

Figure 1
<p>Tap surface detection target; (<b>a</b>) Tap placement inspection; (<b>b</b>) Tap cutting angle and clearance angle requirements.</p>
Full article ">Figure 2
<p>Tap side detection target; (<b>a</b>) Illustrations of the guide cone angle, cutting cone angle, and flute cone angle; (<b>b</b>) Illustrations of tooth peak height and tooth length.</p>
Full article ">Figure 3
<p>Experimental environment for tap inspection; (<b>a</b>) Frontal view; (<b>b</b>) Side view.</p>
Full article ">Figure 4
<p>Three types of shooting environments for tapping machine inspection are as follows: (<b>a</b>) Using ring light with a background panel; (<b>b</b>) Using only a backlight panel; (<b>c</b>) Using only a ring light.</p>
Full article ">Figure 5
<p>Flowchart of Tap Inspection; (<b>a</b>) Tap placement inspection; (<b>b</b>) Frontal inspection; (<b>c</b>) Side inspection.</p>
Full article ">Figure 5 Cont.
<p>Flowchart of Tap Inspection; (<b>a</b>) Tap placement inspection; (<b>b</b>) Frontal inspection; (<b>c</b>) Side inspection.</p>
Full article ">Figure 6
<p>Inspection of tap placement: green represents the line from the center to the outer circle, and red the line from the center to the inner circle.</p>
Full article ">Figure 7
<p>Identifying obtuse angles and acute angles; (<b>a</b>) Binary image; (<b>b</b>) Edge detection; (<b>c</b>) Preserve areas of interest; (<b>d</b>) Candidate points of acute and obtuse angles; (<b>e</b>) Identify the target points of acute and obtuse angles.</p>
Full article ">Figure 8
<p>Tap cutting angle measurement results on the surface. The connection between the center and the acute angle is labeled as La, the perpendicular line drawn from the obtuse angle to La is labeled as Lb, and the intersection point between the acute angle and Lb, as well as the cutting edge, is labeled as Lc.</p>
Full article ">Figure 9
<p>The grayscale value of the tool handle is too close to the background (red circle).</p>
Full article ">Figure 10
<p>Side inspection results of the tap; (<b>a</b>) Detection of the guiding section angle, cutting section angle, and form section angle; (<b>b</b>) Tooth peak height inspection.</p>
Full article ">Figure 11
<p>Area measurement of the tooth edge; the orange region represents the cone angle when viewed from the front.</p>
Full article ">Figure 12
<p>Two different materials of tapping tools: (<b>a</b>) Stainless steel; (<b>b</b>) Titanium alloy.</p>
Full article ">Figure 13
<p>Results of different edge detection methods for tap detection; red indicates the missing portion: (<b>a</b>) Original image; (<b>b</b>) Sobel; (<b>c</b>) Roberts; (<b>d</b>) Prewitt; (<b>e</b>) Dual Parity Morphological Gradients; (<b>f</b>) Canny.</p>
Full article ">Figure 14
<p>Detecting the results of the L4 baseline using different linear fitting methods. (<b>a</b>) Original; (<b>b</b>) Least squares; (<b>c</b>) Total least squares; (<b>d</b>) Hough line; (<b>e</b>) Ransac; (<b>f</b>) Weighted least squares.</p>
Full article ">Figure 15
<p>Results of artificial wear on stainless steel 6912: (<b>a</b>) Cone angle; (<b>b</b>) Tooth peak height; (<b>c</b>) Tooth peak area; (<b>d</b>) Amount of thread removal from the nut.</p>
Full article ">Figure 16
<p>Results of artificial wear on titanium 7912p: (<b>a</b>) Cone angle; (<b>b</b>) Tooth peak height; (<b>c</b>) Tooth peak area; (<b>d</b>) Amount of thread removal from the nut.</p>
Full article ">
20 pages, 14767 KiB  
Article
Enhancing Feature Detection and Matching in Low-Pixel-Resolution Hyperspectral Images Using 3D Convolution-Based Siamese Networks
by Chamika Janith Perera, Chinthaka Premachandra and Hiroharu Kawanaka
Sensors 2023, 23(18), 8004; https://doi.org/10.3390/s23188004 - 21 Sep 2023
Cited by 4 | Viewed by 2686
Abstract
Today, hyperspectral imaging plays an integral part in the remote sensing and precision agriculture fields. Identifying the matching key points between hyperspectral images is an important step in tasks such as image registration, localization, object recognition, and object tracking. Low-pixel-resolution hyperspectral imaging is a recent introduction to the field, bringing benefits such as lower cost and a smaller form factor compared to traditional systems. However, the limited pixel resolution challenges even state-of-the-art feature detection and matching methods, leading to difficulties in generating robust feature matches for images with repeated textures, low textures, low sharpness, and low contrast. Moreover, the narrower optics in these cameras add to the challenges during the feature-matching stage, particularly for images captured during low-altitude flight missions. In order to enhance the robustness of feature detection and matching in low-pixel-resolution images, in this study we propose a novel approach utilizing 3D Convolution-based Siamese networks. Compared to state-of-the-art methods, this approach takes advantage of all the spectral information available in hyperspectral imaging in order to filter out incorrect matches and produce a robust set of matches. The proposed method initially generates feature matches through a combination of Phase Stretch Transformation-based edge detection and SIFT features. Subsequently, a 3D Convolution-based Siamese network is utilized to filter out inaccurate matches, producing a highly accurate set of feature matches. Evaluation of the proposed method demonstrates its superiority over state-of-the-art approaches in cases where they fail to produce feature matches. Additionally, it competes effectively with the other evaluated methods when generating feature matches in low-pixel-resolution hyperspectral images. This research contributes to the advancement of low-pixel-resolution hyperspectral imaging techniques, and we believe it can specifically aid in mosaic generation for low-pixel-resolution hyperspectral images. Full article
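For context on the initial match-generation stage, a classic way to turn SIFT-style descriptors into candidate matches is Lowe's ratio test, sketched here in plain Python over toy 2-D descriptors. The 3D-Convolution Siamese filtering itself is a trained network and is not reproduced here:

```python
import math

def l2(a, b):
    # Euclidean distance between two descriptor vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    # Keep a match only if the nearest descriptor in image B is clearly
    # closer than the second nearest (Lowe's ratio test).
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((l2(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy descriptors: each row stands in for a (normally 128-D) SIFT vector.
desc_a = [[0.0, 0.0], [5.0, 5.0]]
desc_b = [[0.1, 0.0], [5.0, 5.1], [9.0, 9.0]]
kept = ratio_test_matches(desc_a, desc_b)
```

On low-pixel-resolution imagery with repeated textures, this distance-ratio criterion alone admits many false matches, which is exactly the gap the paper's spectral Siamese filter targets.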
Show Figures

Figure 1
<p>Overview of the proposed method.</p>
Full article ">Figure 2
<p>The DJI M600 Pro drone with mounted camera.</p>
Full article ">Figure 3
<p>Intermediate results of feature match generation method: (<b>a</b>) selected image pair from the 750 nm band, (<b>b</b>) PST edge map, (<b>c</b>) random sample 100 detected matches.</p>
Full article ">Figure 4
<p>Proposed network architecture.</p>
Full article ">Figure 5
<p>Training data creation process: red square = <math display="inline"><semantics> <msub> <mi>P</mi> <mi>r</mi> </msub> </semantics></math> window, yellow square = <math display="inline"><semantics> <msub> <mi>P</mi> <mi>y</mi> </msub> </semantics></math> window. (<b>a</b>) matches and (<b>b</b>) non-matches.</p>
Full article ">Figure 6
<p>Four samples from the created dataset: (<b>a</b>) matches and (<b>b</b>) non-matches.</p>
Full article ">Figure 7
<p>(<b>a</b>) Training and validation loss curve and (<b>b</b>) Matthews Correlation Coefficient curve.</p>
Full article ">Figure 8
<p>Evaluation of band selection for initial feature matching for ten image pairs: (<b>a</b>) total matches produced for each band, (<b>b</b>) total matches produced by the proposed method, (<b>c</b>) inlier ratio for each band, (<b>d</b>) mean reprojection error for each band.</p>
Full article ">Figure 9
<p>Selected samples from the datasets. Each figure name refers to the corresponding dataset and image pair in <a href="#sensors-23-08004-t001" class="html-table">Table 1</a>: the base number denotes the dataset and the subscript the image pair.</p>
Full article ">Figure 10
<p>Line graphs showing the SSIM values obtained for each image pair from <span class="html-italic">a</span> to <span class="html-italic">r</span>.</p>
Full article ">Figure 11
<p>(<b>a</b>) Image pair <span class="html-italic">o</span> in Dataset 3 and (<b>b</b>) the same image pair after the PST edge detection.</p>
Full article ">
11 pages, 851 KiB  
Communication
Revealing Long-Term Indoor Air Quality Prediction: An Intelligent Informer-Based Approach
by Hui Long, Jueling Luo, Yalu Zhang, Shijie Li, Si Xie, Haodong Ma and Haonan Zhang
Sensors 2023, 23(18), 8003; https://doi.org/10.3390/s23188003 - 21 Sep 2023
Cited by 3 | Viewed by 2173
Abstract
Indoor air pollution is an urgent issue, posing a significant threat to the health of indoor workers and residents. Individuals engaged in indoor occupations typically spend an average of around 21 h per day in enclosed spaces, while residents spend approximately 13 h indoors on average. Accurately predicting indoor air quality is crucial for the well-being of indoor workers and frequent home dwellers. Despite the development of numerous methods for indoor air quality prediction, the task remains challenging, especially under constraints of limited air quality data collection points. To address this issue, we propose a neural network capable of capturing time dependencies and correlations among data indicators, which integrates the Informer model with an MLP-based data-correlation feature extractor. In our experiments, we employ the Informer model to predict indoor air quality in an industrial park in Changsha, Hunan Province, China. The model utilizes indoor and outdoor temperature, humidity, and outdoor particulate matter (PM) values to forecast future indoor particle levels. Experimental results demonstrate the superiority of the Informer model over other methods for both long-term and short-term indoor air quality predictions. The model we propose holds significant implications for safeguarding personal health and well-being, as well as advancing indoor air quality management practices. Full article
(This article belongs to the Section Environmental Sensing)
Show Figures

Figure 1
<p>Diagram of the Informer Architecture.</p>
Full article ">Figure 2
<p>The predicted variations of indoor PM values exhibit significant fluctuations, with numerous outliers. Overall, the model closely approximates the true distribution trends in the following ways: (<b>a</b>) The model demonstrates a relatively close fit to the data but fails to predict sudden spikes in certain trends; (<b>b</b>) The model nearly perfectly predicts the data; (<b>c</b>) The model accurately forecasts the overall trends, but lags in predicting specific time steps; (<b>d</b>) The model correctly predicts the trends but encounters significant fluctuations, struggling to handle extreme outlier points.</p>
Full article ">
17 pages, 7422 KiB  
Article
Effect of Ambient Environment on Laser Reduction of Graphene Oxide for Applications in Electrochemical Sensing
by Abdullah A. Faqihi, Neil Keegan, Lidija Šiller and John Hedley
Sensors 2023, 23(18), 8002; https://doi.org/10.3390/s23188002 - 21 Sep 2023
Cited by 1 | Viewed by 1507
Abstract
Electrochemical sensors play an important role in a variety of applications. With the potential for enhanced performance, much of the focus has been on developing nanomaterials, in particular graphene, for such sensors. Recent work has looked towards laser scribing technology for the reduction of graphene oxide as an easy and cost-effective option for sensor fabrication. This work develops that approach by assessing the quality of sensors produced under different ambient atmospheres during the laser scribing process. The graphene oxide was reduced using a laser writing system in a range of atmospheres, and the sensors were characterised using Raman spectroscopy, XPS, and cyclic voltammetry. Although exhibiting a slightly higher defect density, sensors fabricated under argon and nitrogen atmospheres showed the highest average electron transfer rates of approximately 2 × 10−3 cm s−1. Issues of sensor reproducibility using this approach are discussed. Full article
(This article belongs to the Special Issue Research Progress in Electrochemical Aptasensors and Biosensors)
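The linear peak-current-versus-square-root-of-scan-rate fits reported for these sensors (Figure 8) follow the standard Randles–Ševčík relation for a reversible redox couple at 25 °C. A small sketch with illustrative parameter values, not the paper's measured data:

```python
import math

def randles_sevcik_peak_current(n, area_cm2, conc_mol_per_cm3,
                                d_cm2_per_s, scan_rate_v_per_s):
    # i_p = 2.69e5 * n^(3/2) * A * C * sqrt(D * v), current in amperes;
    # valid for a reversible couple at 25 degrees C.
    return (2.69e5 * n ** 1.5 * area_cm2 * conc_mol_per_cm3
            * math.sqrt(d_cm2_per_s * scan_rate_v_per_s))

# Illustrative values: one-electron couple, 3 mm diameter sensing window,
# 5 mM redox probe, D = 7e-6 cm^2/s.
area = math.pi * 0.15 ** 2           # cm^2
ip_slow = randles_sevcik_peak_current(1, area, 5e-6, 7e-6, 0.01)
ip_fast = randles_sevcik_peak_current(1, area, 5e-6, 7e-6, 0.04)
```

Because i_p scales with the square root of the scan rate, quadrupling the scan rate doubles the peak current, which is why the current-versus-√v plots in Figure 8b,d are straight lines through the data.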
Show Figures

Figure 1
<p>Schematic of the hardware setup used for sensor fabrication. (A) A LightScribe DVD writer is contained within (B) an enclosed chamber. The gas type and pressure are controlled via (C) a rotary vane vacuum pump, (D) gas cylinders, (E) pressure gauge and (F) stop valves. (G) A PC is used to control the image writing process.</p>
Full article ">Figure 2
<p>Fabrication of the rGO electrochemical sensors: (<b>a</b>) PET film is glued onto a LightScribe DVD; (<b>b</b>) 15 mL of GO solution is added and left to dry; (<b>c</b>) DVD is placed within a chamber-enclosed LightScribe DVD writer, the required atmosphere is set, and the electrode pattern is written onto the GO film reducing it to the darker coloured rGO; (<b>d</b>) upon completion, the PET film is removed and cut into corresponding sensors; (<b>e</b>) copper tape is attached to enable an electrical connection; (<b>f</b>) sensor is encapsulated in Kapton tape with a 3 mm diameter sensing window. The inset is an SEM image of the rGO surface.</p>
Full article ">Figure 3
<p>SEM images of the starting graphene oxide material (GO) and the reduced graphene oxide (rGO) produced via laser heating of the GO in air, under vacuum, nitrogen and argon atmospheres.</p>
Full article ">Figure 4
<p>Examples of a (<b>a</b>) Raman scan of GO and a (<b>b</b>) Raman scan of rGO. Three components, corresponding to the D, G and 2D peaks of graphene were fit to each profile.</p>
Figure 5
<p>Relative intensities of the (<b>a</b>) D peak compared to the G peak and the (<b>b</b>) 2D peak compared to the G peak of 12 GO and 9 rGO (per environment) samples. The standard deviations of the measurements are indicated by the error bars.</p>
Figure 6
<p>Examples of XPS spectra of the carbon peak from (<b>a</b>) GO and (<b>b</b>) rGO. Four components were fitted to each spectrum and, subsequently, attributed to a specific bonding type.</p>
Figure 7
<p>(<b>a</b>) Percentage abundance of carbon, nitrogen and oxygen obtained from XPS survey scans of the samples; (<b>b</b>) bond-type contribution to the C peak in each XPS spectrum. The standard deviations of the measurements (n = 8 for GO, n = 6 for rGO (per environment)) are indicated by the error bars.</p>
Figure 8
<p>(<b>a</b>,<b>c</b>) Peak separation and (<b>b</b>,<b>d</b>) peak anodic (circle) and cathodic (square) currents for a range of cyclic voltammetry scan rates performed on the sensors. Outer sphere potentials are provided in (<b>a</b>,<b>b</b>), whilst inner sphere scans are shown in (<b>c</b>,<b>d</b>). Second-order best fit polynomials are shown for (<b>a</b>,<b>c</b>) peak separation, whilst linear fits (with R<sup>2</sup> &gt; 0.99 for all cases) are shown for the (<b>b</b>,<b>d</b>) current versus square root of the scan rate.</p>
Figure A1
<p>Outer sphere redox scans of 3 replicas of the sensors (plotted in red, green and blue respectively) fabricated in (<b>a</b>) air, (<b>b</b>) vacuum, (<b>c</b>) nitrogen, and (<b>d</b>) argon atmospheres.</p>
Figure A2
<p>Inner sphere redox scans of 3 replicas of the sensors (plotted in red, green and blue respectively) fabricated in (<b>a</b>) air, (<b>b</b>) vacuum, (<b>c</b>) nitrogen, and (<b>d</b>) argon atmospheres.</p>
15 pages, 4788 KiB  
Article
Deep-Learning-Aided Evaluation of Spondylolysis Imaged with Ultrashort Echo Time Magnetic Resonance Imaging
by Suraj Achar, Dosik Hwang, Tim Finkenstaedt, Vadim Malis and Won C. Bae
Sensors 2023, 23(18), 8001; https://doi.org/10.3390/s23188001 - 21 Sep 2023
Cited by 4 | Viewed by 1847
Abstract
Isthmic spondylolysis results in fracture of pars interarticularis of the lumbar spine, found in as many as half of adolescent athletes with persistent low back pain. While computed tomography (CT) is the gold standard for the diagnosis of spondylolysis, the use of ionizing radiation near reproductive organs in young subjects is undesirable. While magnetic resonance imaging (MRI) is preferable, it has lowered sensitivity for detecting the condition. Recently, it has been shown that ultrashort echo time (UTE) MRI can provide markedly improved bone contrast compared to conventional MRI. To take UTE MRI further, we developed supervised deep learning tools to generate (1) CT-like images and (2) saliency maps of fracture probability from UTE MRI, using ex vivo preparation of cadaveric spines. We further compared quantitative metrics of the contrast-to-noise ratio (CNR), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) between UTE MRI (inverted to make the appearance similar to CT) and CT and between CT-like images and CT. Qualitative results demonstrated the feasibility of successfully generating CT-like images from UTE MRI to provide easier interpretability for bone fractures thanks to improved image contrast and CNR. Quantitatively, the mean CNR of bone against defect-filled tissue was 35, 97, and 146 for UTE MRI, CT-like, and CT images, respectively, being significantly higher for CT-like than UTE MRI images. For the image similarity metrics using the CT image as the reference, CT-like images provided a significantly lower mean MSE (0.038 vs. 0.0528), higher mean PSNR (28.6 vs. 16.5), and higher SSIM (0.73 vs. 0.68) compared to UTE MRI images. Additionally, the saliency maps enabled quick detection of the location with probable pars fracture by providing visual cues to the reader. 
This proof-of-concept study is limited to the data from ex vivo samples, and additional work in human subjects with spondylolysis would be necessary to refine the models for clinical use. Nonetheless, this study shows that the utilization of UTE MRI and deep learning tools could be highly useful for the evaluation of isthmic spondylolysis. Full article
(This article belongs to the Special Issue Multi-Modal Data Sensing and Processing)
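The abstract above compares images using MSE, PSNR, and the contrast-to-noise ratio (CNR). A minimal sketch of these standard metrics, computed on synthetic stand-ins for a reference CT image and a synthesized CT-like image (the ROI locations and the noise model are assumptions, not the study's data):

```python
import numpy as np

# Standard image-comparison metrics on synthetic normalized images.
rng = np.random.default_rng(0)
ct = rng.random((64, 64))                       # "reference" image in [0, 1]
ct_like = ct + rng.normal(0, 0.05, ct.shape)    # "synthesized" image

mse = np.mean((ct - ct_like) ** 2)              # mean squared error
psnr = 10 * np.log10(1.0 / mse)                 # peak SNR; MAX = 1 here

# Contrast-to-noise ratio between two hypothetical regions of interest,
# against the residual noise between the two images.
bone, defect = ct[:8, :8], ct[-8:, -8:]
cnr = abs(bone.mean() - defect.mean()) / np.std(ct - ct_like)

print(f"MSE={mse:.4f}  PSNR={psnr:.1f} dB  CNR={cnr:.2f}")
```

SSIM, the fourth metric in the abstract, additionally compares local luminance, contrast, and structure and is usually taken from an image library rather than written by hand.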
Show Figures

Figure 1
<p>Experimental pars defect being created on a cadaveric lumbar spine with a bone saw.</p>
Figure 2
<p>Right-sided imaging of a cadaveric spine, showing typical sagittal images of an experimental pars fracture at L4. On the conventional T2 images, (<b>A</b>) the experimental pars defect on the right L5 level is not visible. In contrast, raw (<b>B</b>) and inverted (<b>C</b>) UTE images can depict the fracture (arrow), albeit with a lower contrast compared to the CT image (<b>D</b>).</p>
Figure 3
<p>Architecture of a standard U-Net model (<b>A</b>), which was modified (red box) for image regression (<b>B</b>) and saliency mapping (<b>C</b>). The final output images are shown in blue-filled boxes.</p>
Figure 4
<p>Deep learning model training. We developed an image regression deep learning model that takes (<b>A</b>) UTE MRI images of the lumbar spine as input and compares them against (<b>B</b>) registered CT images as the ground truth. The saliency mapping model also takes (<b>A</b>) UTE MRI images as input and compares them against (<b>C</b>) annotations of pars defect regions of interest (ROI) as the ground truth.</p>
Figure 5
<p>Contrast-to-noise ratios (CNRs) were measured on UTE, CT-like, and CT images of experimental pars defects created on three cadaveric spines. (<b>A</b>) Regions of interest for measuring the mean signal intensity of the bone, pars defect, and paraspinal muscles. (<b>B</b>) CNR of bone vs. pars defect and bone vs. surrounding muscles suggested the lowest CNR for UTE images compared to CT-like and CT images. (<b>C</b>) Width of pars defect was measured on UTE and CT-like images and correlated against the measurements on the reference CT images.</p>
Figure 6
<p>Results of image regression. UTE images (<b>A</b>,<b>E</b>) were used as input and trained on CT images (<b>B</b>,<b>F</b>) to synthesize CT-like images (<b>C</b>,<b>G</b>). Difference between CT and CT-like images is shown as color maps (<b>D</b>,<b>H</b>). Arrows indicate pars defects created experimentally. Compared to (<b>E</b>) UTE image that depicted both the bone and air in the vertebral body (square) and the facet joint (arrowhead) with low signal intensity, (<b>F</b>) CT-like image correctly depicted the bony structures with high signal intensity and the air with low signal intensity. The overall correspondence between CT-like and CT images was excellent, but CT-like images were not as sharp.</p>
Figure 7
<p>Three-dimensional bone renders of (<b>A</b>) CT and (<b>B</b>) CT-like datasets of a cadaveric spine showing the pars defect (arrow).</p>
Figure 8
<p>The saliency mapping model takes in (<b>A</b>,<b>E</b>) UTE images and outputs saliency colormaps. (<b>B</b>,<b>F</b>) Saliency maps overlaid onto UTE images show colored areas covering the pars defects. Saliency maps can also be fused with (<b>C</b>,<b>G</b>) CT-like images to create a 3D fused rendering that highlights pars defects. Note the high similarity between the fused rendering using (<b>C</b>,<b>G</b>) CT-like vs. (<b>D</b>,<b>H</b>) CT images.</p>
17 pages, 2074 KiB  
Article
Development and Analytical Evaluation of a Point-of-Care Electrochemical Biosensor for Rapid and Accurate SARS-CoV-2 Detection
by Mesfin Meshesha, Anik Sardar, Ruchi Supekar, Lopamudra Bhattacharjee, Soumyo Chatterjee, Nyancy Halder, Kallol Mohanta, Tarun Kanti Bhattacharyya and Biplab Pal
Sensors 2023, 23(18), 8000; https://doi.org/10.3390/s23188000 - 20 Sep 2023
Cited by 2 | Viewed by 3156
Abstract
The COVID-19 pandemic has underscored the critical need for rapid and accurate screening and diagnostic methods for potential respiratory viruses. Existing COVID-19 diagnostic approaches face limitations either in terms of turnaround time or accuracy. In this study, we present an electrochemical biosensor that offers nearly instantaneous and precise SARS-CoV-2 detection, suitable for point-of-care and environmental monitoring applications. The biosensor employs a stapled hACE-2 N-terminal alpha helix peptide to functionalize an in situ grown polypyrrole conductive polymer on a nitrocellulose membrane backbone through a chemical process. We assessed the biosensor’s analytical performance using heat-inactivated omicron and delta variants of the SARS-CoV-2 virus in artificial saliva (AS) and nasal swab (NS) samples diluted in a strong ionic solution, as well as clinical specimens with known Ct values. Virus identification was achieved through electrochemical impedance spectroscopy (EIS) and frequency analyses. The assay demonstrated a limit of detection (LoD) of 40 TCID50/mL, with 95% sensitivity and 100% specificity. Notably, the biosensor exhibited no cross-reactivity when tested against the influenza virus. The entire testing process using the biosensor takes less than a minute. In summary, our biosensor exhibits promising potential in the battle against pandemic respiratory viruses, offering a platform for the development of rapid, compact, portable, and point-of-care devices capable of multiplexing various viruses. The biosensor has the capacity to significantly bolster our readiness and response to future viral outbreaks. Full article
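A minimal sketch of the decision rule described in the figure captions for this biosensor, where a sample is called positive when its relative impedance change |dZ|/Z drops more than 3 standard deviations below the mean of the media-control replicates; all numbers below are invented for illustration:

```python
import numpy as np

# Threshold classification at 3 standard deviations below the control mean.
# The control replicate values are made up for this sketch.
control = np.array([0.52, 0.50, 0.55, 0.53, 0.51])   # |dZ|/Z, media only
threshold = control.mean() - 3 * control.std(ddof=1)

def classify(rel_impedance_change: float) -> str:
    """Positive when the response falls below the control-derived threshold."""
    return "positive" if rel_impedance_change < threshold else "negative"

print(classify(0.20))   # well below the controls
print(classify(0.51))   # within the control spread
```

The same rule extends directly to replicate panels like those in the figures: each bar is compared against the dashed threshold line derived from the media controls.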
Show Figures

Figure 1
<p>Schematic presentation of biosensor development. (<b>a</b>) Nitrocellulose membrane (NC) (1 mm × 10 mm in dimension) as the base of the sensor substrate, (<b>b</b>) polymerization of conducting polymer (polypyrrole) on NC membrane, (<b>c</b>) covalent attachment of organic linker (glutaraldehyde), (<b>d</b>) functionalization with lactam stapled SARS-CoV-2 specific peptide, (<b>e</b>) blocking with skim milk protein, and (<b>f</b>) interaction of the SARS-CoV-2 virus with the receptor peptide.</p>
Figure 2
<p>Selective binding of SARS-CoV-2 to a lactam-based stapled hACE-2 functionalized on Ppy substrate on glass slide. The Ppy-coated and glutaraldehyde-linked glass slides were treated with hACE-2 peptide and blocked with skim milk protein before addition of virus or controls. Alexa Fluor 488 fluorophore-tagged peptide was then used to probe virus binding. (<b>a</b>) Artificial saliva without virus spike-in was used as a media control. (<b>b</b>) Heat-attenuated influenza vaccine containing a mix of Influenza A (H1N1, H3N2) and B viruses. (<b>c</b>) SARS-CoV-2 delta variant at concentration of 10<sup>5</sup> virus copies/µL. (<b>d</b>) SARS-CoV-2 omicron variant at concentration of 10<sup>5</sup> virus copies/µL.</p>
Figure 3
<p>Characterization of biosensor and detection of SARS-CoV-2 virus using electrochemical impedance spectroscopy. (<b>a</b>) Characteristic impedance measurement at different stages of sensor development across a range of frequencies. Blue lines: after coating with polypyrrole, red lines: after GA linker addition, purple lines: after hACE2 peptide attachment, and green lines: after blocking with skimmed milk protein. (<b>b</b>) Sensor response in terms of relative impedance change (|dZ|/Z) for different virus concentrations and media control. Green lines are media control; purple, grey, and red lines are virus in artificial saliva at concentrations of 20, 40, and 1000 TCID<sub>50</sub>/mL, respectively. (<b>c</b>) Heatmap for optimization of separation factor between virus and control class. Frequency band corresponds to the optimum separation factor (sf) between media control and different virus concentrations. The greener shades indicate higher sf values while the reddish shades indicate lower sf values. The rows are sorted in descending order based on the sf values of viral RNA copies 20 TCID<sub>50</sub>/mL. (<b>d</b>) Sensor response in terms of relative impedance change (|dZ|/Z) for different virus concentrations and media control (10 representative samples from 20 replicates) separated by a threshold line. Green bars are media control; purple, grey, and red bars are viral RNA copies (20 TCID<sub>50</sub>/mL, 40 TCID<sub>50</sub>/mL, and 1000 TCID<sub>50</sub>/mL, respectively). A dashed black threshold line is drawn at 3 standard deviations below the mean of the control data set.</p>
Figure 4
<p>Classification of virus detection: (<b>a</b>) Relative impedance change (|dZ|/Z) for different virus concentrations and media control. The colors indicate virus concentrations: green (media control), purple (20 TCID<sub>50</sub>/mL), grey (40 TCID<sub>50</sub>/mL), blue (100 TCID<sub>50</sub>/mL), yellow (200 TCID<sub>50</sub>/mL), orange (500 TCID<sub>50</sub>/mL), and red (1000 TCID<sub>50</sub>/mL). sf values are annotated for each box. (<b>b</b>) Limit of detection validation at 40 TCID<sub>50</sub>/mL. Y-axis: relative impedance value (|dZ|/Z). Green and grey bars represent media control and virus, respectively. (<b>c</b>) Comparable sensitivity of virus spiked in nasal swabs in 0.45 M KCl buffer. Y-axis: relative impedance value (|dZ|/Z). Green and grey bars represent media control and virus, respectively. (<b>d</b>) Relative impedance changes for influenza vaccine and media control. y-axis: relative impedance value (|dZ|/Z). Green and red bars represent media control and influenza, respectively. (<b>e</b>) Evaluation of frozen clinical specimens. X-axis represents Ct values from RT-PCR experiments. Each sample is tested in five replicates. The colors of boxes correspond to the different Ct values. sf values are annotated for each box. Threshold lines are denoted by black dashed lines for all figures and represent 3 standard deviations below the mean of control data’s relative impedance change.</p>
27 pages, 1738 KiB  
Review
A Comprehensive Study on Cyber Attacks in Communication Networks in Water Purification and Distribution Plants: Challenges, Vulnerabilities, and Future Prospects
by Muhammad Muzamil Aslam, Ali Tufail, Ki-Hyung Kim, Rosyzie Anna Awg Haji Mohd Apong and Muhammad Taqi Raza
Sensors 2023, 23(18), 7999; https://doi.org/10.3390/s23187999 - 20 Sep 2023
Cited by 8 | Viewed by 2450
Abstract
In recent years, the Internet of Things (IoT) has had a major impact on both industry and academia. Its impact is particularly felt in the industrial sector, where the Industrial Internet of Things (IIoT), also known as Industry 4.0, is revolutionizing manufacturing and production through the fusion of cutting-edge technologies and network-embedded sensing devices. The IIoT is transforming several crucial industries, such as oil and gas, water purification and distribution, energy, and chemicals, by integrating information technology (IT) with industrial control and automation systems. Water is a vital resource for life, yet knowledge of potential cyberattacks and their catastrophic effects on water treatment facilities is still insufficient. Even seemingly insignificant errors can have serious consequences, such as aberrant pH values or fluctuations in the concentration of hydrochloric acid (HCl) in water, which can result in fatalities or serious diseases. The water purification and distribution industry has been the target of numerous hostile cybersecurity attacks, some of which are identified, revealed, and documented in this paper. Our goal is to understand the range of security threats present in this industry. Through the lens of the IIoT, the survey provides a technical investigation covering attack models, actual cases of cyber intrusions in the water sector, the range of security difficulties encountered, and preventative security solutions. We also explore upcoming perspectives, illuminating the predicted advancements and directions in this dynamic subject. For industrial practitioners and aspiring scholars alike, our work is a useful, enlightening, and current resource. We aim to promote a thorough grasp of the cybersecurity landscape in the water industry by combining key insights and igniting collective efforts toward a safe and dependable digital future. Full article
Show Figures

Figure 1
<p>Cyber attacks in various industries [<a href="#B29-sensors-23-07999" class="html-bibr">29</a>] (vulnerabilities).</p>
Figure 2
<p>Paper architecture.</p>
Figure 3
<p>(<b>a</b>) Old architecture of a water distribution system. (<b>b</b>) Architecture of a water purification and distribution system.</p>
Figure 4
<p>An ICS architecture.</p>
Figure 5
<p>Attacker's means of accessing an ICS system.</p>
Figure 6
<p>Attacker's path in the KWC incident.</p>
15 pages, 4217 KiB  
Article
Design of A High-Precision Component-Type Vertical Pendulum Tiltmeter Based on FPGA
by Xin Xu, Zheng Chen, Hong Li, Shigui Ma, Liheng Wu, Wenbo Wang, Yunkai Dong and Weiwei Zhan
Sensors 2023, 23(18), 7998; https://doi.org/10.3390/s23187998 - 20 Sep 2023
Cited by 1 | Viewed by 1344
Abstract
This paper presents a high-precision component-type vertical pendulum tiltmeter based on an FPGA (Field Programmable Gate Array) that improves the utility and reliability of geophysical field tilt observation instruments. The system is designed for rapid deployment and offers flexible and efficient adaptability. It comprises a pendulum body, a triangular platform, a locking motor and sealing cover, a ratiometric measurement bridge, a high-speed ADC, and an FPGA embedded system. The pendulum body is a plumb-bob-type single-suspension wire vertical pendulum capable of measuring ground tilt in two orthogonal directions simultaneously. It is installed on a triangular platform, sealed as a whole, and equipped with a locking motor to withstand a free-fall impact of 2 m. The system utilizes a differential capacitance ratio bridge in the measurement circuit, replacing analog circuits with high-speed AD sampling and FPGA digital signal processing technology. This approach reduces hardware expenses and interferences from active devices. The system also features online compilation functionality for flexible measurement parameter settings, high reliability, ease of use, and rapid deployment without the need for professional technical personnel. The proposed tiltmeter holds significant importance for further research in geophysics. Full article
(This article belongs to the Special Issue Application of FPGA-Based Sensor Systems)
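Phase-sensitive detection, which this design implements digitally in the FPGA, can be sketched in a few lines: multiply the sampled bridge signal by in-phase and quadrature references at the excitation frequency, then low-pass filter. The 1560 Hz reference echoes the frequency mentioned in Figure 10's caption; the sampling rate, amplitude, and noise level below are illustrative assumptions.

```python
import numpy as np

# Minimal digital lock-in (phase-sensitive detection) demodulation.
fs = 100_000                        # sampling rate, Hz (assumed)
f_ref = 1_560                       # reference frequency, Hz
t = np.arange(0, 0.1, 1 / fs)       # 0.1 s = 156 full reference cycles

amplitude = 0.25                    # bridge-imbalance amplitude to recover
noise = 0.05 * np.random.default_rng(1).normal(size=t.size)
signal = amplitude * np.sin(2 * np.pi * f_ref * t) + noise

# Demodulate and average (a crude low-pass filter); the magnitude of the
# in-phase/quadrature pair estimates the bridge output amplitude.
i_comp = 2 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))  # in-phase
q_comp = 2 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))  # quadrature
recovered = np.hypot(i_comp, q_comp)
print(f"recovered amplitude: {recovered:.3f}")
```

Because the multiply-and-average rejects components away from the reference frequency, the same structure also explains the aliasing behavior shown in Figure 10 when the reference is doubled.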
Show Figures

Figure 1
<p>Vertical pendulum inclinometer structure and measurement circuit diagram.</p>
Figure 2
<p>Schematic diagram of a high-precision tiltmeter system based on an FPGA.</p>
Figure 3
<p>Schematic diagram of ground tilt.</p>
Figure 4
<p>Schematic diagram of a vertical pendulum tilt sensor. Note: (<b>a</b>) Schematic diagram of the non-tilted condition. (<b>b</b>) Schematic diagram of the inclined state. (<b>c</b>) Schematic diagram of the three-dimensional structure of the pendulum body.</p>
Figure 5
<p>Measurement circuit of a high-precision component-type vertical pendulum tiltmeter based on an FPGA.</p>
Figure 6
<p>FPGA and ADC control block diagram.</p>
Figure 7
<p>FPGA and ADC acquisition and control flowchart.</p>
Figure 8
<p>Schematic diagram of phase-sensitive detection operation.</p>
Figure 9
<p>Ground tilt record curve. Note: (<b>a</b>) Filtered signal; (<b>b</b>) Signal before filtering.</p>
Figure 10
<p>Schematic diagram of signal limiting and aliasing. Note: (<b>a</b>) Two synthesized signals. (<b>b</b>) Amplitude-limited signal. (<b>c</b>) Reference signal. (<b>d</b>) Phase-sensitive detector output signal when the reference signal is 1560 Hz. (<b>e</b>) Phase-sensitive detector output signal when the reference signal is 3120 Hz.</p>
Figure 11
<p>Schematic diagram of the tilting pendulum system and tilt adjustment triangular platform.</p>
Figure 12
<p>Tiltmeter measurement curve (2–4 April 2023).</p>
Figure 13
<p>(<b>a</b>) Theoretical and observational curve of Sixian Station CH1 on 7 April 2023. (<b>b</b>) Theoretical and observational curve of Sixian Station CH2 on 7 April 2023.</p>
Figure 14
<p>Calibration curve of wide-range linearity on-site.</p>
21 pages, 2476 KiB  
Article
Accurate and Low-Power Ultrasound–Radiofrequency (RF) Indoor Ranging Using MEMS Loudspeaker Arrays
by Chesney Buyle, Lieven De Strycker and Liesbet Van der Perre
Sensors 2023, 23(18), 7997; https://doi.org/10.3390/s23187997 - 20 Sep 2023
Viewed by 1535
Abstract
Accurately positioning energy-constrained devices in indoor environments is of great interest to many professional, care, and personal applications. Hybrid RF–acoustic ranging systems have been shown to be a viable technology in this regard, enabling accurate distance measurements at ultra-low energy costs. However, they often suffer from self-interference due to multipath propagation in indoor environments. We replace the typical single loudspeaker beacons used in these systems with a phased loudspeaker array to improve the signal-to-interference-plus-noise ratio towards the tracked device. Specifically, we optimize the design of a low-cost uniform planar array (UPA) through simulation to achieve the best ranging performance using ultrasonic chirps. Furthermore, we compare the ranging performance of this optimized UPA configuration to a traditional, single-loudspeaker system. Simulations show that vertical phased-array configurations achieve the lowest ranging errors in typical shoe-box environments, which have a limited height relative to their length and width. In these cases, a P50 ranging error of around 3 cm and a P95 ranging error below 30 cm were achieved. Compared to a single-speaker system, a 10 × 2 vertical phased array was able to lower the P80 and P95 errors by up to an order of magnitude. Full article
(This article belongs to the Special Issue Advanced Technology in Acoustic Signal Processing)
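The core of the RF–acoustic ranging scheme is estimating an acoustic propagation delay from a known ultrasonic chirp. A toy sketch of the idea (the sample rate, chirp band, and distance are invented, and the real system correlates only a short sampled fragment of the still-propagating chirp rather than the full signal):

```python
import numpy as np

# Cross-correlating the delayed chirp against the known template recovers
# the propagation delay, and hence the beacon-to-device distance.
fs = 100_000                            # sample rate, Hz (assumed)
c = 343.0                               # speed of sound, m/s
t = np.arange(0, 0.03, 1 / fs)          # 30 ms linear chirp
f0, f1 = 20_000, 40_000                 # ultrasonic band, Hz (assumed)
template = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * t[-1]) * t**2))

distance_true = 1.715                   # m -> 5 ms propagation delay
delay = int(round(distance_true / c * fs))
received = np.concatenate([np.zeros(delay), template])[: template.size]

# The lag maximizing the cross-correlation estimates the propagation delay.
corr = np.correlate(received, template, mode="full")
lag = corr.argmax() - (template.size - 1)
print(f"estimated distance: {lag / fs * c:.3f} m")
```

A chirp's sharp autocorrelation peak is what makes this robust: even a truncated or fragmentary recording correlates strongly only at the correct alignment.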
Show Figures

Figure 1
<p>Visual representation of the low-power RF–acoustic ranging approach from [<a href="#B30-sensors-23-07997" class="html-bibr">30</a>]. Loudspeaker beacon <span class="html-italic">B</span> initiates a ranging measurement by broadcasting a linear ultrasonic chirp with length <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>t</mi> <mrow> <mi>c</mi> <mi>h</mi> <mi>i</mi> <mi>r</mi> <mi>p</mi> </mrow> </msub> </mrow> </semantics></math> over a frequency interval <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>f</mi> </mrow> </semantics></math>. At the end of the chirp transmission, beacon <span class="html-italic">B</span> sends out an RF signal, signaling tracked device <span class="html-italic">T</span> to immediately wake up and sample a part of the still propagating chirp signal over a period <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>t</mi> <mrow> <mi>R</mi> <mi>X</mi> </mrow> </msub> </mrow> </semantics></math>. Depending on its distance to loudspeaker beacon <span class="html-italic">B</span>, the tracked device’s fragment will contain a specific frequency interval of the original chirp. Cross-correlation is used to find out which frequency interval was seen by tracked device <span class="html-italic">T</span>, resulting in a propagation delay and consequently the distance.</p>
Figure 2
<p>Beam patterns of the three ULA configurations with different interelement spacing. The target steering angle is set to 50<math display="inline"><semantics> <msup> <mrow/> <mo>∘</mo> </msup> </semantics></math>. The color of each beam pattern indicates the signal frequency.</p>
Figure 3
<p>Shoe-box rooms considered in the simulations, referred to as room 1 (green), room 2 (blue), and room 3 (red). The rooms have the same height, but different floor sizes. Their specific dimensions can be found in <a href="#sensors-23-07997-t001" class="html-table">Table 1</a>. The black markers indicate the positions where the uniform planar arrays (UPAs) are mounted. The heights of the three marker rows are the same for all rooms, but they are only indicated for room 1.</p>
Figure 4
<p>P50 ranging errors obtained for the different UPA configurations from <a href="#sensors-23-07997-t002" class="html-table">Table 2</a>. The subplots represent the twelve UPA positions in the rooms, as shown in <a href="#sensors-23-07997-f003" class="html-fig">Figure 3</a>. The colors of the markers indicate the shoe-box rooms considered. In essence, the lowest P50 ranging errors are obtained by using a 10 × 2 vertical UPA configuration, not mounted near walls. When the UPAs are mounted near the west wall (positions 0–2), the horizontal array configurations present the lowest P50 ranging errors. By making the west wall completely absorbent, we confirmed that the vertical array configurations previously suffered from a strong, interfering reflection on this wall. This strong reflection is spatially shown in <a href="#sensors-23-07997-f006" class="html-fig">Figure 6</a>.</p>
Figure 5
<p>P95 ranging errors obtained for the different UPA configurations from <a href="#sensors-23-07997-t002" class="html-table">Table 2</a>. The subplots represent the twelve UPA positions in the rooms, as shown in <a href="#sensors-23-07997-f003" class="html-fig">Figure 3</a>. The colors of the markers indicate the considered shoe-box rooms. In essence, the lowest P95 ranging errors are obtained by using a 10 × 2 vertical UPA configuration, not mounted near walls. When the UPAs are mounted near the west wall (positions 0–2), the horizontal array configurations show the best ranging performance over the entire room. By making the west wall completely absorbent, we confirmed that the vertical array configurations previously suffered from a strong, interfering reflection on this wall. This strong reflection is spatially shown in <a href="#sensors-23-07997-f006" class="html-fig">Figure 6</a>.</p>
Figure 6
<p>Spatial beam pattern plot of the (<b>a</b>) 2 × 10 array and (<b>b</b>) 10 × 2 array when beamforming a 25 <math display="inline"><semantics> <mi mathvariant="normal">k</mi> </semantics></math><math display="inline"><semantics> <mi>Hz</mi> </semantics></math> sine wave toward the east wall in room 2. A strong side lobe is directed towards the west wall in the case of the 10 × 2 vertical UPA. This side lobe is also present for the 2 × 10 horizontal UPA, but not as prominent.</p>
Figure 7
<p>Room distributions of the ranging errors obtained at UPA position 7 for the (<b>a</b>) 2 × 10 array and (<b>b</b>) 10 × 2 array configuration. Overall, larger ranging errors can be observed near the walls for the 2 × 10 horizontal UPA compared to the 10 × 2 vertical UPA. This indicates that reflections through the floor and ceiling potentially play a significant role in the ranging performance.</p>
Figure 8
<p>P95 ranging errors were obtained for the different UPA configurations at UPA position 7 in room 2. The energy-absorption coefficients of the floor and ceiling are changed between 0 and 1, while those of the walls are kept constant at 0.3. For high energy-absorption coefficients, the horizontal UPAs barely outperform the vertical ones. This indicates that the reflections through the vertical walls are overall less impactful. Even in the case of a highly reverberant floor and ceiling, the 10 × 2 and 8 × 2 vertical UPAs manage to keep the P95 ranging error below <math display="inline"><semantics> <mrow> <mn>0.4</mn> </mrow> </semantics></math> <math display="inline"><semantics> <mi mathvariant="normal">m</mi> </semantics></math>.</p>
Figure 9
<p>P95 ranging errors obtained for the different UPA configurations in the room with length <math display="inline"><semantics> <mrow> <mn>3.16</mn> </mrow> </semantics></math> m, width <math display="inline"><semantics> <mrow> <mn>8.0</mn> </mrow> </semantics></math> <math display="inline"><semantics> <mi mathvariant="normal">m</mi> </semantics></math> and height <math display="inline"><semantics> <mrow> <mn>5.09</mn> </mrow> </semantics></math> <math display="inline"><semantics> <mi mathvariant="normal">m</mi> </semantics></math>. The energy-absorption coefficient of the east and west sidewalls are changed between 0 and 1, while those of the other sidewalls, floor and ceiling are kept constant at 0.3. Overall, the horizontal UPAs manage to outperform the vertical array configurations. Considering the small width of the room, a small beam width in this dimension proves to be key to achieving low P95 ranging errors.</p>
Figure 10
<p>Cumulative distribution functions of the room ranging errors for both the 10 × 2 vertical UPA and the single-speaker case. The twelve subplots represent the specific transmit locations.</p>
22 pages, 2801 KiB  
Article
Efficient Precoding and Power Allocation Techniques for Maximizing Spectral Efficiency in Beamspace MIMO-NOMA Systems
by Yongfei Liu, Lu Si, Yuhuan Wang, Bo Zhang and Weizhang Xu
Sensors 2023, 23(18), 7996; https://doi.org/10.3390/s23187996 - 20 Sep 2023
Cited by 1 | Viewed by 1229
Abstract
Beamspace MIMO-NOMA is an effective way to improve spectral efficiency. This paper focuses on a downlink non-orthogonal multiple access (NOMA) transmission scheme for a beamspace multiple-input multiple-output (MIMO) system. To increase the sum rate, we jointly optimize precoding and power allocation, which presents a non-convex problem. To overcome this difficulty, we employ an alternating algorithm that optimizes the precoding and power allocation in turn. Regarding the precoding subproblem, we demonstrate that the original optimization problem can be transformed into an unconstrained optimization problem. Drawing inspiration from fractional programming (FP), we reconstruct the problem and derive a closed-form expression for the optimization variable. In addition, we effectively reduce the complexity of precoding by utilizing Neumann series expansion (NSE). For the power allocation subproblem, we adopt a dynamic power allocation scheme that considers both intra-beam and inter-beam power optimization. Simulation results show that the energy efficiency of the proposed beamspace MIMO-NOMA scheme is significantly better than that of other conventional schemes. Full article
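The Neumann series expansion (NSE) invoked in this abstract is a standard way to sidestep an exact matrix inverse in precoding: with a preconditioner X (often the inverse of the diagonal of A), A⁻¹ is approximated by a truncated power series. A small numpy sketch of the generic technique (illustrative only; the paper's precoder and truncation order are not reproduced here):

```python
import numpy as np

def neumann_inverse(A, K=10):
    """Approximate A^{-1} with a truncated Neumann series.

    Uses the diagonal of A as the preconditioner: with X = D^{-1},
    A^{-1} ~= sum_{k=0}^{K} (I - X A)^k X, which converges when the
    spectral radius of (I - X A) is below 1 (e.g. A diagonally dominant).
    """
    D_inv = np.diag(1.0 / np.diag(A))
    I = np.eye(A.shape[0])
    T = I - D_inv @ A
    term = np.copy(D_inv)      # k = 0 term: X
    approx = np.copy(D_inv)
    for _ in range(K):
        term = T @ term        # next term: T^k X
        approx += term
    return approx

# Diagonally dominant test matrix: the truncated series approaches the true inverse.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
A_inv_approx = neumann_inverse(A, K=12)
```

Each extra term costs only a matrix product, which is why NSE trades a cubic-cost inversion for a few cheap iterations.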
Show Figures
Figure 1
<p>The system model of beamspace MIMO architecture.</p>
Full article ">Figure 2
<p>The system model of beamspace MIMO-NOMA architecture.</p>
Full article ">Figure 3
<p>Spectrum efficiency comparison versus SNRs of the two schemes with different users.</p>
Full article ">Figure 4
<p>Spectrum efficiency comparison versus SNRs with different users.</p>
Full article ">Figure 5
<p>Energy efficiency comparison versus SNRs with different users.</p>
Full article ">Figure 6
<p>Spectrum efficiency comparison versus users of the two schemes at <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>N</mi> <mi>R</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> dB.</p>
Full article ">Figure 7
<p>Spectrum efficiency comparison versus users at <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>N</mi> <mi>R</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> dB.</p>
Full article ">Figure 8
<p>Energy efficiency comparison versus users at <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>N</mi> <mi>R</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> dB.</p>
Full article ">Figure 9
<p>Spectral efficiency comparison versus users with different SNRs.</p>
Full article ">Figure 10
<p>Spectral efficiency comparison versus the number of antennas at <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>N</mi> <mi>R</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> dB.</p>
Full article ">Figure 11
<p>Energy efficiency comparison versus the number of antennas at <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>N</mi> <mi>R</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> dB.</p>
Full article ">
16 pages, 5033 KiB  
Article
Effect of Protective Layer on the Performance of Monocrystalline Silicon Cell for Indoor Light Harvesting
by Tarek M. Hammam, Badriyah Alhalaili, M. S. Abd El-sadek and Amr Attia Abuelwafa
Sensors 2023, 23(18), 7995; https://doi.org/10.3390/s23187995 - 20 Sep 2023
Cited by 4 | Viewed by 1915
Abstract
The development of renewable energy sources has accelerated as the world shifts toward lowering carbon emissions and supporting sustainability. Solar energy is one of the most promising renewable energy sources, and its harvesting potential has gone beyond typical solar panels to small, portable devices. At the same time, smart buildings are becoming more prevalent, sensors and small devices are becoming more integrated, and the demand for dependable, sustainable energy sources will increase. Our work aims to identify the most suitable protective layer for small optical devices that can efficiently utilize indoor light sources. To conduct our research, we designed and tested a model that allowed us to compare the performance of several small panels made of monocrystalline cells laminated with three different materials: epoxy resin, an ethylene–tetrafluoroethylene copolymer (ETFE), and polyethylene terephthalate (PET), under varying light intensities from LED and CFL sources. The methods employed encompass contact angle measurements of the protective layers, providing insights into their wettability and hydrophobicity, which indicate each protective layer's performance against humidity. Reflection spectroscopy was used to evaluate the panels' reflectance across different wavelengths, which affects the amount of light reaching the solar cell. Furthermore, we characterized the PV panels' electrical behavior by measuring short-circuit current (ISC), open-circuit voltage (VOC), maximum power output (Pmax), fill factor (FF), and load resistance (R). Our findings offer valuable insights into each PV panel's performance and the effect of the protective layer material. Panels with ETFE layers exhibited remarkable hydrophobicity, with a mean contact angle of 77.7°, indicating resistance against humidity-related effects. Panels with ETFE layers also consistently outperformed the others: they had the highest open-circuit voltage (VOC), ranging from 1.63 to 4.08 V, the highest fill factor (FF), between 35.9 and 67.3%, and the lowest load resistance (R), ranging from 11,268 down to 772 kΩ·cm−2, under diverse light intensities from various light sources. This makes ETFE panels a promising option for indoor energy harvesting, especially for powering sensors with low power requirements. This information could influence future research in developing energy harvesting solutions, thereby making a valuable contribution to the progress of sustainable energy technology. Full article
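Of the electrical quantities listed in the abstract, the fill factor follows directly from the others via the standard definition FF = Pmax/(VOC · ISC). A one-line sketch with illustrative numbers (not measurements from the paper):

```python
def fill_factor(v_oc, i_sc, p_max):
    """Fill factor FF = Pmax / (Voc * Isc), the standard PV figure of merit."""
    return p_max / (v_oc * i_sc)

# Illustrative values only (not the paper's measurements):
# Voc = 4.0 V, Isc = 10 mA, Pmax = 25 mW  ->  FF = 0.625, i.e. 62.5 %
ff = fill_factor(4.0, 0.010, 0.025)
```

A higher FF means the measured maximum power point sits closer to the ideal VOC · ISC rectangle.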
(This article belongs to the Section Electronic Sensors)
Show Figures
Figure 1
<p>The structure of each monocrystalline PV panel: (<b>a</b>) a PET panel, consisting of a solar cell between two layers of EVE above the PCB board, laminated with a PET film; (<b>b</b>) an ETFE panel, consisting of a solar cell between two layers of EVE above the PCB board, laminated with a film of ETFE; (<b>c</b>) an epoxy resin panel, consisting of two layers of epoxy encapsulating the solar cell above the PCB board.</p>
Full article ">Figure 2
<p>A 3D schematic illustrates the connections of the photovoltaic (PV) panel, positioned under the light source within the enclosed black box, to a variable resistance, ammeter, and voltmeter, arranged on a breadboard to make up an indoor electrical characterization measurement system.</p>
Full article ">Figure 3
<p>A circuit diagram used in the electrical characterization measurement system consists of a PV panel, a voltmeter, an ammeter, and a variable resistance.</p>
Full article ">Figure 4
<p>A histogram of the differences in the mean values of the measured contact angles (CA) for the ETFE, epoxy resin, and PET panels.</p>
Full article ">Figure 5
<p>Water droplets on the surfaces of the (<b>a</b>) PET panel, (<b>b</b>) ETFE panel and (<b>c</b>) epoxy resin panel, as captured by the OCA camera.</p>
Full article ">Figure 6
<p>The reflection spectrum of the PET, ETFE, and epoxy resin protective-layer PV panels in the wavelength range of 200–1000 nm; the shaded area shows the reflection behavior of each PV panel in the typical LED emission region between 400 and 700 nm.</p>
Full article ">Figure 7
<p>The I-V curves for the three PV panels under LED illumination: (<b>a</b>) the PET panel, (<b>b</b>) the ETFE panel and (<b>c</b>) the epoxy panel.</p>
Full article ">Figure 8
<p>The I-V curves for the three PV panels under CFL illumination: (<b>a</b>) the PET panel, (<b>b</b>) the ETFE panel and (<b>c</b>) the epoxy panel.</p>
Full article ">Figure 9
<p>Output power P versus voltage V of the three PV panels under LED illumination: (<b>a</b>) the PET panel, (<b>b</b>) the ETFE panel and (<b>c</b>) the epoxy panel.</p>
Full article ">Figure 10
<p>Output power P versus voltage V of the three PV panels under CFL illumination: (<b>a</b>) the PET panel, (<b>b</b>) the ETFE panel and (<b>c</b>) the epoxy panel.</p>
Full article ">
14 pages, 4284 KiB  
Article
Solid-Phase Spectrometric Determination of Organic Thiols Using a Nanocomposite Based on Silver Triangular Nanoplates and Polyurethane Foam
by Aleksei Furletov, Vladimir Apyari, Pavel Volkov, Irina Torocheshnikova and Stanislava Dmitrienko
Sensors 2023, 23(18), 7994; https://doi.org/10.3390/s23187994 - 20 Sep 2023
Cited by 2 | Viewed by 1200
Abstract
Adsorption of silver nanoparticles on polymers may affect the processes in which they participate, adjusting the analytical characteristics of methods for the quantitation of various substances. In the present study, a composite material based on silver triangular nanoplates (AgTNPs) and polyurethane foam was proposed for chemical analysis. The prospects of its application for the solid-phase/colorimetric determination of organic thiols were substantiated. It was found that aggregation of AgTNPs upon the action of thiols is manifested by a decrease in the AgTNPs’ localized surface plasmon resonance band and its significant broadening. Spectral changes accompanying the process can be registered using household color-recording devices and even visually. Four thiols differing in their functional groups were tested. It was found that their limits of detection increase in the series cysteamine < 2-mercaptoethanol < cysteine = 3-mercaptopropionic acid and come to 50, 160, 500, and 500 nM, respectively. The applicability of the developed approach was demonstrated during the analysis of pharmaceuticals and food products. Full article
(This article belongs to the Special Issue Colorimetric Sensors: Methods and Applications)
Show Figures
Figure 1
<p>SEM image of silver triangular nanoplates on the surface of polyurethane foam (magnification 50,000 times).</p>
Full article ">Figure 2
<p>(<b>a</b>) Diffuse reflectance spectra of silver triangular nanoplates on the surface of polyurethane foam. Specific adsorption values <span class="html-italic">a</span>, µmol Ag g<sup>–1</sup>: 4.6 (1), 9.7 (2), 14.2 (3), 17.0 (4), 18.3 (5); (<b>b</b>) Dependence of the change in the Gurevich–Kubelka–Munk function at 625 nm on the specific adsorption.</p>
Full article ">Figure 3
<p>(<b>a</b>) Diffuse reflectance spectra of AgTNPs/PUF composite at different concentrations of cysteine. <span class="html-italic">c</span>(cysteine), μM = 0 (1), 17 (2), 35 (3), <span class="html-italic">c</span>(AgTNPs) = 17 μmol Ag g<sup>–1</sup>, pH 5.0, <span class="html-italic">t</span> = 40 min. Inset: change in the color of the nanocomposite upon interaction with thiols; (<b>b</b>) Change in the diffuse reflectance of AgTNPs/PUF composite depending on the nature of the organic thiol. <span class="html-italic">c</span>(AgTNPs) = 17 μmol Ag g<sup>–1</sup>, <span class="html-italic">c</span>(thiol) = 0.2 mg L<sup>–1</sup>, pH 5.0, <span class="html-italic">t</span> = 40 min.</p>
Full article ">Figure 4
<p>Probable mechanism of silver triangular nanoplate aggregation under the influence of thiols (in the example of cysteine, Cit is the citrate-ion residue).</p>
Full article ">Figure 5
<p>Change in the diffuse reflectance of AgTNPs/PUF composite in a solution containing cysteamine (1), cysteine (2), 3-mercaptopropionic acid (3), and 2-mercaptoethanol (4), depending on the interaction time. <span class="html-italic">c</span>(AgTNPs) = 17 μmol Ag g<sup>–1</sup>, pH 5.0. (1) <span class="html-italic">c</span>(cysteamine) = 2 μM; (2) <span class="html-italic">c</span>(cysteine) = 10 μM; (3) <span class="html-italic">c</span>(3-MPA) = 10 µM; (4) <span class="html-italic">c</span>(2-ME) = 5 μM.</p>
Full article ">Figure 6
<p>Change in the diffuse reflectance of AgTNPs/PUF composite in a solution containing cysteamine (1), cysteine (2), 3-mercaptopropionic acid (3) and 2-mercaptoethanol (4), depending on the pH value. <span class="html-italic">c</span>(AgTNPs) = 17 μmol Ag g<sup>–1</sup>, <span class="html-italic">t</span> = 40 min. (1) <span class="html-italic">c</span>(cysteamine) = 2 μM; (2) <span class="html-italic">c</span>(cysteine) = 10 μM; (3) <span class="html-italic">c</span>(3-MPA) = 10 µM; (4) <span class="html-italic">c</span>(2-ME) = 5 μM.</p>
Full article ">
5 pages, 180 KiB  
Editorial
Special Issue: “Intelligent Systems for Clinical Care and Remote Patient Monitoring”
by Giovanna Sannino, Antonio Celesti and Ivanoe De Falco
Sensors 2023, 23(18), 7993; https://doi.org/10.3390/s23187993 - 20 Sep 2023
Viewed by 878
Abstract
The year 2020 was definitely like no other [...] Full article
(This article belongs to the Special Issue Intelligent Systems for Clinical Care and Remote Patient Monitoring)
13 pages, 2961 KiB  
Communication
Blockchain-Based Smart Farm Security Framework for the Internet of Things
by Ahmed Abubakar Aliyu and Jinshuo Liu
Sensors 2023, 23(18), 7992; https://doi.org/10.3390/s23187992 - 20 Sep 2023
Cited by 14 | Viewed by 2967
Abstract
Smart farming, as a branch of the Internet of Things (IoT), combines agricultural domain expertise with data and information collected from connected devices and statistical analysis that characterizes the essentials of the assimilated information, allowing farmers to draw intelligent conclusions that maximize the harvest benefit. However, the integration of advanced technologies requires the adoption of high-tech security approaches. In this paper, we present a framework that promises to enhance the security and privacy of smart farms by leveraging the decentralized nature of blockchain technology. The framework stores and manages data acquired from IoT devices installed in smart farms using a distributed ledger architecture, which provides secure and tamper-proof data storage and ensures the integrity and validity of the data. The study uses the AWS cloud, ESP32, the smart farm security monitoring framework, and the Ethereum Rinkeby smart contract mechanism, which enables the automated execution of pre-defined rules and regulations. As a result of a proof-of-concept implementation, the system can detect and respond to security threats in real time, and the results illustrate its usefulness in improving the security of smart farms. The number of accepted blockchain transactions on smart farming requests fell from 189,000 to 109,450 over the first three tests, while the next three testing phases showed a rise in accepted transactions from 176,000 to 290,786. We further observed that the shorter the time taken to induce the device alarm, the higher the number of blockchain transactions accepted on smart farming requests, which demonstrates the efficacy of blockchain-based poisoning attack mitigation in smart farming. Full article
(This article belongs to the Section Internet of Things)
Show Figures
Figure 1
<p>Arduino Sensor Kit [<a href="#B15-sensors-23-07992" class="html-bibr">15</a>].</p>
Full article ">Figure 2
<p>Illustration of smart farming application in cloud-based IoT [<a href="#B11-sensors-23-07992" class="html-bibr">11</a>].</p>
Full article ">Figure 3
<p>Blockchain-based solution in smart farming [<a href="#B10-sensors-23-07992" class="html-bibr">10</a>].</p>
Full article ">Figure 4
<p>The security framework activity diagram [<a href="#B10-sensors-23-07992" class="html-bibr">10</a>].</p>
Full article ">Figure 5
<p>Alert for smart-contract web applications—frontend.</p>
Full article ">Figure 6
<p>Frontend GUI for smart-contract web applications.</p>
Full article ">Figure 7
<p>Testing stages and the time taken to induce the device alarm.</p>
Full article ">Figure 8
<p>Number of accepted blockchain-based transactions on requests.</p>
Full article ">
21 pages, 2838 KiB  
Article
Classification of User Emotional Experiences on B2C Websites Utilizing Infrared Thermal Imaging
by Lanxin Li, Wenzhe Tang, Han Yang and Chengqi Xue
Sensors 2023, 23(18), 7991; https://doi.org/10.3390/s23187991 - 20 Sep 2023
Cited by 3 | Viewed by 1106
Abstract
The acquisition of physiological signals for analyzing emotional experiences has typically been intrusive and can yield inaccurate results. This study employed infrared thermal images (IRTIs), a noninvasive technique, to classify user emotional experiences while interacting with business-to-consumer (B2C) websites. By manipulating the usability and aesthetics of B2C websites, the facial thermal images of 24 participants were captured as they engaged with the different websites. Machine learning techniques were leveraged to classify their emotional experiences, with participants’ self-assessments serving as the ground truth. The findings revealed significant fluctuations in emotional valence, while the participants’ arousal levels remained consistent, enabling the categorization of emotional experiences into positive and negative states. The support vector machine (SVM) model performed well in distinguishing between baseline and emotional experiences. Furthermore, this study identified key regions of interest (ROIs) and effective classification features for machine learning. These findings not only establish a significant connection between user emotional experiences and IRTIs but also broaden the research perspective on the utility of IRTIs in the field of emotion analysis. Full article
(This article belongs to the Section Sensing and Imaging)
Show Figures
Figure 1
<p>Two-dimensional emotion model.</p>
Full article ">Figure 2
<p>Background colors and product display shapes for the websites.</p>
Full article ">Figure 3
<p>The high aesthetic (<b>left</b>) and low aesthetic (<b>right</b>) websites.</p>
Full article ">Figure 4
<p>The procedure of the experiment.</p>
Full article ">Figure 5
<p>Thermal experiment data processing.</p>
Full article ">Figure 6
<p>Results of feature selection based on NCA.</p>
Full article ">Figure 7
<p>Proportion of features selected from each ROI and feature.</p>
Full article ">Figure 8
<p>The mean grayscale value difference of the five ROIs under the negative emotional experience, positive emotional experience, and baseline, and the significance analysis results of Student’s t-test. P vs. Base means positive emotional experience versus baseline, N vs. Base means negative emotional experience versus baseline, and P vs. N means positive emotional experience versus negative emotional experience. * (<span class="html-italic">p</span> ≤ 0.05), ** (<span class="html-italic">p</span> ≤ 0.01), and *** (<span class="html-italic">p</span> ≤ 0.001) indicate significance, and N means not significant.</p>
Full article ">
12 pages, 1339 KiB  
Article
A Stacked Long Short-Term Memory Approach for Predictive Blood Glucose Monitoring in Women with Gestational Diabetes Mellitus
by Huiqi Y. Lu, Ping Lu, Jane E. Hirst, Lucy Mackillop and David A. Clifton
Sensors 2023, 23(18), 7990; https://doi.org/10.3390/s23187990 - 20 Sep 2023
Cited by 1 | Viewed by 1667
Abstract
Gestational diabetes mellitus (GDM) is a subtype of diabetes that develops during pregnancy. Managing blood glucose (BG) within the healthy physiological range can reduce clinical complications for women with gestational diabetes. The objectives of this study are to (1) develop benchmark glucose prediction models with long short-term memory (LSTM) recurrent neural network models using time-series data collected from the GDm-Health platform, (2) compare the prediction accuracy with published results, and (3) suggest an optimized clinical review schedule with the potential to reduce the overall number of blood tests for mothers with stable and within-range glucose measurements. A total of 190,396 BG readings from 1110 patients were used for model development, validation and testing under three different prediction schemes: 7 days of BG readings to predict the next 7 or 14 days, and 14 days to predict 14 days. Our results show that the optimized BG schedule, based on a 7-day observational window to predict the BG of the next 14 days, achieved root mean square errors (RMSE) of 0.958 ± 0.007, 0.876 ± 0.003, 0.898 ± 0.003, 0.622 ± 0.003, 0.814 ± 0.009 and 0.845 ± 0.005 for the after-breakfast, after-lunch, after-dinner, before-breakfast, before-lunch and before-dinner predictions, respectively. This is the first machine learning study to suggest an optimized blood glucose monitoring frequency: 7 days of monitoring to predict the next 14 days, based on the accuracy of blood glucose prediction. Moreover, the accuracy of our proposed model, based on the fingerstick blood glucose test, is on par with the benchmark performance of one-hour prediction models using continuous glucose monitoring (CGM) readings. In conclusion, the stacked LSTM model is a promising approach for capturing the patterns in time-series data, resulting in accurate predictions of BG levels. Using a deep learning model with routine fingerstick glucose collection is a promising, predictable and low-cost solution for BG monitoring for women with gestational diabetes. Full article
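The RMSE figures quoted in the abstract follow the usual definition; for reference, a minimal sketch of RMSE and its companion MAE (the metric reported alongside it in Figure 3), using toy readings rather than GDm-Health data:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between measured and predicted BG readings."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error, the companion metric to RMSE."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy mmol/L readings (illustrative only, not patient data):
measured = [5.1, 6.8, 7.2, 5.9]
predicted = [5.0, 7.0, 7.0, 6.1]
```

RMSE penalizes large individual errors more heavily than MAE, which is why both are usually reported together.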
(This article belongs to the Special Issue AI and IoT Enabled Solutions for Healthcare)
Show Figures
Figure 1
<p>Flowchart of participants, data cleaning and model preparation.</p>
Full article ">Figure 2
<p>Distribution of BG readings for the 7-days-predict-14-days scheme (subplots ordered AB, AL, AD, BB, BL, BD, left to right and first row to second row), as an exemplar.</p>
Full article ">Figure 3
<p>LSTM prediction performance comparisons: (<b>a</b>) RMSE results of 7-days-predict-7-days vs. 7-days-predict-14 days, (<b>b</b>) MAE results of 7-days-predict-7-days vs. 7-days-predict-14-days, (<b>c</b>) RMSE results of 7-days-predict-14-days vs. 14-days-predict-14-days and (<b>d</b>) MAE results of 7-days-predict-14-days vs. 14-days-predict-14-days.</p>
Full article ">Figure 4
<p>Model development pipeline and the architecture of three-layer stacked LSTM.</p>
Full article ">
25 pages, 7387 KiB  
Article
Machine Learning Based Method for Impedance Estimation and Unbalance Supply Voltage Detection in Induction Motors
by Khaled Laadjal, Acácio M. R. Amaral, Mohamed Sahraoui and Antonio J. Marques Cardoso
Sensors 2023, 23(18), 7989; https://doi.org/10.3390/s23187989 - 20 Sep 2023
Cited by 1 | Viewed by 1412
Abstract
Induction motors (IMs) are widely used in industrial applications due to their advantages over other motor types. However, the efficiency and lifespan of IMs can be significantly impacted by operating conditions, especially Unbalanced Supply Voltages (USV), which are common in industrial plants. Detecting and accurately assessing the severity of USV in real time is crucial to prevent major breakdowns and enhance reliability and safety in industrial facilities. This paper presented a reliable method for precise online detection of USV by monitoring a relevant indicator, denominated the negative voltage factor (NVF), which, in turn, is obtained from the voltage symmetrical components. On the other hand, impedance estimation proves fundamental to understanding the behavior of motors and identifying possible problems. IM impedance affects performance, namely torque, power factor and efficiency. Furthermore, as faults or abnormalities manifest themselves as modifications of the IM impedance, its estimation is particularly useful in this context. This paper proposed two machine learning (ML) models: the first estimated the IM stator phase impedance, and the second detected USV conditions. The first ML model was capable of estimating the IM phase impedances using just the phase currents, with no need for extra sensors, as the currents are already used to control the IM. The second ML model required both phase currents and voltages to estimate the NVF. The proposed approach combined a Regressor Decision Tree (DTR) model with the Short Time Least Squares Prony (STLSP) technique. The STLSP algorithm was used to create the datasets used in the training and testing phases of the DTR model, being crucial in the creation of both features and targets. After the training phase, the STLSP technique was again applied to completely new data to obtain the DTR model inputs, from which the ML models estimate the desired physical quantities (phase impedances or NVF). Full article
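The negative voltage factor used here as the unbalance indicator is conventionally the ratio of the negative- to positive-sequence voltage magnitudes obtained from Fortescue's symmetrical components. A minimal sketch of that computation (the paper's STLSP-based phasor extraction is not reproduced):

```python
import cmath

def nvf(va, vb, vc):
    """Negative voltage factor |V2| / |V1| from the symmetrical components
    of the three phase-voltage phasors (Fortescue transform)."""
    a = cmath.exp(2j * cmath.pi / 3)          # 120-degree rotation operator
    v1 = (va + a * vb + a * a * vc) / 3       # positive-sequence component
    v2 = (va + a * a * vb + a * vc) / 3       # negative-sequence component
    return abs(v2) / abs(v1)

a = cmath.exp(2j * cmath.pi / 3)
balanced = nvf(1, a**2, a)          # perfectly balanced abc set -> NVF ~ 0
unbalanced = nvf(1, 0.8 * a**2, a)  # 20 % sag on phase B -> NVF > 0
```

A perfectly balanced supply gives NVF = 0; the larger the unbalance, the closer NVF moves toward 1.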
(This article belongs to the Section Fault Diagnosis & Sensors)
Show Figures
Figure 1
<p>General scheme of the proposed strategy.</p>
Full article ">Figure 2
<p>(<b>a</b>) Experimental test bench; (<b>b</b>) the fault detection algorithm; (<b>c</b>) the acquisition system; (<b>d</b>) AC programmable power supply; (<b>e</b>) AC power supply platform.</p>
Full article ">Figure 3
<p>Correlation matrix between the most relevant features (A_IA, A_IB, and A_IC) and the targets (ZA, ZB, and ZC).</p>
Full article ">Figure 4
<p>Dataset used for ML models training and testing stages. The features represent the amplitudes of the phase currents at the converter switching frequency (A_IA, A_IB, and A_IC) and the targets represent the phase impedances (ZA, ZB, and ZC).</p>
Full article ">Figure 5
<p>Scatterplots that relate the features (A_IA, A_IB, and A_IC) with the Targets (ZA, ZB, and ZC).</p>
Full article ">Figure 6
<p>MAE and MSE generated during the ML test phase in relation to the ZA estimation: (<b>a</b>) LR model and (<b>b</b>) DTR model.</p>
Full article ">Figure 7
<p>MAE and MSE generated during the ML test phase in relation to the ZB estimation: (<b>a</b>) LR model and (<b>b</b>) DTR model.</p>
Full article ">Figure 8
<p>MAE and MSE generated during the ML test phase in relation to the ZC estimation: (<b>a</b>) LR model and (<b>b</b>) DTR model.</p>
Full article ">Figure 9
<p>Training dataset (TRDS).</p>
Full article ">Figure 10
<p>LRM response (function 19) to the PADS: {Features = [A_IA, A_IB, A_IC]; Target = ZA}.</p>
Full article ">Figure 11
<p>LRM response (function 20) to the PADS: {Features = [A_IA, A_IB, A_IC]; Target = ZB}.</p>
Full article ">Figure 12
<p>LRM response (function 21) to the PADS: {Features = [A_IA, A_IB, A_IC]; Target = ZC}.</p>
Full article ">Figure 13
<p>Results of the pre-pruning technique applied to the DTRM of ZA.</p>
Full article ">Figure 14
<p>Decision tree resulting from the training phase up to a depth of two (hyper-parameter MDT = 21 and Target = ZA).</p>
Full article ">Figure 15
<p>Decision tree resulting from the training phase up to a depth of two (hyper-parameter MDT = 21 and Target = ZB).</p>
Full article ">Figure 16
<p>Decision tree resulting from the training phase up to a depth of two (hyper-parameter MDT = 23 and Target = ZC).</p>
Full article ">Figure 17
<p>DTRM of ZA (<a href="#sensors-23-07989-f014" class="html-fig">Figure 14</a>) response to the PADS: {Features = [A_IA, A_IB, A_IC], MDT = 21; Target = ZA}.</p>
Full article ">Figure 18
<p>DTRM of ZB (<a href="#sensors-23-07989-f015" class="html-fig">Figure 15</a>) response to the PADS: {Features = [A_IA, A_IB, A_IC], MDT = 21; Target = ZB}.</p>
Full article ">Figure 19
<p>DTRM of ZC (<a href="#sensors-23-07989-f016" class="html-fig">Figure 16</a>) response to the PADS: {Features = [A_IA, A_IB, A_IC], MDT = 23; Target = ZC}.</p>
Full article ">Figure 20
<p>Correlation matrix between the most relevant features (A_VA, A_VB, A_VC, A_IA, A_IB, and A_IC) and the target (NVF).</p>
Full article ">Figure 21
<p>Mutual information between the most relevant features (A_VA, A_VB, A_VC, A_IA, A_IB, and A_IC) and the target (NVF).</p>
Full article ">Figure 22
<p>Dataset used for ML models training and testing stages. The features represent the amplitudes of the phase currents and phase voltages at the converter switching frequency (A_IA, A_IB, A_IC, A_VA, A_VB, and A_VC) and the target represent the Negative Voltage Factor (NVF).</p>
Full article ">Figure 23
<p>MAE [%] generated during the ML (LR and DTR) models test phase in relation to the NVF estimation.</p>
Full article ">Figure 24
<p>Training dataset (TRDS).</p>
Full article ">Figure 25
<p>Decision tree resulting from the training phase up to a depth of two (hyper-parameter MDT = 18 and Target = NVF).</p>
Full article ">Figure 26
<p>DTRM of NVF (<a href="#sensors-23-07989-f025" class="html-fig">Figure 25</a>) response to the PADS: {Features = [A_IA, A_IB, A_IC, A_VA, A_VB and A_VC], MDT = 18; Target = NVF}: (<b>a</b>) without a low-pass filter and (<b>b</b>) with a low-pass filter.</p>
Full article ">
12 pages, 12728 KiB  
Article
A Free-Space Optical Communication System Based on Bipolar Complementary Pulse Width Modulation
by Jinji Zheng, Xicai Li, Qinqin Wu and Yuanqin Wang
Sensors 2023, 23(18), 7988; https://doi.org/10.3390/s23187988 - 20 Sep 2023
Cited by 1 | Viewed by 1301
Abstract
In this work, we propose a bipolar complementary pulse width modulation strategy based on differential signaling, and the modulation–demodulation methods are introduced in detail. The proposed modulation–demodulation strategy can effectively identify each symbol’s start and end times so that the transmitter and receiver maintain correct bit synchronization. The differential signaling system has the advantages of not requiring channel state information and of reducing background radiation. To further reduce noise in the system, a multi-bandpass spectrum noise-reduction method is proposed based on the spectral characteristics of the received modulation signals. The proposed modulation method achieves a bit error rate of 10−5 at a signal-to-noise ratio of 7 dB. The fabricated optical communication system can stably transfer voice and text over a distance of 5.6 km. Full article
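The paper's exact symbol design is not reproduced here, but the two ideas the abstract describes can be sketched with a toy model: each bit maps to a pulse of a distinct width on one channel, the second channel carries the complement, and the receiver subtracts the channels so that common-mode background light cancels. The pulse widths and sample counts below are illustrative assumptions:

```python
def modulate(bits, samples_per_symbol=8):
    """Toy bipolar complementary PWM: bit 0 -> short pulse, bit 1 -> long pulse.
    CH2 is the complement of CH1, so (CH1 - CH2) is a bipolar waveform."""
    widths = {0: 2, 1: 6}  # pulse widths in samples (illustrative values)
    ch1, ch2 = [], []
    for b in bits:
        w = widths[b]
        ch1 += [1] * w + [0] * (samples_per_symbol - w)
        ch2 += [0] * w + [1] * (samples_per_symbol - w)
    return ch1, ch2

def demodulate(ch1, ch2, samples_per_symbol=8):
    """Differential reception: subtract the channels (cancelling common-mode
    background), then decide each bit from the measured pulse width."""
    diff = [x - y for x, y in zip(ch1, ch2)]
    bits = []
    for i in range(0, len(diff), samples_per_symbol):
        width = sum(1 for s in diff[i:i + samples_per_symbol] if s > 0)
        bits.append(1 if width > samples_per_symbol // 2 else 0)
    return bits

tx_bits = [1, 0, 1, 1, 0]
ch1, ch2 = modulate(tx_bits)
# Add identical background light to both channels: it cancels in the difference.
rx_bits = demodulate([s + 3 for s in ch1], [s + 3 for s in ch2])
```

Because every symbol occupies a fixed number of samples, the receiver can also recover symbol boundaries, which is the bit-synchronization property the abstract highlights.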
Show Figures

Figure 1
<p>Proposed system block diagram. (<b>a</b>) Transmitting part; (<b>b</b>) receiving part.</p>
Figure 2">
Figure 2
<p>Schematic diagram of the modulation and demodulation signal. (<b>a</b>) The clock signal; (<b>b</b>) the sequence of the signal to be transmitted; (<b>c</b>) signal of CH1_T; (<b>d</b>) the complementary signal CH2_T; (<b>e</b>) the received signal in CH1_R; (<b>f</b>) the received signal in CH2_R; (<b>g</b>) the differential signal; (<b>h</b>) the sequence of the demodulation signal.</p>
Figure 3">
Figure 3
<p>Diagram of the signals and frequency spectra of the proposed strategy and NRZ modulation. (<b>a</b>) The ideal received signals for the proposed strategy and NRZ modulation; (<b>b</b>) the frequency spectra of the received signals based on the proposed strategy and NRZ.</p>
Figure 4">
Figure 4
<p>Bit error-rate (BER) against signal-to-noise ratio (SNR) for a range of <math display="inline"><semantics> <mrow> <msubsup> <mrow> <mi mathvariant="sans-serif">σ</mi> </mrow> <mrow> <mi>x</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msubsup> </mrow> </semantics></math>.</p>
Figure 5">
Figure 5
<p>The denoising effect on signals. (<b>a1</b>) Simulation of received signal with SNR = 10.35 dB; (<b>a2</b>) signal denoised by BSS; (<b>a3</b>) signal denoised by MWSS; (<b>a4</b>) signal denoised by MBSS. (<b>b1</b>) Simulation of received signal with SNR = 3.49 dB; (<b>b2</b>) signal denoised by BSS; (<b>b3</b>) signal denoised by MWSS; (<b>b4</b>) signal denoised by MBSS.</p>
Figure 6">
Figure 6
<p>Spectral distribution of BC-PWM, NRZ, RZ, and PPM encoding methods.</p>
Figure 7">
Figure 7
<p>The denoising effect of using MBSS for BC-PWM, NRZ, RZ, and PPM encoding methods. (<b>a1</b>) The noisy signals of BC-PWM; (<b>a2</b>) the filtering effects of BSS and MBSS for BC-PWM; (<b>b1</b>) the noisy signals of NRZ; (<b>b2</b>) the filtering effects of BSS and MBSS for NRZ; (<b>c1</b>) the noisy signals of RZ; (<b>c2</b>) the filtering effects of BSS and MBSS for RZ; (<b>d1</b>) the noisy signals of PPM; (<b>d2</b>) the filtering effects of BSS and MBSS for PPM.</p>
Figure 8">
Figure 8
<p>Physical map of the dual-channel FSO system. (<b>a</b>) The light sources; (<b>b</b>) the modulation circuit; (<b>c</b>) the APD signal receiving unit; (<b>d</b>) the signal processing unit.</p>
Figure 9">
Figure 9
<p>The signals in practical experiments. (<b>a</b>) Modulation signal in CH1_T; (<b>b</b>) modulation signal in CH2_T; (<b>c</b>) original (blue) and denoised (orange) received signals in CH1_R; (<b>d</b>) original (blue) and denoised (orange) received signals in CH2_R; (<b>e</b>) differential signal.</p>
Figure 10">
Figure 10
<p>(<b>a</b>) The RMSE between the standard differential signals and the received differential signals denoised by MBSS, BSS, and MWSS; (<b>b</b>) the BER performance between the standard differential signals and the received differential signals.</p>
Figure 11">
Figure 11
<p>(<b>a</b>) Test scenario; (<b>b</b>) the full-duplex dual-channel FSO communication system; (<b>c</b>) the smallest transmitting unit.</p>
">
14 pages, 1122 KiB  
Article
Inertial Measurement Unit Sensor-to-Segment Calibration Comparison for Sport-Specific Motion Analysis
by Mitchell Ekdahl, Alex Loewen, Ashley Erdman, Sarp Sahin and Sophia Ulman
Sensors 2023, 23(18), 7987; https://doi.org/10.3390/s23187987 - 20 Sep 2023
Cited by 1 | Viewed by 2150
Abstract
Wearable inertial measurement units (IMUs) can be utilized as an alternative to optical motion capture as a method of measuring joint angles. These sensors require functional calibration prior to data collection, known as sensor-to-segment calibration. This study aims to evaluate previously described sensor-to-segment calibration methods to measure joint angle range of motion (ROM) during highly dynamic sports-related movements. Seven calibration methods were selected to compare lower extremity ROM measured using IMUs to an optical motion capture system. The accuracy of ROM measurements for each calibration method varied across joints and sport-specific tasks, with absolute mean differences between IMU measurement and motion capture measurement ranging from <0.1° to 24.1°. Fewer significant differences were observed at the pelvis than at the hip, knee, or ankle across all tasks. For each task, one or more calibration movements demonstrated non-significant differences in ROM for at least nine out of the twelve ROM variables. These results suggest that IMUs may be a viable alternative to optical motion capture for sport-specific lower-extremity ROM measurement, although the sensor-to-segment calibration methods used should be selected based on the specific tasks and variables of interest for a given application. Full article
(This article belongs to the Section Wearables)
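The ROM comparison described in the abstract reduces, per joint and trial, to a peak-to-peak excursion difference between the two measurement systems. A minimal sketch with hypothetical knee-flexion traces (the traces, offset, and scale error are invented for illustration and are not the study's data):

```python
import numpy as np

def range_of_motion(angle_trace_deg):
    """Peak-to-peak joint-angle excursion over one task, in degrees."""
    return float(np.max(angle_trace_deg) - np.min(angle_trace_deg))

# Hypothetical knee-flexion traces (degrees) for one trial of a task:
# an optical motion-capture reference and an IMU estimate with a small
# constant offset (which ROM ignores) and a small scale error (which it does not).
t = np.linspace(0.0, 1.0, 201)
mocap_trace = 60.0 * np.sin(np.pi * t) ** 2          # ROM = 60 deg
imu_trace = 62.0 * np.sin(np.pi * t) ** 2 + 1.5      # ROM = 62 deg

# The comparison metric from the abstract: IMU ROM minus motion-capture ROM;
# a positive value means the IMUs report a larger excursion.
rom_difference = range_of_motion(imu_trace) - range_of_motion(mocap_trace)
assert abs(rom_difference - 2.0) < 1e-9
```

Note that a constant angular offset cancels out of a peak-to-peak measure, so ROM comparisons are mainly sensitive to scale and axis errors rather than static offsets.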
Show Figures

Figure 1
<p>Coordinate systems (CS) for the anatomical (black) and IMU (red dashed) reference frames of the thigh and shank segments. The sensor coordinate system may not be aligned with the anatomical coordinate system due to variability in sensor placement.</p>
Figure 2">
Figure 2
<p>Placement of seven Delsys Trigno Avanti IMU sensors: Sacrum, anterior thighs, anterior shanks, and the dorsal side of the feet. All sensors were secured with athletic tape to minimize motion artifacts during functional tasks.</p>
Figure 3">
Figure 3
<p>Mean differences (±SD) in ROM between motion capture and IMUs using each calibration method for the gait task. Positive mean differences indicate higher ROM measured by the IMUs relative to motion capture.</p>
">
1 page, 187 KiB  
Correction
Correction: Becker, C.N.; Koerner, L.J. Plastic Classification Using Optical Parameter Features Measured with the TMF8801 Direct Time-of-Flight Depth Sensor. Sensors 2023, 23, 3324
by Cienna N. Becker and Lucas J. Koerner
Sensors 2023, 23(18), 7986; https://doi.org/10.3390/s23187986 - 20 Sep 2023
Viewed by 658
Abstract
There was an error in the original publication [...] Full article
(This article belongs to the Section Optical Sensors)
10 pages, 17710 KiB  
Article
Automatic Calibration of a Device for Blood Pressure Waveform Measurement
by Rafał Siemasz, Krzysztof Tomczuk, Ziemowit Malecha, Piotr Andrzej Felisiak and Artur Weiser
Sensors 2023, 23(18), 7985; https://doi.org/10.3390/s23187985 - 20 Sep 2023
Cited by 1 | Viewed by 1601
Abstract
This article presents a prototype of a new, non-invasive, cuffless, self-calibrating blood pressure measuring device equipped with a pneumatic pressure sensor. The developed sensor serves a dual function: it measures the blood pressure waveform and calibrates the device. The device was used to conduct proof-of-concept measurements on 10 volunteers. The main novelty of the device is the pneumatic pressure sensor, which works on the principle of a pneumatic nozzle–flapper amplifier with negative feedback. The developed device does not require a cuff and can be used on arteries where cuff placement would be impossible (e.g., on the carotid artery). The obtained results showed that the systolic and diastolic pressure measurement errors of the proposed device did not exceed ±6.6% and ±8.1%, respectively. Full article
(This article belongs to the Special Issue Flexible Pressure Sensors: From Design to Applications)
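The reported ±6.6% and ±8.1% bounds are relative errors against a reference reading. As a trivial illustration of the metric (the readings below are made-up numbers, not the study's data):

```python
def percent_error(measured_mmHg, reference_mmHg):
    """Relative measurement error, in percent, against a reference reading."""
    return 100.0 * (measured_mmHg - reference_mmHg) / reference_mmHg

# Made-up systolic readings: device vs. cuff reference, in mmHg.
device_systolic = [118.0, 126.0, 131.0]
cuff_systolic = [120.0, 124.0, 128.0]

errors = [percent_error(d, r) for d, r in zip(device_systolic, cuff_systolic)]
# Every error stays inside the paper's reported ±6.6% systolic bound.
assert all(abs(e) <= 6.6 for e in errors)
```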
Show Figures

Figure 1
<p>A device for blood pressure waveform measurement. A—pneumatic sensor, B—computer interface, C—notebook, D—elastic double-line pneumatic tube.</p>
Figure 2">
Figure 2
<p>Sensor (2) with connected computer interface (1) and artery segment: A—compressor; B—pressure regulator; E, F, G—orifices; H, I, J, K—shut-off valves; L—sensor body; M—measurement chamber; N—membrane; O—venting gap; C—pressure transducer (Motorola MPX5050 [<a href="#B37-sensors-23-07985" class="html-bibr">37</a>]); D—differential pressure transducer (Nenutec 984m.333704 [<a href="#B38-sensors-23-07985" class="html-bibr">38</a>]); <math display="inline"><semantics> <msub> <mi>p</mi> <mn>1</mn> </msub> </semantics></math>—arterial pressure; <math display="inline"><semantics> <msub> <mi>p</mi> <mn>2</mn> </msub> </semantics></math>—pressure readout by the main pressure transmitter; <math display="inline"><semantics> <msub> <mi>p</mi> <mi>s</mi> </msub> </semantics></math>—output pressure from pressure regulator.</p>
Figure 3">
Figure 3
<p>View of the pneumatic sensor without casing.</p>
Figure 4">
Figure 4
<p>Individual characteristics determined using two methods: A—method based on the measurements of the systolic pressure <math display="inline"><semantics> <msub> <mi>p</mi> <mi>s</mi> </msub> </semantics></math> and the diastolic pressure <math display="inline"><semantics> <msub> <mi>p</mi> <mi>d</mi> </msub> </semantics></math> using a cuff monitor, B—method based on the oscillometric measurement of the mean BP values <math display="inline"><semantics> <msub> <mi>p</mi> <mi>m</mi> </msub> </semantics></math> using the described device.</p>
Figure 5">
Figure 5
<p>Oscilloscope printout of the pressure in the sensor measurement chamber: <math display="inline"><semantics> <msub> <mi>p</mi> <mi>t</mi> </msub> </semantics></math>—test pressure, <math display="inline"><semantics> <msub> <mi>p</mi> <mrow> <mi>o</mi> <mi>s</mi> </mrow> </msub> </semantics></math>—oscillatory component of the test pressure.</p>
Figure 6">
Figure 6
<p>BP waveform curve of a volunteer.</p>
">