Sensors, Volume 23, Issue 21 (November-1 2023) – 357 articles

Cover Story: In Industry 5.0, human–robot collaboration (HRC) enhances factory automation. Collaborative robots (cobots) ensure worker safety, but HRC may lead to mental stress and cognitive workload issues. Our research focuses on factory workers’ cognitive load under varying task conditions, examining their effect on subjective, behavioural, and physiological measures. We aim to predict traditional measures through physiological data. This study addresses the need for neuroergonomics in manufacturing to create a stress-free environment for employees working with cobots.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
25 pages, 5498 KiB  
Article
Reinforcement Learning Algorithms for Autonomous Mission Accomplishment by Unmanned Aerial Vehicles: A Comparative View with DQN, SARSA and A2C
by Gonzalo Aguilar Jiménez, Arturo de la Escalera Hueso and Maria J. Gómez-Silva
Sensors 2023, 23(21), 9013; https://doi.org/10.3390/s23219013 - 6 Nov 2023
Cited by 3 | Viewed by 2044
Abstract
Unmanned aerial vehicles (UAVs) can be controlled in diverse ways. One of the most common is through artificial intelligence (AI), which comprises different methods, such as reinforcement learning (RL). The article aims to provide a comparison of three RL algorithms—DQN as the benchmark, SARSA as a same-family algorithm, and A2C as a different-structure one—to address the problem of a UAV navigating from departure point A to endpoint B while avoiding obstacles and, simultaneously, using the least possible time and flying the shortest distance. Under fixed premises, this investigation provides the results of the performances obtained for this activity. A neighborhood environment was selected because it is likely one of the most common areas of use for commercial drones. Taking DQN as the benchmark and having no previous knowledge of the behavior of SARSA or A2C in the employed environment, the comparison showed that DQN was the only algorithm to achieve the target, while SARSA and A2C did not. However, a deeper analysis of the results led to the conclusion that fine-tuning A2C could surpass the performance of DQN under certain conditions, as A2C reached the maximum faster with a more straightforward structure.
(This article belongs to the Special Issue Design, Communication, and Control of Autonomous Vehicle Systems)
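For readers comparing the algorithms, here is a minimal sketch of the core difference between DQN's off-policy target and SARSA's on-policy target (illustrative only, not the authors' code; `GAMMA` is an assumed value):

```python
import numpy as np

# Illustrative sketch, not the authors' implementation: the bootstrap targets
# that distinguish off-policy DQN from on-policy SARSA for one transition.
GAMMA = 0.99  # assumed discount factor

def dqn_target(reward, q_next, done):
    # DQN bootstraps on the greedy (maximum-value) action in the next state.
    return reward + (0.0 if done else GAMMA * np.max(q_next))

def sarsa_target(reward, q_next, next_action, done):
    # SARSA bootstraps on the action the behavior policy actually takes next.
    return reward + (0.0 if done else GAMMA * q_next[next_action])
```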
Figures:
Figure 1: Departure point A (0,0). Obtained from AirSim. Crossroad of the neighborhood in Unreal Engine 4.
Figure 2: View of components of the environment. Obtained from AirSim. Objects found are trees, houses, traffic signs, and many bushes. These objects affect the trajectory planning.
Figure 3: Different light intensities. Proof of the many distinct ways an object can look depending on the darkness. Light intensity affects the data observation and processing.
Figure 4: Foggy conditions, obtained from AirSim. Weather conditions like wind might also affect trajectory planning.
Figure 5: Bird’s eye view of the neighborhood. Drone’s location is A (0,0).
Figure 6: DQN X-Y collision points. Impacts referenced to abscissa and ordinate axes.
Figure 7: SARSA X-Y collision points. Impacts referenced to abscissa and ordinate axes.
Figure 8: A2C X-Y collision points. Impacts referenced to abscissa and ordinate axes.
Figure 9: All X-Y collision points. Impacts referenced to abscissa and ordinate axes. Colors are conserved from the previous corresponding figures.
Figure 10: Episode duration. Abscissa references the episode counter; ordinates, the reference time.
Figure 11: Episode reward. Abscissa references the episode counter, and ordinates reference the reward.
Figure 12: Episode duration. Abscissa references the episode counter; ordinates, the reference time. Colors correspond to DQN, SARSA, and A2C as used previously; the black lines show the apprenticeship tendency of DQN.
Figure 13: RL DQN powerline surveillance [18]. Distances between the object under study and the rest of the environment were excellent; therefore, it was easier for the algorithm to isolate the powerlines from the other objects.
Figure 14: Neighborhood created by Microsoft in [18]. The experiment used the miniature black-and-white image in the lower-left corner, employing a stereo camera.
Figure 15: The environment used by Kjell [17]. Columns and the distances between them were the same for the whole environment, making them more accessible for learning by pattern recognition using RL.
14 pages, 4140 KiB  
Article
Fabrication and Evaluation of Embroidery-Based Electrode for EMG Smart Wear Using Moss Stitch Technique
by Soohyeon Rho, Hyelim Kim, Daeyoung Lim and Wonyoung Jeong
Sensors 2023, 23(21), 9012; https://doi.org/10.3390/s23219012 - 6 Nov 2023
Cited by 1 | Viewed by 1525
Abstract
Wearable 2.0 research has been conducted on the manufacture of smart fitness wear that collects bio-signals through a worn textile-based electrode. Among these, the electromyography (EMG) suit measures the electrical signals generated by the muscles to check their activity, such as contraction and relaxation. General gel-type electrodes have been reported to cause skin diseases due to an uncomfortable feel and skin irritation when attached to the skin for a long time. Dry electrodes of various materials are being developed to solve this problem. Previous research has reported EMG detection performance and conducted economic comparisons according to the size and shape of the embroidery electrode. On the other hand, these embroidery electrodes still cause foreign body sensations. In this study, a moss sEMG electrode was produced with various shapes (W3 and WF) and loop lengths (1–5 mm). The optimized conditions of the embroidery-based electrodes were derived and analyzed with respect to tactile comfort factors and sensing performance. As the loop length of the electrode increased, MIU and Qmax increased, but SMD decreased due to the free movement of the threads constituting the loop. Impedance and sEMG detection performance showed different trends depending on the electrode type.
(This article belongs to the Section Wearables)
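Since Figure 11 summarizes a signal-to-noise ratio derived from baseline noise and the activated muscle signal, here is a minimal sketch of how such an SNR can be computed (an assumed formulation, not the authors' exact pipeline):

```python
import numpy as np

# Minimal sketch (assumed formulation, not the authors' exact pipeline):
# electrode SNR from the average rectified value (ARV) of an activated-muscle
# segment versus a baseline-noise segment, as summarized in Figure 11.
def semg_snr_db(active, baseline):
    arv_active = np.mean(np.abs(active))
    arv_noise = np.mean(np.abs(baseline))
    return 20.0 * np.log10(arv_active / arv_noise)
```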
Figures:
Figure 1: 3D image of a (a) lock- and (b) moss-based embroidery textile electrode and (c) the loop height of the moss stitch [17].
Figure 2: Embroidery design parameters for sEMG moss electrodes in different shapes: (a) wave three-lines shape and (b) wave fill shape. IED = 20 mm; electrode size = 20 mm.
Figure 3: Procedure for preparing the leg sleeves with sEMG electrodes embroidered with a moss stitch. (a) Embroidery with the moss stitch technique, (b) automatic sewing with a 10 mm seam allowance, and (c) leg sleeves embroidered with a moss sEMG electrode.
Figure 4: Clothing pressure measurement with leg sleeves embroidered with moss electrodes. (a) Pressure measurement machine and sensor with 20 mm diameter; (b) pressure sensor attached to the rectus femoris; (c) worn leg sleeves embroidered with moss electrodes.
Figure 5: KES-FB4 surface testing results and Qmax of moss sEMG electrodes with various loop lengths. (a) MIU; (b) SMD; (c) Qmax, where loop length 0 represents the base fabric. * p < 0.05, ** p < 0.01.
Figure 6: Partial clothing pressure results of the W3 and WF shape moss sEMG electrodes with various loop lengths. W3(E) and WF(E) are the electrodes at the rectus femoris position and non-electrode position, respectively.
Figure 7: Sheet resistance of moss sEMG electrodes with various shapes (W3 and WF) and loop lengths (L = 1 mm, 2 mm, 3 mm, 4 mm, and 5 mm).
Figure 8: Skin–electrode impedance of moss sEMG electrodes for two shapes (W3 and WF) and various loop lengths (L = 1 mm, 2 mm, 3 mm, 4 mm, and 5 mm). * p < 0.05, ** p < 0.01.
Figure 9: Filtered (20–500 Hz) analog and full-wave rectified sEMG signal obtained by (a) W3_L1, (b) W3_L2, (c) W3_L3, (d) W3_L4, and (e) W3_L5. The values of sections A, B, and C in the box were used for EMG signal measurement and analysis.
Figure 10: Filtered (20–500 Hz) analog and full-wave rectified sEMG signal obtained by (a) WF_L1, (b) WF_L2, (c) WF_L3, (d) WF_L4, and (e) WF_L5. Three contractions among the five trials, excluding the first and last trials, were used to calculate the average activated EMG for comparison.
Figure 11: Average rectified EMG at the baseline and during knee extension for moss sEMG electrodes of various shapes and loop lengths. (a) Baseline electrode noise, (b) activated muscle signal during knee extension, and (c) signal-to-noise ratio (SNR). * p < 0.05, ** p < 0.01.
19 pages, 5498 KiB  
Article
Integral Imaging Display System Based on Human Visual Distance Perception Model
by Lijin Deng, Zhihong Li, Yuejianan Gu and Qi Wang
Sensors 2023, 23(21), 9011; https://doi.org/10.3390/s23219011 - 6 Nov 2023
Cited by 1 | Viewed by 1522
Abstract
In an integral imaging (II) display system, the self-adjustment ability of the human eye can result in blurry observations when viewing 3D targets outside the focal plane within a specific range. This can impact the overall imaging quality of the II system. This research examines the visual characteristics of the human eye and analyzes the path of light from a point source to the eye in the process of capturing and reconstructing the light field. Then, an overall depth of field (DOF) model of II is derived based on the human visual system (HVS). On this basis, an II system based on the human visual distance (HVD) perception model is proposed, and an interactive II display system is constructed. The experimental results confirm the effectiveness of the proposed method. The display system improves the viewing distance range, enhances spatial resolution and provides better stereoscopic display effects. When comparing our method with three other methods, it is clear that our approach produces better results in optical experiments and objective evaluations: the cumulative probability of blur detection (CPBD) value is 38.73%, the structural similarity index (SSIM) value is 86.56%, and the peak signal-to-noise ratio (PSNR) value is 31.12. These values align with subjective evaluations based on the characteristics of the human visual system.
(This article belongs to the Collection 3D Imaging and Sensing System)
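As the objective evaluation relies on PSNR (alongside CPBD and SSIM), here is a minimal sketch of the PSNR computation between a reference and a reconstructed view (standard definition; the peak value of 255 assumes 8-bit images):

```python
import numpy as np

# Minimal sketch (standard definition): peak signal-to-noise ratio between a
# reference view and a reconstructed view; peak=255 assumes 8-bit images.
def psnr_db(reference, reconstructed, peak=255.0):
    ref = np.asarray(reference, dtype=float)
    rec = np.asarray(reconstructed, dtype=float)
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```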
Figures:
Figure 1: Interactive II display system based on the HVD perception model.
Figure 2: Discrete phenomena occur at off-focus point B.
Figure 3: Facet braiding in II display.
Figure 4: Optical path diagrams of single-lens imaging in the acquisition stage of II.
Figure 5: Optical path diagrams of single-lens imaging in the reconstruction stage of II.
Figure 6: Analysis of visual limits of the human eye: (a) spatial resolution of the human eye; (b) line resolution of the human eye.
Figure 7: Analysis of pixel acquisition.
Figure 8: Displacement relationship of homonymous image points during the collection process.
Figure 9: Analysis of pixel calibration.
Figure 10: Workflow of the interactive II display system.
Figure 11: Building the image collection scene with 3ds Max: (a) 3ds Max simulated pixel collection scene; (b) collected EIA; (c) collected RGB image; (d) collected depth image.
Figure 12: Optical reconstruction experimental platform: (a) optical experimental platform 1; (b) optical experimental platform 2.
Figure 13: Overall DOF model verification experiment: (a) Collection Scene 1; (b) Collection Scene 2; (c) computer reconstruction of Scene 1; (d) computer reconstruction of Scene 2; (e) optical experiment of Scene 1; (f) optical experiment of Scene 2.
Figure 14: Objective evaluations at different positions of reconstruction distance: (a) CPBD for reconstructed images; (b) SSIM for reconstructed images; (c) PSNR for reconstructed images.
Figure 15: Optical reconstruction experimental results: (a) RODC algorithm [28]; (b) RIOP algorithm [29]; (c) LFR algorithm [30]; (d) our method.
Figure 16: Optical reconstruction results of two types of pixels at L = 2 m: (a) optical experiment before improvement; (b) optical experiment after improvement.
Figure 17: Human face–eye detection and distance measurement device: (a) visual distance detection results; (b) custom binocular camera.
Figure 18: Optical reconstruction at various viewing distances after improvement using the HVD perception model: (a) L = 2 m; (b) L = 2.74 m; (c) L = 4 m.
21 pages, 3793 KiB  
Article
Simplified Deep Reinforcement Learning Approach for Channel Prediction in Power Domain NOMA System
by Mohamed Gaballa and Maysam Abbod
Sensors 2023, 23(21), 9010; https://doi.org/10.3390/s23219010 - 6 Nov 2023
Cited by 3 | Viewed by 1733
Abstract
In this work, the impact of implementing Deep Reinforcement Learning (DRL) in predicting the channel parameters for user devices in a Power Domain Non-Orthogonal Multiple Access system (PD-NOMA) is investigated. In the channel prediction process, DRL based on the deep Q network (DQN) algorithm will be developed and incorporated into the NOMA system so that this developed DQN model can be employed to estimate the channel coefficients for each user device in the NOMA system. The developed DQN scheme will be structured as a simplified approach to efficiently predict the channel parameters for each user in order to maximize the downlink sum rates for all users in the system. In order to approximate the channel parameters for each user device, this proposed DQN approach is first initialized using random channel statistics, and then the proposed DQN model will be dynamically updated based on the interaction with the environment. The predicted channel parameters will be utilized at the receiver side to recover the desired data. Furthermore, this work inspects how the channel estimation process based on the simplified DQN algorithm and the power allocation policy can both be integrated for the purpose of multiuser detection in the examined NOMA system. Simulation results, based on several performance metrics, have demonstrated that the proposed simplified DQN algorithm can be a competitive algorithm for channel parameter estimation when compared to different benchmark schemes for channel estimation, such as deep neural network (DNN)-based long short-term memory (LSTM), RL based on the Q algorithm, and a channel estimation scheme based on the minimum mean square error (MMSE) procedure.
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
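Figure 4 shows a near-greedy action selection scheme; here is a minimal epsilon-greedy sketch of that idea (illustrative, not the authors' code):

```python
import numpy as np

# Minimal sketch of a near-greedy (epsilon-greedy) action selection scheme,
# as in Figure 4 (illustrative, not the authors' code): explore a random
# action with probability epsilon, otherwise exploit the best Q-value.
def epsilon_greedy(q_values, epsilon=0.1, rng=None):
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))  # explore
    return int(np.argmax(q_values))              # exploit
```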
Figures:
Figure 1: DQN basic structure with two hidden layers.
Figure 2: Proposed DQN architecture.
Figure 3: LSTM cell structure.
Figure 4: Near-greedy action selection scheme.
Figure 5: BER vs. power (DQN—MMSE).
Figure 6: Outage probability vs. power (DQN—MMSE).
Figure 7: Achievable rates vs. power (DQN—MMSE).
Figure 8: Sum rate vs. power (MMSE, LSTM, RL Q-learning, DQN).
Figure 9: Sum rate vs. number of users (MMSE, LSTM, RL Q-learning, DQN).
Figure 10: BER vs. power (DQN—Q-learning—Optimization).
11 pages, 2818 KiB  
Article
Minimum Detection Concentration of Hydrogen in Air Depending on Substrate Type and Design of the 3ω Sensor
by Dong-Wook Oh, Kwangu Kang and Jung-Hee Lee
Sensors 2023, 23(21), 9009; https://doi.org/10.3390/s23219009 - 6 Nov 2023
Cited by 1 | Viewed by 1459
Abstract
Hydrogen has emerged as a promising carbon-neutral fuel source, spurring research and development efforts to facilitate its widespread adoption. However, the safe handling of hydrogen requires precise leak detection sensors due to its low activation energy and explosive potential. Various detection methods exist, with thermal conductivity measurement being a prominent technique for quantifying hydrogen concentrations. However, challenges remain in achieving high measurement sensitivity at low hydrogen concentrations below 1% for thermal-conductivity-based hydrogen sensors. Recent research explores the 3ω method’s application for measuring hydrogen concentrations in ambient air, offering high spatial and temporal resolutions. This study aims to enhance hydrogen leak detection sensitivity using the 3ω method by conducting thermal analyses on sensor design variables. Factors including substrate material, type, and sensor geometry significantly impact the measurement sensitivity. Comparative evaluations consider the minimum detectable hydrogen concentration while accounting for the uncertainty of the 3ω signal. The proposed suspended-type 3ω sensor is capable of detecting hydrogen leaks in ambient air and provides real-time measurements that are ideal for monitoring hydrogen diffusion. This research serves to bridge the gap between precision and real-time monitoring of hydrogen leak detection, promising significant advancements in the related safety applications.
(This article belongs to the Special Issue Gas Sensors: Materials, Mechanism and Applications)
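The 3ω method infers the gas thermal conductivity from the third-harmonic component of the heater signal; here is a minimal sketch of extracting that 3ω amplitude from sampled data via an FFT (an assumed post-processing step, not the authors' instrumentation code):

```python
import numpy as np

# Minimal sketch (assumed post-processing step, not the authors'
# instrumentation code): extract the 3-omega amplitude from a sampled heater
# voltage; this component carries the gas thermal-conductivity information.
def third_harmonic_amplitude(signal, fs, f_drive):
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.fft.rfft(signal) / n
    idx = np.argmin(np.abs(freqs - 3.0 * f_drive))
    return 2.0 * np.abs(spectrum[idx])  # amplitude of the 3-omega component
```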
Figures:
Figure 1: Schematic of a 3ω sensor with a microheater on a substrate.
Figure 2: Schematic of a 3ω sensor and a thermal resistance circuit of the substrate and gas mixture.
Figure 3: Properties of the hydrogen and air mixture depending on hydrogen concentration (mf): (a) thermal conductivity and (b) thermal diffusivity.
Figure 4: Calculation results of (a) temperature amplitude and (b) phase lag depending on AC frequency and hydrogen concentration for a 10 μm width 3ω sensor on a SiO2 substrate.
Figure 5: Calculation results of temperature amplitude ratio depending on AC frequency and hydrogen concentration for (a) SiO2 substrate, (b) polyimide substrate, and (c) suspended-type sensors.
Figure 6: Calculation results of temperature amplitude ratio depending on AC frequency and hydrogen concentration for the polyimide substrate sensor with widths of (a) 4 μm, (b) 10 μm, and (c) 40 μm.
Figure 7: Calculation results of temperature amplitude ratio depending on AC frequency and hydrogen concentration for the suspended sensor with widths of (a) 4 μm, (b) 10 μm, and (c) 40 μm.
Figure 8: Minimum detectable hydrogen concentration depending on substrate types and sensor width.
16 pages, 5201 KiB  
Article
Smartphone Photogrammetric Assessment for Head Measurements
by Omar C. Quispe-Enriquez, Juan José Valero-Lanzuela and José Luis Lerma
Sensors 2023, 23(21), 9008; https://doi.org/10.3390/s23219008 - 6 Nov 2023
Cited by 1 | Viewed by 1944
Abstract
The assessment of cranial deformation is relevant in the field of medicine dealing with infants, especially in paediatric neurosurgery and paediatrics. To address this demand, the smartphone-based solution PhotoMeDAS has been developed, harnessing mobile devices to create three-dimensional (3D) models of infants’ heads and, from them, automatic cranial deformation reports. Therefore, it is crucial to examine the accuracy achievable with different mobile devices under similar conditions so prospective users can consider this aspect when using the smartphone-based solution. This study compares the linear accuracy obtained from three smartphone models (Samsung Galaxy S22 Ultra, S22, and S22+). Twelve measurements are taken with each mobile device using a coded cap on a head mannequin. For processing, three different bundle adjustment implementations are tested with and without self-calibration. After photogrammetric processing, the 3D coordinates are obtained. A comparison is made among spatially distributed distances across the head with PhotoMeDAS vs. ground truth established with a Creaform ACADEMIA 50 white-light 3D scanner. With a homogeneous scale factor for all the smartphones, the results showed that the average accuracy for the S22 smartphone is −1.15 ± 0.53 mm; for the S22+, 0.95 ± 0.40 mm; and for the S22 Ultra, −1.8 ± 0.45 mm. It is worth noting that a substantial improvement is achieved regardless of whether the scale factor is introduced per device.
(This article belongs to the Section Optical Sensors)
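Here is a minimal sketch of how the reported per-device accuracy (mean ± standard deviation of distance differences against the scanner ground truth) can be computed (an assumed formulation; the sample values are illustrative):

```python
import numpy as np

# Minimal sketch (assumed formulation): per-device linear accuracy as the
# mean and standard deviation of the differences between photogrammetric
# distances and the white-light-scanner ground truth.
def accuracy_bias(photo_mm, scanner_mm):
    diffs = np.asarray(photo_mm, dtype=float) - np.asarray(scanner_mm, dtype=float)
    return diffs.mean(), diffs.std(ddof=1)

# Illustrative values only, not data from the paper.
print(accuracy_bias([120.1, 95.3, 140.0], [121.0, 96.5, 141.2]))
```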
Figures:
Figure 1: Materials: (a) PhotoMeDAS coded cap and three targets; (b) mannequin head; (c) data acquisition; (d) PhotoMeDAS app during data acquisition.
Figure 2: Scanning setup: (a) ACADEMIA 50; (b) scanner calibration plate; (c) coded cap with additional round retroreflective targets.
Figure 3: Workflow schema.
Figure 4: Three-dimensional point cloud visualisation in CloudCompare.
Figure 5: ACADEMIA 50: (a) data acquisition; (b) print-out of the data acquisition in VXelements software.
Figure 6: Virtual 3D model: (a) three-dimensional scanning model imported into Agisoft Metashape; (b) measurement of the corner target coordinates.
Figure 7: Measured distances: (a) preauricular distance; (b) lateral distance; (c) maximum length frontal-occipital right distance; (d) maximum length frontal-occipital left distance.
Figure 8: Accuracy bias for Galaxy S22: (a) Processing I, (b) Processing II, and (c) Processing III.
Figure 9: Accuracy bias for Galaxy S22+: (a) Processing I, (b) Processing II, and (c) Processing III.
Figure 10: Accuracy bias for Galaxy S22 Ultra: (a) Processing I, (b) Processing II, and (c) Processing III.
Figure 11: Mean, standard deviation, minimum, and maximum by smartphone and procedure for S22, S22+, and S22 Ultra: (I) Processing I, (II) Processing II, and (III) Processing III.
Figure 12: Distance differences after correcting the scale factor by model and procedure for S22, S22+, and S22 Ultra: (I) Processing I, (II) Processing II, and (III) Processing III.
6 pages, 1620 KiB  
Brief Report
Automatic Alignment Method for Controlled Free-Space Excitation of Whispering-Gallery Resonances
by Davide D’Ambrosio, Marialuisa Capezzuto, Antonio Giorgini, Pietro Malara, Saverio Avino and Gianluca Gagliardi
Sensors 2023, 23(21), 9007; https://doi.org/10.3390/s23219007 - 6 Nov 2023
Cited by 1 | Viewed by 962
Abstract
Whispering-gallery mode microresonators have gained wide popularity as experimental platforms for different applications, ranging from biosensing to nonlinear optics. Typically, the resonant modes of dielectric microresonators are stimulated via evanescent-wave coupling, facilitated using tapered optical fibers or coupling prisms. However, this method poses serious shortcomings due to fabrication and access-related limitations, which could be elegantly overcome by implementing a free-space coupling approach, although additional alignment procedures are needed in this case. To address this issue, we have developed a new algorithm to excite the microresonator automatically. Here, we show the working mechanism and the preliminary results of our experimental method applied to a home-made silica microsphere, using a visible laser beam with a spatial light modulator and software control.
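Here is a minimal sketch of the kind of automated alignment loop the abstract describes: tilting the beam via the SLM and keeping the tilt that maximizes scattered light. This simple hill climb is not the authors' algorithm, and `set_tilt` and `scattered_counts` are hypothetical hardware hooks:

```python
import numpy as np

# Minimal sketch of an automated free-space alignment loop (a simple hill
# climb; NOT the authors' algorithm). `set_tilt` writes a phase-grating tilt
# to the SLM and `scattered_counts` reads the camera; both are hypothetical
# hardware hooks.
def auto_align(set_tilt, scattered_counts, tilt0=0.0, step=0.01, n_iter=100):
    rng = np.random.default_rng()
    best_tilt, best_counts = tilt0, -np.inf
    for _ in range(n_iter):
        trial = best_tilt + step * rng.standard_normal()
        set_tilt(trial)
        counts = scattered_counts()
        if counts > best_counts:  # keep tilts that scatter more light
            best_tilt, best_counts = trial, counts
    set_tilt(best_tilt)
    return best_tilt
```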
Figures:
Figure 1: Experimental setup. A grating-like phase mask is imaged onto the SLM panel to tilt the laser beam by acting on the reflection angle r (shown in turquoise). Light scattered from the resonator is imaged using a CCD camera through a 1:1 telescope. The telescope and a second photodiode (PD) for backscattering detection are omitted for the sake of clarity.
Figure 2: Images recorded with the CCD camera when a WGM is excited (a) or not (b). The corresponding lower panels (c,d) report a 1D integration of the total number of counts. The contribution of the two stray-light spots, visible in both upper panels, was ruled out via software so as to obtain a robust selection criterion to distinguish among the system’s available alignment conditions.
Figure 3: Microresonator’s self-aligned modes, normalized for ease of visualization. In each plot, the tilt angle and the number of iterations are also reported. For N = 100 iterations of the algorithm, the system tends to align on the mode in the lower panel, while for lower N values, the system finds local maxima of the scattered light that correspond to lower-Q whispering-gallery modes (from the top, panels 1 and 2).
Figure 4: Microresonator’s self-aligned modes in the presence of dust particles (normalized). Split modes are visible in the spectra, revealing the presence of scatterers on the surface of the WGMR.
19 pages, 1612 KiB  
Article
Modeling and Optimization of Connected and Automated Vehicle Platooning Cooperative Control with Measurement Errors
by Weiming Luo, Xu Li, Jinchao Hu and Weiming Hu
Sensors 2023, 23(21), 9006; https://doi.org/10.3390/s23219006 - 6 Nov 2023
Cited by 2 | Viewed by 1973
Abstract
This paper presents a cooperative control method for connected and automated vehicle (CAV) platooning, specifically addressing the challenge of sensor measurement errors that can disrupt the stability of the CAV platoon. Initially, the state-space equation of the CAV platooning system was formulated, taking into account the measurement error of onboard sensors. The superposition effect of the sensor measurement errors was statistically analyzed, elucidating its impact on cooperative control in CAV platooning. Subsequently, the application of a Kalman filter was proposed as a means to mitigate the adverse effects of measurement errors. Additionally, the CAV formation control problem was transformed into an optimal control decision problem by introducing an optimal control decision strategy that does not impose pure state variable inequality constraints. The proposed method was evaluated through simulation experiments utilizing real vehicle trajectory data from the Next Generation Simulation (NGSIM). The results demonstrate that the method presented in this study effectively mitigates the influence of measurement errors, thereby enabling coordinated vehicle-following behavior, achieving smooth acceleration and deceleration throughout the platoon, and eliminating traffic oscillations. Overall, the proposed method ensures the stability and comfort of the CAV platooning formation.
(This article belongs to the Section Vehicular Sensing)
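Here is a minimal sketch of one predict/update cycle of a linear Kalman filter as used to suppress sensor measurement noise (standard equations; the state, measurement, and noise models are left to the platooning formulation):

```python
import numpy as np

# Minimal sketch (standard linear Kalman filter equations, assumed to stand
# in for the paper's filter): one predict/update cycle that suppresses
# onboard sensor measurement noise before the controller uses the state.
def kalman_step(x, P, z, F, H, Q, R):
    # Predict the next state and its covariance.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the noisy measurement z.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```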
Figures:
Figure 1: The conceptual flowchart of the proposed method.
Figure 2: The platoon of CAVs.
Figure 3: Acceleration of the leading vehicle.
Figure 4: Control decisions and performance of a single CAV in the platoon: (a) control decisions; (b) deviation-from-equilibrium distance; (c) relative speed.
Figure 5: Optimal control decisions of the following CAVs: (a) affected by sensor measurement errors; (b) eliminating sensor measurement errors with Kalman filtering.
Figure 6: Optimal equilibrium spacing and speed of the following CAVs: (a) equilibrium distance affected by sensor measurement errors; (b) equilibrium distance unaffected by measurement errors; (c) speed affected by sensor measurement errors; (d) speed unaffected by measurement errors.
Figure 7: Control performance of the CAV platoon: (a) deviation-from-equilibrium distance; (b) relative speed.
12 pages, 5944 KiB  
Article
Quartz-Enhanced Photoacoustic Sensor Based on a Multi-Laser Source for In-Sequence Detection of NO2, SO2, and NH3
by Pietro Patimisco, Nicoletta Ardito, Edoardo De Toma, Dominik Burghart, Vladislav Tigaev, Mikhail A. Belkin and Vincenzo Spagnolo
Sensors 2023, 23(21), 9005; https://doi.org/10.3390/s23219005 - 6 Nov 2023
Cited by 4 | Viewed by 1262
Abstract
In this work, we report on the implementation of a multi-quantum cascade laser (QCL) module as an innovative light source for quartz-enhanced photoacoustic spectroscopy (QEPAS) sensing. The source is composed of three different QCLs coupled with a dichroitic beam combiner module that provides an overlapping collimated beam output for all three QCLs. The 3λ-QCL QEPAS sensor was tested for detection of NO2, SO2, and NH3 in sequence in a laboratory environment. Sensitivities of 19.99 mV/ppm, 19.39 mV/ppm, and 73.99 mV/ppm were reached for NO2, SO2, and NH3 gas detection, respectively, with ultimate detection limits of 9 ppb, 9.3 ppb, and 2.4 ppb for these three gases, respectively, at an integration time of 100 ms. The detection limits were well below the values of typical natural abundance of NO2, SO2, and NH3 in air.
(This article belongs to the Special Issue Photonics for Advanced Spectroscopy and Sensing)
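Here is a minimal sketch relating the reported calibration sensitivities to the detection limits: the minimum detection limit follows from the noise floor divided by the slope. The 0.18 mV noise level below is an assumed value chosen to roughly reproduce the reported 9 ppb NO2 limit:

```python
# Minimal sketch (assumed relation): minimum detection limit from the
# calibration slope (mV/ppm) and the lock-in noise floor (mV), in ppb.
def detection_limit_ppb(sensitivity_mv_per_ppm, noise_mv):
    return 1000.0 * noise_mv / sensitivity_mv_per_ppm

# Reported NO2 sensitivity with an assumed 0.18 mV noise level -> ~9 ppb.
print(detection_limit_ppb(19.99, 0.18))
```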
Figures:
Figure 1: (a) Schematic of the internal structure of the 3λ-QCL module. F2 is a band-pass filter; F1 is a low-pass filter; M1 is a mirror. (b) Top view of the SolidWorks 3D model of the 3λ-QCL module without the top and lateral sides.
Figure 2: (a) Combined 7.38 μm and 9.06 μm beam profiles overlapped at 25 cm from the 3λ-QCL module. (b) Combined 7.38 μm and 6.25 μm beam profiles overlapped at 25 cm from the 3λ-QCL module.
Figure 3: (a) Combined 7.38 μm and 9.06 μm beam profiles overlapped at the focal plane of the lens. (b) Combined 7.38 μm and 6.25 μm beam profiles overlapped at the focal plane of the lens. (c) Representation of the three beam spots as circumferences. The 7.41 μm, 6.25 μm, and 9.06 μm beam waists are depicted as green, blue, and red solid circumferences, respectively. The radii of the circumferences are equal to the mean value of the widths of the beam waists along the x- and y-directions. The coordinates of the centers of the circumferences are those of the peak values of the three beam spots.
Figure 4: Schematic of the 3λ-QCL-based QEPAS sensor for NH3, SO2, and NO2 detection. L—lens; ADM—acoustic detection module; DAQ—data acquisition board; PC—personal computer; PM—power meter.
Figure 5: (a) HITRAN simulation of the absorption cross-section of a mixture of 10 ppm of NH3 in N2, a mixture of 10 ppm of SO2 in N2, a mixture of 10 ppm of NO2 in N2, and a mixture of 1% of water vapor in standard air within the emission spectral range of the 9.06 μm QCL. (b) Simulation of the absorption cross-section of a mixture of 10 ppm of NO2 in N2, a mixture of 10 ppm of SO2 in N2, a mixture of 10 ppm of NH3 in N2, and a mixture of 1% of water vapor in N2 within the emission spectral range of the 6.25 μm QCL. (c) Simulation of the absorption cross-section of a mixture of 10 ppm of SO2 in N2, a mixture of 10 ppm of NH3 in N2, a mixture of 10 ppm of NO2 in N2, and a mixture of 1% of water vapor in N2 within the emission spectral range of the 7.38 μm QCL.
Figure 6: (a) QEPAS spectral scans measured for different concentrations of NH3 in N2 and pure N2 using the 9.06 μm QCL. (b) QEPAS spectral scans measured for different concentrations of NO2 in N2 and pure N2 obtained when the 6.25 μm QCL is turned on. (c) QEPAS spectral scans measured for different concentrations of SO2 in N2 and pure N2 using the 7.38 μm QCL. The peak at 275 mA observed for pure N2 is due to residual H2O in the gas line.
Figure 7: (a) QEPAS signal as a function of the NH3 concentration (black squares) with the corresponding best linear fit (red line). (b) QEPAS signal as a function of the NO2 concentration (black squares) with the corresponding best linear fit (red line). (c) QEPAS signal as a function of the SO2 concentration (black squares) with the corresponding best linear fit (red line).
Figure 8: Allan deviation of the QEPAS signal as a function of the lock-in integration time.
Figure 9: QEPAS spectral scans of NH3 (a), NO2 (b), and SO2 (c) in Mix #1; NH3 (d), NO2 (e), and SO2 (f) in Mix #2; NH3 (g), NO2 (h), and SO2 (i) in Mix #3. The peak at 275 mA observed for pure N2 is due to residual H2O in the gas line.
16 pages, 721 KiB  
Article
A Deep Learning Approach for Automatic and Objective Grading of the Motor Impairment Severity in Parkinson’s Disease for Use in Tele-Assessments
by Mehar Singh, Prithvi Prakash, Rachneet Kaur, Richard Sowers, James Robert Brašić and Manuel Enrique Hernandez
Sensors 2023, 23(21), 9004; https://doi.org/10.3390/s23219004 - 6 Nov 2023
Cited by 2 | Viewed by 2359
Abstract
Wearable sensors provide a tool for at-home monitoring of motor impairment progression in neurological conditions such as Parkinson’s disease (PD). This study examined the ability of deep learning approaches to grade the motor impairment severity in a modified version of the Movement Disorders Society-sponsored revision of the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) using low-cost wearable sensors. We hypothesized that expanding training datasets with motion data from healthy older adults (HOAs) and initializing classifiers with weights learned from unsupervised pre-training would improve performance when classifying lower vs. higher motor impairment relative to a baseline deep learning model (XceptionTime). The evaluation used both upper extremity (finger tapping, hand movements, and pronation–supination movements of the hands) and lower extremity (toe tapping and leg agility) tasks consistent with the MDS-UPDRS. Overall, we found a 12.2% improvement in accuracy after expanding the training dataset and pre-training using max-vote inference on hand movement tasks. Moreover, we found that the classification performance improves for every task except toe tapping after the addition of HOA training data. These findings suggest that learning from HOA motion data can implicitly improve the representations of PD motion data for the purposes of motor impairment classification. Further, our results suggest that unsupervised pre-training can improve the performance of motor impairment classifiers without any additional annotated PD data, which may provide a viable path toward a widely deployable telemedicine solution.
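Here is a minimal sketch of the max-vote inference mentioned above, aggregating per-window classifier outputs into a single decision (an assumed formulation):

```python
import numpy as np

# Minimal sketch (assumed formulation): max-vote inference aggregating
# per-window predictions for one task into a single impairment decision.
def max_vote(window_predictions):
    labels, counts = np.unique(window_predictions, return_counts=True)
    return labels[np.argmax(counts)]

print(max_vote([0, 1, 1, 0, 1]))  # -> 1
```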
Figures:
Figure 1: The setup for sensor placement for upper extremity and lower extremity data collection. Images of the hand and the foot are reproduced with permission [18].
Figure 2: The divisions of PD and HOA sequences used to create training, validation, and test groups. Abbreviations: HOA = healthy older adult; PD = Parkinson’s disease.
Figure 3: XceptionTime (XT) block architecture. Abbreviations: B = batch size; BN = batch normalization; f = (number of output channels)/4.
Figure 4: Example predictions, made by one of our unsupervised XceptionTime models, of masked portions of an input motion sequence from an HOA participant. The two colors correspond to the two sensors that make up the multivariate time-series sequence. Abbreviations: HOA = healthy older adult.
Figure 5: The combination of training and validation sets used to train unsupervised learning models and generate pre-trained weights. Abbreviations: HOA = healthy older adult; PD = Parkinson’s disease; W = weights.
Figure 6: A workflow depicting the process of computing CKA similarity indexes for the representations among models with different initializations and training sets. Abbreviations: CKA = centered kernel alignment; FTA = fine-tune all; FTL = fine-tune last, then all; HOA = healthy older adult; PD = Parkinson’s disease; RandInit = random initialization; W = weights.
15 pages, 1087 KiB  
Article
User Experience Evaluation of Upper Limb Rehabilitation Robots: Implications for Design Optimization: A Pilot Study
by Tzu-Ning Yeh and Li-Wei Chou
Sensors 2023, 23(21), 9003; https://doi.org/10.3390/s23219003 - 6 Nov 2023
Cited by 2 | Viewed by 1467
Abstract
With the development of science and technology, people are trying to use robots to assist in stroke rehabilitation training. This study analyzes the results of a formative test to guide the design optimization of an upper limb rehabilitation robot. We invited 21 physical therapists (PTs) and eight occupational therapists (OTs) with no prior experience operating upper limb rehabilitation robots, as well as 4 PTs and 1 OT with such experience. Responses were collected on a Likert scale. The general group scored 3.5 on safety-related topics, while the experienced group scored 4.5. On applicability-related questions, the main function scored 2.3 in the general group and 2.4 in the experienced group, and the training trajectory scored 3.5 in the general group and 5.0 in the experienced group. The overall ease-of-use score was 3.1 in the general group and 3.6 in the experienced group. There was no statistical difference between the two groups. Methods to retouch the trajectory can be designed from the feedback collected in the formative test, with further details, including the smoothness of the trajectory, to be confirmed in the next test. Optimizing the recording process is also important, so that users do not need additional effort to learn it.
Figures:
Figure 1: Upper limb rehabilitation robot U100 (used with permission from HIWIN Technologies Corp., Taichung City, Taiwan).
Figure 2: Experiment process.
Figure 3: The coordinate frames of U100.
10 pages, 14986 KiB  
Communication
Self-Modulated Ghost Imaging in Dynamic Scattering Media
by Ying Yu, Mingxuan Hou, Changlun Hou, Zhen Shi, Jufeng Zhao and Guangmang Cui
Sensors 2023, 23(21), 9002; https://doi.org/10.3390/s23219002 - 6 Nov 2023
Cited by 1 | Viewed by 1230
Abstract
In this paper, self-modulated ghost imaging (SMGI) in a surrounding scattering medium is proposed. Different from traditional ghost imaging, SMGI can take advantage of the dynamic scattering medium that originally degrades the imaging quality, generating pseudo-thermal light for imaging through the dynamic scattering of free particles’ Brownian motion in the scattering environment. Theoretical analysis and simulation were used to establish the relationship between imaging quality and particle concentration. An experimental setup was also built to verify the feasibility of the SMGI. Compared with traditional ghost imaging in terms of reconstructed image quality and evaluation indexes, SMGI yields better image quality, which demonstrates a promising future in dynamic high-scattering media such as dense fog and turbid water.
(This article belongs to the Section Sensing and Imaging)
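Here is a minimal sketch of the standard second-order correlation reconstruction underlying ghost imaging, recovering the object from the covariance between speckle frames and bucket-detector values (illustrative, not the authors' SMGI code):

```python
import numpy as np

# Minimal sketch (standard correlation reconstruction, not the authors'
# SMGI code): recover the object from the covariance between reference
# speckle frames and bucket-detector values.
def ghost_reconstruct(patterns, bucket):
    # patterns: (N, H, W) reference speckle frames; bucket: (N,) intensities.
    patterns = np.asarray(patterns, dtype=float)
    bucket = np.asarray(bucket, dtype=float)
    db = bucket - bucket.mean()
    dp = patterns - patterns.mean(axis=0)
    return np.tensordot(db, dp, axes=1) / len(bucket)
```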
Figures:
Figure 1: Ghost imaging experimental schematic diagram.
Figure 2: (a) Self-modulated experimental schematic diagram. (b) Scene of the laser beam passing through the scattering medium. (c) Pattern of the object.
Figure 3: From (1) to (6), the strength of scattering is increasing. (a1–a6) Intensity distribution received by the detector. (b1–b6) Sequentially corresponding simulation models. The density of particles is (a1,b1) 5 × 10⁴ cm⁻³, (a2,b2) 5 × 10⁵ cm⁻³, (a3,b3) 5 × 10⁶ cm⁻³, (a4,b4) 1 × 10⁷ cm⁻³, (a5,b5) 1.5 × 10⁷ cm⁻³, and (a6,b6) 2 × 10⁷ cm⁻³.
Figure 4: The total intensity transmittance corresponding to each concentration.
Figure 5: (a) The principle diagram of the statistics. (b) Statistical results of average light intensity. (c) The complex amplitude of the light field follows an approximately Gaussian random distribution in time.
Figure 6: (a) The principle diagram of the statistics. (b) Statistical results of average light intensity. (c) The complex amplitude of the light field follows an approximately Gaussian random distribution in the transverse and longitudinal space.
Figure 7: (a,b) Reconstructed images of SMGI and traditional GI, respectively. The results were reconstructed with 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, and 10,000 frame samples in turn. (c) Canny edge detector result. The PSNR (d) and SSIM (e) line charts of SMGI and GI.
Figure 8: Concentration of particles: (a) 0.05 mg/cm³, (b) 1.0 mg/cm³, (c) 1.5 mg/cm³, (d) 2.0 mg/cm³.
20 pages, 6639 KiB  
Article
Long-Range Network of Air Quality Index Sensors in an Urban Area
by Ionut-Marian Dobra, Vladut-Alexandru Dobra, Adina-Alexandra Dobra, Gabriel Harja, Silviu Folea and Vlad-Dacian Gavra
Sensors 2023, 23(21), 9001; https://doi.org/10.3390/s23219001 - 6 Nov 2023
Viewed by 1369
Abstract
In recent times, the escalating pollution within densely populated metropolitan areas has emerged as a significant and pressing concern. Authorities are actively grappling with the challenge of devising solutions to promote a cleaner and more environmentally friendly urban landscape. This paper outlines the potential of establishing a LoRa node network within a densely populated urban environment. Each LoRa node in this network is equipped with an air quality measurement sensor. This interconnected system efficiently transmits all the analyzed data to a gateway, which subsequently sends it to a server or database in real time. These data are then harnessed to create a pollution map for the corresponding area, providing users with the opportunity to assess local pollution levels and their recent variations. Furthermore, this information proves valuable when determining the optimal route between two points in the city, enabling users to select the path with the lowest pollution levels, thus enhancing the overall quality of the urban environment. This advantage contributes to alleviating congestion and reducing the excessive pollution often concentrated behind buildings or on adjacent streets.
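Here is a minimal sketch of how one BME680 reading might be packed into a compact binary uplink so each LoRa transmission stays small (a hypothetical payload layout; the field scaling and order are assumptions, not the authors' protocol):

```python
import struct

# Minimal sketch (hypothetical payload layout, not the authors' protocol):
# pack one BME680 reading into a compact 12-byte LoRa uplink. Temperature is
# scaled to 0.01 degC, humidity to 0.01 %RH, pressure in Pa, gas resistance
# in ohms.
def pack_reading(temp_c, hum_pct, press_pa, gas_ohm):
    return struct.pack("<hHIi",
                       int(temp_c * 100),   # signed 16-bit
                       int(hum_pct * 100),  # unsigned 16-bit
                       int(press_pa),       # unsigned 32-bit
                       int(gas_ohm))        # signed 32-bit

payload = pack_reading(21.37, 48.5, 101325, 52000)
print(len(payload), payload.hex())  # -> 12 <hex string>
```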
Figures:
Figure 1: Communication between end node, gateway, and server.
Figure 2: End node: NUCLEO-WL55JC1 (left side) and gateway: NUCLEO-F746ZG (right side).
Figure 3: Data received by the server.
Figure 4: Pimoroni BME680 air quality sensor.
Figure 5: End node cluster.
Figure 6: Multiple end node clusters.
Figure 7: Gateways’ coverage of a neighborhood in Cluj-Napoca.
Figure 8: End node coverage of a neighborhood in Cluj-Napoca.
Figure 9: End node cluster mounted on a signal pole (highlighted by the arrow).
Figure 10: End nodes’ positions on the TTN Mapper map (green dots), placed within range of a gateway for data transmission.
Figure 11: Diagram of the centralized values for gas measurement from sensors 1, 2, and 3.
Figure 12: Air pollution map and possible routes with different pollution levels (red dots: start and destination points; blue dots: locations of the end nodes).
20 pages, 7311 KiB  
Article
Human Respiration Rate Measurement with High-Speed Digital Fringe Projection Technique
by Anna Lena Lorenz and Song Zhang
Sensors 2023, 23(21), 9000; https://doi.org/10.3390/s23219000 - 6 Nov 2023
Viewed by 2187
Abstract
This paper proposes a non-contact continuous respiration monitoring method based on Fringe Projection Profilometry (FPP). This method aims to overcome the limitations of traditional intrusive techniques by providing continuous monitoring without interfering with normal breathing. The FPP sensor captures three-dimensional (3D) respiratory motion from the chest wall and abdomen, and the analysis algorithms extract respiratory parameters. The system achieved a high Signal-to-Noise Ratio (SNR) of 37 dB with an ideal sinusoidal respiration signal. Experimental results demonstrated that a mean correlation of 0.95 and a mean Root-Mean-Square Error (RMSE) of 0.11 breaths per minute (bpm) were achieved when compared to a reference signal obtained from a spirometer.
(This article belongs to the Special Issue Optical Instruments and Sensors and Their Applications)
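Here is a minimal sketch of estimating the respiration rate in breaths per minute from the extracted chest-volume signal via its dominant spectral peak (an assumed post-processing step, not necessarily the authors' algorithm):

```python
import numpy as np

# Minimal sketch (assumed post-processing, not necessarily the authors'
# algorithm): respiration rate in breaths per minute from the chest-volume
# signal, taken as the dominant spectral peak in the 0.1-1 Hz band.
def respiration_rate_bpm(volume, fs):
    volume = np.asarray(volume, dtype=float)
    volume = volume - volume.mean()
    freqs = np.fft.rfftfreq(len(volume), d=1.0 / fs)
    power = np.abs(np.fft.rfft(volume)) ** 2
    band = (freqs >= 0.1) & (freqs <= 1.0)
    return 60.0 * freqs[band][np.argmax(power[band])]
```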
Figures:
Figure 1: Overview of the developed system, from capturing data to the extraction of the respiratory parameters.
Figure 2: Setup of the 3D sensor prototype.
Figure 3: Comparison between two point clouds with and without preprocessing. Color represents the distance from the sensor, with light gold points being closer and dark blue points being further away.
Figure 4: Maximum variation in the z direction in each pixel over a time frame of 30 s.
Figure 5: Dynamic adaptation of the ROI selection before and after body movement. (a) Depth map of the body centered before movement. (b) Depth map of the body after movement. (c) ROI before movement. (d) ROI after movement.
Figure 6: Comparison of the volume signal before and after baseline correction.
Figure 7: Comparison of the measured signal and the reference signal with and without preprocessing.
Figure 8: Comparison of the measured signal and the reference signal with and without the adaptive ROI selection.
Figure 9: Result of the motion separation algorithm and the introduced binary trust status with different levels of body movement.
Figure 10: Result of the motion separation algorithm for three different breathing patterns and subject behaviors. (a) Shallow breathing with slight body movement. (b) Holding the breath with body movement. (c) Fast breathing.
Figure 11: Comparison of the extracted respiration signal and the reference.
Figure 12: Comparison of the 3D measured RR and BRR and their references.
16 pages, 9569 KiB  
Article
Enhancing UAV Visual Landing Recognition with YOLO’s Object Detection by Onboard Edge Computing
by Ming-You Ma, Shang-En Shen and Yi-Cheng Huang
Sensors 2023, 23(21), 8999; https://doi.org/10.3390/s23218999 - 6 Nov 2023
Cited by 8 | Viewed by 2376
Abstract
A visual camera system combined with a UAV onboard edge computer should provide efficient object detection, a high frame-per-second (FPS) rate for the object of interest, and a wide search capability of the gimbal camera for finding an emergency landing platform and for future reconnaissance missions. This paper proposes an approach to enhance the visual capabilities of such a system by using You Only Look Once (YOLO)-based object detection (OD) accelerated with TensorRT™, an automated visual tracking gimbal camera control system, and multithreaded programming for image transmission to the ground station. With lightweight edge computing (EC), a satisfactory mean average precision (mAP) and a higher FPS rate were achieved via TensorRT-accelerated YOLO onboard the UAV. Four YOLO models were first compared for recognizing objects of interest as landing spots at the home university. The model trained with YOLOv4-tiny was then successfully applied to another field more than 100 km away. The system's ability to accurately recognize a different landing point in new and unknown environments was demonstrated. The proposed approach, with automated visual tracking gimbal control, substantially reduces data transmission and processing time to the ground station, and enables rapid OD at more than 35 FPS on an NVIDIA Jetson™ Xavier NX onboard the UAV. The enhanced visual landing and future reconnaissance capabilities of real-time UAVs were demonstrated. Full article
Show Figures
Figure 1. Architecture of YOLOv4-tiny.
Figure 2. Illustration of landing point detection by the UAV.
Figure 3. Hardware architecture of the UAV.
Figure 4. Control signal and communication wiring diagram of the UAV hardware.
Figure 5. Flowchart of the process of scanning for landing points by the UAV.
Figure 6. Illustration for labelling NCHU's turf, red court, and blue court as sports fields.
Figure 7. Performance of the loss functions of YOLOv3, YOLOv3-tiny, YOLOv4, and YOLOv4-tiny.
Figure 8. Performance of YOLOv3, YOLOv4, YOLOv3-tiny, and YOLOv4-tiny in mAP and FPS.
Figure 9. OD images of the actual identification performance in NCHU by (a) YOLOv4, (b) YOLOv3, (c) YOLOv4-tiny, and (d) YOLOv3-tiny.
Figure 10. Images of the actual identification performance in AURD, at a distance of more than 100 km from the NCHU site where the training dataset was collected.
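The paper deploys YOLO accelerated with TensorRT on the Jetson; as a generic stand-in (not the authors' pipeline), the sketch below runs a YOLOv4-tiny forward pass through OpenCV's DNN module and parses detections. The cfg/weights/image file names and the 0.5 confidence threshold are placeholders, and the CUDA backend lines assume an OpenCV build with CUDA support.

```python
import cv2
import numpy as np

# Hypothetical file names; the trained cfg/weights from the paper are not public.
net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)  # GPU backend, if available
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

frame = cv2.imread("aerial_frame.jpg")
h, w = frame.shape[:2]
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# Each detection row: [cx, cy, bw, bh, objectness, class scores...]
for out in outputs:
    for det in out:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            print(class_id, confidence, (cx - bw / 2, cy - bh / 2, bw, bh))
```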
24 pages, 13986 KiB  
Article
A 3.0 µm Pixels and 1.5 µm Pixels Combined Complementary Metal-Oxide Semiconductor Image Sensor for High Dynamic Range Vision beyond 106 dB
by Satoko Iida, Daisuke Kawamata, Yorito Sakano, Takaya Yamanaka, Shohei Nabeyoshi, Tomohiro Matsuura, Masahiro Toshida, Masahiro Baba, Nobuhiko Fujimori, Adarsh Basavalingappa, Sungin Han, Hidetoshi Katayama and Junichiro Azami
Sensors 2023, 23(21), 8998; https://doi.org/10.3390/s23218998 - 6 Nov 2023
Viewed by 1836
Abstract
We propose a new concept image sensor suitable for viewing and sensing applications. This is a report of a CMOS image sensor with a pixel architecture consisting of a 1.5 μm pixel with four-floating-diffusions-shared pixel structures and a 3.0 μm pixel with an in-pixel capacitor. These pixels are four small quadrate pixels and one big square pixel, also called quadrate–square pixels. They are arranged in a staggered pitch array. The 1.5 μm pixel pitch allows for a resolution high enough to recognize distant road signs. The 3 μm pixel with intra-pixel capacitance provides two types of signal outputs: a low-noise signal with high conversion efficiency and a highly saturated signal output, resulting in a high dynamic range (HDR). Two types of signals with long exposure times are read out from the vertical pixel, and four types of signals are read out from the horizontal pixel. In addition, two signals with short exposure times are read out again from the square pixel. A total of eight different signals are read out. This allows two rows to be read out simultaneously while reducing motion blur. This architecture achieves both an HDR of 106 dB and LED flicker mitigation (LFM), as well as being motion-artifact-free and motion-blur-less. As a result, moving subjects can be accurately recognized and detected with good color reproducibility in any lighting environment. This allows a single sensor to deliver the performance required for viewing and sensing applications. Full article
Show Figures
Figure 1. Requirements for luminance.
Figure 2. (u′, v′) chromaticity diagram, also known as the CIE 1976 UCS (uniform chromaticity scale) diagram.
Figure 3. Pixel configuration.
Figure 4. Pixel array.
Figure 5. Sensor block diagram.
Figure 6. Chip implementation.
Figure 7. Cross-section of quadrate–square pixel.
Figure 8. OCL shape differences. (a) Square OCL; (b) quadrate OCL.
Figure 9. Relationship between OCL shape and quantum efficiency.
Figure 10. Pixel circuit.
Figure 11. Pixel drive line layout.
Figure 12. Square pixel readout. (a) Pixel circuit of square pixel. (b) Potential diagram of square pixel.
Figure 13. Timing sequence.
Figure 14. Photo response of quadrate pixels.
Figure 15. Quantum efficiency of quadrate pixel.
Figure 16. Macbeth chart at 6500 K.
Figure 17. OFG dependency of dark current and PRNU. (a) OFG dependency with vertical transfer gate. (b) OFG dependency with planar transfer gate.
Figure 18. OFG dependency of full-well capacity.
Figure 19. Definition of overflow charge location and potential difference.
Figure 20. Exposure time and potential difference that can prevent overflow.
Figure 21. Potential difference between TGL and OFG, and PRNU. (a) Actual setting. (b) Simulation.
Figure 22. OFG dependency of dark current in the FC.
Figure 23. White spot of FC. (a) VTG structure. (b) PTG structure.
Figure 24. Cross-section of square pixel.
Figure 25. OFG dependency on FWC and PRNU.
Figure 26. Photo response of square pixels.
Figure 27. SNR curve of synthesized signal.
Figure 28. Quantum efficiency (square pixel).
Figure 29. Image of a moving object. (a) One-shot HDR. (b) DOL HDR. (c) Quadrate–square HDR.
Figure 30. Motion detection flow.
Figure 31. Image of a road sign. (a) 3 μm pixel. (b) 2.25 μm pixel. (c) Quadrate–square pixel.
Figure 32. The interpolation process. (a) 3.0 μm Bayer array. (b) Quadrate–square pixel array.
Figure 33. Synthesized image. (a) Square pixel PD + FC + quadrate pixel RGC. (b) Quadrate pixel gray image.
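For intuition about the headline number, the standard dynamic-range definition (a textbook relation, not a formula quoted from the paper) links the 106 dB figure to the ratio between the largest and smallest resolvable signals:

```latex
\mathrm{DR}\;[\mathrm{dB}] = 20\log_{10}\frac{S_{\max}}{S_{\min}},
\qquad
106~\mathrm{dB} \;\Rightarrow\; \frac{S_{\max}}{S_{\min}} = 10^{106/20} \approx 2\times 10^{5},
```

so the sensor spans roughly five decades of scene luminance in a single capture.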
34 pages, 8989 KiB  
Systematic Review
Human Posture Estimation: A Systematic Review on Force-Based Methods—Analyzing the Differences in Required Expertise and Result Benefits for Their Utilization
by Sebastian Helmstetter and Sven Matthiesen
Sensors 2023, 23(21), 8997; https://doi.org/10.3390/s23218997 - 6 Nov 2023
Cited by 1 | Viewed by 2495
Abstract
Force-based human posture estimation (FPE) provides a valuable alternative when camera-based human motion capturing is impractical. It offers new opportunities for sensor integration in smart products for patient monitoring, ergonomic optimization and sports science. Due to the interdisciplinary research on the topic, an overview of existing methods and the required expertise for their utilization is lacking. This paper presents a systematic review following the PRISMA 2020 review process. In total, 82 studies are selected (59 machine learning (ML)-based and 23 digital human model (DHM)-based posture estimation methods). The ML-based methods use input data from hardware sensors (mostly pressure mapping sensors) and trained ML models for estimating human posture. The ML-based human posture estimation algorithms mostly reach an accuracy above 90%. DHMs, which represent the structure and kinematics of the human body, adjust posture to minimize physical stress. The required expert knowledge for the utilization of these methods and their resulting benefits are analyzed and discussed. DHM-based methods have shown their general applicability without the need for application-specific training but require expertise in human physiology. ML-based methods can be used with less domain-specific expertise, but an application-specific training of these models is necessary. Full article
(This article belongs to the Special Issue Sensing Technology and Wearables for Physical Activity)
Show Figures
Figure 1. A general schema for the data processing of human posture estimation in computer vision on 2D RGB images (right-hand side) and examples of identified human postures in diverse activity contexts (left-hand side), described in a graphical abstract by Ben Gamra and Akhloufi [26] (reprinted with permission from [26], 2023 Elsevier).
Figure 2. Different human postures that are estimated based on force data in representative studies in the literature review. (a) Human sitting postures classified by pressure data from a sensor on top of a chair [29]; (b) sleeping postures that can be estimated using ML-based algorithms [13] (reprinted with permission from [13,29], 2023 Springer Nature).
Figure 3. Flowchart of the review process according to PRISMA 2020 [30].
Figure 4. Sandwich structures of film pressure sensors consisting of a top and bottom copper layer connected by a piezoresistive ink: (left side) single-point sensor and (right side) sensor matrix [33] (reprinted with permission from [33], 2023 Springer Nature).
Figure 5. (a) Sample of the textile pressure sensors self-manufactured by [43]; (b) textile pressure sensors attached to expandable fabric pants [43] (reprinted with permission from [43], 2023 MDPI).
Figure 6. Visualization of the dynamic digital human model used by Rahmati and Mallakzadeh [78] for the posture estimation of a weightlifting sportsperson (reprinted with permission from [78], 2023 Elsevier B.V.).
Figure 7. Using the RAMIS DHM for estimating the driver's posture while closing a car door (left: force-based estimated posture; right: video-recorded real posture; green arrows display the external forces for the FPE) [90].
Figure 8. Comparison of estimated lower-body motion and a motion-captured ground truth [102] (reprinted with permission from [102], 2023 IEEE Xplore).
Figure 9. Sankey diagram showing the relations between input data sources, estimation methods and applications, including the number of studies found.
Figure 10. Histogram of published studies on force-based human posture estimation over the years, divided into machine learning (ML)-based and digital human model (DHM)-based estimation methods.
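To make the ML branch of the reviewed pipelines concrete, here is a minimal sketch of the typical pressure-map-to-posture-class workflow (with random placeholder data; the array sizes, the four-class setup, and all parameters are assumptions, not taken from any reviewed study):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical data: 32x32 pressure maps from a chair sensor, 4 sitting postures.
rng = np.random.default_rng(0)
X = rng.random((800, 32 * 32))    # flattened pressure frames, one row per sample
y = rng.integers(0, 4, size=800)  # posture labels from manual annotation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```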
13 pages, 4507 KiB  
Communication
Low-Loss Paper-Substrate Triple-Band-Frequency Reconfigurable Microstrip Antenna for Sub-7 GHz Applications
by Ajit Kumar Singh, Santosh Kumar Mahto, Rashmi Sinha, Mohammad Alibakhshikenari, Ahmed Jamal Abdullah Al-Gburi, Ashfaq Ahmad, Lida Kouhalvandi, Bal S. Virdee and Mariana Dalarsson
Sensors 2023, 23(21), 8996; https://doi.org/10.3390/s23218996 - 6 Nov 2023
Cited by 3 | Viewed by 1662
Abstract
In this paper, a low-cost resin-coated commercial photo-paper substrate is used to design a printed reconfigurable multiband antenna. Two PIN diodes are used mainly to redistribute the surface current, which provides the reconfigurable properties of the proposed antenna. The antenna, with a size of 40 mm × 40 mm × 0.44 mm and a partial ground plane, covers wireless and mobile bands ranging from 1.91 GHz to 6.75 GHz. A parametric analysis is performed to achieve optimized design parameters of the antenna. The U-shaped and C-shaped emitters are meant to function at 2.4 GHz and 5.9 GHz, respectively, while the primary emitter is designed to operate at 3.5 GHz. The proposed antenna achieved a peak gain and radiation efficiency of 3.4 dBi and 90%, respectively. Simulated and measured results of the reflection coefficient, radiation pattern, gain, and efficiency show that the antenna design is in favorable agreement. Since the proposed antenna achieves wideband operation (1.91–6.75 GHz) using the PIN diode configuration, the need for numerous electronic components to provide multiband frequency coverage is avoided. Full article
(This article belongs to the Special Issue Metasurface-Based Antennas for 5G and Beyond)
Show Figures
Figure 1. The evolution of the proposed antenna: (a) step-I; (b) step-II; (c) step-III; (d) step-IV; (e) reflection coefficient; (f) peak gain; and current distributions at (g) step-I, 2.4 GHz; (h) step-II, 2 GHz; (i) step-III, 3.2 GHz; and (j) step-IV, 5.6 GHz.
Figure 2. (a) Proposed antenna (simulated layout) and fabricated antenna; (b) front view; (c) back view; and (d) equivalent circuits for the PIN diode ON and OFF conditions.
Figure 3. Parametric analysis of the proposed antenna: (a) ground length (Lg); (b) feed width (Wf).
Figure 4. S-parameter characteristics: (a) reflection coefficient when D1 = D2 = ON, OFF; (b) measured reflection coefficient when D1 = D2 = ON; (c) reflection coefficient when D1 = D2 = ON-OFF, OFF-ON; and (d) measured reflection coefficient when D1 = OFF, D2 = ON.
Figure 5. (a) Peak gain (D1 = D2 = ON, OFF); (b) peak gain (D1 = ON, OFF, D2 = ON, OFF); (c) radiation efficiency; E- and H-plane patterns at (d) 3.5 GHz (ON-ON condition) and (e) 3.5 GHz (OFF-OFF condition); surface current at (f) 2.48 GHz, (g) 3.5 GHz, and (h) 5.6 GHz; (i) 3D polar plot at 2.48 GHz; and (j) radiation pattern measurement in the anechoic chamber.
Figure 6. (a) Simulated bending and (b) reflection coefficient versus frequency for bending analysis along the X and Y axes.
14 pages, 951 KiB  
Communication
Anti-Swing Control for Quadrotor-Slung Load Transportation System with Underactuated State Constraints
by Feng Ding, Chong Sun and Shunfan He
Sensors 2023, 23(21), 8995; https://doi.org/10.3390/s23218995 - 6 Nov 2023
Cited by 2 | Viewed by 1290
Abstract
Quadrotors play a crucial role in the national economy. The control technology for quadrotor-slung load transportation systems has become a research hotspot. However, the underactuated load’s swing poses significant challenges to the stability of the system. In this paper, we propose a Lyapunov-based control strategy, to ensure the stability of the quadrotor-slung load transportation system while satisfying the constraints of the load’s swing angles. Firstly, a position controller without swing angle constraints is proposed, to ensure the stability of the system. Then, a barrier Lyapunov function based on the load’s swing angle constraints is constructed, and an anti-swing controller is designed to guarantee the states’ asymptotic stability. Finally, a PD controller is designed, to drive the actual angles to the virtual ones, which are extracted from the position controller. The effectiveness of the control method is verified by comparing it to the results of the LQR algorithm. The proposed control method not only guarantees the payload’s swing angle constraints but also reduces energy consumption. Full article
(This article belongs to the Section Sensors and Robotics)
Show Figures
Figure 1. The structure and coordinates of the quadrotor-slung load transportation system.
Figure 2. The control structure of the QSLTS.
Figure 3. The position of the quadrotor.
Figure 4. The position of the payload.
Figure 5. The force and energy consumption on the quadrotor.
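For readers unfamiliar with the constraint-handling idea in this paper, a common log-type barrier Lyapunov candidate (shown for intuition; the authors' exact construction may differ) for a swing-angle error e bounded by k_b is

```latex
V_b(e) = \frac{1}{2}\ln\frac{k_b^{2}}{k_b^{2}-e^{2}}, \qquad |e(0)| < k_b,
```

which grows without bound as |e| approaches k_b, so keeping V_b bounded along closed-loop trajectories keeps the swing angle strictly inside the constraint set.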
21 pages, 1991 KiB  
Article
A Deep Learning Framework for Anesthesia Depth Prediction from Drug Infusion History
by Mingjin Chen, Yongkang He and Zhijing Yang
Sensors 2023, 23(21), 8994; https://doi.org/10.3390/s23218994 - 6 Nov 2023
Viewed by 2230
Abstract
In the target-controlled infusion (TCI) of propofol and remifentanil intravenous anesthesia, accurate prediction of the depth of anesthesia (DOA) is very challenging. Patients with different physiological characteristics have inconsistent pharmacodynamic responses during different stages of anesthesia. For example, in TCI, older adults transition smoothly from the induction period to the maintenance period, while younger adults are more prone to anesthetic awareness, resulting in different DOA data distributions among patients. To address these problems, a deep learning framework that incorporates domain adaptation and knowledge distillation and uses propofol and remifentanil doses at historical moments to continuously predict the bispectral index (BIS) is proposed in this paper. Specifically, a modified adaptive recurrent neural network (AdaRNN) is adopted to address data distribution differences among patients. Moreover, a knowledge distillation pipeline is developed to train the prediction network by enabling it to learn intermediate feature representations of the teacher network. The experimental results show that our method exhibits better performance than existing approaches during all anesthetic phases in the TCI of propofol and remifentanil intravenous anesthesia. In particular, our method outperforms some state-of-the-art methods in terms of root mean square error and mean absolute error by 1 and 0.8, respectively, in the internal dataset as well as in the publicly available dataset. Full article
(This article belongs to the Special Issue Biosignal Sensing and Analysis for Healthcare Monitoring)
Show Figures
Figure 1. Visualization of the results of the baseline method [12]. The baseline method performs poorly during the induction and recovery periods. In addition, the baseline model predicts that the patient's BIS is approximately 40 during the maintenance period. However, some patients have BIS values of approximately 50 or 30 during the maintenance period.
Figure 2. An overview of our deep learning framework for DOA prediction. The teacher model has an extra input (i.e., the BIS history), which allows the teacher model to learn a more accurate representation of the DOA and transfer the DOA representation to the student model through various kinds of layers by knowledge distillation. The AdaRNN model includes temporal distribution characterization (TDC) and temporal distribution matching (TDM), which are used to determine the K-segment intervals with the largest distribution differences and reduce cross-domain distribution differences according to the drug infusion history, thereby improving the generalizability of the model. GRU denotes the gated recurrent unit, and FC denotes the fully connected network.
Figure 3. An overview of the temporal distribution characterization (TDC) approach, which divides the drug infusion history data into K intervals and obtains the largest distribution difference between every two intervals.
Figure 4. Overview of the temporal distribution matching (TDM), which reduces cross-domain shifts in the K intervals according to the GRU output.
Figure 5. Visualization of the test cases with other compared methods in the VitalDB dataset.
Figure 6. Visualization of the test cases with other compared methods in our in-house dataset.
Figure 7. Ablation analysis of the various methods in all periods for different evaluation metrics. (a) RMSE; (b) MAE; (c) MDPE, where * denotes a negative value; (d) MDAPE.
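As a baseline-style reference point (deliberately without the AdaRNN domain adaptation or the knowledge-distillation pipeline that the paper adds on top), a plain recurrent regressor from drug infusion history to BIS could be sketched as follows; the window length, feature count, and layer sizes are assumptions:

```python
import numpy as np
import tensorflow as tf

# Hypothetical shapes: 180 time steps of (propofol dose, remifentanil dose) -> next BIS.
T, F = 180, 2
model = tf.keras.Sequential([
    tf.keras.Input(shape=(T, F)),
    tf.keras.layers.GRU(64),          # recurrent encoder of the infusion history
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),         # BIS regression output
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

X = np.random.rand(256, T, F).astype("float32")            # placeholder histories
y = (40 + 10 * np.random.rand(256, 1)).astype("float32")   # placeholder BIS targets
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```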
17 pages, 3904 KiB  
Article
Analysis of the Impact of Atmospheric Models on the Orbit Prediction of Space Debris
by Yigao Ding, Zhenwei Li, Chengzhi Liu, Zhe Kang, Mingguo Sun, Jiannan Sun and Long Chen
Sensors 2023, 23(21), 8993; https://doi.org/10.3390/s23218993 - 6 Nov 2023
Cited by 1 | Viewed by 1933
Abstract
Atmospheric drag is an important influencing factor in precise orbit determination and the prediction of low-orbit space debris. It has received widespread attention. Currently, calculating atmospheric drag mainly relies on different atmospheric density models. This experiment was designed to explore the impact of different atmospheric density models on the orbit prediction of space debris. In the experiment, satellite laser ranging data published by the ILRS (International Laser Ranging Service) were used as the basis for the precise orbit determination for space debris. The prediction error of space debris orbits at different orbital heights using different atmospheric density models was used as a criterion to evaluate the impact of atmospheric density models on the determination of space-target orbits. Eight atmospheric density models, DTM78, DTM94, DTM2000, J71, RJ71, JB2006, MSIS86, and NRLMSISE00, were compared in the experiment. The experimental results indicated that the DTM2000 atmospheric density model is best for determining and predicting the orbits of LEO (low-Earth-orbit) targets. Full article
Show Figures
Figure 1. F10.7 solar radio flux versus time.
Figure 2. Comparison of the prediction accuracies of different atmospheric density models using the SpinSat satellite from 22 May 2015.
Figure 3. Comparison of the prediction accuracies of different atmospheric density models using the GRACE-A satellite on 5 July 2015.
Figure 4. Comparison of the prediction accuracies of different atmospheric density models using the CryoSat2 satellite on 28 January 2015.
Figure 5. Comparison of the prediction accuracies of different atmospheric density models using the Stella satellite on 27 May 2015.
Figure 6. Comparison of the prediction accuracies of different atmospheric density models using the Ajisai satellite on 5 February 2015.
Figure 7. Comparison of prediction accuracies of different atmospheric density models using the Ajisai satellite, observed from a single station (7090) on 5 December 2015.
Figure 8. Comparison of prediction accuracies of different atmospheric density models using the CryoSat2 satellite, observed from a single station (7090) on 8 February 2015.
Figure 9. Comparison of prediction accuracies of different atmospheric density models using the GRACE-A satellite, observed from a single station (7090) on 8 June 2015.
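All eight density models enter the orbit propagation through the same standard cannonball drag term (a textbook formula, not quoted from the paper):

```latex
\ddot{\mathbf{r}}_{\mathrm{drag}}
= -\frac{1}{2}\, C_D\, \frac{A}{m}\, \rho(\mathbf{r},t)\,
\lVert\mathbf{v}_{\mathrm{rel}}\rVert\, \mathbf{v}_{\mathrm{rel}},
```

where rho is the density supplied by the chosen model (DTM2000, NRLMSISE00, etc.), C_D is the drag coefficient, A/m is the area-to-mass ratio, and v_rel is the velocity relative to the co-rotating atmosphere; the experiment effectively varies only rho while holding the rest of the force model fixed.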
19 pages, 737 KiB  
Article
A Rivest–Shamir–Adleman-Based Robust and Effective Three-Factor User Authentication Protocol for Healthcare Use in Wireless Body Area Networks
by Kaijun Liu, Guosheng Xu, Qiang Cao, Chenyu Wang, Jingjing Jia, Yuan Gao and Guoai Xu
Sensors 2023, 23(21), 8992; https://doi.org/10.3390/s23218992 - 5 Nov 2023
Viewed by 1295
Abstract
In healthcare, wireless body area networks (WBANs) can be used to constantly collect patient body data and assist in real-time medical services for patients from physicians. In such security- and privacy-critical systems, the user authentication mechanism is fundamentally expected to prevent illegal access and privacy leakage caused by hacker intrusion. Currently, a significant number of new WBAN-oriented authentication protocols have been designed to verify user identity and ensure that body data are accessed only with a session key. However, those newly published protocols still unavoidably compromise session key security and user privacy due to the lack of forward secrecy, mutual authentication, user anonymity, etc. To solve this problem, this paper designs a robust user authentication protocol. By checking the integrity of the message sent by the other party, each communication entity verifies the validity of the other party's identity. Compared with existing protocols, the presented protocol enhances security and privacy while maintaining computational efficiency. Full article
Show Figures
Figure 1. Network model of WBANs.
Figure 2. System architecture of WBANs in healthcare.
Figure 3. System model in the proposed scheme.
Figure 4. Comparison of communication and computation costs in all schemes [28,31,34].
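The protocol's core idea, verifying the peer's identity by checking message integrity under a shared session secret, can be illustrated with a minimal Python sketch. This shows only the generic integrity-check mechanism (here via HMAC-SHA256), not the paper's RSA-based three-factor construction; the key handling and message format are assumptions:

```python
import hmac
import hashlib
import os

# Hypothetical session setup: both parties already share a secret key
# (in the protocol this would be derived during authenticated key agreement).
key = os.urandom(32)

def make_message(payload: bytes) -> bytes:
    """Append an integrity tag so the receiver can verify the sender."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def verify_message(message: bytes):
    """Recompute the tag; a mismatch means tampering or a wrong identity."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

msg = make_message(b"body-sensor reading: 72 bpm")
assert verify_message(msg) == b"body-sensor reading: 72 bpm"
assert verify_message(msg[:-1] + b"\x00") is None  # a corrupted tag is rejected
```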
20 pages, 12092 KiB  
Article
Low-Cost Optimized U-Net Model with GMM Automatic Labeling Used in Forest Semantic Segmentation
by Alexandru-Toma Andrei and Ovidiu Grigore
Sensors 2023, 23(21), 8991; https://doi.org/10.3390/s23218991 - 5 Nov 2023
Cited by 1 | Viewed by 1569
Abstract
Currently, Convolutional Neural Networks (CNN) are widely used for processing and analyzing image or video data, and an essential part of state-of-the-art studies rely on training different CNN architectures. They have broad applications, such as image classification, semantic segmentation, or face recognition. Regardless of the application, one of the important factors influencing network performance is the use of a reliable, well-labeled dataset in the training stage. Most of the time, especially in the case of semantic segmentation, labeling is time- and resource-consuming and must be done manually by a human operator. This article proposes an automatic label generation method based on the Gaussian mixture model (GMM) unsupervised clustering technique. The other main contribution of this paper is the optimization of the hyperparameters of the traditional U-Net model to achieve a balance between high performance and the least complex structure for implementing a low-cost system. The results showed that the proposed method decreased the resources needed, computation time, and model complexity while maintaining accuracy. Our methods have been tested in a deforestation monitoring application by successfully identifying forests in aerial imagery. Full article
(This article belongs to the Special Issue Machine Learning Based Remote Sensing Image Classification)
Show Figures
Figure 1. Samples of the aerial imagery. (a) RGB image; (b) CIR image.
Figure 2. Algorithm diagram.
Figure 3. GMM training image. (a) RGB image; (b) CIR image.
Figure 4. Results of GMM-based clustering.
Figure 5. Information criterion values for each number of clusters. (a) AIC; (b) BIC.
Figure 6. Forest sample. (a) Ground truth; (b) clusters.
Figure 7. Image filtering. (a) Ground truth; (b) merged clustered image; (c) filtered image.
Figure 8. Samples of the labeled training dataset.
Figure 9. U-Net architecture.
Figure 10. Encoder (a) and decoder (b) block diagrams.
Figure 11. Training histories for different learning rates.
Figure 12. Scenario performances.
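As a sketch of the automatic-labeling step (not the authors' code; the channel layout, the component count, and which clusters count as forest are assumptions; the paper selects the number of clusters via AIC/BIC and validates clusters against ground truth), the GMM pseudo-labeling could look like:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical input: an H x W x 4 aerial tile (R, G, B, NIR channels).
H, W, C = 256, 256, 4
tile = np.random.rand(H, W, C)   # placeholder for a real image tile
pixels = tile.reshape(-1, C)     # one sample per pixel

# Fit a GMM and assign each pixel to a cluster.
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
cluster_map = gmm.fit_predict(pixels).reshape(H, W)

# Pseudo-labels: mark the clusters a human inspector identified as forest.
forest_clusters = {1, 3}         # assumed, chosen by visual inspection
label_mask = np.isin(cluster_map, list(forest_clusters)).astype(np.uint8)
```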
19 pages, 933 KiB  
Article
Fast DOA Estimation Algorithms via Positive Incremental Modified Cholesky Decomposition for Augmented Coprime Array Sensors
by Jing Song, Lin Cao, Zongmin Zhao, Dongfeng Wang and Chong Fu
Sensors 2023, 23(21), 8990; https://doi.org/10.3390/s23218990 - 5 Nov 2023
Viewed by 1318
Abstract
This paper proposes a fast direction of arrival (DOA) estimation method based on positive incremental modified Cholesky decomposition atomic norm minimization (PI-CANM) for augmented coprime array sensors. The approach incorporates coprime sampling on the augmented array to generate a non-uniform, discontinuous virtual array. It then utilizes interpolation to convert this into a uniform, continuous virtual array. Based on this, the problem of DOA estimation is equivalently formulated as a gridless optimization problem, which is solved via atomic norm minimization to reconstruct a Hermitian Toeplitz covariance matrix. Furthermore, by positive incremental modified Cholesky decomposition, the covariance matrix is transformed from positive semi-definite to positive definite, which simplifies the constraint of the optimization problem and reduces the complexity of the solution. Finally, the Multiple Signal Classification (MUSIC) method is utilized to carry out statistical signal processing on the reconstructed covariance matrix, yielding initial DOA angle estimates. Experimental outcomes highlight that the PI-CANM algorithm surpasses other algorithms in estimation accuracy, demonstrating stability in difficult circumstances such as low signal-to-noise ratios and limited snapshots. Additionally, it boasts an impressive computational speed. This method enhances both the accuracy and computational efficiency of DOA estimation, showing potential for broad applicability. Full article
(This article belongs to the Section Intelligent Sensors)
Show Figures
Figure 1. Overview of algorithm application scenarios, framework structure, and simulation results.
Figure 2. Schematic diagram of the augmented coprime array structure. (a) Two uniform linear subarrays. (b) Augmented coprime array composed of two sparse uniform linear arrays.
Figure 3. Weight coefficients corresponding to virtual array elements at different positions.
Figure 4. Illustration of various array representations with an example of 2M = 6 and N = 5. S represents the augmented coprime array. D is the virtual array derived from the difference coarray of the augmented coprime array. U represents the contiguous part of the virtual array. V is the interpolated virtual array.
Figure 5. Resolution effect in terms of the normalized spatial spectrum with the number of snapshots L = 200. The vertical dashed lines denote the actual directions of the incident sources. (a) SS-MUSIC algorithm. (b) CO-LASSO algorithm. (c) SBL algorithm. (d) NNM algorithm. (e) ANM algorithm. (f) Proposed algorithm.
Figure 6. Resolution effect in terms of the normalized spatial spectrum with the number of snapshots L = 500. The vertical dashed lines denote the actual directions of the incident sources. (a) SS-MUSIC algorithm. (b) CO-LASSO algorithm. (c) SBL algorithm. (d) NNM algorithm. (e) ANM algorithm. (f) Proposed algorithm.
Figure 7. RMSE performance comparison with a single incident source. (a) RMSE versus SNR with the number of snapshots T = 100. (b) RMSE versus the number of snapshots with SNR = 0 dB.
Figure 8. Runtime performance versus the number of sensors.
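The final step, running MUSIC on the reconstructed Hermitian Toeplitz covariance matrix, is standard; a textbook implementation for a uniform (virtual) linear array might look like the following Python sketch (the half-wavelength spacing and the function interface are assumptions, not the authors' code):

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg, d=0.5):
    """MUSIC pseudo-spectrum for a uniform linear array.

    R: M x M covariance matrix (e.g., the Toeplitz matrix reconstructed by ANM).
    d: element spacing in wavelengths.
    """
    M = R.shape[0]
    eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
    En = eigvecs[:, : M - n_sources]       # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        # Steering vector of the ULA toward angle theta.
        a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))
        denom = np.linalg.norm(En.conj().T @ a) ** 2
        spectrum.append(1.0 / denom)       # peaks at the source directions
    return np.asarray(spectrum)

angles = np.linspace(-90, 90, 721)
# R would come from the PI-CANM-reconstructed covariance; any Hermitian PSD matrix works here.
```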
20 pages, 5818 KiB  
Article
Recognition of Grasping Patterns Using Deep Learning for Human–Robot Collaboration
by Pedro Amaral, Filipe Silva and Vítor Santos
Sensors 2023, 23(21), 8989; https://doi.org/10.3390/s23218989 - 5 Nov 2023
Cited by 2 | Viewed by 2208
Abstract
Recent advances in the field of collaborative robotics aim to endow industrial robots with prediction and anticipation abilities. In many shared tasks, the robot’s ability to accurately perceive and recognize the objects being manipulated by the human operator is crucial to make predictions about the operator’s intentions. In this context, this paper proposes a novel learning-based framework to enable an assistive robot to recognize the object grasped by the human operator based on the pattern of the hand and finger joints. The framework combines the strengths of the commonly available software MediaPipe in detecting hand landmarks in an RGB image with a deep multi-class classifier that predicts the manipulated object from the extracted keypoints. This study focuses on the comparison between two deep architectures, a convolutional neural network and a transformer, in terms of prediction accuracy, precision, recall and F1-score. We test the performance of the recognition system on a new dataset collected with different users and in different sessions. The results demonstrate the effectiveness of the proposed methods, while providing valuable insights into the factors that limit the generalization ability of the models. Full article
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)
Show Figures
Figure 1. The prototype collaborative cell LARCC.
Figure 2. The proposed learning-based framework combines the MediaPipe pre-trained models for hand/object detection and tracking with a multi-class classifier for object recognition based on the hand keypoints.
Figure 3. The objects used in the study include a water bottle, a Rubik’s cube, a smartphone and a screwdriver.
Figure 4. The keypoints projected in the image (left) and the normalized representation (right).
Figure 5. The distribution of test dataset samples from each class within each cluster.
Figure 6. “Full Dataset” confusion matrices: (a) CNN model, (b) transformer model.
Figure 7. “Session-Based Testing” confusion matrices (CNN model): (a) Session 1, (b) Session 2, (c) Session 3, and (d) Session 4.
Figure 8. “Full User Dataset” confusion matrices (CNN model): (a) User1, (b) User2, and (c) User3.
Figure 9. “Session-Based User1 Testing” confusion matrices (CNN model): (a) Session 1, (b) Session 2, (c) Session 3, and (d) Session 4.
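A minimal sketch of the keypoint-extraction front end, using the public MediaPipe Hands API, is given below; the wrist-relative normalization and the file name are assumptions (the paper's exact normalization is the one shown in Figure 4), and the classifier that consumes the feature vector is omitted:

```python
import cv2
import mediapipe as mp
import numpy as np

# Extract the 21 hand landmarks that feed the multi-class object classifier.
hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

image = cv2.imread("grasp_frame.jpg")   # hypothetical input frame
results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_hand_landmarks:
    lm = results.multi_hand_landmarks[0].landmark
    keypoints = np.array([[p.x, p.y, p.z] for p in lm])  # 21 x 3
    keypoints -= keypoints[0]       # assumed normalization: wrist-relative coordinates
    features = keypoints.flatten()  # 63-dim input to the CNN/transformer classifier
```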
20 pages, 1472 KiB  
Article
Machine Vision System for Automatic Adjustment of Optical Components in LED Modules for Automotive Lighting
by Silvia Satorres Martínez, Diego Manuel Martínez Gila, Sergio Illana Rico and Daniel Teba Camacho
Sensors 2023, 23(21), 8988; https://doi.org/10.3390/s23218988 - 5 Nov 2023
Viewed by 1377
Abstract
This paper presents a machine vision system that performs the automatic positioning of optical components in LED modules of automotive headlamps. The automatic adjustment of the module is a process of great interest at the industrial level, as it reduces rework and thus increases company profits. We propose a machine vision system with a flexible hardware–software structure that allows it to adapt to a wide range of LED modules. Its hardware is composed of image-capturing devices, which enable us to obtain the LED module light pattern, and mechanisms for manipulating and holding the module to be adjusted. Its software design follows a component-based approach, which increases code reuse and decreases the time required to configure any type of LED module. To assess the efficiency and robustness of the industrial system, a series of tests using three commercial models of LED modules was performed. In all cases, the automatically adjusted LED modules complied with the ECE R112 regulation for automotive lighting. Full article
(This article belongs to the Section Vehicular Sensing)
Show Figures
Figure 1. The low-beam lighting pattern. Left: the cut-off line shape; right: low-beam projection in an inspection tunnel.
Figure 2. LED module of a headlamp producing both low- and high-beam light (“BiLED module”).
Figure 3. LED module and manual positioning: (a) folder; (b) submodules in the elliptical module: (1) lens, lens support, and housing; (2) heatsink, flex board, collimators, and folder; and (c) LED module rework station.
Figure 4. Machine vision system for the automatic precision assembly of LED modules.
Figure 5. Deployment diagram describing the hardware architecture.
Figure 6. Image acquisition devices: the photometric tunnel and the colour camera inside the tunnel.
Figure 7. Tooling and the cut-off light pattern for a sample of the eleven discrete positions of the folder.
Figure 8. Component diagram describing the software architecture.
Figure 9. Calibration pattern used to perform the geometric calibration by the MVS.
Figure 10. Measurements obtained from the cut-off pattern: (a) measured sections on the left side of the V point used to obtain the cut-off sharpness; (b) color measurements in the CIE diagram from points placed on the border of the cut-off line.
Figure 11. Algorithm for the optimal position adjustment.
Figure 12. Results of the analysed modules belonging to the TD-IZQ model.
Figure 13. Results of the analysed modules belonging to the IO1-TI-IZQ model.
Figure 14. Results of the analysed modules belonging to the F56-USA-IZQ model.
19 pages, 1484 KiB  
Article
Quantifying Digital Biomarkers for Well-Being: Stress, Anxiety, Positive and Negative Affect via Wearable Devices and Their Time-Based Predictions
by Berrenur Saylam and Özlem Durmaz İncel
Sensors 2023, 23(21), 8987; https://doi.org/10.3390/s23218987 - 5 Nov 2023
Cited by 5 | Viewed by 2908
Abstract
Wearable devices have become ubiquitous, collecting rich temporal data that offers valuable insights into human activities, health monitoring, and behavior analysis. Leveraging these data, researchers have developed innovative approaches to classify and predict time-based patterns and events in human life. Time-based techniques allow the capture of intricate temporal dependencies, which is the nature of the data coming from wearable devices. This paper focuses on predicting well-being factors, such as stress, anxiety, and positive and negative affect, on the Tesserae dataset collected from office workers. We examine the performance of different methodologies, including deep-learning architectures, LSTM, ensemble techniques, Random Forest (RF), and XGBoost, and compare their performances for time-based and non-time-based versions. In the time-based versions, we investigate the effect of previous records of well-being factors on the upcoming ones. The overall results show that time-based LSTM performs the best among conventional (non-time-based) RF, XGBoost, and LSTM. The performance increases further when we consider a more extended previous period, in this case the past 3 days rather than the past 1 day, to predict the next day. Furthermore, we explore the corresponding biomarkers for each well-being factor using feature ranking. The obtained rankings are compatible with the psychological literature. In this work, we validated them based on device measurements rather than subjective survey responses. Full article
(This article belongs to the Special Issue Smart Sensing for Pervasive Health)
Show Figures
Figure 1. The most important features per target: (a) Stress; (b) Anxiety; (c) Positive Affect; (d) Negative Affect.
Figure 2. Modality rankings.
Figure 3. Forecast visuals with XGBoost 7 days prior: (a) Stress; (b) Anxiety; (c) Positive Affect; (d) Negative Affect.
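As an illustration of the time-based setup (a sketch, not the authors' code; the feature count, layer sizes, and windowing helper are assumptions), predicting the next day from the past 3 days of wearable features could be wired as follows:

```python
import numpy as np
import tensorflow as tf

# Hypothetical setup: F daily wearable features, past 3 days -> next-day stress score.
PAST_DAYS, F = 3, 16
model = tf.keras.Sequential([
    tf.keras.Input(shape=(PAST_DAYS, F)),
    tf.keras.layers.LSTM(32),   # captures the temporal dependency across days
    tf.keras.layers.Dense(1),   # regression on the well-being score
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

def make_windows(daily_features, targets, past=PAST_DAYS):
    """Slice per-day records into (past-days window, next-day target) pairs."""
    X = np.stack([daily_features[i : i + past] for i in range(len(targets) - past)])
    y = targets[past:]
    return X, y
```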
22 pages, 10665 KiB  
Article
Design and Implementation of a Prototype Seismogeodetic System for Tectonic Monitoring
by Javier Ramírez-Zelaya, Belén Rosado, Vanessa Jiménez, Jorge Gárate, Luis Miguel Peci, Amós de Gil, Alejandro Pérez-Peña and Manuel Berrocoso
Sensors 2023, 23(21), 8986; https://doi.org/10.3390/s23218986 - 5 Nov 2023
Cited by 1 | Viewed by 1646
Abstract
This manuscript describes the design, development, and implementation of a prototype system based on seismogeodetic techniques, consisting of a low-cost MEMS seismometer/accelerometer, a biaxial inclinometer, a multi-frequency GNSS receiver, and a meteorological sensor, installed at the Doñana Biological Station (Huelva, Spain), which transmits multiparameter data in real and/or deferred time to the control center at the University of Cadiz. The main objective of this system is to detect and monitor the tectonic activity in the Gulf of Cadiz region and adjacent areas, in which important seismic events occur, produced by the interaction of the Eurasian and African plates. The system is also capable of integration into a regional early warning system (EWS) to minimize the consequences of dangerous geological phenomena. Full article
Show Figures
Figure 1. Instrumentation located in the Doñana National Park, Huelva, Spain: (A) the prototype seismogeodetic system; (B) metal tripod that supports the Leica geodetic (AR10) antenna, Leica GNSS receiver, and weather sensor; (C) box containing the GNSS receiver and main communication switch; (D) Vaisala weather sensor (WXT520); (E) concrete chamber located 1 m away from the tripod; (F) Leica GR30 GNSS receiver; (G) contents of the concrete chamber: seismometer, inclinometer, communications connectors, power supply connectors, and desiccant bags that prevent humidity; (H) GNSS receiver UBX–M8030; (I) Raspberry Shake 4D seismometer/accelerometer; (J) biaxial digital tilt logger DTL202B; and (K) Vaisala weather sensor, owned by AEMET.
Figure 2. Network diagram and hardware components of the prototype seismogeodetic system (communications, sensors, servers, virtual machines, NAS, mirror backup, etc.), divided into three parts: the seismogeodetic prototype (Doñana Station), the UCA–HUB, and the control center (LAGC). Initially, the prototype and the UCA–HUB are interconnected by the VPN service provided by the CSIC, facilitating data transmission over the Internet to the management and control center, which has a Citrix XenServer virtual infrastructure with virtual machines providing services and applications dedicated to the automatic acquisition, processing, visualization, and filtering of the data produced.
Figure 3. Structure of the components involved in the implementation of the prototype, divided into two groups and three subgroups covering the hardware, services, communications, and virtual infrastructure of the control center (LAGC–UCA) and Doñana Station, as well as the virtual machines that contain the acquisition, processing, and filtering modules.
Figure 4. Results (E, N, U) of the DONA station time series; the GNSS processing was performed with the BERNESE 5.2 software using ITRF14. This figure shows the time series with the linear fit and the CATS filter, as well as the velocities per component.
Figure 5. Results (E, N, U) of the DONA station time series; the GNSS processing was performed with the BERNESE 5.2 software using ITRF14. In addition, Kalman and wavelet filters were applied.
Figure 6. Seismogram with a simulation of a seismic event (EHZ, ENE, ENN, and ENZ components) to assess the resolution of the Raspberry Shake RS4D seismometer/accelerometer and check the quality of the generated data.
Figure 7. Seismic events displayed in the SEISAN 12.0 software for seismic analysis. (A) Seismogram of the earthquake that occurred at 22:03:49 on 1 January 2022, recorded on the Z channel of the RS4D seismometer with a 2–15 Hz filter. (B) Unfiltered seismogram of the same event with the P and S wave phases and the coda selected. (C) Signal amplification, showing the impulsive arrival of the P-wave and the arrival of the S-wave.
Figure 8. Map showing the geodynamic context, seismic activity (2015–2022), and main faults of the southern region of the Iberian Peninsula and North Africa. The most important faults are the Gorringe Bank region, Gulf of Cadiz (GC), Azores–Gibraltar Fault, Saint Vincent Cape (SVCP), Alboran Sea, Betic mountain ranges, Eastern Betic Shear Zone (EBSZ), Trans-Alboran Shear Zone (TASZ), Horseshoe Abyssal Plain (HAP), Horseshoe Fault (HF), São Vicente Canyon (SVC), Guadalquivir Bank (GVB), and Marquês de Pombal Fault Block (MPFB). In addition, clusters A, B, C, and D are shown, which reflect the concentration and distribution of the seismic epicenters.
Figure 9. Map showing the location of the 4.4 Mw earthquake that occurred on 1 January 2022 in the Gulf of Cadiz (Lat: 36.3276, Lon: −7.6271, depth: 6 km), recorded by the RS4D seismometer/accelerometer. Also shown are seismic events of different magnitudes that occurred between 2005 and 2022 in the Gulf of Cadiz and adjacent areas (data taken from the public seismic catalog of the IGN), the location of the seismogeodetic system at the Doñana Biological Station, Huelva, Spain, the control center (LAGC–UCA), and the focal mechanism produced by the studied earthquake (A).
Figure 10. Seismogram (A) and spectrogram (B) of the earthquake that occurred on 1 January 2022 at 22:03:49, registered by the RS4D seismometer/accelerometer integrated in the prototype. In this seismic signal, a low signal-to-noise ratio was found in certain periods of time, which motivated the use of a first filter of 0.5 Hz to 10 Hz and a later one of 2 Hz to 8 Hz.
Figure 11. East, North, and Vertical components of the GNSS-GPS time series (1 Hz sample rate) for the position of the GR30 GNSS receiver seconds after the magnitude 4.4 Mw earthquake of 1 January 2022, with an epicenter about 130 km southwest of Doñana, Huelva, Spain. A small change in the trend is shown approximately 45 s after the event occurred; this corresponds to the arrival of the seismic wave.
Figure 12. Inclinometry results (30 s sample rate) showing the displacement in both sensors (Tilt 1, Tilt 2), corresponding to the arrival of the seismic wave of the 4.4 Mw earthquake that occurred on 1 January 2022 in the Gulf of Cadiz.
Figure 13. Accelerographic signals from the ALMT (Almonte) station corresponding to the 5.4 Mw earthquake that occurred on 14 August 2022 in the Gulf of Cadiz.
Figure 14. Accelerographic signals from the LEPE station corresponding to the 5.4 Mw earthquake that occurred on 14 August 2022 in the Gulf of Cadiz.
13 pages, 3421 KiB  
Article
Prediction of Three-Directional Ground Reaction Forces during Walking Using a Shoe Sole Sensor System and Machine Learning
by Takeshi Yamaguchi, Yuya Takahashi and Yoshihiro Sasaki
Sensors 2023, 23(21), 8985; https://doi.org/10.3390/s23218985 - 5 Nov 2023
Cited by 2 | Viewed by 2453
Abstract
We developed a shoe sole sensor system with four high-capacity, compact triaxial force sensors using a nitrogen-added chromium (Cr–N) strain-sensitive thin film mounted on the sole of a shoe. Walking experiments were performed, including straight walking and turning (side-step and cross-step turning), with six healthy young male participants and two healthy young female participants wearing the sole sensor system. A regression model to predict three-directional ground reaction forces (GRFs) from the force sensor outputs was created using multiple linear regression (MLR) and Gaussian process regression (GPR). The predicted GRF values were compared with the GRF values measured with a force plate. In the model trained on data from both the straight walking and turning trials, the percent root-mean-square error (%RMSE) for predicting the GRFs was less than 15% in the anteroposterior and vertical directions, but not in the mediolateral direction. The models trained separately for straight walking, side-step turning, and cross-step turning showed a %RMSE of less than 15% in all directions with the GPR model, which is considered accurate for practical use. Full article
(This article belongs to the Collection Wearable and Unobtrusive Biomedical Monitoring)
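As a concrete illustration of the regression setup described in the abstract, the sketch below fits a Gaussian process regressor mapping twelve sensor channels to one GRF component and scores it with a percent RMSE; the synthetic data, the RBF-plus-noise kernel, and the range-normalized %RMSE definition are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
w = rng.normal(size=12)

# Synthetic stand-in data: 12 force-sensor channels -> one GRF component.
X_train = rng.normal(size=(400, 12))
y_train = X_train @ w + 0.05 * rng.normal(size=400)
X_test = rng.normal(size=(100, 12))
y_test = X_test @ w + 0.05 * rng.normal(size=100)

# GPR with an RBF kernel plus a noise term (the kernel choice is an assumption).
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_train, y_train)
y_pred, y_std = gpr.predict(X_test, return_std=True)  # std gives a confidence band

# One common %RMSE definition: RMSE normalized by the measured signal's range.
rmse = np.sqrt(np.mean((y_pred - y_test) ** 2))
pct_rmse = 100.0 * rmse / (y_test.max() - y_test.min())
print(f"%RMSE = {pct_rmse:.1f}%")
```

The predictive standard deviation returned by `predict(..., return_std=True)` is what a 95% confidence band like the one in Figure 6 would be drawn from.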
Figure 1
(a) A triaxial force sensor using a Cr–N thin film, and (b) schematic diagram of the structure of the sensor and layout of the Cr–N thin film.
Figure 2">
Figure 2
Sole sensor system. MT1 and MT5 represent the first and fifth metatarsal heads, respectively. (a) Location of the four triaxial force sensors, and (b) side view of the sole sensor system.
Figure 3">
Figure 3
Experimental setup for gait trials, and the X, Y, and Z coordinates in the walking experiment system.
Figure 4">
Figure 4
Schematic of footprints for each type of gait trial. (a) Straight walking; (b) side-step turning; (c) cross-step turning.
Figure 5">
Figure 5
Forces obtained by the shoe sole sensor system and ground reaction forces (GRFs) obtained by a force plate. Twelve force sensor outputs (f_xi, f_yi, f_zi, i = 1–4) were used to estimate a model to predict the GRF in each direction.
Figure 6">
Figure 6
Examples of time-series changes in the predicted and measured ground reaction force (GRF) values in the test trials for each movement, using the regression model trained on the data of each movement, for (a) straight walking, (b) side-step turning, and (c) cross-step turning. The horizontal axis indicates the normalized stance period, with 0% at heel contact and 100% at toe-off; the vertical axis indicates the measured and predicted GRFs divided by the participant's body mass. The solid black line shows the GRF values measured with the force plate, the solid red line shows the predictions by multiple linear regression (MLR), the solid blue line shows the predictions by Gaussian process regression (GPR), and the light blue shaded area shows the 95% confidence interval of the GPR prediction.
Figure 6 Cont.">
Figure 6 Cont.
19 pages, 4290 KiB  
Article
Acoustic-Sensing-Based Attribute-Driven Imbalanced Compensation for Anomalous Sound Detection without Machine Identity
by Yifan Zhou, Yanhua Long and Haoran Wei
Sensors 2023, 23(21), 8984; https://doi.org/10.3390/s23218984 - 5 Nov 2023
Viewed by 1525
Abstract
Acoustic sensing provides crucial data for anomalous sound detection (ASD) in condition monitoring. However, building a robust acoustic-sensing-based ASD system is challenging because the training data are unsupervised, containing only normal sound samples. Recent discriminative models based on machine identity (ID) classification have shown excellent ASD performance by leveraging strong prior knowledge such as machine IDs. However, such strong priors are often unavailable in real-world applications, which limits these models. To address this, we propose utilizing the imbalanced and inconsistent attribute labels from acoustic sensors, such as machine running speed and microphone model, as weak priors to train an attribute classifier. We also introduce an imbalanced compensation strategy to handle extremely imbalanced categories and ensure model trainability. Furthermore, we propose a score fusion method to enhance the robustness of anomaly detection. The proposed algorithm was applied in our DCASE2023 Challenge Task 2 submission, which ranked sixth internationally. By exploiting the attributes of acoustic sensor data as weak prior knowledge, our approach provides an effective framework for robust ASD when strong priors are absent. Full article
(This article belongs to the Section Intelligent Sensors)
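To make the score fusion step concrete, the sketch below combines two anomaly-score streams with a weight λ (the quantity varied in Figure 8 further down); the min-max normalization, the example score distributions, and the 95th-percentile threshold are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def min_max_normalize(scores: np.ndarray) -> np.ndarray:
    # Map scores to [0, 1] so the two streams are on a comparable scale.
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def fuse_scores(s1: np.ndarray, s2: np.ndarray, lam: float) -> np.ndarray:
    # Convex combination: lam weights the first scorer, (1 - lam) the second.
    return lam * min_max_normalize(s1) + (1.0 - lam) * min_max_normalize(s2)

rng = np.random.default_rng(1)
scores_a = rng.gamma(2.0, 1.0, size=200)   # e.g., classifier-based scores
scores_b = rng.gamma(2.0, 1.5, size=200)   # e.g., embedding-distance scores
fused = fuse_scores(scores_a, scores_b, lam=0.6)
flags = fused > np.quantile(fused, 0.95)   # flag the top 5% as anomalous
```

Sweeping `lam` between 0 and 1, as in Figure 8, shows how much each scorer contributes to the ensemble's robustness.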
Figure 1
The framework of the proposed AIC.
Figure 2">
Figure 2
A schematic diagram of the effects of imbalanced compensation on the data. 'Category' refers to the category of operating status.
Figure 3">
Figure 3
An overview of the datasets.
Figure 4">
Figure 4
The taxonomy of the various labels in the dataset.
Figure 5">
Figure 5
Mel-spectrograms of the 7 machines. (a) ToyCar; (b) ToyTrain; (c) Bearing; (d) Fan; (e) Gearbox; (f) Slider; (g) Valve.
Figure 5 Cont.">
Figure 5 Cont.
Figure 6">
Figure 6
The relationship between the number of samples R in the imbalanced compensation module and model performance.
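As a rough illustration of what such a compensation module might do, the sketch below resamples every attribute category up to R examples; the function name, the R-based oversampling rule, and the toy data are hypothetical, intended only to convey the idea of keeping rare attribute values trainable.

```python
import numpy as np

def compensate(features: np.ndarray, labels: np.ndarray, R: int, seed: int = 0):
    # Hypothetical sketch: for each class, sample with replacement until it
    # holds R examples, so rare attribute values no longer starve the classifier.
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        take = rng.choice(idx, size=R, replace=len(idx) < R)
        xs.append(features[take])
        ys.append(labels[take])
    return np.concatenate(xs), np.concatenate(ys)

X = np.random.randn(120, 8)
y = np.array([0] * 100 + [1] * 15 + [2] * 5)   # heavily imbalanced attributes
X_bal, y_bal = compensate(X, y, R=100)          # every class now has 100 samples
```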
Figure 7">
Figure 7
t-SNE visualization comparison between the attribute classifier and AIC. (a) AC_fan; (b) AIC_fan; (c) AC_slider; (d) AIC_slider.
Figure 8">
Figure 8
System ensemble performance as the score fusion weight λ is varied.
">