Sensors, Volume 24, Issue 9 (May-1 2024) – 293 articles

Cover Story: In millimeter-wave (mmW) applications, efforts are focused on improving communication and sensing systems for range, accuracy, frequency coverage, and tunability. However, mmW signals encounter higher propagation losses when obstructed, leading to signal attenuation and reduced coverage. This work introduces a novel automatic synthesizing method (ASM) utilizing genetic algorithms (GA) to design a 3D transmitting conformal meta-lens. The meta-lens enables beam manipulation, including beam deflection using single, dual, and orbital angular momentum (OAM) beams, addressing the challenges of mmW frequencies. The proposed meta-lens offers potential for low-cost, high-gain beam deflection in sensing applications, facilitating wider 2D beam scanning and independent beam deflection enhancements.
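As background for readers unfamiliar with GA-based synthesis, the sketch below is an illustrative and heavily simplified example of evolving a quantized phase profile so that a small linear array deflects its beam toward a chosen angle; the array size, cell pitch, frequency, and GA settings are assumptions, not parameters from the paper.

```python
# A minimal genetic-algorithm sketch (illustrative only; not the paper's ASM) of
# the kind of search used to synthesize a phase-gradient meta-lens: each unit
# cell carries a quantized transmission phase, and the GA evolves the phase
# profile of a small linear array so its array factor is maximized at a desired
# deflection angle. Array size, spacing, and frequency below are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, PHASE_STATES = 16, 4                    # unit cells and 2-bit phase quantization
freq, spacing = 28e9, 0.5 * (3e8 / 28e9)   # 28 GHz, half-wavelength cell pitch
k = 2 * np.pi * freq / 3e8
target_deg = 30.0

def gain_at(profile, theta_deg):
    """|array factor| at theta for a phase profile given in quantized steps."""
    phases = profile * (2 * np.pi / PHASE_STATES)
    n = np.arange(N)
    af = np.exp(1j * (k * spacing * n * np.sin(np.radians(theta_deg)) + phases))
    return np.abs(af.sum())

pop = rng.integers(0, PHASE_STATES, size=(40, N))
for _ in range(200):
    fitness = np.array([gain_at(ind, target_deg) for ind in pop])
    parents = pop[np.argsort(fitness)[-20:]]                 # truncation selection
    cut = rng.integers(1, N, size=20)
    children = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 20][c:]])
                         for i, c in enumerate(cut)])        # one-point crossover
    mutate = rng.random(children.shape) < 0.05               # per-gene mutation
    children[mutate] = rng.integers(0, PHASE_STATES, mutate.sum())
    pop = np.vstack([parents, children])

best = pop[np.argmax([gain_at(ind, target_deg) for ind in pop])]
print("best phase states:", best, "| gain at 30 deg:", round(gain_at(best, target_deg), 2))
```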
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
18 pages, 3911 KiB  
Article
A Systematic Optimization Method for Permanent Magnet Synchronous Motors Based on SMS-EMOA
by Bo Yuan, Ping Chen, Ershen Wang, Jianrui Yu and Jian Wang
Sensors 2024, 24(9), 2956; https://doi.org/10.3390/s24092956 - 6 May 2024
Cited by 1 | Viewed by 1494
Abstract
The efficient design of Permanent Magnet Synchronous Motors (PMSMs) is crucial for their operational performance. A key design consideration, cogging torque, is significantly influenced by various structural parameters of the motor, complicating the optimization of motor structures. This paper proposes an optimization method for PMSM structures based on heuristic optimization algorithms, named the Permanent Magnet Synchronous Motor Self-Optimization Lift Algorithm (PMSM-SLA). Initially, a dataset capturing the efficiency of motors under various structural parameter scenarios is created using finite element simulation methods. Building on this dataset, a batch optimization approach for PMSM structure optimization is introduced to identify the set of structural parameters that maximize motor efficiency. The approach presented in this study enhances the efficiency of optimizing PMSM structures, overcoming the limitations of traditional trial-and-error methods and supporting the industrial application of PMSM structural design. Full article
(This article belongs to the Section Physical Sensors)
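Since the abstract names SMS-EMOA as the underlying optimizer, the following minimal sketch illustrates its core idea, hypervolume-based (S-metric) survivor selection, on a toy two-objective PMSM surrogate; the surrogate functions, parameter ranges, and the simplified contribution computation are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of the hypervolume-based survivor
# selection at the core of SMS-EMOA, applied to a toy two-objective PMSM
# surrogate: maximize efficiency and minimize cogging torque. The surrogate
# functions and parameter ranges are illustrative assumptions.
import numpy as np

def surrogate_objectives(x):
    """Toy stand-in for the FEM-derived dataset: x = (magnet_width, slot_opening)."""
    magnet_width, slot_opening = x
    efficiency = 0.95 - 0.02 * (magnet_width - 0.6) ** 2 - 0.01 * (slot_opening - 0.3) ** 2
    cogging = 0.5 * abs(np.sin(8 * magnet_width)) + 0.2 * slot_opening
    return np.array([-efficiency, cogging])   # both objectives are minimized

def hv_contributions(front, ref):
    """2-D hypervolume contribution of each point (simplified: no separate
    non-domination ranking as in the full SMS-EMOA)."""
    order = np.argsort(front[:, 0])
    f = front[order]
    contrib = np.empty(len(f))
    for i in range(len(f)):
        right = f[i + 1, 0] if i + 1 < len(f) else ref[0]
        upper = f[i - 1, 1] if i > 0 else ref[1]
        contrib[order[i]] = (right - f[i, 0]) * (upper - f[i, 1])
    return contrib

rng = np.random.default_rng(0)
pop = rng.uniform([0.4, 0.1], [0.8, 0.5], size=(20, 2))          # structural parameters
for _ in range(200):                                             # steady-state loop
    child = np.clip(pop[rng.integers(20)] + rng.normal(0, 0.02, 2), [0.4, 0.1], [0.8, 0.5])
    pop = np.vstack([pop, child])
    objs = np.array([surrogate_objectives(x) for x in pop])
    # drop the member with the smallest hypervolume contribution (S-metric selection)
    worst = np.argmin(hv_contributions(objs, ref=objs.max(axis=0) + 1.0))
    pop = np.delete(pop, worst, axis=0)
print("Pareto candidates (magnet_width, slot_opening):\n", pop[:5])
```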
Figures:
Figure 1. Analysis model of the surface-mounted PMSM (SPMSM).
Figure 2. The training step of the SMS-EMOA.
Figure 3. Flowchart of PMSM-SLA modelling. Reprinted with permission from Ref. [10], 2007, © European Journal of Operational Research.
Figure 4. The prototype test platform of the 20p24s SPMSM.
Figure 5. Objective value distributions and Pareto-front points.
Figure 6. Flowchart of cogging torque calculation. (a) Structure optimization; (b) validation for PMSM optimization.
Figure 7. Hyperparameter optimization.
18 pages, 642 KiB  
Article
Reliability and Detectability of Emergency Management Systems in Smart Cities under Common Cause Failures
by Thiago C. Jesus, Paulo Portugal, Daniel G. Costa and Francisco Vasques
Sensors 2024, 24(9), 2955; https://doi.org/10.3390/s24092955 - 6 May 2024
Cited by 2 | Viewed by 1560
Abstract
Urban areas are undergoing significant changes with the rise of smart cities, with technology transforming how cities develop through enhanced connectivity and data-driven services. However, these advancements also bring new challenges, especially in dealing with urban emergencies that can disrupt city life and infrastructure. Emergency management systems have become crucial for enabling cities to handle urban emergencies more effectively, although ensuring the reliability and detectability of such systems remains critical. This article introduces a new method for performing reliability and detectability assessments. Using Fault Tree and Markov chain models, it evaluates system performance under extreme conditions, providing valuable insights for designing and operating urban emergency systems. These analyses fill a gap in the existing research, offering a comprehensive understanding of how emergency management systems function in complex urban settings. Full article
(This article belongs to the Section Sensor Networks)
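For orientation, the following is a minimal sketch of how per-component reliabilities from a simple two-state Markov model can be combined through fault-tree AND/OR gates; the gate structure, failure rates, and time horizon are illustrative assumptions, not the EMS model analyzed in the paper.

```python
# A minimal sketch (not the authors' model) of combining per-component Markov
# reliabilities through fault-tree gates. Components are assumed to follow a
# two-state (working/failed) Markov model with constant failure rate lambda,
# so R_i(t) = exp(-lambda_i * t); rates and structure below are illustrative.
import math

def component_reliability(lam, t):
    """Two-state Markov chain with constant failure rate -> R(t) = exp(-lam*t)."""
    return math.exp(-lam * t)

def and_gate(failure_probs):
    """Fault-tree AND gate: fails only if all (independent) inputs fail."""
    p = 1.0
    for f in failure_probs:
        p *= f
    return p

def or_gate(failure_probs):
    """Fault-tree OR gate: fails if any (independent) input fails."""
    p = 1.0
    for f in failure_probs:
        p *= (1.0 - f)
    return 1.0 - p

t = 500.0                                    # hours, matching one of the paper's scenarios
lam_edu, lam_path = 2e-4, 5e-5               # assumed failure rates (1/h)
f_edus = [1 - component_reliability(lam_edu, t) for _ in range(3)]   # 3 redundant EDUs
f_path = 1 - component_reliability(lam_path, t)
# Top event: the communication path fails OR all redundant EDUs fail
f_top = or_gate([f_path, and_gate(f_edus)])
print(f"System reliability at t={t:.0f} h: {1 - f_top:.4f}")
```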
Figures:
Figure 1. Deployed scenario.
Figure 2. Steps of the proposed evaluation methodology.
Figure 3. Fault Tree models of the EMS (TOP), emergencies, paths, and EDUs.
Figure 4. Hardware Markov chain model.
Figure 5. Overall Fault Tree model of the EMS in Figure 1.
Figure 6. Reliability assessment.
Figure 7. EDUs' susceptibility to the different emergencies.
Figure 8. Scenario at t = 250 h.
Figure 9. Scenario at t = 500 h.
Figure 10. Scenario at t = 1000 h.
Figure 11. Reliability assessment.
17 pages, 4516 KiB  
Article
Improving Adversarial Robustness of ECG Classification Based on Lipschitz Constraints and Channel Activation Suppression
by Xin Chen, Yujuan Si, Zhanyuan Zhang, Wenke Yang and Jianchao Feng
Sensors 2024, 24(9), 2954; https://doi.org/10.3390/s24092954 - 6 May 2024
Viewed by 1092
Abstract
Deep neural networks (DNNs) are increasingly important in the medical diagnosis of electrocardiogram (ECG) signals. However, research has shown that DNNs are highly vulnerable to adversarial examples, which can be created by carefully crafted perturbations. This vulnerability can lead to potential medical accidents and poses new challenges for the application of DNNs in the medical diagnosis of ECG signals. This paper proposes a novel network, Channel Activation Suppression with Lipschitz Constraints Net (CASLCNet), which employs the Channel-wise Activation Suppressing (CAS) strategy to dynamically adjust the contribution of different channels to the class prediction and uses a 1-Lipschitz distance network as a robust classifier to reduce the impact of adversarial perturbations on the model, thereby increasing its adversarial robustness. The experimental results demonstrate that CASLCNet achieves robust accuracy (ACC_robust) scores of 91.03% and 83.01% when subjected to PGD attacks on the MIT-BIH and CPSC2018 datasets, respectively, which proves that the proposed method enhances the model's adversarial robustness while maintaining a high accuracy rate. Full article
(This article belongs to the Special Issue Sensors Technology and Application in ECG Signal Processing)
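As context for the reported ACC_robust figures, the sketch below shows a standard PGD evaluation loop of the kind commonly used to measure robust accuracy; it assumes a PyTorch classifier and placeholder attack parameters and is not the authors' evaluation code.

```python
# A minimal PGD-attack sketch (assumed PyTorch, not the authors' evaluation code)
# of the kind used to measure robust accuracy: iterated gradient-sign steps on
# the input, projected back into an L-infinity ball of radius eps around the
# clean ECG segment. Model, eps, and step sizes are placeholders.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.01, alpha=0.0025, steps=20):
    """Return adversarial examples with ||x_adv - x||_inf <= eps."""
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)   # project onto the eps-ball
    return x_adv.detach()

def robust_accuracy(model, loader, eps=0.01):
    """Fraction of samples still classified correctly after the PGD attack."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps=eps)
        correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```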
Figures:
Figure 1. Overall flow of the proposed methodology.
Figure 2. CASLCNet framework.
Figure 3. Dimension reduction diagram.
Figure 4. Illustration of channel-wise activation frequency and magnitude using CASLCNet and ResNet18 on the MIT-BIH dataset: (a) channel-wise activation magnitude of ResNet18; (b) channel-wise activation magnitude of CASLCNet; (c) channel-wise activation frequency of ResNet18; (d) channel-wise activation frequency of CASLCNet.
Figure 5. Illustration of channel-wise activation frequency and magnitude using CASLCNet and ResNet18 on the CPSC2018 dataset: panels (a)–(d) as in Figure 4.
Figure 6. CASLCNet loss function and accuracy on the MIT-BIH and CPSC2018 datasets: (a) CASLCNet indicators on the MIT-BIH dataset; (b) CASLCNet indicators on the CPSC2018 dataset.
Figure 7. Accuracy and F1 scores of the proposed method, Jacob, and SNR under different adversarial attacks on the MIT-BIH dataset: (a) accuracy under different perturbations in PGD attacks; (b) F1 score under different perturbations in PGD attacks; (c) accuracy under different perturbations in SAP attacks; (d) F1 score under different perturbations in SAP attacks.
Figure 8. Accuracy and F1 scores of the proposed method, Jacob, and SNR under different adversarial attacks on the CPSC2018 dataset: panels (a)–(d) as in Figure 7.
19 pages, 3326 KiB  
Article
MultiFuseYOLO: Redefining Wine Grape Variety Recognition through Multisource Information Fusion
by Jialiang Peng, Cheng Ouyang, Hao Peng, Wenwu Hu, Yi Wang and Ping Jiang
Sensors 2024, 24(9), 2953; https://doi.org/10.3390/s24092953 - 6 May 2024
Cited by 1 | Viewed by 1085
Abstract
Based on the current research on the wine grape variety recognition task, it has been found that traditional deep learning models relying only on a single feature (e.g., fruit or leaf) for classification can face great challenges, especially when there is a high degree of similarity between varieties. In order to effectively distinguish these similar varieties, this study proposes a multisource information fusion method centered on the SynthDiscrim algorithm, aiming to achieve more comprehensive and accurate wine grape variety recognition. First, this study optimizes and improves the YOLOv7 model and proposes a novel target detection and recognition model called WineYOLO-RAFusion, which significantly improves fruit localization precision and recognition compared with the traditional deep learning models YOLOv5, YOLOX, and YOLOv7. Secondly, building upon the WineYOLO-RAFusion model, this study incorporated the method of multisource information fusion into the model, ultimately forming the MultiFuseYOLO model. Experiments demonstrated that MultiFuseYOLO significantly outperformed other commonly used models in terms of precision, recall, and F1 score, reaching 0.854, 0.815, and 0.833, respectively. Moreover, the method improved the precision for the hard-to-distinguish Chardonnay and Sauvignon Blanc varieties, from 0.512 to 0.813 for Chardonnay and from 0.533 to 0.775 for Sauvignon Blanc. In conclusion, the MultiFuseYOLO model offers a reliable and comprehensive solution to the task of wine grape variety identification, especially in terms of distinguishing visually similar varieties and realizing high-precision identification. Full article
(This article belongs to the Section Smart Agriculture)
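To make the idea of multisource fusion concrete, the following toy sketch combines per-variety scores from a fruit branch and a leaf branch by weighted late fusion; the weights and probability values are invented for illustration and do not reproduce the SynthDiscrim algorithm.

```python
# A minimal late-fusion sketch (assumptions, not the SynthDiscrim algorithm):
# per-variety scores from a fruit image and a leaf image are combined with a
# tunable weight, so varieties that look alike on fruit alone (e.g., Chardonnay
# vs. Sauvignon Blanc) can be separated by the second source.
import numpy as np

VARIETIES = ["Chardonnay", "Sauvignon Blanc", "Syrah"]

def fuse(fruit_probs, leaf_probs, w_fruit=0.6):
    """Weighted late fusion of two normalized probability vectors."""
    fused = w_fruit * fruit_probs + (1.0 - w_fruit) * leaf_probs
    return fused / fused.sum()

# Illustrative outputs of the two recognition branches
fruit_probs = np.array([0.48, 0.45, 0.07])   # fruit alone is ambiguous
leaf_probs = np.array([0.75, 0.15, 0.10])    # leaf morphology disambiguates
fused = fuse(fruit_probs, leaf_probs)
print(dict(zip(VARIETIES, fused.round(3))), "->", VARIETIES[int(np.argmax(fused))])
```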
Figures:
Figure 1. Embrapa WGISD species map.
Figure 2. Self-mined blade dataset.
Figure 3. Structure of the Res-Attention module.
Figure 4. CFP structure diagram.
Figure 5. Flowchart of the multisource fusion method.
Figure 6. Plot of detection and localization results for each model.
19 pages, 17742 KiB  
Article
A Lightweight Remote Sensing Small Target Image Detection Algorithm Based on Improved YOLOv8
by Haijiao Nie, Huanli Pang, Mingyang Ma and Ruikai Zheng
Sensors 2024, 24(9), 2952; https://doi.org/10.3390/s24092952 - 6 May 2024
Cited by 6 | Viewed by 2662
Abstract
In response to the challenges posed by small objects in remote sensing images, such as low resolution, complex backgrounds, and severe occlusions, this paper proposes a lightweight improved model based on YOLOv8n. During the detection of small objects, the feature fusion part of the YOLOv8n algorithm retrieves relatively fewer features of small objects from the backbone network compared to large objects, resulting in low detection accuracy for small objects. To address this issue, firstly, this paper adds a dedicated small object detection layer in the feature fusion network to better integrate the features of small objects into the feature fusion part of the model. Secondly, the SSFF module is introduced to facilitate multi-scale feature fusion, enabling the model to capture more gradient paths and further improve accuracy while reducing model parameters. Finally, the HPANet structure is proposed, replacing the Path Aggregation Network with HPANet. Compared to the original YOLOv8n algorithm, the recognition accuracy of mAP@0.5 on the VisDrone data set and the AI-TOD data set has increased by 14.3% and 17.9%, respectively, while the recognition accuracy of mAP@0.5:0.95 has increased by 17.1% and 19.8%, respectively. The proposed method reduces the parameter count by 33% and the model size by 31.7% compared to the original model. Experimental results demonstrate that the proposed method can quickly and accurately identify small objects in complex backgrounds. Full article
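As a rough illustration of the kind of multi-scale fusion such modules perform, the sketch below resizes three backbone feature maps to the finest (stride-8) resolution and mixes them with a 1x1 convolution; it is a generic PyTorch stand-in, not the paper's SSFF or HPANet implementation.

```python
# A generic multi-scale fusion sketch in PyTorch (an assumption-level stand-in,
# not the paper's SSFF/HPANet code): three backbone feature maps at strides
# 8/16/32 are resized to the finest resolution, concatenated, and mixed with a
# 1x1 convolution so small-object detail from the high-resolution map is kept
# alongside deeper semantic features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleScaleFusion(nn.Module):
    def __init__(self, channels=(128, 256, 512), out_channels=128):
        super().__init__()
        self.mix = nn.Conv2d(sum(channels), out_channels, kernel_size=1)

    def forward(self, p3, p4, p5):
        # Upsample coarser maps to the P3 (stride-8) resolution used for small objects
        p4 = F.interpolate(p4, size=p3.shape[-2:], mode="nearest")
        p5 = F.interpolate(p5, size=p3.shape[-2:], mode="nearest")
        return self.mix(torch.cat([p3, p4, p5], dim=1))

# Toy feature maps for a 640x640 input
p3 = torch.randn(1, 128, 80, 80)
p4 = torch.randn(1, 256, 40, 40)
p5 = torch.randn(1, 512, 20, 20)
print(SimpleScaleFusion()(p3, p4, p5).shape)   # torch.Size([1, 128, 80, 80])
```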
Figures:
Figure 1. The YOLOv8 architecture.
Figure 2. The YOLOv8 architecture with the addition of a small object detection layer.
Figure 3. The SSFF architecture.
Figure 4. The HPANet network architecture.
Figure 5. The model architecture after the improvements in this paper.
Figure 6. Data sets: (a) VisDrone data set; (b) AI-TOD data set.
Figure 7. mAP@0.5 and mAP@0.95 data visualization bar graphs from Table 2.
Figure 8. Precision, recall, mAP@0.5, and mAP@0.95 visualization comparison of the three improvement points.
Figure 9. YOLOv8n+Layer for Small Target and YOLOv8n PR curves.
Figure 10. Comparison of mAP@0.5 and mAP@0.95 curves before and after improvement.
Figure 11. (a) YOLOv8n detection visualization results on the VisDrone data set; (b) detection visualization results of the improved model on the VisDrone data set.
Figure 12. Table 3 mAP@0.5 accuracy visualization bar graphs.
Figure 13. Visualization of evaluation parameters: (a) P curve; (b) R curve; (c) PR curve; (d) F1 curve.
Figure 14. Table 4 data visualization bar chart.
Figure 15. (a) YOLOv8n detection visualization results on the AI-TOD data set; (b) detection visualization results of the improved model on the AI-TOD data set. The red box highlights the differences between the left and right pictures.
20 pages, 3643 KiB  
Article
Dielectric Properties of Materials Used for Microwave-Based NOx Gas Dosimeters
by Stefanie Walter, Johanna Baumgärtner, Gunter Hagen, Daniela Schönauer-Kamin, Jaroslaw Kita and Ralf Moos
Sensors 2024, 24(9), 2951; https://doi.org/10.3390/s24092951 - 6 May 2024
Viewed by 976
Abstract
Nitrogen oxides (NOx), primarily generated from combustion processes, pose significant health and environmental risks. To improve the coordination of measures against excessive NOx emissions, it is necessary to effectively monitor ambient NOx concentrations, which requires the development of precise and cost-efficient detection methods. This study focuses on developing a microwave- or radio frequency (RF)-based gas dosimeter for NOx detection and addresses the optimization of the dosimeter design by examining the dielectric properties of LTCC-based (Low-Temperature Co-fired Ceramics) sensor substrates and barium-based NOx storage materials. The measurements taken utilizing the Microwave Cavity Perturbation (MCP) method revealed that these materials exhibit more pronounced changes in dielectric losses when storing NOx at elevated temperatures. Consequently, operating such a dosimeter at high temperatures (above 300 °C) is recommended to maximize the sensor signal. To evaluate their high-temperature applicability, LTCC substrates were analyzed by measuring their dielectric losses at temperatures up to 600 °C. In terms of NOx storage materials, coating barium on high-surface-area alumina resolved issues related to limited NOx adsorption in pure barium carbonate powders. Additionally, the adsorption of both NO and NO2 was enabled by the application of a platinum catalyst. The change in dielectric losses, which provides the main signal for an RF-based gas dosimeter, only depends on the stored amount of NOx and not on the specific type of nitrogen oxide. Although the change in dielectric losses increases with the temperature, the maximum storage capacity of the material decreases significantly. In addition, at temperatures above 350 °C, NOx is mostly weakly bound, so it will desorb in the absence of NOx. Therefore, in the future development of a reliable RF-based NOx dosimeter, the trade-off between the sensor signal strength and adsorption behavior must be addressed. Full article
(This article belongs to the Special Issue Sensors for Environmental Threats)
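For readers unfamiliar with the MCP method, the sketch below applies the textbook small-sample cavity perturbation relations to convert a resonance frequency shift and quality-factor change into ε′ and ε″; the geometric prefactors and all numbers are illustrative assumptions, not calibration constants or measurements from the paper.

```python
# A worked sketch of the textbook Microwave Cavity Perturbation relations used
# to estimate permittivity and dielectric loss from the shift in resonance
# frequency and quality factor when a small sample is inserted. The prefactors
# (Vc/2Vs, Vc/4Vs) hold only for the idealized small-sample case and are
# calibrated in practice; the numbers below are illustrative, not measurements.
def mcp_permittivity(f_empty, f_sample, q_empty, q_sample, v_cavity, v_sample):
    eps_real = 1.0 + (v_cavity / (2.0 * v_sample)) * (f_empty - f_sample) / f_sample
    eps_imag = (v_cavity / (4.0 * v_sample)) * (1.0 / q_sample - 1.0 / q_empty)
    return eps_real, eps_imag

# Illustrative numbers: a 1 cm^3 powder sample in a 1000 cm^3 cavity near 1.18 GHz
eps_r, eps_i = mcp_permittivity(
    f_empty=1.1800e9, f_sample=1.1795e9,   # Hz
    q_empty=9000.0, q_sample=7000.0,
    v_cavity=1.0e-3, v_sample=1.0e-6,      # m^3
)
print(f"eps' ~ {eps_r:.2f}, eps'' ~ {eps_i:.3f}")
```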
Figures:
Figure 1. The operating principle of a gas dosimeter with a sensitive layer as an adsorbent: the scheme and sensor response |SR| (blue) in correlation with the analyte concentration c (grey) for the accumulation and regeneration period. Adapted from [7].
Figure 2. A schematic setup of the planar RF-based NOx dosimeter with a substrate length of 48.0 mm, a thickness of 1.5 mm, and a width of 6.3 mm at the front (the width increases to 14.0 mm at the mounting location). The heating structure is not the topic of this paper and will be presented in detail in future publications. Adapted from [24].
Figure 3. LTCC materials DuPont 9K7 (left) and 951 (right). Manufactured platelets that were used to measure dielectric properties and shrinkage behavior.
Figure 4. Photograph of analyzed barium carbonate (left) and barium nitrate (right).
Figure 5. (a) Permittivity ε′ and (b) loss factor tan δ of different LTCC materials measured at 1.18 GHz depending on substrate temperature (red: DuPont 9K7; green: DuPont 951). Adapted from [24].
Figure 6. The calculated transmission losses for a stripline with a four-layer LTCC substrate in the fired state as a function of the loss factor tan δ of a substrate with (a) ε′ = 7.8, the data sheet value for the 951 LTCC at 10 GHz and room temperature, and (b) ε′ = 7.1, the data sheet value for the 9K7 LTCC at 10 GHz and room temperature. The measured loss factors for DuPont 951 and 9K7 at temperatures of 200 and 400 °C are also marked.
Figure 7. (a) Permittivity ε′ and (b) dielectric losses ε″ of barium-based NOx storage materials depending on their temperature T (blue: BaCO3; red: Ba(NO3)2).
Figure 8. The storage and regeneration behavior of pure BaCO3 over time: (a) the NOx concentration c_NOx without (dashed line) and with (solid line) the storage material, measured by FTIR downstream of the MCP setup, as well as the temperature T of the storage material (green line); (b) the storage utilization x_Ba,NOx calculated by integrating the measured nitrogen oxide concentration; (c) the permittivity ε′ and dielectric losses ε″ of the storage material.
Figure 9. (a) The permittivity ε′ and (b) dielectric losses ε″ of pure BaCO3 during the NOx storage and the weakly bound desorption phases at different temperatures over the storage utilization x_Ba,NOx.
Figure 10. The storage behavior of the storage material with 16.9 wt.% barium carbonate coating over time during exposure to NO2: (a) the NOx concentration c_NOx without (dashed line) and with (solid line) the storage material, measured by FTIR downstream of the MCP setup, as well as the temperature T of the storage material (green line); (b) the calculated storage utilization x_Ba,NOx based on an integration of the measured nitrogen oxide concentration; (c) the permittivity ε′ and dielectric losses ε″ of the storage material.
Figure 11. (a) The permittivity ε′ and (b) dielectric losses ε″ of the storage material with 16.9 wt.% barium carbonate coating during NO (pale color tone) and NO2 (bold color) storage at different temperatures over the calculated storage utilization x_Ba,NOx.
Figure 12. The temperature-dependent NOx storage behavior of the material with 16.9 wt.% barium carbonate coating for (a) NO2 dosing and (b) NO dosing; besides the total storage utilization x_Ba,NOx, the proportions of strongly and weakly bonded NOx are also shown.
Figure 13. The temperature-dependent proportion of strongly bound nitrogen oxide x_strongly bound relative to the total stored NOx in the material with 16.9 wt.% barium carbonate coating during NO2 and NO exposure.
21 pages, 10381 KiB  
Article
Damage Severity Assessment of Multi-Layer Complex Structures Based on a Damage Information Extraction Method with Ladder Feature Mining
by Jiajie Tu, Jiajia Yan, Xiaojin Ji, Qijian Liu and Xinlin Qing
Sensors 2024, 24(9), 2950; https://doi.org/10.3390/s24092950 - 6 May 2024
Viewed by 954
Abstract
Multi-layer complex structures are widely used in large-scale engineering structures because of their diverse combinations of properties and excellent overall performance. However, multi-layer complex structures are prone to interlaminar debonding damage during use. Therefore, it is necessary to monitor debonding damage in engineering applications to determine structural integrity. In this paper, a damage information extraction method with ladder feature mining for Lamb waves is proposed. The method is able to optimize and screen effective damage information through ladder-type damage extraction. It is suitable for evaluating the severity of debonding damage in aluminum-foamed silicone rubber, a novel multi-layer complex structure. The proposed method contains ladder feature mining stages of damage information selection and damage feature fusion, realizing a multi-level damage information extraction process from coarse to fine. The results show that the accuracy of damage severity assessment by the damage information extraction method with ladder feature mining is improved by more than 5% compared to other methods. The effectiveness and accuracy of the method in assessing the damage severity of multi-layer complex structures are demonstrated, providing a new perspective and solution for damage monitoring of multi-layer complex structures. Full article
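As a concrete picture of the coarse first stage of such feature mining, the sketch below forms a scattered signal as the difference between a damage response and a baseline response and computes a few of the damage-sensitive features mentioned in the paper (correlation coefficient, RMS, variance, Hilbert-envelope peak); the synthetic signals and sampling settings are assumptions.

```python
# A minimal sketch (illustrative, not the paper's ladder feature mining code):
# form the scattered signal as the difference between damage and baseline
# Lamb-wave responses and compute a few damage-sensitive features.
import numpy as np
from scipy.signal import hilbert

fs = 10e6                                    # assumed sampling rate, 10 MHz
t = np.arange(0, 200e-6, 1 / fs)
tone = np.sin(2 * np.pi * 80e3 * t) * np.hanning(t.size)   # 80 kHz toneburst stand-in
baseline = tone
damage = 0.9 * np.roll(tone, 40) + 0.02 * np.random.default_rng(0).normal(size=t.size)

scattered = damage - baseline
features = {
    "corr_coef": np.corrcoef(baseline, damage)[0, 1],
    "rms": np.sqrt(np.mean(scattered ** 2)),
    "variance": np.var(scattered),
    "envelope_max": np.abs(hilbert(scattered)).max(),
}
print(features)
```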
Figures:
Figure 1. Lamb wave propagation: (a) schematic diagram of Lamb wave excitation–reception; (b) formation of Lamb waves in a free plate.
Figure 2. Lamb wave modes: (a) S mode; (b) A mode.
Figure 3. The flowchart of the experiment.
Figure 4. Multi-layer complex material preparation process.
Figure 5. Multi-channel damage monitoring system.
Figure 6. The sensor network layout of the distance test.
Figure 7. Structural damage dimensions and location diagram.
Figure 8. Comparison of the received signals of the multi-layer complex structure at different sensing distances.
Figure 9. Comparison of signals without damage and signals with different damage lengths: (a) baseline signals and damage signals; (b) undamaged and damaged scattered signals.
Figure 10. Hilbert transform of baseline, damage, and scattered signals for different damage lengths: (a) 1 cm; (b) 3 cm; (c) 5 cm; (d) 7 cm; (e) 9 cm; (f) 11 cm; (g) 13 cm; (h) 15 cm.
Figure 11. The correlation coefficient of the acquired signal and the maximum of the scattered signal at excitation frequencies of 30–300 kHz: (a) correlation coefficient; (b) maximum.
Figure 12. The RMS and VAR of the scattered signal at excitation frequencies of 70–90 kHz: (a) root mean square; (b) variance.
Figure 13. Spectrum and time-frequency diagrams of scattered signals: (a) 70 kHz; (b) 75 kHz; (c) 80 kHz; (d) 85 kHz; (e) 90 kHz.
Figure 14. Normalized features: (a) features 1–5; (b) features 6–9; (c) features 10–12; (d) features 13–16.
Figure 15. Importance of individual features.
Figure 16. Different damage severities based on dual feature fusion.
Figure 17. The damage severity identification results of different methods over ten iterations of ten-fold cross-validation.
20 pages, 12167 KiB  
Article
Helping Blind People Grasp: Evaluating a Tactile Bracelet for Remotely Guiding Grasping Movements
by Piper Powell, Florian Pätzold, Milad Rouygari, Marcin Furtak, Silke M. Kärcher and Peter König
Sensors 2024, 24(9), 2949; https://doi.org/10.3390/s24092949 - 6 May 2024
Viewed by 1673
Abstract
The problem of supporting visually impaired and blind people in meaningful interactions with objects is often neglected. To address this issue, we adapted a tactile belt for enhanced spatial navigation into a bracelet worn on the wrist that allows visually impaired people to grasp target objects. Participants’ performance in locating and grasping target items when guided using the bracelet, which provides direction commands via vibrotactile signals, was compared to their performance when receiving auditory instructions. While participants were faster with the auditory commands, they also performed well with the bracelet, encouraging future development of this system and similar systems. Full article
(This article belongs to the Section Wearables)
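For illustration only, the sketch below shows one plausible way detected hand and target positions in the camera frame could be mapped to a discrete vibrotactile direction command; it is an assumption about the guidance logic, not the feelSpace system's actual software.

```python
# A minimal sketch (an assumed guidance rule, not the released feelSpace code):
# vibrate the motor pointing from the detected hand toward the detected target
# until the two detections overlap, then signal "grasp".
def direction_command(hand_xy, target_xy, tolerance=20):
    """Return one of 'left', 'right', 'up', 'down', or 'grasp' (pixel coordinates)."""
    dx = target_xy[0] - hand_xy[0]
    dy = target_xy[1] - hand_xy[1]
    if abs(dx) <= tolerance and abs(dy) <= tolerance:
        return "grasp"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"      # image y grows downward

print(direction_command(hand_xy=(320, 400), target_xy=(180, 260)))  # -> 'left'
```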
Figures:
Figure 1. Participants wore the feelSpace tactile bracelet (B) along with a camera attached to a helmet (A) and were guided to target objects on the shelf (C).
Figure 2. Conceptual representation of the localization task.
Figure 3. Conceptual representation of the grasping task.
Figure 4. Confusion matrix of vibration directions.
Figure 5. Distribution of trial times on successful trials in the auditory and tactile conditions. The horizontal bar represents the distribution mean.
Figure 6. Mean trial times per participant.
Figure 7. Mean trial times per block grouped by condition.
Figure 8. Count of failed trials per block, summarized across participants by condition and type of failure.
Figure 9. Random effects of condition order and experimenter on trial times. Individual points are a single trial time for a single participant.
Figure 10. Reordered correlation matrix highlighting the principal component groupings (from left to right: PC1, PC2, and PC3). Abbreviations are noted in Table 1.
Figure 11. Component loadings of the first three principal components. Abbreviations are noted in Table 1.
Figure 12. Each construct's mean participant evaluation.
Figure 13. Comparison of successful trial time distributions in the auditory and tactile conditions between blindfolded and blind participants. The violin plot for the blindfolded participants is re-used from Figure 6.
Figure 14. Example of target objects and hand detections.
22 pages, 5800 KiB  
Article
Enhancing Fetal Electrocardiogram Signal Extraction Accuracy through a CycleGAN Utilizing Combined CNN–BiLSTM Architecture
by Yuyao Yang, Lin Chen and Shuicai Wu
Sensors 2024, 24(9), 2948; https://doi.org/10.3390/s24092948 - 6 May 2024
Viewed by 1302
Abstract
The fetal electrocardiogram (FECG) records changes in the graph of fetal cardiac action potential during conduction, reflecting the developmental status of the fetus in utero and its physiological cardiac activity. Morphological alterations in the FECG can indicate intrauterine hypoxia, fetal distress, and neonatal asphyxia early on, enhancing maternal and fetal safety through prompt clinical intervention, thereby reducing neonatal morbidity and mortality. To reconstruct FECG signals with clear morphological information, this paper proposes a novel deep learning model, CBLS-CycleGAN. The model’s generator combines spatial features extracted by the CNN with temporal features extracted by the BiLSTM network, thus ensuring that the reconstructed signals possess combined features with spatial and temporal dependencies. The model’s discriminator utilizes PatchGAN, employing small segments of the signal as discriminative inputs to concentrate the training process on capturing signal details. Evaluating the model using two real FECG signal databases, namely “Abdominal and Direct Fetal ECG Database” and “Fetal Electrocardiograms, Direct and Abdominal with Reference Heartbeat Annotations”, resulted in a mean MSE and MAE of 0.019 and 0.006, respectively. It detects the FQRS compound wave with a sensitivity, positive predictive value, and F1 of 99.51%, 99.57%, and 99.54%, respectively. This paper’s model effectively preserves the morphological information of FECG signals, capturing not only the FQRS compound wave but also the fetal P-wave, T-wave, P-R interval, and ST segment information, providing clinicians with crucial diagnostic insights and a scientific foundation for developing rational treatment protocols. Full article
(This article belongs to the Section Biomedical Sensors)
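To visualize the generator design described above, the following compact PyTorch sketch combines a 1-D CNN front end with a BiLSTM and maps an abdominal ECG segment to a same-length output; layer sizes and signal lengths are assumed for illustration and do not reproduce the paper's exact architecture.

```python
# A compact sketch (assumed shapes, not the paper's exact architecture) of a
# generator that combines a 1-D CNN for spatial/morphological features with a
# BiLSTM for temporal dependencies, mapping an abdominal ECG segment to a
# reconstructed fetal ECG segment of the same length.
import torch
import torch.nn as nn

class CnnBiLstmGenerator(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=15, padding=7), nn.ReLU(),
        )
        self.bilstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Conv1d(2 * hidden, 1, kernel_size=1)

    def forward(self, x):                 # x: (batch, 1, samples)
        feats = self.cnn(x)               # (batch, 64, samples)
        seq, _ = self.bilstm(feats.transpose(1, 2))   # (batch, samples, 2*hidden)
        return self.head(seq.transpose(1, 2))         # (batch, 1, samples)

aecg = torch.randn(8, 1, 1000)             # 8 one-second segments at 1 kHz (assumed)
print(CnnBiLstmGenerator()(aecg).shape)    # torch.Size([8, 1, 1000])
```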
Figures:
Figure 1. Signal before (top) and after (bottom) pre-processing: (a) raw AECG signal and pre-processed AECG signal; (b) raw FECG signal and pre-processed FECG signal.
Figure 2. Training framework of the CycleGAN. Inputs are pre-processed signals. There are two generators (G1, G2) and two discriminators (Dx and Dy).
Figure 3. Generator based on the combined CNN–BiLSTM structure.
Figure 4. Schematic diagram of the single-layer CNN network structure.
Figure 5. Specific structure of BiLSTM networks.
Figure 6. Internal structure of the LSTM module.
Figure 7. Discriminator based on the PatchGAN structure.
Figure 8. Visualized example of the proposed model's FECG signal extraction performance when using ADFECGDB. Above is the scalp FECG signal, and below is the extracted FECG signal: (a) ADFECGDB r07; (b) ADFECGDB r08.
Figure 9. Visualized example of the proposed model's FECG signal extraction performance when using FECGSYN. Above is the ground truth FECG signal, and below is the extracted FECG signal: (a) FECGSYN25; (b) FECGSYN15.
Figure 10. Phase envelopes of FECG signals obtained through extraction using various CycleGAN models.
Figure 11. Visualized example of the proposed model's FQRS compound wave extraction performance when using the B2_Labour_dataset. Above is the AECG signal, and below is the extracted FECG signal. The positions of the FECG signal, the MECG signal, and the MECG signal overlapping with the FECG signal in the AECG signal are indicated by 'F', 'M', and 'F + M'. The R peaks detected by the improved Pan–Tompkins algorithm are marked with red circles: (a) B2_Labour_01; (b) B2_Labour_10.
Figure 12. Ablation experiments on the generator and discriminator network depths using the ADFECGDB database: (a) experimental comparison of CNN generators with varying numbers of layers; (b) experimental evaluation of PatchGAN discriminators with different layer depths.
16 pages, 2063 KiB  
Review
Motion Capture Technology in Sports Scenarios: A Survey
by Xiang Suo, Weidi Tang and Zhen Li
Sensors 2024, 24(9), 2947; https://doi.org/10.3390/s24092947 - 6 May 2024
Cited by 2 | Viewed by 4403
Abstract
Motion capture technology plays a crucial role in optimizing athletes’ skills, techniques, and strategies by providing detailed feedback on motion data. This article presents a comprehensive survey aimed at guiding researchers in selecting the most suitable motion capture technology for sports science investigations. By comparing and analyzing the characteristics and applications of different motion capture technologies in sports scenarios, it is observed that cinematography motion capture technology remains the gold standard in biomechanical analysis and continues to dominate sports research applications. Wearable sensor-based motion capture technology has gained significant traction in specialized areas such as winter sports, owing to its reliable system performance. Computer vision-based motion capture technology has made significant advancements in recognition accuracy and system reliability, enabling its application in various sports scenarios, from single-person technique analysis to multi-person tactical analysis. Moreover, the emerging field of multimodal motion capture technology, which harmonizes data from various sources with the integration of artificial intelligence, has proven to be a robust research method for complex scenarios. A comprehensive review of the literature from the past 10 years underscores the increasing significance of motion capture technology in sports, with a notable shift from laboratory research to practical training applications on sports fields. Future developments in this field should prioritize research and technological advancements that cater to practical sports scenarios, addressing challenges such as occlusion, outdoor capture, and real-time feedback. Full article
(This article belongs to the Section Wearables)
Figures:
Figure 1. Pipeline of cinematography motion capture.
Figure 2. Pipeline of wearable sensor motion capture.
Figure 3. Typical pipeline of HPE.
12 pages, 2515 KiB  
Article
Stretchable and Flexible Painted Thermoelectric Generators on Japanese Paper Using Inks Dispersed with P- and N-Type Single-Walled Carbon Nanotubes
by Takumi Nakajima, Koki Hoshino, Hisatoshi Yamamoto, Keisuke Kaneko, Yutaro Okano and Masayuki Takashiri
Sensors 2024, 24(9), 2946; https://doi.org/10.3390/s24092946 - 6 May 2024
Cited by 2 | Viewed by 1161
Abstract
As power sources for Internet-of-Things sensors, thermoelectric generators must exhibit compactness, flexibility, and low manufacturing costs. Stretchable and flexible painted thermoelectric generators were fabricated on Japanese paper using inks with dispersed p- and n-type single-walled carbon nanotubes (SWCNTs). The p- and n-type SWCNT inks were dispersed using the anionic surfactant sodium dodecylbenzene sulfonate and the cationic surfactant dimethyldioctadecylammonium chloride, respectively. The bundle diameters of the p- and n-type SWCNT layers painted on Japanese paper differed significantly; however, the crystallinities of both types of layers were almost the same. The thermoelectric properties of both types of layers exhibited mostly the same values at 30 °C; however, the properties, particularly the electrical conductivity, of the n-type layer increased linearly with temperature, whereas those of the p-type layer decreased. The p- and n-type SWCNT inks were used to paint striped patterns on Japanese paper. By folding at the boundaries of the patterns, the painted generators can shrink and expand, even on curved surfaces. The painted generator (length: 145 mm, height: 13 mm) exhibited an output voltage of 10.4 mV and a maximum power of 0.21 μW with a temperature difference of 64 K at 120 °C on the hot side. Full article
(This article belongs to the Special Issue Feature Papers in Wearables 2024)
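As a quick sanity check on the reported figures, if the 10.4 mV is taken as the open-circuit voltage and the 0.21 μW maximum power is measured at a matched load, the standard relation P_max = V_oc^2 / (4R) implies an internal resistance of roughly 130 Ω; the sketch below is that arithmetic, under those assumptions.

```python
# A short worked check (standard matched-load relation, not from the paper's
# analysis): assuming the reported 10.4 mV is the open-circuit voltage and the
# 0.21 uW maximum power is measured at a matched load, P_max = V_oc**2 / (4*R)
# gives an estimate of the generator's internal resistance.
v_oc = 10.4e-3        # open-circuit voltage, V
p_max = 0.21e-6       # maximum power, W
r_internal = v_oc ** 2 / (4 * p_max)
print(f"Estimated internal resistance: {r_internal:.0f} ohm")   # ~129 ohm
```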
Figures:
Figure 1. Fabrication process of painted thermoelectric generators on Japanese paper: (a) ultrasonic dispersion of n- and p-type SWCNT inks; (b) painting n- and p-type SWCNTs on Japanese paper; (c) completed painted generator on a curved surface under various shrink and expand conditions.
Figure 2. Surface morphology of (a) p-type and (b) n-type painted SWCNT layers on Japanese paper, and (c) Japanese paper alone (low magnification).
Figure 3. Raman spectra of p- and n-type painted SWCNT layers on Japanese paper and a surfactant-free SWCNT film.
Figure 4. Temperature-dependent in-plane thermoelectric properties of p- and n-type painted SWCNT layers on Japanese paper: (a) Seebeck coefficient; (b) electrical conductivity; (c) power factor.
Figure 5. (a) Photograph of the performance measurement of the painted thermoelectric generator on Japanese paper. (b) Output voltage and (c) maximum power of the generator as a function of temperature difference. Insets show (b) the relationship between generator temperatures and temperature difference and (c) the relationship between heater temperature and the total resistance of the generator.
20 pages, 7000 KiB  
Article
An Improved Initial Alignment Method Based on SE2(3)/EKF for SINS/GNSS Integrated Navigation System with Large Misalignment Angles
by Jin Sun, Yuxin Chen and Bingbo Cui
Sensors 2024, 24(9), 2945; https://doi.org/10.3390/s24092945 - 6 May 2024
Cited by 1 | Viewed by 977
Abstract
This paper proposes an improved initial alignment method for a strap-down inertial navigation system/global navigation satellite system (SINS/GNSS) integrated navigation system with large misalignment angles. Its methodology is based on the three-dimensional special Euclidean group and extended Kalman filter (SE2(3)/EKF) and aims to overcome the challenges of achieving fast alignment under large misalignment angles using traditional methods. To accurately characterize the state errors of attitude, velocity, and position, these elements are constructed as elements of a Lie group. The nonlinear error on the Lie group can then be well quantified. Additionally, a group vector mixed error model is developed, taking into account the zero bias errors of gyroscopes and accelerometers. Using this new error definition, a GNSS-assisted SINS dynamic initial alignment algorithm is derived, which is based on the invariance of velocity and position measurements. Simulation experiments demonstrate that the alignment method based on SE2(3)/EKF can achieve a higher accuracy in various scenarios with large misalignment angles, while the attitude error can be rapidly reduced to a lower level. Full article
(This article belongs to the Special Issue GNSS Signals and Precise Point Positioning)
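For readers unfamiliar with the notation, the sketch below shows the standard SE2(3) matrix embedding that packs attitude, velocity, and position into a single group element, which is what allows the state error to be defined on the group; it is background illustration, not the authors' filter code.

```python
# A minimal sketch of the SE_2(3) state representation that the SE2(3)/EKF
# alignment builds on (standard matrix Lie group embedding, not the authors'
# filter code): attitude R, velocity v, and position p are packed into one 5x5
# matrix, so the whole navigation state composes by matrix multiplication.
import numpy as np

def se2_3_element(R, v, p):
    X = np.eye(5)
    X[:3, :3] = R
    X[:3, 3] = v
    X[:3, 4] = p
    return X

def compose(X1, X2):
    """Group operation: the plain matrix product, which composes the parts
    consistently as (R1 R2, v1 + R1 v2, p1 + R1 p2)."""
    return X1 @ X2

R = np.eye(3)
X = se2_3_element(R, v=np.array([1.0, 0.0, 0.0]), p=np.array([100.0, 0.0, 0.0]))
print(compose(X, X))
```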
Figures:
Figure 1. Schematic diagram of the coordinate system.
Figure 2. The dynamic characteristics of the vehicle during the experiment.
Figure 3. The trajectory of the vehicle during the experiment.
Figures 4–9. Eastward, northward, and upward attitude angle errors and latitude, longitude, and height alignment errors for misalignment angles [1°, −1°, 40°]^T.
Figures 10–15. Eastward, northward, and upward attitude angle errors and latitude, longitude, and height alignment errors for misalignment angles [1°, −1°, 95°]^T.
Figures 16–21. Eastward, northward, and upward attitude angle errors and latitude, longitude, and height alignment errors for misalignment angles [20°, 20°, 165°]^T.
16 pages, 3948 KiB  
Article
Gait Pattern Analysis: Integration of a Highly Sensitive Flexible Pressure Sensor on a Wireless Instrumented Insole
by Partha Sarati Das, Daniella Skaf, Lina Rose, Fatemeh Motaghedi, Tricia Breen Carmichael, Simon Rondeau-Gagné and Mohammed Jalal Ahamed
Sensors 2024, 24(9), 2944; https://doi.org/10.3390/s24092944 - 6 May 2024
Cited by 1 | Viewed by 1677
Abstract
Gait phase monitoring wearable sensors play a crucial role in assessing both health and athletic performance, offering valuable insights into an individual’s gait pattern. In this study, we introduced a simple and cost-effective capacitive gait sensor manufacturing approach, utilizing a micropatterned polydimethylsiloxane dielectric [...] Read more.
Gait phase monitoring wearable sensors play a crucial role in assessing both health and athletic performance, offering valuable insights into an individual’s gait pattern. In this study, we introduced a simple and cost-effective capacitive gait sensor manufacturing approach, utilizing a micropatterned polydimethylsiloxane dielectric layer placed between screen-printed silver electrodes. The sensor demonstrated inherent stretchability and durability: even when the electrode was bent at a 45-degree angle, it maintained a resistance of approximately 3 Ω. This feature is particularly advantageous for gait monitoring applications. Furthermore, the fabricated flexible capacitive pressure sensor exhibited high sensitivity and linearity at both low and high pressures and displayed very good stability. Notably, the sensors demonstrated rapid response and recovery times under both low and high pressure. To further explore the capabilities of these new sensors, they were successfully tested as insole-type pressure sensors for real-time gait signal monitoring. The sensors displayed a well-balanced combination of sensitivity and response time, making them well suited for gait analysis. Beyond gait analysis, the proposed sensor holds potential for a wide range of applications within biomedical, sports, and commercial systems where soft and conformable sensors are preferred. Full article
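For sensors of this kind, the sensitivity figure is usually defined as the slope of the relative capacitance change (ΔC/C0) versus applied pressure. The short Python sketch below illustrates that calculation on made-up calibration numbers; the pressure and capacitance values are illustrative assumptions, not measurements from the paper.

```python
import numpy as np

# Hypothetical calibration data for a capacitive pressure sensor
# (illustrative values only, not measurements from the paper).
pressure_kpa = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])                 # applied pressure, kPa
capacitance_pf = np.array([10.00, 10.12, 10.24, 10.37, 10.49, 10.61])   # measured C, pF

# Relative capacitance change with respect to the unloaded capacitance C0.
c0 = capacitance_pf[0]
rel_change = (capacitance_pf - c0) / c0

# Sensitivity S = d(dC/C0)/dP, estimated here as the slope of a linear fit (units: 1/kPa).
sensitivity, intercept = np.polyfit(pressure_kpa, rel_change, 1)
print(f"estimated sensitivity: {sensitivity:.4f} kPa^-1")
```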
(This article belongs to the Special Issue Intelligent Wearable Sensor-Based Gait and Movement Analysis)
Show Figures

Figure 1

Figure 1
<p>(<b>A</b>) Graphic demonstration of the wearable gait monitoring system, the inset shows the image of the electrode and the micropatterned PDMS dielectric layer. (<b>B</b>) Front-end electronics for the wearable gait monitoring system with wired and wireless options.</p>
Full article ">Figure 2
<p>Fabrication of the capacitive pressure sensor. (<b>A</b>) The fabrication process of micropatterned PDMS. (<b>B</b>) The fabrication process of screen-printed electrode patterning. (<b>C</b>) The fabricated device.</p>
Full article ">Figure 3
<p>Mechanical characterization under low pressure. (<b>A</b>) Measured sensitivity at low-pressure regions of the capacitive sensor fabricated with the micropatterned PDMS film. (<b>B</b>) Measured capacitance under repeated 5 kPa. (<b>C</b>) Measured capacitance under repeated varying applied forces. (<b>D</b>) Measured capacitance under small loads (1 g, 2 g, and 5 g). (<b>E</b>) The sensor’s response to soft finger tapping at approximately 4 Hz frequency. (<b>F</b>) Measured response and relaxation time of the sensor.</p>
Full article ">Figure 4
<p>Mechanical characterization under high pressure. (<b>A</b>) Measured sensitivity of the capacitive sensor at high pressure regions of the fabricated micropatterned PDMS film. (<b>B</b>) Measured capacitance under repeated varying applied forces. (<b>C</b>) Measured capacitance under repeated 100 kPa. (<b>D</b>) Variation in relative capacitive change while loading and unloading.</p>
Full article ">Figure 5
<p>Gait signal acquisition for identification of foot pressure distribution in different dynamic phases. Record of (<b>A</b>) the gait cycle of the reverse strike pattern, (<b>B</b>) forward strike pattern using the fabricated smart insole system, (<b>C</b>) gait signal of the left and right foot under normal walking conditions, and (<b>D</b>) putting the left and right foot on different areas under different forces.</p>
Full article ">Figure 6
<p>Gait signal acquisition (<b>A</b>) limping of the left and right foot, (<b>B</b>) foot tapping frequency, (<b>C</b>) four sensors data on the Android app, (<b>D</b>) single sensor data plotted on the Android app.</p>
Full article ">
25 pages, 33251 KiB  
Article
Matched Stochastic Resonance Enhanced Underwater Passive Sonar Detection under Non-Gaussian Impulsive Background Noise
by Haitao Dong, Shilei Ma, Jian Suo and Zhigang Zhu
Sensors 2024, 24(9), 2943; https://doi.org/10.3390/s24092943 - 6 May 2024
Cited by 1 | Viewed by 1027
Abstract
Remote passive sonar detection with low-frequency band spectral lines has attracted much attention, while complex low-frequency non-Gaussian impulsive noisy environments would strongly affect the detection performance. This is a challenging problem in weak signal detection, especially for the high false alarm rate caused [...] Read more.
Remote passive sonar detection using low-frequency band spectral lines has attracted much attention, but complex low-frequency non-Gaussian impulsive noise environments strongly affect the detection performance. This is a challenging problem in weak signal detection, especially because of the high false alarm rate caused by heavy-tailed impulsive noise. In this paper, a novel matched stochastic resonance (MSR)-based weak signal detection model is established, and two MSR-based detectors, named MSR-PED and MSR-PSNR, are proposed based on a theoretical analysis of the MSR output response. Comprehensive detection performance analyses under both Gaussian and non-Gaussian impulsive noise conditions are presented, which reveal the superior performance of the proposed detectors under non-Gaussian impulsive noise. Numerical analysis and application verification reveal the superior detection performance of the proposed MSR-PSNR detector compared with energy-based detection methods, which can overcome the high false alarm rate problem caused by heavy-tailed impulsive noise. For a typical non-Gaussian impulsive noise assumption with α=1.5, the proposed MSR-PED and MSR-PSNR achieve approximately 16 dB and 22 dB improvements, respectively, in detection performance compared to the classical PED method. For stronger non-Gaussian impulsive noise conditions corresponding to α=1, the improvement in detection performance is even more significant. The proposed MSR-PSNR method can overcome the challenging problem of a high false alarm rate caused by heavy-tailed impulsive noise. This work lays a solid foundation for addressing the challenges of underwater passive sonar detection under non-Gaussian impulsive background noise and provides important guidance for future research. Full article
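As a point of reference for the detection problem described above, the sketch below simulates a weak spectral line in symmetric alpha-stable (heavy-tailed) noise and evaluates a simple power-spectrum test statistic at the line frequency, i.e., a PED-style baseline rather than the authors' MSR detectors. The sampling rate, line frequency, amplitude, and stability index are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import levy_stable

# Assumed parameters for illustration (not the paper's settings).
fs, n = 1000.0, 4096             # sampling rate (Hz) and record length
f0, amp, alpha = 50.0, 0.1, 1.5  # line frequency, line amplitude, stability index

# Weak sinusoidal line buried in symmetric alpha-stable impulsive noise.
t = np.arange(n) / fs
noise = levy_stable.rvs(alpha, 0.0, loc=0.0, scale=1.0, size=n, random_state=0)
x = amp * np.sin(2 * np.pi * f0 * t) + noise

# PED-style statistic: periodogram value in the bin closest to the known line frequency.
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
psd = np.abs(np.fft.rfft(x)) ** 2 / n
k0 = np.argmin(np.abs(freqs - f0))
print(f"test statistic at {freqs[k0]:.1f} Hz: {psd[k0]:.2f}")
```

Heavy tails inflate the noise floor of such an energy statistic, which is the false-alarm mechanism the abstract refers to.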
Show Figures

Figure 1

Figure 1
<p>Time–frequency map of typical ocean ambient noise measured in the South China Sea (50 min).</p>
Full article ">Figure 2
<p>Probability density functions for Lévy distribution <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>α</mi> <mo>,</mo> <mi>β</mi> </mrow> </msub> <mrow> <mo>(</mo> <mi>ζ</mi> <mo>;</mo> <mi>σ</mi> <mo>,</mo> <mi>μ</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> with different stability indexes and asymmetry parameters.</p>
Full article ">Figure 3
<p>A comparison MSR outputting SNR responses under different noise intensities <span class="html-italic">D</span>.</p>
Full article ">Figure 4
<p>The framework of the proposed MSR-based passive sonar detection.</p>
Full article ">Figure 5
<p>Comparison of input and MSR output results of <math display="inline"><semantics> <msub> <mi>H</mi> <mn>0</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics></math> hypothesis under Gaussian noise (<math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>): (<b>a</b>) received signal under <math display="inline"><semantics> <msub> <mi>H</mi> <mn>0</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>); (<b>b</b>) received signal under <math display="inline"><semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>); (<b>c</b>) MSR output signal under <math display="inline"><semantics> <msub> <mi>H</mi> <mn>0</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>); (<b>d</b>) MSR output signal under <math display="inline"><semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>); (<b>e</b>) matched filtering processed to the received signal and the corresponding MSR output signal under the <math display="inline"><semantics> <msub> <mi>H</mi> <mn>0</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>); (<b>f</b>) matched filtering processed to the received signal and the corresponding MSR output signal under the <math display="inline"><semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 6
<p>Comparison of probability density function (PDF) of different test statistics under Gaussian noise (<math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>): (<b>a</b>) energy detector with low pass filter for received signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>L</mi> <mi>P</mi> <mi>F</mi> <mo>−</mo> <mi>E</mi> <mi>D</mi> </mrow> </msub> </semantics></math>); (<b>b</b>) PED for received signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>P</mi> <mi>E</mi> <mi>D</mi> </mrow> </msub> </semantics></math>); (<b>c</b>) PED for MSR output signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>M</mi> <mi>S</mi> <mi>R</mi> <mo>−</mo> <mi>P</mi> <mi>E</mi> <mi>D</mi> </mrow> </msub> </semantics></math>); (<b>d</b>) PSNR for MSR output signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>M</mi> <mi>S</mi> <mi>R</mi> <mo>−</mo> <mi>P</mi> <mi>S</mi> <mi>N</mi> <mi>R</mi> </mrow> </msub> </semantics></math>); (<b>e</b>) matched filtering for received signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>M</mi> <mi>F</mi> </mrow> </msub> </semantics></math>); (<b>f</b>) matched filtering for MSR output signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>M</mi> <mi>S</mi> <mi>R</mi> <mo>−</mo> <mi>M</mi> <mi>F</mi> </mrow> </msub> </semantics></math>).</p>
Full article ">Figure 7
<p>Detection performance comparison of different methods under Gaussian noise (<math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>): (<b>a</b>) detection probability (<math display="inline"><semantics> <msub> <mi>P</mi> <mi>D</mi> </msub> </semantics></math>) curve, varying with SNR (<math display="inline"><semantics> <mrow> <msub> <mi>P</mi> <mrow> <mi>F</mi> <mi>A</mi> </mrow> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>); (<b>b</b>) receiver operating curve (ROC) corresponding to −20 dB and −30 dB.</p>
Full article ">Figure 8
<p>Comparison of input and MSR output results under <math display="inline"><semantics> <msub> <mi>H</mi> <mn>0</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics></math> hypotheses (<math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math>): (<b>a</b>) received signal under <math display="inline"><semantics> <msub> <mi>H</mi> <mn>0</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>); (<b>b</b>) received signal under <math display="inline"><semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>); (<b>c</b>) MSR output signal under <math display="inline"><semantics> <msub> <mi>H</mi> <mn>0</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>); (<b>d</b>) MSR output signal under <math display="inline"><semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>); (<b>e</b>) matched filtering processed to the received signal and the corresponding MSR output signal under <math display="inline"><semantics> <msub> <mi>H</mi> <mn>0</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>); (<b>f</b>) matched filtering processed to the received signal and the corresponding MSR output signal under <math display="inline"><semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 9
<p>Comparison of probability density function (PDF) of different test statistics under non-Gaussian impulsive noise (<math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math>): (<b>a</b>) PED for received signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>P</mi> <mi>E</mi> <mi>D</mi> </mrow> </msub> </semantics></math>); (<b>b</b>) PED for MSR output signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>M</mi> <mi>S</mi> <mi>R</mi> <mo>−</mo> <mi>P</mi> <mi>E</mi> <mi>D</mi> </mrow> </msub> </semantics></math>); (<b>c</b>) PSNR for MSR output signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>M</mi> <mi>S</mi> <mi>R</mi> <mo>−</mo> <mi>P</mi> <mi>S</mi> <mi>N</mi> <mi>R</mi> </mrow> </msub> </semantics></math>); (<b>d</b>) matched filtering for received signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>M</mi> <mi>F</mi> </mrow> </msub> </semantics></math>); (<b>e</b>) matched filtering for MSR output signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>M</mi> <mi>S</mi> <mi>R</mi> <mo>−</mo> <mi>M</mi> <mi>F</mi> </mrow> </msub> </semantics></math>).</p>
Full article ">Figure 10
<p>Detection performance comparison of different methods under non-Gaussian impulsive noise (<math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math>): (<b>a</b>) detection probability (<math display="inline"><semantics> <msub> <mi>P</mi> <mi>D</mi> </msub> </semantics></math>) curve, varying with SNR (<math display="inline"><semantics> <mrow> <msub> <mi>P</mi> <mrow> <mi>F</mi> <mi>A</mi> </mrow> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>); (<b>b</b>) receiver operating curve (ROC) corresponding to 0 dB.</p>
Full article ">Figure 11
<p>Comparison of input and MSR output results under <math display="inline"><semantics> <msub> <mi>H</mi> <mn>0</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics></math> hypotheses (<math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>): (<b>a</b>) received signal under the <math display="inline"><semantics> <msub> <mi>H</mi> <mn>0</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>); (<b>b</b>) received signal under the <math display="inline"><semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>); (<b>c</b>) MSR output signal under the <math display="inline"><semantics> <msub> <mi>H</mi> <mn>0</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>); (<b>d</b>) MSR output signal under the <math display="inline"><semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>); (<b>e</b>) filtering processes, matched to the received signal and the corresponding MSR output signal under the <math display="inline"><semantics> <msub> <mi>H</mi> <mn>0</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>); (<b>f</b>) filtering processes, matched to the received signal and the corresponding MSR output signal under the <math display="inline"><semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics></math> hypothesis (<math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 12
<p>Comparison of probability density function (PDF) of different test statistics under non-Gaussian impulsive noise (<math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>): (<b>a</b>) PED for received signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>P</mi> <mi>E</mi> <mi>D</mi> </mrow> </msub> </semantics></math>); (<b>b</b>) PED for MSR output signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>M</mi> <mi>S</mi> <mi>R</mi> <mo>−</mo> <mi>P</mi> <mi>E</mi> <mi>D</mi> </mrow> </msub> </semantics></math>); (<b>c</b>) PSNR for MSR output signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>M</mi> <mi>S</mi> <mi>R</mi> <mo>−</mo> <mi>P</mi> <mi>S</mi> <mi>N</mi> <mi>R</mi> </mrow> </msub> </semantics></math>); (<b>d</b>) matched filtering for received signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>M</mi> <mi>F</mi> </mrow> </msub> </semantics></math>); (<b>e</b>) matched filtering for MSR output signal (<math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>M</mi> <mi>S</mi> <mi>R</mi> <mo>−</mo> <mi>M</mi> <mi>F</mi> </mrow> </msub> </semantics></math>).</p>
Full article ">Figure 13
<p>Detection performance comparison under non-Gaussian impulsive noise (<math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>): (<b>a</b>) detection probability (<math display="inline"><semantics> <msub> <mi>P</mi> <mi>D</mi> </msub> </semantics></math>) curve with varied SNR (<math display="inline"><semantics> <mrow> <msub> <mi>P</mi> <mrow> <mi>F</mi> <mi>A</mi> </mrow> </msub> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math>); (<b>b</b>) receiver operating curve (ROC) corresponding to 10 dB.</p>
Full article ">Figure 14
<p>Sea experiment: (<b>a</b>) the deployment of low-frequency broadband sound source UW350; (<b>b</b>) the receiver with an ocean sonics hydrophone; (<b>c</b>) the received signal in the time domain; (<b>d</b>) the received signal in the time–frequency domain (received at 500 m distance).</p>
Full article ">Figure 15
<p>Normalized power spectral density (PSD) comparison of received signal and the corresponding MSR output for different distances: (<b>a</b>) 1 km; (<b>b</b>) 2 km; (<b>c</b>) 5 km; (<b>d</b>) ambient noise.</p>
Full article ">Figure 16
<p>Detection performance comparison for ship-radiated signals: (<b>a</b>) detection probability (<math display="inline"><semantics> <msub> <mi>P</mi> <mi>D</mi> </msub> </semantics></math>) curve, varied with distance; (<b>b</b>) receiver operating curve (ROC).</p>
Full article ">
17 pages, 2472 KiB  
Article
LiDAR-Based Intensity-Aware Outdoor 3D Object Detection
by Ammar Yasir Naich and Jesús Requena Carrión
Sensors 2024, 24(9), 2942; https://doi.org/10.3390/s24092942 - 6 May 2024
Cited by 2 | Viewed by 1508
Abstract
LiDAR-based 3D object detection and localization are crucial components of autonomous navigation systems, including autonomous vehicles and mobile robots. Most existing LiDAR-based 3D object detection and localization approaches primarily use geometric or structural feature abstractions from LiDAR point clouds. However, these approaches can [...] Read more.
LiDAR-based 3D object detection and localization are crucial components of autonomous navigation systems, including autonomous vehicles and mobile robots. Most existing LiDAR-based 3D object detection and localization approaches primarily use geometric or structural feature abstractions from LiDAR point clouds. However, these approaches can be susceptible to environmental noise due to adverse weather conditions or the presence of highly scattering media. In this work, we propose an intensity-aware voxel encoder for robust 3D object detection. The proposed voxel encoder generates an intensity histogram that describes the distribution of point intensities within a voxel and is used to enhance the voxel feature set. We integrate this intensity-aware encoder into an efficient single-stage voxel-based detector for 3D object detection. Experimental results obtained using the KITTI dataset show that our method achieves results comparable to those of the state-of-the-art method for car objects in 3D and bird’s-eye-view detection and superior results for pedestrian and cyclist objects. Furthermore, our model achieves a detection rate of 40.7 FPS at inference time, which is higher than that of the state-of-the-art methods, while incurring a lower computational cost. Full article
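The core idea of the intensity-aware encoder, attaching a fixed-length histogram of point intensities to each voxel's feature vector, can be sketched in a few lines of NumPy. The sketch below uses dummy points, an assumed voxel size, and a 10-bin histogram; it illustrates the concept only and is not the authors' implementation.

```python
import numpy as np

# Dummy point cloud: x, y, z coordinates in [0, 1) and a normalized intensity in [0, 1).
points = np.random.rand(1000, 4)
voxel_size = np.array([0.1, 0.1, 0.1])   # assumed voxel edge lengths
n_bins = 10                              # number of intensity histogram bins

# Assign each point to a voxel and map each occupied voxel to a row index.
voxel_idx = np.floor(points[:, :3] / voxel_size).astype(np.int64)
uniq, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)

# Accumulate a per-voxel histogram of the point intensities.
hist = np.zeros((len(uniq), n_bins), dtype=np.float32)
bins = np.clip((points[:, 3] * n_bins).astype(int), 0, n_bins - 1)
np.add.at(hist, (inverse, bins), 1.0)

# 'hist' (one 10-dimensional vector per occupied voxel) can then be concatenated
# with the geometric voxel features produced by the voxel feature encoding layers.
print(hist.shape)
```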
(This article belongs to the Special Issue Sensors and Sensing Technologies for Object Detection and Recognition)
Show Figures

Figure 1

Figure 1
<p>LiDAR-annotated scene consisting of a 3D point cloud and seven labelled objects indicated by green 3D bounding boxes.</p>
Full article ">Figure 2
<p>Bounding boxes corresponding to true objects (green) and objects predicted by a detection pipeline (blue). True and predicted objects are matched by computing the degree of overlap between their bounding boxes.</p>
Full article ">Figure 3
<p>Superimposed in the annotated scene shown in <a href="#sensors-24-02942-f001" class="html-fig">Figure 1</a> are the bounding boxes of the objects detected by a pipeline <span class="html-italic">l</span> using a confidence score of 10% (<b>a</b>) and 90% (<b>b</b>). Ground truth objects are enclosed in a green bounding box, whereas predicted bounding boxes are blue. Predicted bounding boxes that match ground truth ones are TPs, whereas those that do not match a ground truth bounding box are FPs.</p>
Full article ">Figure 4
<p>Example of a scene from the KITTI dataset: (<b>a</b>) 2D camera image, (<b>b</b>) 3D LiDAR point cloud, and (<b>c</b>) resulting 2D BEV point cloud. The BEV point cloud is produced by projecting the 3D LiDAR point cloud onto a top-down view plane. Objects recognizable in the 3D LiDAR point cloud (<b>b</b>) are also recognizable in the 2D BEV point cloud (<b>c</b>).</p>
Full article ">Figure 4 Cont.
<p>Example of a scene from the KITTI dataset: (<b>a</b>) 2D camera image, (<b>b</b>) 3D LiDAR point cloud, and (<b>c</b>) resulting 2D BEV point cloud. The BEV point cloud is produced by projecting the 3D LiDAR point cloud onto a top-down view plane. Objects recognizable in the 3D LiDAR point cloud (<b>b</b>) are also recognizable in the 2D BEV point cloud (<b>c</b>).</p>
Full article ">Figure 5
<p>Our proposed 3D object detection pipeline consists of three stages, namely, an intensity-aware voxel encoder, which includes intensity features; a 3D backbone for dense feature extraction; and a 2D backbone that produces the final prediction (object classification and bounding box estimation).</p>
Full article ">Figure 6
<p>Architecture of the proposed intensity-aware voxel encoder. After voxelization, a scene is represented as a tensor with dimensions of <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi>v</mi> </msub> <mo>×</mo> <mn>35</mn> <mo>×</mo> <mn>4</mn> </mrow> </semantics></math>, where <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi>v</mi> </msub> <mo>=</mo> <msub> <mi>T</mi> <mi>D</mi> </msub> <mo>×</mo> <msub> <mi>T</mi> <mi>H</mi> </msub> <mo>×</mo> <msub> <mi>T</mi> <mi>W</mi> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <msub> <mi>T</mi> <mi>D</mi> </msub> </semantics></math>, <math display="inline"><semantics> <msub> <mi>T</mi> <mi>H</mi> </msub> </semantics></math>, and <math display="inline"><semantics> <msub> <mi>T</mi> <mi>W</mi> </msub> </semantics></math> are the number of voxels along the depth, height, and width dimensions of the scene. After augmentation, a tensor whose dimensions are <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi>v</mi> </msub> <mo>×</mo> <mn>35</mn> <mo>×</mo> <mn>7</mn> </mrow> </semantics></math> is generated and then processed via cascaded encoders VFE-1 and VFE-2. A voxel-wise intensity histogram <math display="inline"><semantics> <msub> <mi>I</mi> <mrow> <mi>o</mi> <mi>u</mi> <mi>t</mi> </mrow> </msub> </semantics></math>, whose dimensions are <math display="inline"><semantics> <mrow> <mi>T</mi> <mi>v</mi> <mo>×</mo> <mn>10</mn> </mrow> </semantics></math>, is concatenated to the output of VFE-2, whose dimensions are <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi>v</mi> </msub> <mo>×</mo> <mn>128</mn> </mrow> </semantics></math>, to produce the final <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi>v</mi> </msub> <mo>×</mo> <mn>138</mn> </mrow> </semantics></math> voxel-wise feature map.</p>
Full article ">Figure 7
<p>Examples of 3D LiDAR point cloud scenes where the reflected intensity <math display="inline"><semantics> <msub> <mi>ρ</mi> <mi>i</mi> </msub> </semantics></math> has been color-coded. (<b>a</b>) KITTI instance taken in clear weather conditions. (<b>b</b>) CADC instance taken during adverse weather conditions, where the highlighted region (red ellipse) shows low intensity values.</p>
Full article ">Figure 8
<p>Intensity profiles of six objects defined in the KITTI dataset, obtained by averaging the intensity distributions obtained for each object by using KDE (‘Car’: 6647 objects; ‘Van’: 2106 objects; ‘Truck’: 1027 objects; ‘Pedestrian’: 1778 objects; ‘Person (sitting)’: 98 objects; ‘Cyclist’: 1132 objects). Intensity values below 0 or above 1 are artifacts due to the smoothing nature of the KDE method.</p>
Full article ">Figure 9
<p>Visualisation in the AP × detection rate plane of the performance of the models shown in <a href="#sensors-24-02942-t006" class="html-table">Table 6</a>. AP values are obtained separately for ’Car’, ’Pedestrian’ and ’Cyclist’ objects, and for 3D and BEV detection modalities. Detection rates were computed using one single NVIDIA RTX 3080 GPU.</p>
Full article ">
15 pages, 4416 KiB  
Article
Optimization of Temperature Modulation for Gas Classification Based on Bayesian Optimization
by Tatsuya Iwata, Yuki Okura, Maaki Saeki and Takefumi Yoshikawa
Sensors 2024, 24(9), 2941; https://doi.org/10.3390/s24092941 - 6 May 2024
Viewed by 2852
Abstract
This study proposes an optimization method for temperature modulation in chemiresistor-type gas sensors based on Bayesian optimization (BO), and its applicability was investigated. As voltage for a sensor heater, our previously proposed waveform was employed, and the parameters determining the voltage range were [...] Read more.
This study proposes an optimization method for temperature modulation in chemiresistor-type gas sensors based on Bayesian optimization (BO), and its applicability is investigated. As the voltage for the sensor heater, our previously proposed waveform was employed, and the parameters determining the voltage range were optimized. Employing the Davies–Bouldin index (DBI) as the objective function (OBJ), BO was utilized to minimize the DBI calculated from a feature matrix built from the collected data after pre-processing. The sensor responses were measured using five test gases at five concentrations, amounting to 2500 data points per parameter set. After seven trials with four initial parameter sets (ten parameter sets were tested in total), the DBI was successfully reduced from 2.1 to 1.5. The classification accuracy for the test gases based on a support vector machine tends to increase with decreasing DBI, indicating that the DBI acts as a good OBJ. Additionally, the accuracy itself increased from 85.4% to 93.2% through the optimization. The deviation of some parameter sets from the tendency that accuracy increases with decreasing DBI is also discussed. Consequently, it is demonstrated that the proposed optimization method based on BO is promising for temperature modulation. Full article
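A minimal sketch of such a loop is shown below, using scikit-optimize's gp_minimize together with scikit-learn's davies_bouldin_score. In the real workflow each candidate (V0, VOffset) pair would trigger a new measurement and feature extraction; here a synthetic feature matrix stands in for that step, and the parameter ranges are assumptions chosen only for illustration.

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score
from skopt import gp_minimize

rng = np.random.default_rng(0)
labels = np.repeat(np.arange(5), 20)      # 5 test gases, 20 samples each (dummy labels)

def objective(params):
    v0, v_offset = params
    # Placeholder for "apply the waveform, measure the responses, pre-process,
    # and build the feature matrix": the cluster spread is made to depend
    # (arbitrarily) on the parameters so the optimizer has something to minimize.
    spread = 0.5 + abs(v0 - 0.4) + abs(v_offset - 0.8)
    features = rng.normal(labels[:, None], spread, size=(labels.size, 3))
    return davies_bouldin_score(features, labels)   # lower DBI = better-separated gases

result = gp_minimize(objective, dimensions=[(0.2, 0.6), (0.5, 1.0)],
                     n_calls=15, random_state=0)
print("best (V0, VOffset):", result.x, "DBI:", round(result.fun, 3))
```

In the actual experiment, each objective evaluation is a full measurement campaign, which is exactly the expensive-to-evaluate setting BO is intended for.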
(This article belongs to the Special Issue Recent Advancements in Olfaction and Electronic Nose)
Show Figures

Figure 1

Figure 1
<p><math display="inline"><semantics> <msub> <mi>V</mi> <mi mathvariant="normal">H</mi> </msub> </semantics></math> waveform employed in this study. <math display="inline"><semantics> <msub> <mi>V</mi> <mn>0</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>V</mi> <mi>Offset</mi> </msub> </semantics></math> determine the peak and bottom of the wave, and hence, the range of <math display="inline"><semantics> <msub> <mi>V</mi> <mi mathvariant="normal">H</mi> </msub> </semantics></math>.</p>
Full article ">Figure 2
<p>Schematic illustration of the concept of Bayesian optimization.</p>
Full article ">Figure 3
<p>Schematic illustration of the optimization procedure based on Bayesian optimization.</p>
Full article ">Figure 4
<p>Schematic illustration of the flow-control system.</p>
Full article ">Figure 5
<p>Schematic electrical circuit for the measurements. The current for heater resistor (<math display="inline"><semantics> <msub> <mi>R</mi> <mi mathvariant="normal">H</mi> </msub> </semantics></math>) was amplified by a voltage follower, while the sensor conductance (<math display="inline"><semantics> <mrow> <msub> <mi>G</mi> <mi mathvariant="normal">S</mi> </msub> <mo>=</mo> <mn>1</mn> <mo>/</mo> <msub> <mi>R</mi> <mi mathvariant="normal">S</mi> </msub> </mrow> </semantics></math>) was converted to the output voltage (<math display="inline"><semantics> <msub> <mi>V</mi> <mi>Out</mi> </msub> </semantics></math>) by an inverting amplifier.</p>
Full article ">Figure 6
<p>(<b>a</b>) Heater voltage with one of the initial parameter sets (<math display="inline"><semantics> <msub> <mi>V</mi> <mn>0</mn> </msub> </semantics></math>: 0.35 V, <math display="inline"><semantics> <msub> <mi>V</mi> <mi>Offset</mi> </msub> </semantics></math>: 0.75 V) and one of the corresponding measurement results for each of the test gases: (<b>b</b>) <math display="inline"><semantics> <msub> <mi>G</mi> <mi mathvariant="normal">S</mi> </msub> </semantics></math>, (<b>c</b>) <math display="inline"><semantics> <msub> <mi>G</mi> <mrow> <mi mathvariant="normal">S</mi> <mo>,</mo> <mi mathvariant="normal">n</mi> </mrow> </msub> </semantics></math>, and (<b>d</b>) frequency spectra of <math display="inline"><semantics> <msub> <mi>G</mi> <mrow> <mi mathvariant="normal">S</mi> <mo>,</mo> <mi mathvariant="normal">n</mi> </mrow> </msub> </semantics></math>.</p>
Full article ">Figure 7
<p>PC plot of the data obtained using the heater voltage shown in <a href="#sensors-24-02941-f006" class="html-fig">Figure 6</a>a.</p>
Full article ">Figure 8
<p>GPR results after (<b>a</b>) first, (<b>b</b>) third, (<b>c</b>), fifth, and (<b>d</b>) seventh trials. The blue circles indicate the predicted mean obtained by the regression, while the red crosses the experimental results.</p>
Full article ">Figure 9
<p>The observed minimum DBI plotted as a function of the number of trials.</p>
Full article ">Figure 10
<p>Classification accuracy plotted as a function of the DBI. The markers and the error bars indicate mean and max./min. accuracies, respectively.</p>
Full article ">
23 pages, 8208 KiB  
Review
Smart Sensing Chairs for Sitting Posture Detection, Classification, and Monitoring: A Comprehensive Review
by David Faith Odesola, Janusz Kulon, Shiny Verghese, Adam Partlow and Colin Gibson
Sensors 2024, 24(9), 2940; https://doi.org/10.3390/s24092940 - 5 May 2024
Cited by 3 | Viewed by 4225
Abstract
Incorrect sitting posture, characterized by asymmetrical or uneven positioning of the body, often leads to spinal misalignment and muscle tone imbalance. The prolonged maintenance of such postures can adversely impact well-being and contribute to the development of spinal deformities and musculoskeletal disorders. In [...] Read more.
Incorrect sitting posture, characterized by asymmetrical or uneven positioning of the body, often leads to spinal misalignment and muscle tone imbalance. The prolonged maintenance of such postures can adversely impact well-being and contribute to the development of spinal deformities and musculoskeletal disorders. In response, smart sensing chairs equipped with cutting-edge sensor technologies have been introduced as a viable solution for the real-time detection, classification, and monitoring of sitting postures, aiming to mitigate the risk of musculoskeletal disorders and promote overall health. This comprehensive literature review evaluates the current body of research on smart sensing chairs, with a specific focus on the strategies used for posture detection and classification and the effectiveness of different sensor technologies. A meticulous search across MDPI, IEEE, Google Scholar, Scopus, and PubMed databases yielded 39 pertinent studies that utilized non-invasive methods for posture monitoring. The analysis revealed that Force Sensing Resistors (FSRs) are the predominant sensors utilized for posture detection, whereas Convolutional Neural Networks (CNNs) and Artificial Neural Networks (ANNs) are the leading machine learning models for posture classification. However, it was observed that CNNs and ANNs do not outperform traditional statistical models in terms of classification accuracy due to the constrained size and lack of diversity within training datasets. These datasets often fail to comprehensively represent the array of human body shapes and musculoskeletal configurations. Moreover, this review identifies a significant gap in the evaluation of user feedback mechanisms, essential for alerting users to their sitting posture and facilitating corrective adjustments. Full article
(This article belongs to the Special Issue Advanced Non-Invasive Sensors: Methods and Applications)
Show Figures

Figure 1

Figure 1
<p>Literature review process.</p>
Full article ">Figure 2
<p>Twenty categories of different sitting postures along with a pie chart indicating their popularity among the research studies found.</p>
Full article ">Figure 3
<p>Taxonomy graph of sensors used in smart sensing chair systems.</p>
Full article ">Figure 4
<p>Examples of FSR sensors. (<b>a</b>) Square-shaped FSR sensor (FSR01CE) [<a href="#B67-sensors-24-02940" class="html-bibr">67</a>]. (<b>b</b>) Circle-shaped FSR sensor (FSR03CE) [<a href="#B67-sensors-24-02940" class="html-bibr">67</a>].</p>
Full article ">Figure 5
<p>Textile pressure sensor. (<b>a</b>) Textile pressure sensor composition. Reproduced with permission [<a href="#B70-sensors-24-02940" class="html-bibr">70</a>]. (<b>b</b>) PreCaTex textile sensor. Reproduced with permission [<a href="#B26-sensors-24-02940" class="html-bibr">26</a>].</p>
Full article ">Figure 6
<p>Illustration of some studies that implemented the use of dense sensor arrays. (<b>a</b>) Chair fitted with large pressure sensor array modules placed on top of the seating cushion. Reproduced with permission [<a href="#B43-sensors-24-02940" class="html-bibr">43</a>]. (<b>b</b>) Pressure array cushion with haptic feedback. Reproduced with permission [<a href="#B30-sensors-24-02940" class="html-bibr">30</a>], copyright 2021 <span class="html-italic">Sensors and Actuators</span>.</p>
Full article ">Figure 7
<p>Research studies using multiple pressure sensors placed around a chair. (<b>a</b>) Chair fitted with 10 textile pressure sensors. Reproduced with permission [<a href="#B26-sensors-24-02940" class="html-bibr">26</a>]. (<b>b</b>) Eight FSR sensors placed around a chair, five sensors placed on a sitting cushion, and three sensors added to a backrest. Reproduced with permission [<a href="#B28-sensors-24-02940" class="html-bibr">28</a>].</p>
Full article ">Figure 8
<p>Number of research papers published on smart sensing chair technology along with the sensor being used from 2007 to 2023.</p>
Full article ">Figure 9
<p>Comparison of machine learning models: number of postures vs. accuracy vs. test subjects, as indicated by the size of the circle.</p>
Full article ">
11 pages, 3826 KiB  
Article
Design and Fabrication of a Film Bulk Acoustic Wave Filter for 3.0 GHz–3.2 GHz S-Band
by Chao Gao, Yupeng Zheng, Haiyang Li, Yuqi Ren, Xiyu Gu, Xiaoming Huang, Yaxin Wang, Yuanhang Qu, Yan Liu, Yao Cai and Chengliang Sun
Sensors 2024, 24(9), 2939; https://doi.org/10.3390/s24092939 - 5 May 2024
Cited by 1 | Viewed by 1581
Abstract
Film bulk acoustic-wave resonators (FBARs) are widely utilized in the field of radio frequency (RF) filters due to their excellent performance, such as high operation frequency and high quality. In this paper, we present the design, fabrication, and characterization of an FBAR filter [...] Read more.
Film bulk acoustic-wave resonators (FBARs) are widely utilized in the field of radio frequency (RF) filters due to their excellent performance, such as high operating frequencies and high quality factors. In this paper, we present the design, fabrication, and characterization of an FBAR filter for the 3.0 GHz–3.2 GHz S-band. Using a scandium-doped aluminum nitride (Sc0.2Al0.8N) film, the filter is designed through a combined acoustic–electromagnetic simulation method, and the FBAR and filter are fabricated using an eight-step lithographic process. The measured FBAR presents an effective electromechanical coupling coefficient (keff²) of up to 13.3%, and the measured filter demonstrates a −3 dB bandwidth of 115 MHz (from 3.013 GHz to 3.128 GHz), a low insertion loss of −2.4 dB, and good out-of-band rejection of −30 dB. The measured 1 dB compression point of the fabricated filter is 30.5 dBm, and the first series resonator is the first to burn out as the input power increases. This work paves the way for research on high-power RF filters in mobile communication. Full article
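The effective electromechanical coupling coefficient quoted above is commonly estimated from the series (fs) and parallel (fp) resonance frequencies of the resonator, often via the approximation keff² ≈ (π²/4)·(fp − fs)/fp. The snippet below applies this textbook approximation to assumed resonance frequencies, not values reported in the paper.

```python
import math

# Assumed series and parallel resonance frequencies, chosen only for illustration.
fs = 3.00e9   # series resonance frequency, Hz
fp = 3.17e9   # parallel resonance frequency, Hz

# Common approximation for the effective electromechanical coupling coefficient.
keff2 = (math.pi ** 2 / 4) * (fp - fs) / fp
print(f"keff^2 ~ {keff2 * 100:.1f} %")   # roughly 13% for these assumed frequencies
```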
Show Figures

Figure 1

Figure 1
<p>The design of the proposed FBAR and filter: (<b>a</b>) Mason model. (<b>b</b>) A cross-sectional view of the designed FBAR, and a schematic diagram of the ladder circuit for the filter. (<b>c</b>) The simulated frequency responses of FBARs. The simulated electrical response of the filter: (<b>d</b>) overall characteristics; (<b>e</b>) in-band characteristics.</p>
Full article ">Figure 2
<p>Acoustic–electromagnetic co-simulation of the filter. (<b>a</b>) Electromagnetic model of the ladder-type filter based on FBARs. (<b>b</b>) Simulated overall transmission response of the filter. (<b>c</b>) Comparison of in-band insertion loss between acoustic–electromagnetic co-simulation and circuit simulation results.</p>
Full article ">Figure 3
<p>Simulation results for the temperature distribution in the filter for the power load of 30 dBm.</p>
Full article ">Figure 4
<p>Main process steps for the fabrication of Sc<sub>0.2</sub>Al<sub>0.8</sub>N-based filters. (<b>a</b>) Etch Si to form an air cavity. (<b>b</b>) Deposit SiO<sub>2</sub>. (<b>c</b>) Etch SiO<sub>2</sub> and CMP. (<b>d</b>) Deposit the AlN seed layer and Mo; then, etch Mo to form the bottom electrode. (<b>e</b>) Deposit Sc<sub>0.2</sub>Al<sub>0.8</sub>N and etch. (<b>f</b>) Deposit Mo; then, etch Mo to form a mass loading layer. (<b>g</b>) Deposit Mo; then, etch Mo to form the top electrode. (<b>h</b>) Deposit Au; then, pattern it to form the electrode pad (lift-off). (<b>i</b>) Etch release holes; then, release the SiO<sub>2</sub> to form an air cavity.</p>
Full article ">Figure 5
<p>(<b>a</b>) XRD patterns of Sc<sub>0.2</sub>Al<sub>0.8</sub>N film and Mo film during fabrication. (<b>b</b>) Rocking curve of ScAlN (002) peak. (<b>c</b>) SAED pattern of Sc<sub>0.2</sub>Al<sub>0.8</sub>N film. (<b>d</b>) High-resolution TEM image of Sc<sub>0.2</sub>Al<sub>0.8</sub>N film.</p>
Full article ">Figure 6
<p>(<b>a</b>) Scanning electron microscope (SEM) image of the cross-sectional view of the fabricated FBAR. (<b>b</b>) Top view of the fabricated FBAR. (<b>c</b>) Measured impedance curves and conductance curves of the series and parallel FBARs.</p>
Full article ">Figure 7
<p>(<b>a</b>) SEM image of the fabricated FBAR filter. Comparison of S<sub>21</sub> parameter between acoustic–electromagnetic co-simulation result and measured result of the filter transmission response: (<b>b</b>) overall characteristic, (<b>c</b>) in-band characteristic.</p>
Full article ">Figure 8
<p>(<b>a</b>) Power capacity test system. (<b>b</b>) The output power against the input power. (<b>c</b>) Device burnout caused by excessive power.</p>
Full article ">
24 pages, 7264 KiB  
Article
Rheological Properties and Inkjet Printability of a Green Silver-Based Conductive Ink for Wearable Flexible Textile Antennas
by Abdelkrim Boumegnane, Said Douhi, Assia Batine, Thibault Dormois, Cédric Cochrane, Ayoub Nadi, Omar Cherkaoui and Mohamed Tahiri
Sensors 2024, 24(9), 2938; https://doi.org/10.3390/s24092938 - 5 May 2024
Cited by 4 | Viewed by 1721
Abstract
The development of e-textiles necessitates the creation of highly conductive inks that are compatible with precise inkjet printing, which remains a key challenge. This work presents an innovative, syringe-based method to optimize a novel bio-sourced silver ink for inkjet printing on textiles. We [...] Read more.
The development of e-textiles necessitates the creation of highly conductive inks that are compatible with precise inkjet printing, which remains a key challenge. This work presents an innovative, syringe-based method to optimize a novel bio-sourced silver ink for inkjet printing on textiles. We investigate the relationships between the inks’ composition, rheological properties, and printing behavior, ultimately assessing the electrical performance of the fabricated circuits. Using Na–alginate and polyethylene glycol (PEG) as the suspension matrix, we demonstrate that the suspension viscosity depends on the component ratios. Rheological control of the silver nanoparticle-laden ink is paramount for uniform printing on textiles. A specific formulation (3 wt.% AgNPs, 20 wt.% Na–alginate, 40 wt.% PEG, and 40 wt.% solvent) exhibits the optimal rheology, enabling the printing of 0.1 mm thick conductive lines with a low resistivity (8 × 10⁻³ Ω/cm). Our findings pave the way for designing eco-friendly ink formulations that are suitable for inkjet printing flexible antennas and other electronic circuits onto textiles, opening up exciting possibilities for the next generation of e-textiles. Full article
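Flow curves of yield-stress inks such as this one are often described with the Herschel–Bulkley model, τ = τ0 + k·γ̇^n, whose parameters (yield stress τ0, consistency index k, and flow index n) also appear in the figures below. As a hedged illustration of how such parameters can be extracted, the following sketch fits the model to synthetic shear-stress data (not the measured ink curves) with SciPy.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, k, n):
    """Herschel-Bulkley model: shear stress as a function of shear rate."""
    return tau0 + k * gamma_dot ** n

# Synthetic flow curve (illustrative values, not the measured ink data).
shear_rate = np.linspace(1, 300, 50)                                                  # 1/s
rng = np.random.default_rng(1)
shear_stress = herschel_bulkley(shear_rate, 5.0, 2.0, 0.6) + rng.normal(0, 0.5, 50)   # Pa

# Least-squares fit of the three rheological parameters.
(tau0, k, n), _ = curve_fit(herschel_bulkley, shear_rate, shear_stress, p0=[1.0, 1.0, 1.0])
print(f"tau0 = {tau0:.2f} Pa, k = {k:.2f} Pa·s^n, n = {n:.2f}")
```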
(This article belongs to the Special Issue Feature Papers in Sensor Materials Section 2023/2024)
Show Figures

Figure 1

Figure 1
<p>XRD pattern of AgNPs.</p>
Full article ">Figure 2
<p>(<b>a</b>) TGA and (<b>b</b>) DTG curves of suspension matrices.</p>
Full article ">Figure 3
<p>DSC thermograms of suspension matrices.</p>
Full article ">Figure 4
<p>(<b>a</b>) Viscosity of suspension matrices at shear rates ranging from zero to 300 s<sup>−1</sup>. (<b>b</b>) Loss rate of suspension matrices, and (<b>c</b>) shear stress curves of suspension matrices at shear rates ranging from zero to 300 s<sup>−1</sup>.</p>
Full article ">Figure 5
<p>(<b>a</b>) Storage and loss modulus and (<b>b</b>) tan <span class="html-italic">δ</span> factor as a function of deformation for different matrices.</p>
Full article ">Figure 6
<p>(<b>a</b>) ΔEHOMO-LUMO energy gap for Na–alginate and PEG. (<b>b</b>) Molecular electrostatic potential (MEP) map.</p>
Full article ">Figure 7
<p>Viscosity of silver conductive inks at shear rates ranging from 0 to 300 s<sup>−1</sup> with silver filler content ranging from 0.5% to 3%.</p>
Full article ">Figure 8
<p>Shear stress curves of silver conductive inks at shear rates ranging from zero to 300 s<sup>−1</sup> with AgNPs content ranging from 0.5% to 3%.</p>
Full article ">Figure 9
<p>The Herschel–Bulkley rheological parameters (yield stress (<span class="html-italic">τ</span><sub>0</sub>), flow index (<span class="html-italic">n</span>), and consistency index (<span class="html-italic">k</span>)) for the silver conductive inks.</p>
Full article ">Figure 10
<p>(<b>a</b>) Variation in resistance as a function of AgNPs wt% for the SM<sub>5</sub> formulation, (<b>b</b>) optical image of the printed sample, and (<b>c</b>) LED test using the printed sample.</p>
Full article ">Figure 11
<p>(<b>a</b>,<b>b</b>) SEM images comparing SM<sub>1</sub> with SM<sub>5</sub> based silver ink. (<b>c</b>) Illustration demonstrating PEG incorporation into the Na-alginate network during AgNPs ink preparation, and (<b>d</b>) SEM image of PU-coated PET printed with SM<sub>5</sub> conductive ink (1 mm magnification in the yellow box).</p>
Full article ">Figure 12
<p>(<b>a</b>) Depiction of the configurations of the proposed microstrip flexible antenna in perspective view and top view. Simulated results for both scenarios (Free Space and On-Body): (<b>b</b>) reflection coefficient (S11), and (<b>c</b>) voltage standing wave ratio (VSWR). (<b>d</b>) Maximum gain (the pink bar indicates the highest gain within the antenna’s operating bandwidth). (<b>e</b>) Two-dimensional planar gain in free space and when the antenna is located within the human body (Black line: On body (Φ = 0)), and three-dimensional directional gain. (<b>f</b>) Visualization of the proposed antenna placement on human tissue in CST studio suite and associated SAR distribution.</p>
Full article ">
21 pages, 13388 KiB  
Article
An Optimized Instance Segmentation of Underlying Surface in Low-Altitude TIR Sensing Images for Enhancing the Calculation of LSTs
by Yafei Wu, Chao He, Yao Shan, Shuai Zhao and Shunhua Zhou
Sensors 2024, 24(9), 2937; https://doi.org/10.3390/s24092937 - 5 May 2024
Viewed by 1019
Abstract
The calculation of land surface temperatures (LSTs) via low-altitude thermal infrared remote (TIR) sensing images at a block scale is gaining attention. However, the accurate calculation of LSTs requires a precise determination of the range of various underlying surfaces in the TIR images, [...] Read more.
The calculation of land surface temperatures (LSTs) via low-altitude thermal infrared remote (TIR) sensing images at a block scale is gaining attention. However, the accurate calculation of LSTs requires a precise determination of the range of various underlying surfaces in the TIR images, and existing approaches face challenges in effectively segmenting the underlying surfaces in the TIR images. To address this challenge, this study proposes a deep learning (DL) methodology to perform instance segmentation and quantification of underlying surfaces using a low-altitude TIR image dataset. A Mask region-based convolutional neural network (Mask R-CNN) was utilized for pixel-level classification and segmentation with an image dataset of 1350 annotated TIR images of an urban rail transit hub with a complex distribution of underlying surfaces. Subsequently, the hyper-parameters and architecture were optimized for the precise classification of the underlying surfaces. The algorithms were validated using 150 new TIR images, and four evaluation indicators demonstrated that the optimized algorithm outperformed the other algorithms. High-quality segmented masks of the underlying surfaces were generated, and the area of each instance was obtained by counting the true-positive pixels with values of 1. This research promotes the accurate calculation of LSTs based on low-altitude TIR sensing images. Full article
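The final quantification step, turning each predicted binary mask into a physical area, amounts to counting the mask pixels and scaling by the ground footprint of one pixel. The sketch below shows this bookkeeping on dummy masks with an assumed ground sampling distance; the class labels and flight parameters are illustrative, not the authors' values.

```python
import numpy as np

gsd_m = 0.05                                                     # assumed ground sampling distance: 5 cm per pixel
class_names = ["roof", "asphalt", "vegetation"]                  # illustrative instance labels
masks = [np.random.rand(512, 640) > 0.99 for _ in range(3)]      # dummy binary instance masks

for name, mask in zip(class_names, masks):
    n_pixels = int(mask.sum())          # count of true-positive pixels with value 1
    area_m2 = n_pixels * gsd_m ** 2     # pixel count converted to square metres
    print(f"{name}: {n_pixels} px ~ {area_m2:.2f} m^2")
```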
(This article belongs to the Special Issue Deep Learning-Based Neural Networks for Sensing and Imaging)
Show Figures

Figure 1

Figure 1
<p>Overview of the QS hub station: (<b>a</b>) location of the QS hub station and (<b>b</b>) layout diagram and surroundings of the QS hub station.</p>
Full article ">Figure 2
<p>Photographs of the layout of the QS hub station: the two buildings and related facilities.</p>
Full article ">Figure 3
<p>Types of underlying surfaces of the QS station.</p>
Full article ">Figure 4
<p>UAV platform and low-altitude TIR imaging sensors.</p>
Full article ">Figure 5
<p>Aerial path plan of the low-altitude image system.</p>
Full article ">Figure 6
<p>Map of the point cloud (blue dots represent the TIR sensing images; green dots represent the RGB images; red dots indicate images of poor quality).</p>
Full article ">Figure 7
<p>Marks of the underlying surfaces annotated using LabelMe.</p>
Full article ">Figure 8
<p>Illustration of the annotated low-altitude TIR sensing images.</p>
Full article ">Figure 9
<p>Image data enhancement: (<b>a</b>) raw image and (<b>b</b>) vertically flipped image for data augmentation.</p>
Full article ">Figure 10
<p>Framework details of the original Mask R-CNN algorithm.</p>
Full article ">Figure 11
<p>Feature pyramid: (<b>a</b>) single map and (<b>b</b>) feature pyramid network (Arrows represent the flow of data).</p>
Full article ">Figure 12
<p>Architecture of an FPN.</p>
Full article ">Figure 13
<p>RPN of the Faster R-CNN with a traditional CNN backbone architecture.</p>
Full article ">Figure 14
<p>Overall structure of the optimized Mask R-CNN algorithm.</p>
Full article ">Figure 15
<p>Learning curve of the training process.</p>
Full article ">Figure 16
<p>Schematic of the quantification of the underlying surface marks.</p>
Full article ">Figure 17
<p>A portion of the results of the segmentation of the underlying surface marks: (<b>a</b>) raw images; (<b>b</b>) results with the original Mask R-CNN algorithm; (<b>c</b>) results with the proposed Mask R-CNN algorithm.</p>
Full article ">Figure 17 Cont.
<p>A portion of the results of the segmentation of the underlying surface marks: (<b>a</b>) raw images; (<b>b</b>) results with the original Mask R-CNN algorithm; (<b>c</b>) results with the proposed Mask R-CNN algorithm.</p>
Full article ">Figure 18
<p>Accuracy of the instance segmentation models during the training process.</p>
Full article ">
19 pages, 2594 KiB  
Review
Advanced Home-Based Shoulder Rehabilitation: A Systematic Review of Remote Monitoring Devices and Their Therapeutic Efficacy
by Martina Sassi, Mariajose Villa Corta, Matteo Giuseppe Pisani, Guido Nicodemi, Emiliano Schena, Leandro Pecchia and Umile Giuseppe Longo
Sensors 2024, 24(9), 2936; https://doi.org/10.3390/s24092936 - 5 May 2024
Viewed by 1790
Abstract
Shoulder pain represents the most frequently reported musculoskeletal disorder, often leading to significant functional impairment and pain, impacting quality of life. Home-based rehabilitation programs offer a more accessible and convenient solution for an effective shoulder disorder treatment, addressing logistical and financial constraints associated [...] Read more.
Shoulder pain represents the most frequently reported musculoskeletal disorder, often leading to significant functional impairment and pain, impacting quality of life. Home-based rehabilitation programs offer a more accessible and convenient solution for the effective treatment of shoulder disorders, addressing the logistical and financial constraints associated with traditional physiotherapy. The aim of this systematic review is to report the monitoring devices currently proposed and tested for shoulder rehabilitation in home settings. The research question was formulated using the PICO approach, and the PRISMA guidelines were applied to ensure a transparent methodology for the systematic review process. A comprehensive search of PubMed and Scopus was conducted, and studies published from 2014 to 2023 were included. Three different tools (i.e., the Rob 2 version of the Cochrane risk-of-bias tool, the Joanna Briggs Institute (JBI) Critical Appraisal tool, and the ROBINS-I tool) were used to assess the risk of bias. Fifteen studies were included as they fulfilled the inclusion criteria. The results showed that wearable systems represent a promising solution as remote monitoring technologies, offering quantitative and clinically meaningful insights into the progress of individuals within a rehabilitation pathway. Recent trends indicate a growing use of low-cost, non-intrusive visual tracking devices, such as camera-based monitoring systems, within the domain of tele-rehabilitation. The integration of home-based monitoring devices alongside traditional rehabilitation methods is attracting significant attention, offering broader access to high-quality care and potentially reducing the healthcare costs associated with in-person therapy. Full article
(This article belongs to the Special Issue Intelligent Sensors for Healthcare and Patient Monitoring)
Show Figures

Figure 1

Figure 1
<p>PRISMA flowchart.</p>
Full article ">Figure 2
<p>Rob 2 version of the Cochrane risk-of-bias tool for randomized control trials [<a href="#B29-sensors-24-02936" class="html-bibr">29</a>].</p>
Full article ">Figure 3
<p>Joanna Briggs Institute (JBI) Critical Appraisal risk-of-bias tool for case series.</p>
Full article ">Figure 4
<p>ROBINS-I risk-of-bias tool for case studies [<a href="#B30-sensors-24-02936" class="html-bibr">30</a>].</p>
Full article ">
19 pages, 6999 KiB  
Article
Arduino-Based Readout Electronics for Nuclear and Particle Physics
by Markus Köhli, Jannis Weimar, Simon Schmidt, Fabian P. Schmidt, Alexander Lambertz, Laura Weber, Jochen Kaminski and Ulrich Schmidt
Sensors 2024, 24(9), 2935; https://doi.org/10.3390/s24092935 - 5 May 2024
Cited by 1 | Viewed by 1644
Abstract
Open Hardware-based microcontrollers, especially the Arduino platform, have become a comparably easy-to-use tool for rapid prototyping and implementing creative solutions. Such devices in combination with dedicated front-end electronics can offer low-cost alternatives for student projects, slow control and independently operating small-scale instrumentation. The [...] Read more.
Open Hardware-based microcontrollers, especially the Arduino platform, have become a comparatively easy-to-use tool for rapid prototyping and implementing creative solutions. Such devices, in combination with dedicated front-end electronics, can offer low-cost alternatives for student projects, slow control and independently operating small-scale instrumentation. The capabilities can be extended to data taking and signal analysis at mid-level rates. Two detector realizations are presented, which cover the readouts of proportional counter tubes and of scintillators or wavelength-shifting fibers with silicon photomultipliers (SiPMs). The SiPMTrigger realizes a small-scale design for coincidence readout of SiPMs as a trigger or veto detector. It consists of a custom mixed-signal front-end board featuring signal amplification, discrimination and a coincidence unit for rates of up to 200 kHz. The nCatcher transforms an Arduino Nano into a proportional counter readout with pulse shape analysis: time-over-threshold measurement and a 10-bit analog-to-digital converter for pulse heights. The device is suitable for low-to-medium-rate environments up to 5 kHz, where a good signal-to-noise ratio is crucial. We showcase the monitoring of thermal neutrons. For data taking and slow control, a logger board is presented that features an SD card and GSM/LoRa interface. Full article
(This article belongs to the Special Issue Advances in Particle Detectors and Radiation Detectors)
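As a rough illustration of the pulse-shape analysis mentioned above (time over threshold and pulse height), the following Python sketch computes both quantities from an already digitized pulse. It is an offline illustration with an assumed sampling interval and a toy waveform, not the authors' Arduino firmware.

```python
import numpy as np

def pulse_features(samples, threshold, dt_us=1.0):
    """Return (pulse_height, time_over_threshold_us) for one digitized pulse.

    samples   : 1-D array of ADC readings (e.g., 10-bit values 0..1023)
    threshold : comparator level in the same ADC units
    dt_us     : sampling interval in microseconds (assumed value)
    """
    samples = np.asarray(samples, dtype=float)
    above = samples > threshold                 # comparator output as a boolean trace
    height = samples.max()                      # analogue of the peak detector / sample-and-hold
    tot_us = above.sum() * dt_us                # time over threshold
    return height, tot_us

# Toy pulse: flat baseline plus a roughly triangular pulse (placeholder waveform)
pulse = np.concatenate([np.full(20, 12.0),
                        np.linspace(12, 600, 15),
                        np.linspace(600, 12, 40),
                        np.full(20, 12.0)])
print(pulse_features(pulse, threshold=100.0))
```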
Show Figures

Graphical abstract

Figure 1: Arduino IDE with a C++ example showing the buildup of a simple program. The microcontroller first initializes itself in the setup() routine and then runs the loop() indefinitely. The internal analog-to-digital converter measures the voltage level on an input pin using the library function analogRead and then uses digital output pins as status indicators.
Figure 2: Schematic design of both types of detector front-end electronics with the main functional components for one channel. Whereas the proportional counter readout (left) measures pulse length and pulse height, the SiPM board (right) sets a fixed comparator threshold and counts coincidences between both photon counters. Additionally, both designs include sensors for environmental variables like temperature T, pressure p and relative humidity Rh.
Figure 3: Three-dimensional rendering of both Arduino-based readout boards. The photon counter front-end (left) is split into a dedicated amplifier board (inset) with short signal lines to the SiPMs and a digitizer for the coincidence counting of two two-channel boards. The proportional counter front-end features up to five channels, which are fed into one sample-and-hold stage, and holds the comparator circuit. Both boards are equipped with a high-voltage module on their backside. The Arduino Nano can be plugged into the corresponding socket row as shown on the left.
Figure 4: Schematic of a proportional counter for detecting neutrons, which are converted by a coating on the counter wall into ions emitted back-to-back. An electric field between the tube wall and the axial wire accelerates the generated electrons towards the center. In the vicinity of the wire, the electrons reach the gas ionization energy threshold and charge multiplication takes place. The pulse is then read out by a charge-sensitive amplifier. The resulting spectrum is shown on the right graph: from the maximum energy deposition of the nuclear fragments, the spectrum extends downwards due to losses in the converter medium itself. A signal threshold is required because the background noise overlaps this region.
Figure 5: Input stage of the proportional counter readout. The signal is decoupled from the high voltage and fed into an integrating preamplifier. An optional band-pass filter shapes the pulses before they are scaled to the designated height in the main amplifier.
Figure 6: Sample-and-hold circuit (top) and pulse length measurement circuit (bottom). From the main amplifier, signals are fed in at the entry point 'MAMP'. The peak detector is read out on the A7 pin, while the comparator is connected to the COMOUT pin of the Arduino.
Figure 7: Signal shaping and working principle of the front-end electronics for typical pulse signals with the four stages. The signal amplitudes of the analog voltages (blue, orange and green curves) are shown on the left y-axis, and the digital levels of the signal on the right y-axis (red curve).
Figure 8: Working principle of a photon detector: a scintillator (white box) produces light from particles that pass through. A wavelength-shifting fiber (gray) with a refractive index n_SC different from that of the scintillator n_LG acts as an optical transmitter by reflecting (ρ) light of suitable angles α (translated from angles β inside the scintillator). A silicon photomultiplier (orange) records the intensity, which then yields a spectrum of pulses of simultaneously arriving photons (pe).
Figure 9: Schematic of the analog signal part of one channel of the single SiPM board v1 with the main components, from the SiPM to the digital trigger signal (blue arrow).
Figure 10: Schematic of the analog signal part of one channel of the split SiPM board v2. Compared to board v1 in Figure 9, the signal polarity is inverted and a dual-channel comparator is used.
Figure 11: Main functional components of the Arduino-based data logger with the respective interfaces and protocols.
Figure 12: Three-dimensional rendering of the logger board. The Arduino DUE is inserted into the socket rows in the central part of the board. In the top segment, all voltages are generated from the 12 V DC input connected to the power switch (top left). On the right side of the board, cable-mounted devices such as sensors (four RJ45 ports, green SDI-12 sockets) and antennas (GPS and GSM) can be attached. The left side is dedicated to peripheral devices (from top to bottom): the SD card, OLED display, SIM card and USB interface. Depending on the application, one of two modems (lower segment) can be operated, with focus on either the NB-IoT or LTE/GPS interface. The required antennas need to be connected to their respective SMA sockets.
Figure 13: Pulse length–pulse height plot with induced noise beside the main signal events (blue). Left: heat (yellow) and mechanical stress (green) induce long but low pulses. Right: condensation of water vapor on the board (red) leads to a broad distribution of pulse shapes, which might originate from discharges as well as unwanted conductivity over humid surfaces.
Figure 14: Time series of a neutron counter located in Mannheim, Germany (49.51471, 8.55178) over a period of two years. The system was exposed to usual environmental conditions ranging from −10 °C to 40 °C, including storms, heavy rain and lightning. Most of the points far below the average originated from brownouts at low battery voltages.
Figure 15: Left: accuracy of the frequency measurement of the SiPMTrigger board connected to the interrupt pin of the Arduino. Right: temperature stability test of the threshold adjustment for the SiPM spectrum.
Figure 16: Left: dark count rate of two Hamamatsu S13360-1375PE SiPMs. Right: coincidence-counting WLSF setup with a plastic scintillator and an alpha source placed at different distances from the readout.
Figure 17: Example of the Grafana dashboard over a period of one month. The independently operating logger recorded data from environmental sensors: internal and external humidity (top-left panel), internal and external temperature (top-right panel), battery voltage (lower-left panel), and the count rate of an attached neutron detector (lower-right panel). All data were transmitted by a LoRa modem to a gateway and then sent via MQTT to the server, which stored them in a time-series database.
Figure 18: Power consumption of the data logger in operation combined with the proportional counter readout unit (nCatcher). Yellow-to-red color codes represent the nCatcher consumption, whereas green-to-blue color codes refer to the data logger components. The setup switches between three different modes. It predominantly resides in the sleep mode, which puts both microcontrollers and some parts of the digital domain into stand-by. During the active stage, the data logger collects data from the nCatcher and peripheral SDI-12 devices and saves them on the SD card. At user-defined intervals, the logger transmits the data via 4G to a server.
13 pages, 3807 KiB  
Article
Detection of Dopamine Based on Aptamer-Modified Graphene Microelectrode
by Cuicui Zhang, Tianyou Chen, Yiran Ying and Jing Wu
Sensors 2024, 24(9), 2934; https://doi.org/10.3390/s24092934 - 5 May 2024
Cited by 2 | Viewed by 1115
Abstract
In this paper, a novel aptamer-modified nitrogen-doped graphene microelectrode (Apt-Au-N-RGOF) was fabricated and used to specifically identify and detect dopamine (DA). During synthesis, gold nanoparticles were loaded onto the active sites of nitrogen-doped graphene fibers. Aptamers were then immobilized on the microelectrode via Au-S bonds to prepare Apt-Au-N-RGOF. The prepared microelectrode can specifically identify DA, avoiding interference from other molecules and improving its selectivity. Compared with the N-RGOF microelectrode, the Apt-Au-N-RGOF microelectrode exhibited higher sensitivity, a lower detection limit (0.5 μM), and a wider linear range (1–100 μM), and can be applied in electrochemical analysis. Full article
(This article belongs to the Special Issue Editorial Board Members' Collection Series: Aptamer Biosensors)
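As a rough illustration of how a sensitivity and detection limit of the kind reported above are typically extracted from DPV peak currents, the following Python sketch fits a linear calibration curve and applies the common 3.3σ/slope convention. The concentration-current pairs are placeholders, not the paper's data, and the paper may use a different LOD criterion.

```python
import numpy as np

# Hypothetical calibration points: DA concentration (uM) vs. DPV peak current (uA).
# These numbers are placeholders, not data from the paper.
conc = np.array([1, 3, 5, 8, 10, 30, 50, 80, 100], dtype=float)
peak = np.array([0.21, 0.60, 0.98, 1.55, 1.92, 5.70, 9.40, 15.1, 18.8])

slope, intercept = np.polyfit(conc, peak, 1)         # linear calibration curve
residuals = peak - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                         # scatter of the fit
lod = 3.3 * sigma / slope                             # one common LOD convention

print(f"sensitivity = {slope:.3f} uA/uM, LOD ~ {lod:.2f} uM")
```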
Show Figures

Figure 1: Preparation of N-doped reduced graphene fiber and microelectrode: (a) fiber prepared by the wet spinning method; (b) nitrogen doping mechanism of urea; (c) preparation of the microelectrode.
Figure 2: Representative SEM images of N-RGOF and Au-N-RGOF: (a,b) N-RGOF surface; (c) N-RGOF cross-section; (d,e) Au-N-RGOF surface; (f) Au-N-RGOF cross-section.
Figure 3: Raman spectra of N-RGOF and Au-N-RGOF, UV–Vis absorption spectra of Au-N-RGOF, and XPS spectra of Au-N-RGOF: (a) Raman spectra of N-RGOF and Au-N-RGOF; (b) UV–Visible absorption spectra of Au-N-RGOF; (c) XPS total spectra of Au-N-RGOF; (d) the corresponding high-resolution N 1s peak; (e) the corresponding high-resolution C 1s peak.
Figure 4: Electrochemical characterization of N-RGOF and detection of DA: (a) CV of N-RGOF in 0.5 M KCl with 5.0 mM K3Fe(CN)6 at different scan rates; (b) CV of N-RGOF in DA solution at different concentrations (1, 3, 5, 8, 10, 30, 50, 80, 100 μM); (c) DPVs of N-RGOF in DA solution at different concentrations; (d) linear relationship between DA concentration and DPV peak current.
Figure 5: Electrochemical characterization of N-RGOF in UA solution or a mixed solution of UA and DA: (a) CV of N-RGOF in UA solution at different concentrations (1–100 μM); (b) DPV of N-RGOF in UA solution at different concentrations; (c) CV of N-RGOF in mixed solutions of DA and UA at different concentrations; (d) DPV of N-RGOF in mixed solutions of DA and UA at different concentrations.
Figure 6: Electrochemical characterization of Apt-Au-N-RGOF in DA solution and 100 CV cycles of RGOF in blank 0.1 M PBS solution: (a) CV of Apt-Au-N-RGOF in DA solution at different concentrations (1–100 μM); (b) DPV of Apt-Au-N-RGOF in DA solution at different concentrations; (c) linear relationship between DA concentration and DPV peak current; (d) 100 CV cycles of RGOF in blank 0.1 M PBS solution.
Figure 7: Electrochemical characterization of Apt-Au-N-RGOF in UA solution or a mixed solution of UA and DA: (a) CV in UA solutions at different concentrations (1–100 μM); (b) DPV in UA solutions at different concentrations; (c) CV in mixed solutions of DA and UA at different concentrations; (d) DPV in mixed solutions of DA and UA at different concentrations.
Figure 8: Selectivity, reproducibility, and stability: (a) DPV of Au-N-RGOF in mixed solutions of UA and DA at different concentrations (1–100 μM); (b) DPV of Apt-Au-N-RGOF in mixed solutions of UA and DA at different concentrations; (c) DPV of Apt-Au-N-RGOF in a mixed solution of DA and UA (50 μM) for the first, second, and third measurements; (d) DPV of Apt-Au-N-RGOF in a mixed solution of DA and UA (80 μM) at different times (immediately, 24 h, 48 h, 72 h).
Figure 9: Electrochemical response mechanism of DA.
42 pages, 59595 KiB  
Article
Automated Porosity Characterization for Aluminum Die Casting Materials Using X-ray Radiography, Synthetic X-ray Data Augmentation by Simulation, and Machine Learning
by Stefan Bosse, Dirk Lehmhus and Sanjeev Kumar
Sensors 2024, 24(9), 2933; https://doi.org/10.3390/s24092933 - 5 May 2024
Cited by 1 | Viewed by 1239
Abstract
Detection and characterization of hidden defects, impurities, and damage in homogeneous materials like aluminum die casting materials, as well as in composite materials like Fiber–Metal Laminates (FML), is still a challenge. This work discusses methods and challenges in data-driven modeling of automated damage and defect detectors using measured X-ray single- and multi-projection images. Three main issues are identified: data and feature variance, data feature labeling (for supervised machine learning), and the missing ground truth. It will be shown that simulation of synthetic measurement data can deliver a ground-truth dataset and accurate labeling for data-driven modeling, but it cannot be used directly to predict defects in manufacturing processes. Noise has a significant impact on feature detection and is discussed in detail. Data-driven feature detectors are implemented with semantic pixel Convolutional Neural Networks. Experimental data are measured with different devices: a low-quality and low-cost (Low-Q) X-ray radiography system, a typical industrial mid-quality X-ray radiography and Computed Tomography (CT) system, and a state-of-the-art high-quality μ-CT device. The goals of this work are the training of robust and generalized data-driven ML feature detectors with synthetic data only and the transition from CT to single-projection radiography imaging and analysis. Although, as the title implies, the primary task is pore characterization in aluminum high-pressure die-cast materials, the methods and results are not limited to this use case. Full article
(This article belongs to the Section Sensing and Imaging)
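The abstract notes that noise strongly affects feature detection, and the figures below describe post-adding Poisson, binomial, and Gaussian noise to the synthetic radiographs. A minimal sketch of the Poisson variant is given here; the normalization range, count scale, and test image are assumptions rather than the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_poisson_noise(image, max_counts=80):
    """Post-add shot noise to a synthetic radiograph.

    image      : float array with normalized intensities in [0, 1]
    max_counts : assumed photon count corresponding to full intensity
    """
    expected = np.clip(image, 0, 1) * max_counts       # expected counts per pixel
    noisy = rng.poisson(expected).astype(float)        # Poisson-distributed counts
    return noisy / max_counts                          # back to normalized intensity

# Toy intensity ramp as a stand-in for a synthetic X-ray image
synthetic = np.linspace(0, 1, 256).reshape(1, -1).repeat(64, axis=0)
noisy = add_poisson_noise(synthetic)
snr = synthetic.mean(axis=0) / (noisy - synthetic).std(axis=0).clip(1e-9)
print(snr[::64])   # for shot noise, SNR grows with the signal level
```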
Show Figures

Figure 1: (Top) Methodologies used in this work. (Bottom) Data and model architecture.
Figure 2: Simulation flow for the generation of synthetic data.
Figure 3: (Left) Complex FML model with single-fiber modeling and a simplified impact damage; 10,000 fibers with 150 μm diameter. Damage size and location can be changed. (Right) Computed X-ray images: frontal and 45° projections, detector pixel size 100 μm, with and without impact damage.
Figure 4: Details of the X-ray simulation flow using GPU-based image computation (gVirtualXray). The Filtered Back-Projection (FBP) is optional and only used for CT simulation.
Figure 5: Post-adding of noise to X-ray images (normalized intensity [0, 80]) using Poisson and binomial distributions, showing a Signal-to-Noise Ratio (SNR) that increases at low absolute signal values and converges to a constant SNR for higher values.
Figure 6: Post-adding of Gaussian noise to X-ray images (normalized intensity [0, 80]), showing a continuously increasing SNR with higher signal values.
Figure 7: Statistical distribution of the pore ellipsoid area (a) and volume (b) for the 500 largest pores, shown with linear and logarithmic density axes.
Figure 8: μ-CT volume rendering with semitransparent view and contour fit (using ParaView).
Figure 9: Different clustered pores (left and middle) and the computed convex hull point cloud (right). Pore sizes decrease from top to bottom.
Figure 10: Different image processing algorithms applied to a raw (original) CT slice image (output from the CT reconstruction and filtering algorithms). The last image shows the binary combination of the Otsu and adaptive threshold computations, finally used to extract pore features.
Figure 11: Architecture and data flow of the semantic pixel classifier using a flat CNN (FC-NN: fully connected neural network).
Figure 12: Basic architecture of the SAM model (adapted from [44]).
Figure 13: (Left) Example of a synthetic X-ray training image. (Right) Rectangular ROI pore mask annotations from the CAD model.
Figure 14: Die casting simulation: geometry of the casting—die cavity including shot chamber, runner, overflows, etc. during mold filling. For the experiments, the rectangular bending test sample on the left was used. The color plot shows the temperature distribution at the beginning of the filling process.
Figure 15: Die casting simulation: sequence of die filling. Color coding denotes the age of the respective melt volume, while arrows indicate the direction and velocity of melt flow.
Figure 16: Die casting simulation representing the filling process: (a) melt velocity distribution shortly before the end of the filling phase; (b) entrapped air mass prediction broadly reflecting the flow pattern.
Figure 17: Die casting simulation depicting the general porosity prediction: (a) prediction of porosity; (b) hot spot FS time criterion indicating the risk of occurrence of shrinkage porosity.
Figure 18: Two example segments (20 × 20 pixels) extracted from the synthetic X-ray images with 20% noise level: (a) with pore; (b) without pore.
Figure 19: Feature map images (pore marking) computed by the CNN pixel classifier using the synthetic X-ray radiography images as input for three different output thresholds (classifier score in [0, 1]). The ground-truth ROI polygons are shown as an overlay.
Figure 20: Feature map images (pore marking) computed by the CNN pixel classifier with measured X-ray radiography images (Mid-Q) as input (for three different plates).
Figure 21: Feature map images of rolled aluminum plates without pores computed by the CNN pixel classifier for different score thresholds (Mid-Q). Expected result: black, without feature marking.
Figure 22: Feature map images (pore marking) computed by the CNN pixel classifier with measured X-ray radiography images (Low-Q) as input for different score thresholds. (Left) CNN model trained with a balanced n/p set; (Right) post-training with a biased set (n = 75%, p = 25%).
Figure 23: Pore segmentation results obtained from the SAM, triangle thresholding, and the overlapping results from both methods. The CLAHE image is obtained by first denoising, then removing uneven illumination, and then applying CLAHE (applied to Mid-Q X-ray images).
Figure 24: Comparison of pore area distribution analysis from the Mid-Q and Low-Q devices (the Low-Q device has higher noise and longer exposure times, but also higher resolution) using the pixel classifier (PXL-CNN8) with one convolution layer and 8 filters, compared with the deep learning CLAHE–SAM model.
Figure 25: CSG-CAD modeling of deformations: (a) hybrid with cubes and extruded half-boundary profiles; (b) half-convex hull extrusion; (c) point cloud convex hull extrusion.
Figure 26: Synthetic CAD model of an FML plate with deformations due to an impact damage.
Figure 27: One slice of the CT reconstruction using synthetic X-ray images, and intensity profiles for aluminum, PREG, and both layers. Gray: aluminum layer; yellow: PREG layer.
19 pages, 8691 KiB  
Article
Pedestrian Pose Recognition Based on Frequency-Modulated Continuous-Wave Radar with Meta-Learning
by Jiajia Shi, Qiang Zhang, Quan Shi, Liu Chu and Robin Braun
Sensors 2024, 24(9), 2932; https://doi.org/10.3390/s24092932 - 5 May 2024
Viewed by 1135
Abstract
With the continuous advancement of autonomous driving and monitoring technologies, non-intrusive target monitoring and recognition are receiving increasing attention. This paper proposes an ArcFace SE-attention model-agnostic meta-learning approach (AS-MAML) that integrates attention mechanisms into residual networks for pedestrian gait recognition with frequency-modulated continuous-wave (FMCW) millimeter-wave radar. We enhance the feature extraction capability of the base network using channel attention mechanisms and integrate the additive angular margin loss function (ArcFace loss) into the inner loop of MAML to constrain inner-loop optimization and improve discrimination. This network is then used to classify small-sample micro-Doppler images obtained from millimeter-wave radar, which serve as the data source for pose recognition. Experimental tests were conducted on pose estimation and image classification tasks. The results demonstrate significant detection and recognition performance, with an accuracy of 94.5%, accompanied by a 95% confidence interval. Additionally, on the open-source dataset DIAT-μRadHAR, which is specially processed to increase classification difficulty, the network achieves a classification accuracy of 85.9%. Full article
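For readers unfamiliar with the additive angular margin loss mentioned above, the following NumPy sketch shows an ArcFace-style logit construction followed by softmax cross-entropy. The scale and margin values are typical defaults, and the meta-learning inner/outer loops of AS-MAML are deliberately omitted; this is not the authors' implementation.

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=30.0, m=0.5):
    """Additive angular margin (ArcFace-style) logits.

    embeddings : (N, D) feature vectors
    weights    : (C, D) class weight vectors
    labels     : (N,) integer class labels
    s, m       : scale and angular margin (assumed typical values)
    """
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = np.clip(e @ w.T, -1.0, 1.0)                 # cos(theta) for every class
    theta = np.arccos(cos)
    target = np.zeros_like(cos)
    target[np.arange(len(labels)), labels] = 1.0
    return s * np.where(target == 1.0, np.cos(theta + m), cos)

def softmax_cross_entropy(logits, labels):
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(1)
emb, w = rng.normal(size=(4, 16)), rng.normal(size=(7, 16))   # e.g., 7 gait classes
y = np.array([0, 2, 5, 6])
print(softmax_cross_entropy(arcface_logits(emb, w, y), y))
```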
Show Figures

Figure 1: Time-related characteristics of the sawtooth modulation signal. Red represents the transmitted signal and blue the received signal.
Figure 2: Micro-Doppler image of a walking person.
Figure 3: Micro-Doppler images of seven gait postures.
Figure 4: AS-MAML network structure diagram.
Figure 5: MAML update processing.
Figure 6: Experiment scene of data collection: (a) the human target; (b) the AWR1642 radar and DCA1000EVM data acquisition board; (c) the front of the radar and data acquisition board; (d) mmWave Studio, the control software on the PC.
Figure 7: Confusion matrices for the ablation experiment. (a–d) refer to the four ablation experiment results described in the text.
Figure 8: t-SNE visualization results. The clustering effect of (c) AS-MAML is significantly better than that of (a) Res18 and (b) MAML, where the distances between categories are too close (circled in red).
Figure 9: Schematic diagram of image enhancement.
Figure 10: The training loss function and accuracy curves.
Figure 11: Confusion matrices for the proposed and compared networks.
Figure 12: Micro-Doppler images at three walking speeds.
Figure 13: Confusion matrix for classes with three walking speeds.
Figure 14: Micro-Doppler images of two-person walking and single-person walking.
Figure 15: Confusion matrix for two-person and single-person walking.
22 pages, 2980 KiB  
Article
K-Space Approach in Optical Coherence Tomography: Rigorous Digital Transformation of Arbitrary-Shape Beams, Aberration Elimination and Super-Refocusing beyond Conventional Phase Correction Procedures
by Alexander L. Matveyev, Lev A. Matveev, Grigory V. Gelikonov and Vladimir Y. Zaitsev
Sensors 2024, 24(9), 2931; https://doi.org/10.3390/s24092931 - 5 May 2024
Viewed by 1158
Abstract
For the most popular method of scan formation in Optical Coherence Tomography (OCT), based on plane-parallel scanning of the illuminating beam, we present a compact but rigorous K-space description in which the spectral representation is used to describe both the axial and lateral structure of the illuminating/received OCT signals. Along with the majority of descriptions of OCT-image formation, the discussed approach relies on the basic principle of OCT operation, in which ballistic backscattering of the illuminating light is assumed. This single-scattering assumption is the main limitation, whereas in other aspects the presented approach is rather general. In particular, it is applicable to arbitrary beam shapes without the need for paraxial approximation or the assumption of Gaussian beams. The main result of this study is the use of the proposed K-space description to analytically derive a filtering function that allows one to digitally transform the initial 3D set of complex-valued OCT data into a desired (target) dataset of a rather general form. An essential feature of the proposed filtering procedures is the utilization of both phase and amplitude transformations, unlike the conventionally discussed phase-only transformations. To illustrate the efficiency and generality of the proposed filtering function, it is applied to the mutual transformation of non-Gaussian beams and to the digital elimination of arbitrary aberrations at the illuminating/receiving aperture. As another example, in addition to the conventionally discussed digital refocusing that yields depth-independent lateral resolution equal to that in the physical focus, the derived filtering function is used to perform digital "super-refocusing". The latter does not yet overcome the diffraction limit but readily enables lateral resolution several times better than in the initial physical focus. Full article
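For contrast with the amplitude-and-phase filtering derived in the paper, the following sketch implements only the conventional phase-only defocus correction in the lateral K-space of one complex en-face OCT slice. The parabolic phase factor, sign convention, and parameters are textbook assumptions, not the authors' filtering function.

```python
import numpy as np

def refocus_slice(cslice, dz, k0, dx):
    """Phase-only defocus correction of one complex en-face OCT slice.

    cslice : (Ny, Nx) complex OCT data at a given depth
    dz     : distance from the physical focus to that depth (m)
    k0     : central wavenumber 2*pi/lambda (m^-1)
    dx     : lateral pixel pitch (m)

    This is the conventional parabolic phase filter; the paper's filtering
    function additionally modifies the spectral amplitude, which is not
    reproduced here. Sign and double-pass factor conventions vary.
    """
    ny, nx = cslice.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    H = np.exp(-1j * (KX**2 + KY**2) * dz / (2 * k0))
    return np.fft.ifft2(np.fft.fft2(cslice) * H)

# Random complex slice as a placeholder for measured OCT data
rng = np.random.default_rng(0)
slice0 = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
out = refocus_slice(slice0, dz=200e-6, k0=2 * np.pi / 0.85e-6, dx=5e-6)
print(out.shape, out.dtype)
```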
Show Figures

Figure 1: Schematically shown geometry of the illuminating beam (not necessarily focused as in the figure) with the lateral coordinates (x0, y0) of the axis directed to the tissue depth along the z-axis of the coordinate system. Coordinates (xs, ys, zs) characterize the positions of scatterers.
Figure 2: Schematic elucidation of the fact that the forward propagation of the illuminating light and the back-scattered signal propagation can be described in a symmetrical manner, leading to Equation (6). Note that such an equivalent scheme should be represented for every particular scatterer depth z_s.
Figure 3: Diagram showing the main steps of the transformation of the complex-valued OCT data obtained for the initial field distribution U_L(x, y; k_n) over the illuminating/receiving aperture into the desired form corresponding to a target field distribution U_L^(T)(x, y; k_n) over the aperture.
Figure 4: Comparison of differently performed refocusing of a B-scan for a vertical chain of sub-resolution scatterers. Panel (a) is the initial B-scan with a highly focused beam simulated using Equation (7) for each single A-scan. (b) is the result of refocusing based on Equation (15), which looks like a phase-only correction but implicitly implies a variable beam width at the aperture, such that W_L'/z_0' = W_L/z_0, which yields W_0' = W_0. (c) is the result of refocusing based on the filtering function (12) and Equation (14), in which the focus depth is shifted but the beam radius at the aperture is kept invariable (W_L' = W_L0), so that the focus radius W_0' ≠ const. (d) is the result again based on Equations (12) and (14), but requiring that W_0' = W_0, such that the condition W_L'/z_0' = W_L/z_0 was used when calculating the filtering function (12). The central wavelength is λ = 0.85 µm, the initial focus depth is z_0 = 252 µm, and the radius of the physical focus is W_0 = 1.9 µm.
Figure 5: Example of the transformation of non-Gaussian beams: an initially strongly focused beam with a Lorentzian profile (row (a-1–a-3)) is transformed into a Bessel beam (row (b-1–b-3)). The first column shows vertical B-scans; the other columns show horizontal scans through the depth z_1 of the initial focus and through z_2 closer to the image bottom. The insets show the corresponding horizontal profiles.
Figure 6: Comparison of conventional refocusing and super-refocusing for an initially weakly focused beam with W_0 = 12.6 μm for the same vertical positions of scatterers as in Figure 4. At each depth, a pair of scatterers laterally separated by 12 μm is located. The upper row shows color-coded B-scans with insets showing en-face images of the scatterers located at z = 480 μm, for which the lateral profiles are shown in the lower row. Panels (a-1,a-2) are not refocused and (b-1,b-2) show conventional refocusing similar to that in Figure 4. (c-1,c-2) show the result of super-refocusing with a 4-fold increase in lateral resolution, so that individual scatterers become clearly resolved. The noise level is especially clearly seen in the lower row.
Figure 7: Illustration of how the regularization parameter influences the noise level in the super-refocused image with the same parameters as in Figure 5. Rows (a-1–a-5,b-1–b-5) show the "digital" noise caused by the regularization itself in the absence of other noises. Rows (c-1–c-5,d-1–d-5) correspond to the same moderate initial noise as in Figure 5 and demonstrate that the resultant noise after refocusing strongly depends on the regularization parameter.
Figure 8: Illustration of distortions produced by an aberration in the form of the Zernike polynomial Z_3^1(ρ, ϕ) (upper row) and the results of the image transformation using either conventional refocusing by Equation (15) (middle row) or refocusing combined with elimination of the aberrations using the spectral filter given by Equations (12) and (14) (lower row). Column 1 shows the in-depth B-scans, column 2 shows the en face image of the scatterer located at the depth of the physical focus z = 252 μm, and column 3 shows the en face image of the scatterer located deeper at z = 480 μm. The small insets show the horizontal profiles of the scatterer images along the X-axis.
18 pages, 8336 KiB  
Article
Leveraging Temporal Information to Improve Machine Learning-Based Calibration Techniques for Low-Cost Air Quality Sensors
by Sharafat Ali, Fakhrul Alam, Johan Potgieter and Khalid Mahmood Arif
Sensors 2024, 24(9), 2930; https://doi.org/10.3390/s24092930 - 4 May 2024
Viewed by 1134
Abstract
Low-cost ambient sensors have been identified as a promising technology for monitoring air pollution at a high spatio-temporal resolution. However, the pollutant data captured by these cost-effective sensors are less accurate than their conventional counterparts and require careful calibration to improve their accuracy and reliability. In this paper, we propose to leverage temporal information, such as the duration of time a sensor has been deployed and the time of day the reading was taken, in order to improve the calibration of low-cost sensors. This information is readily available and has so far not been utilized in the reported literature for the calibration of cost-effective ambient gas pollutant sensors. We make use of three data sets collected by research groups around the world, who gathered the data from field-deployed low-cost CO and NO2 sensors co-located with accurate reference sensors. Our investigation shows that using the temporal information as a co-variate can significantly improve the accuracy of common machine learning-based calibration techniques, such as Random Forest and Long Short-Term Memory. Full article
(This article belongs to the Section Sensor Networks)
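A minimal sketch of the calibration idea described above, adding the deployment duration and hour of day as extra covariates for a Random Forest regressor, is shown below with scikit-learn. The column names, the synthetic drift model, and the 90:10 split are placeholders, not the datasets or protocol used in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical co-located data: raw low-cost CO reading, temperature, humidity,
# plus the two temporal covariates (days deployed, hour of day).
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "co_raw": rng.uniform(0.2, 3.0, n),
    "temp": rng.uniform(5, 35, n),
    "hum": rng.uniform(20, 90, n),
    "n_days": rng.integers(0, 365, n),
    "hour": rng.integers(0, 24, n),
})
# Synthetic reference with drift and a diurnal effect (placeholder, not real data)
df["co_ref"] = (df.co_raw * (1 - 0.0005 * df.n_days)
                + 0.05 * np.sin(2 * np.pi * df.hour / 24)
                + rng.normal(0, 0.05, n))

X_tr, X_te, y_tr, y_te = train_test_split(
    df.drop(columns="co_ref"), df["co_ref"], test_size=0.1, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```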
Show Figures

Figure 1: Box plots of the target pollutant concentrations as recorded by the reference sensors for Datasets 1, 2 and 3 in (a–c), respectively. The medians and standard deviations of the CO readings are (1.66, 1.26), (0.49, 0.40) and (0.67, 0.25) ppm for the three datasets, respectively. The medians and standard deviations of the NO2 readings for the three datasets are (109, 47.23), (18.16, 12.68) and (20.33, 15.65) ppb, respectively.
Figure 2: Process diagram of the dataset training, validation and testing. A k-fold (k = 10) cross-validation has been utilized to ensure that the parameters are more generalized.
Figure 3: Empirical CDF plots of calibration error for CO.
Figure 4: Empirical CDF plots of calibration error for NO2.
Figure 5: Target diagrams of (a) RFR and (b) LSTM for CO.
Figure 6: Target diagrams of (a) RFR and (b) LSTM for NO2.
Figure 7: E-CDFs of CO for all three datasets and different algorithms, comparing scenarios S0 (raw LCS + Temp + Hum), S0T (raw LCS + Temp + Hum + N_days + Hour) and S1 (raw LCS + Temp + Hum + other gases), and train/test splits (TTS1—90:10, TTS2—20:80).
Figure 8: E-CDFs of NO2 for all three datasets and different algorithms, comparing scenarios S0 (raw LCS + Temp + Hum), S0T (raw LCS + Temp + Hum + N_days + Hour) and S1 (raw LCS + Temp + Hum + other gases), and train/test splits (TTS1—90:10, TTS2—20:80).
13 pages, 1888 KiB  
Article
Biomechanical Posture Analysis in Healthy Adults with Machine Learning: Applicability and Reliability
by Federico Roggio, Sarah Di Grande, Salvatore Cavalieri, Deborah Falla and Giuseppe Musumeci
Sensors 2024, 24(9), 2929; https://doi.org/10.3390/s24092929 - 4 May 2024
Cited by 1 | Viewed by 2854
Abstract
Posture analysis is important in musculoskeletal disorder prevention but relies on subjective assessment. This study investigates the applicability and reliability of a machine learning (ML) pose estimation model for human posture assessment, while also exploring the underlying structure of the data through principal component and cluster analyses. A cohort of 200 healthy individuals with a mean age of 24.4 ± 4.2 years was photographed from the frontal, dorsal, and lateral views. We used Student's t-test and Cohen's effect size (d) to identify gender-specific postural differences and the Intraclass Correlation Coefficient (ICC) to assess the reliability of the method. Our findings demonstrate distinct sex differences in shoulder adduction angle (men: 16.1° ± 1.9°, women: 14.1° ± 1.5°, d = 1.14) and hip adduction angle (men: 9.9° ± 2.2°, women: 6.7° ± 1.5°, d = 1.67), with no significant differences in horizontal inclinations. ICC analysis, with a highest value of 0.95, confirms the reliability of the approach. Principal component and clustering analyses revealed potentially new patterns in postural analysis, such as significant differences in shoulder–hip distance, highlighting the potential of unsupervised ML for objective posture analysis and offering a promising non-invasive method for rapid, reliable screening in physical therapy, ergonomics, and sports. Full article
(This article belongs to the Special Issue Sensors and Artificial Intelligence in Gait and Posture Analysis)
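As a simple illustration of how a segment angle such as shoulder adduction can be derived from pose-estimation landmarks, the following sketch measures the angle between a landmark pair and the vertical image axis. The landmark pairing and pixel coordinates are illustrative assumptions, not the study's exact angle definitions.

```python
import numpy as np

def segment_angle_to_vertical(proximal, distal):
    """Angle (degrees) between a body segment and the vertical image axis.

    proximal, distal : (x, y) image coordinates of two landmarks, e.g. the
    shoulder and elbow returned by a pose estimation model (y grows downward).
    """
    seg = np.asarray(distal, float) - np.asarray(proximal, float)
    vertical = np.array([0.0, 1.0])
    cosang = seg @ vertical / np.linalg.norm(seg)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical landmark pixels: shoulder at (420, 310), elbow at (455, 470)
print(round(segment_angle_to_vertical((420, 310), (455, 470)), 1))
```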
Show Figures

Figure 1: Pose estimation of the collected photos; visualization of the superimposed skeletal model highlighting the anatomical landmarks on frontal, dorsal, and lateral photo views of a woman and a man.
Figure 2: Angle results of the machine learning posture analysis with detailed comparison of body joint angles and horizontal inclinations for men and women, as indicated by mean values ± standard deviations. (ADD: adduction, EXT: extension, VAR/VAL: varus/valgus, FLX: flexion).
Figure 3: Representation of the data distribution from the application of the clustering algorithms within the space of the first two principal components.
9 pages, 659 KiB  
Article
Analyzing the Thermal Characteristics of Three Lining Materials for Plantar Orthotics
by Esther Querol-Martínez, Artur Crespo-Martínez, Álvaro Gómez-Carrión, Juan Francisco Morán-Cortés, Alfonso Martínez-Nova and Raquel Sánchez-Rodríguez
Sensors 2024, 24(9), 2928; https://doi.org/10.3390/s24092928 - 4 May 2024
Viewed by 1123
Abstract
Introduction: The choice of materials for covering plantar orthoses or wearable insoles is often based on their hardness, breathability, and moisture absorption capacity, although more due to professional preference than clear scientific criteria. An analysis of the thermal response to the use of these materials would provide information about their behavior; hence, the objective of this study was to assess the temperature of three lining materials with different characteristics. Materials and Methods: The temperature of three materials for covering plantar orthoses was analyzed in a sample of 36 subjects (15 men and 21 women, aged 24.6 ± 8.2 years, mass 67.1 ± 13.6 kg, and height 1.7 ± 0.09 m). Temperature was measured before and after 3 h of use in clinical activities, using a polyethylene foam copolymer (PE), ethylene vinyl acetate (EVA), and PE-EVA copolymer foam insole with the use of a FLIR E60BX thermal camera. Results: In the PE copolymer (material 1), temperature increases between 1.07 and 1.85 °C were found after activity, with these differences being statistically significant in all regions of interest (p < 0.001), except for the first toe (0.36 °C, p = 0.170). In the EVA foam (material 2) and the expansive foam of the PE-EVA copolymer (material 3), the temperatures were also significantly higher in all analyzed areas (p < 0.001), ranging between 1.49 and 2.73 °C for EVA and 0.58 and 2.16 °C for PE-EVA. The PE copolymer experienced lower overall overheating, and the area of the fifth metatarsal head underwent the greatest temperature increase, regardless of the material analyzed. Conclusions: PE foam lining materials, with lower density or an open-cell structure, would be preferred for controlling temperature rise in the lining/footbed interface and providing better thermal comfort for users. The area of the first toe was found to be the least overheated, while the fifth metatarsal head increased the most in temperature. This should be considered in the design of new wearables to avoid excessive temperatures due to the lining materials. Full article
(This article belongs to the Special Issue Wearable Sensors for Continuous Health Monitoring and Analysis)
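The pre/post temperature comparisons reported above are paired by subject; a comparison of that kind could be run, for example, as in the following sketch with SciPy. The temperature values, the choice of a paired t-test, and the Cohen's d formula for paired data are assumptions for illustration, not the study's data or necessarily its exact statistical procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical insole-interface temperatures (°C) for 36 subjects, before and after use
before = rng.normal(29.5, 0.8, 36)
after = before + rng.normal(1.6, 0.5, 36)      # ~1.6 °C mean rise, as an example

t, p = stats.ttest_rel(after, before)          # paired comparison for one region of interest
diff = after - before
d = diff.mean() / diff.std(ddof=1)             # Cohen's d for paired data
print(f"mean rise = {diff.mean():.2f} °C, t = {t:.2f}, p = {p:.2g}, d = {d:.2f}")
```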
Show Figures

Figure 1: Thermal image of the insole after 3 h of use.
15 pages, 6793 KiB  
Article
Consensus-Based Information Filtering in Distributed LiDAR Sensor Network for Tracking Mobile Robots
by Isabella Luppi, Neel Pratik Bhatt and Ehsan Hashemi
Sensors 2024, 24(9), 2927; https://doi.org/10.3390/s24092927 - 4 May 2024
Viewed by 1232
Abstract
A distributed state observer is designed for state estimation and tracking of mobile robots amidst dynamic environments and occlusions within distributed LiDAR sensor networks. The proposed novel framework enhances three-dimensional bounding box detection and tracking utilizing a consensus-based information filter and a region [...] Read more.
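As a bare-bones illustration of the information-filter fusion underlying the consensus approach described above, the following NumPy sketch combines the information-form contributions of two LiDAR nodes with a single averaging step. The state definition, measurement model, and one-shot averaging are simplifications, not the paper's full switching consensus filter.

```python
import numpy as np

def local_information(z, H, R):
    """Information-form contribution of one LiDAR node's measurement."""
    Rinv = np.linalg.inv(R)
    return H.T @ Rinv @ H, H.T @ Rinv @ z        # (information matrix, information vector)

def consensus_update(y_prior, Y_prior, contribs, n_nodes):
    """Fuse the prior with the averaged (consensus) node contributions.

    A single averaging step stands in for the iterative consensus protocol;
    the weighting and switching logic of the actual filter are omitted.
    """
    dY = sum(c[0] for c in contribs) / len(contribs)
    dy = sum(c[1] for c in contribs) / len(contribs)
    Y = Y_prior + n_nodes * dY
    y = y_prior + n_nodes * dy
    return y, Y, np.linalg.solve(Y, y)           # updated information state and estimate

# Two nodes observing the planar position of the robot; state = [x, y, vx, vy]
H = np.hstack([np.eye(2), np.zeros((2, 2))])
R = 0.05**2 * np.eye(2)
contribs = [local_information(np.array([1.02, 2.01]), H, R),
            local_information(np.array([0.98, 1.97]), H, R)]
Y0 = np.eye(4)
y0 = Y0 @ np.array([1.0, 2.0, 0.0, 0.0])         # prior estimate in information form
y, Y, x_hat = consensus_update(y0, Y0, contribs, n_nodes=2)
print(x_hat)
```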
A distributed state observer is designed for state estimation and tracking of mobile robots amidst dynamic environments and occlusions within distributed LiDAR sensor networks. The proposed novel framework enhances three-dimensional bounding box detection and tracking utilizing a consensus-based information filter and a region of interest for state estimation of mobile robots. The framework enables the identification of the input to the dynamic process using remote sensing, enhancing the state prediction accuracy for low-visibility and occlusion scenarios in dynamic scenes. Experimental evaluations in indoor settings confirm the effectiveness of the framework in terms of accuracy and computational efficiency. These results highlight the benefit of integrating stationary LiDAR sensors’ state estimates into a switching consensus information filter to enhance the reliability of tracking and to reduce estimation error in the sense of mean square and covariance. Full article
Show Figures

Figure 1

Figure 1
<p>Experimental setup for evaluation of the LiDAR-based distributed state observer: unmanned ground vehicle (UGV) setup (<b>left</b>), and solid-state LiDAR used for remote sensing (<b>right</b>).</p>
Full article ">Figure 2
<p>The <math display="inline"><semantics> <mrow> <mi>x</mi> <mi>y</mi> </mrow> </semantics></math>-projection of the UGV robot’s clustered point cloud in real time for <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>s</mi> </msub> <mo>≤</mo> <msub> <mi>z</mi> <mi>r</mi> </msub> </mrow> </semantics></math> (<b>left</b>), for <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>s</mi> </msub> <mo>&gt;</mo> <msub> <mi>z</mi> <mi>r</mi> </msub> </mrow> </semantics></math> (<b>center</b>), and the convex hull of the cluster <math display="inline"><semantics> <mrow> <mi>x</mi> <mi>y</mi> </mrow> </semantics></math>-projection (<b>right</b>).</p>
Full article ">Figure 3
<p>(<b>a</b>) Lateral-external points and L-shape reduction: (i) if <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mi>j</mi> </msub> <mo>≥</mo> <mn>0</mn> </mrow> </semantics></math>, the point <math display="inline"><semantics> <msub> <mi mathvariant="bold">p</mi> <mi>j</mi> </msub> </semantics></math> lies on the same side of the decision boundary (dashed blue line) as the sensor, or is collinear with the line segment defined by <math display="inline"><semantics> <msub> <mi mathvariant="bold">p</mi> <msub> <mi>β</mi> <mi>min</mi> </msub> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi mathvariant="bold">p</mi> <msub> <mi>β</mi> <mi>max</mi> </msub> </msub> </semantics></math> (it is considered to be part of the main edges and is retained), and (ii) if <math display="inline"><semantics> <mrow> <msub> <mi>w</mi> <mi>j</mi> </msub> <mo>&lt;</mo> <mn>0</mn> </mrow> </semantics></math>, the point <math display="inline"><semantics> <msub> <mi mathvariant="bold">p</mi> <mi>j</mi> </msub> </semantics></math> is removed from <math display="inline"><semantics> <msub> <mi>C</mi> <mi>i</mi> </msub> </semantics></math>; (<b>b</b>) closest point and RANSAC line fitting.</p>
Full article ">Figure 4
<p>Overview of the distributed state observer on each node <math display="inline"><semantics> <msub> <mi mathvariant="bold">s</mi> <mi>i</mi> </msub> </semantics></math> in the sensor network.</p>
Full article ">Figure 5
<p>High-fidelity simulation of LiDAR coverage in indoor setting.</p>
Full article ">Figure 6
<p>Tracked position and its absolute error over time in Scenario 1, where <span class="html-italic">S</span> is the starting point for the trajectory.</p>
Full article ">Figure 7
<p>Point cloud clustering in (<b>a</b>) a dynamic scene with moving objects, (<b>b</b>) partial occlusion, and (<b>c</b>) full occlusion by an operator moving alongside the robot (in blue).</p>
Full article ">Figure 8
<p>State estimation results and absolute error comparison with KCF-L for single run of Scenario 2, with full and partial occlusion.</p>
Full article ">Figure 9
<p>Computational time of the proposed LiDAR-based remote sensing framework for Scenario 1 (<b>top</b>) and Scenario 2 (<b>bottom</b>) subject to occlusion and dynamic objects moving around the robot.</p>
Full article ">
Previous Issue
Next Issue