Search Results (7,917)

Search Parameters:
Keywords = sensor types

29 pages, 30487 KiB  
Article
Joint Classification of Hyperspectral and LiDAR Data via Multiprobability Decision Fusion Method
by Tao Chen, Sizuo Chen, Luying Chen, Huayue Chen, Bochuan Zheng and Wu Deng
Remote Sens. 2024, 16(22), 4317; https://doi.org/10.3390/rs16224317 - 19 Nov 2024
Abstract
With the development of sensor technology, the sources of remotely sensed image data for the same region are becoming increasingly diverse. Unlike single-source remote sensing image data, multisource remote sensing image data can provide complementary information for the same feature, promoting its recognition. The effective utilization of remote sensing image data from various sources can enhance the extraction of image features and improve the accuracy of feature recognition. Hyperspectral image (HSI) data and light detection and ranging (LiDAR) data can provide complementary information from different perspectives and are frequently combined in feature identification tasks. However, their joint use suffers from data redundancy, low classification accuracy and high time complexity. To address these issues and improve feature recognition in classification tasks, this paper introduces a multiprobability decision fusion (PRDRMF) method for the combined classification of HSI and LiDAR data. First, the original HSI and LiDAR data are downscaled via the principal component–relative total variation (PRTV) method to remove redundant information. In the multifeature extraction module, the local texture features and spatial features of the image are extracted, using the local binary pattern (LBP) and extended multiattribute profile (EMAP) on the two types of dimensionality-reduced data, to capture the local texture and spatial structure of the image data. The four extracted features are then input into corresponding kernel extreme learning machines (KELMs), which have a simple structure and good classification performance, to obtain four classification probability matrices (CPMs). Finally, the four CPMs are fused via a multiprobability decision fusion method to obtain the optimal classification results. Comparison experiments on four classical HSI and LiDAR datasets demonstrate that the proposed method achieves high classification performance while reducing the overall time complexity.
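The final step of this pipeline fuses four classification probability matrices (CPMs) into one decision. The paper's exact fusion rule is not given in this listing, so the following is only a minimal sketch of one common multiprobability decision-fusion scheme, a weighted sum of CPMs followed by an argmax; the weights, array shapes, and toy data are illustrative assumptions.

```python
import numpy as np

def fuse_probability_matrices(cpms, weights=None):
    """Fuse classification probability matrices (CPMs) by a weighted sum.

    cpms    : list of arrays, each (n_samples, n_classes), rows summing to 1
    weights : optional per-classifier weights (e.g., validation accuracies)
    Returns one hard label per sample.
    """
    cpms = np.stack(cpms)                        # (n_classifiers, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(cpms))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # keep fused rows as probabilities
    fused = np.tensordot(weights, cpms, axes=1)  # (n_samples, n_classes)
    return fused.argmax(axis=1)

# Toy example: four classifiers (e.g., four KELMs), 3 samples, 2 classes.
rng = np.random.default_rng(0)
cpms = [rng.dirichlet(np.ones(2), size=3) for _ in range(4)]
print(fuse_probability_matrices(cpms, weights=[0.9, 0.8, 0.85, 0.7]))
```

Weighting each classifier by something like its validation accuracy is a typical choice when the four feature streams are not equally reliable.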
Figures:
Figure 1: Framework of PRDRMF.
Figure 2: The impact of PRTV on data de-redundancy. (a) Image after PRTV processing of raw HSI. (b) Image of original HSI data. (c) Output of PRDRMF.
Figure 3: Parameters of PRTV versus classification accuracy for the four datasets: (a) smoothing degree λ; (b) texture element σ.
Figure 4: Parameters of LBP versus classification accuracy for the four datasets: (a) range diameter r; (b) number of sample points n.
Figure 5: Performance of KELM with different kernel functions.
Figure 6: Classification maps of the 2013 Houston dataset using different methods. (a) Pseudo-color image of HSI, (b) LiDAR, (c) ground truth map, (d) SVM (59.40%), (e) CCNN (86.92%), (f) EndNet (88.52%), (g) CRNN (88.55%), (h) TBCNN (88.91%), (i) coupled CNN (90.43%), (j) CNNMRF (90.61%), (k) FusAtNet (89.98%), (l) S2ENet (94.19%), (m) CALC (94.71%), (n) Fusion-HCT (99.76%), (o) SepG-ResNET50 (72.67%), (p) DSMSC²N (91.49%), (q) PRDRMF (99.79%).
Figure 7: Classification maps of the MUUFL dataset using different methods. (a) Pseudo-color image of HSI, (b) LiDAR, (c) ground truth map, (d) SVM (4.47%), (e) CCNN (88.96%), (f) EndNet (87.75%), (g) CRNN (91.38%), (h) TBCNN (90.85%), (i) coupled CNN (90.93%), (j) CNNMRF (88.94%), (k) FusAtNet (91.48%), (l) S2ENet (91.68%), (m) CALC (82.91%), (n) Fusion-HCT (87.43%), (o) SepG-ResNET50 (82.90%), (p) DSMSC²N (91.17%), (q) PRDRMF (92.21%).
Figure 8: Classification maps of the Trento dataset using different methods. (a) Pseudo-color image of HSI, (b) LiDAR, (c) ground truth map, (d) SVM (72.89%), (e) CCNN (97.29%), (f) EndNet (94.17%), (g) CRNN (97.22%), (h) TBCNN (97.46%), (i) coupled CNN (97.69%), (j) CNNMRF (98.40%), (k) FusAtNet (99.06%), (l) S2ENet (98.54%), (m) CALC (99.38%), (n) Fusion-HCT (99.60%), (o) SepG-ResNET50 (93.82%), (p) DSMSC²N (98.93%), (q) PRDRMF (99.73%).
Figure 9: Classification maps of the 2018 Houston dataset using different methods. (a) Pseudo-color image of HSI, (b) LiDAR, (c) ground truth map, (d) SVM (81.49%), (e) CCNN (90.09%), (f) EndNet (90.72%), (g) CRNN (91.16%), (h) TBCNN (91.21%), (i) coupled CNN (92.21%), (j) CNNMRF (92.35%), (k) FusAtNet (91.58%), (l) S2ENet (94.59%), (m) CALC (94.80%), (n) Fusion-HCT (96.68%), (o) SepG-ResNET50 (88.30%), (p) DSMSC²N (93.55%), (q) PRDRMF (96.93%).
Figure 10: Visualization of data feature distribution for the 2013 Houston dataset. (a) Raw HSI, (b) PRTV, (c) PRTV + multifeature extraction module, (d) PRDRMF.
Figure 11: Visualization of data feature distribution for the MUUFL dataset. (a) Raw HSI, (b) PRTV, (c) PRTV + multifeature extraction module, (d) PRDRMF.
Figure 12: Visualization of data feature distribution for the Trento dataset. (a) Raw HSI, (b) PRTV, (c) PRTV + multifeature extraction module, (d) PRDRMF.
Figure 13: Visualization of data feature distribution for the 2018 Houston dataset. (a) Raw HSI, (b) PRTV, (c) PRTV + multifeature extraction module, (d) PRDRMF.
21 pages, 12271 KiB  
Article
Detection of Marine Oil Spill from PlanetScope Images Using CNN and Transformer Models
by Jonggu Kang, Chansu Yang, Jonghyuk Yi and Yangwon Lee
J. Mar. Sci. Eng. 2024, 12(11), 2095; https://doi.org/10.3390/jmse12112095 - 19 Nov 2024
Abstract
The contamination of marine ecosystems by oil spills poses a significant threat to the marine environment, necessitating prompt and effective measures to mitigate the associated damage. Satellites offer a spatial and temporal advantage over aircraft and unmanned aerial vehicles (UAVs) in oil spill detection due to their wide-area monitoring capabilities. While oil spill detection has traditionally relied on synthetic aperture radar (SAR) images, the combined use of optical satellite sensors alongside SAR can significantly enhance monitoring capabilities, providing improved spatial and temporal coverage. The advent of deep learning methodologies, particularly convolutional neural networks (CNNs) and Transformer models, has generated considerable interest in their potential for oil spill detection. In this study, we conducted a comprehensive and objective comparison to evaluate the suitability of CNN and Transformer models for marine oil spill detection. High-resolution optical satellite images were used to optimize DeepLabV3+, a widely utilized CNN model; Swin-UPerNet, a representative Transformer model; and Mask2Former, which employs a Transformer-based architecture for both encoding and decoding. The results of cross-validation demonstrate a mean Intersection over Union (mIoU) of 0.740, 0.840 and 0.804 for the three models, respectively, indicating their potential for detecting oil spills in the ocean. Additionally, we performed a histogram analysis on the predicted oil spill pixels, which allowed us to classify the types of oil. These findings highlight the considerable promise of the Swin Transformer models for oil spill detection in the context of future marine disaster monitoring.
(This article belongs to the Special Issue Remote Sensing Applications in Marine Environmental Monitoring)
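For reference, the reported mIoU scores are computed as per-class intersection over union, averaged across classes; a minimal sketch for integer-labeled segmentation masks follows (the binary oil/water labels and toy masks are assumptions, not the paper's data).

```python
import numpy as np

def mean_iou(y_true, y_pred, n_classes=2):
    """Mean Intersection over Union for integer-labeled segmentation masks."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union > 0:                 # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 masks: 1 = oil, 0 = water.
truth = np.array([[0,0,1,1],[0,1,1,1],[0,0,1,0],[0,0,0,0]])
pred  = np.array([[0,0,1,1],[0,0,1,1],[0,0,1,0],[0,0,0,0]])
print(f"mIoU = {mean_iou(truth, pred):.3f}")
```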
Figures:
Figure 1: Examples of image processing steps: (a) original satellite images, (b) images after gamma correction and histogram adjustment, and (c) labeled images.
Figure 2: Flowchart of this study, illustrating the processes of labeling, modeling, optimization, and evaluation using the DeepLabV3+, Swin-UPerNet, and Mask2Former models [23,24,25].
Figure 3: Concept of the 5-fold cross-validation in this study.
Figure 4: Examples of image data augmentation using the Albumentations library, including random 90-degree rotation, horizontal flip, vertical flip, optical distortion, grid distortion, RGB shift, and random brightness/contrast adjustment.
Figures 5–9: Randomly selected examples from folds 1–5, including PlanetScope RGB images, segmentation labels, and predictions from DeepLabV3+ (DL), Swin-UPerNet (Swin), and Mask2Former (M2F).
Figure 10: Thick oil layers with a dark black tone: histogram distribution graph and box plot of oil spill pixels extracted from the labels, DeepLabV3+, Swin-UPerNet, and Mask2Former; the x-axis values represent the digital numbers (DNs) from PlanetScope images. (a) Oil mask, (b) histogram, and (c) box plot.
Figure 11: Thin oil layers with a bright silver tone: same layout as Figure 10.
Figure 12: Thin oil layers with a bright rainbow tone: same layout as Figure 10.
21 pages, 2496 KiB  
Review
Transportation Mode Detection Using Learning Methods and Self-Contained Sensors: Review
by Ilhem Gharbi, Fadoua Taia-Alaoui, Hassen Fourati, Nicolas Vuillerme and Zebo Zhou
Sensors 2024, 24(22), 7369; https://doi.org/10.3390/s24227369 - 19 Nov 2024
Abstract
Due to increasing traffic congestion, travel modeling has gained importance in the development of transportation mode detection (TMD) strategies over the past decade. Nowadays, recent smartphones, equipped with integrated inertial measurement units (IMUs) and embedded algorithms, can play a crucial role in such development. In particular, obtaining rich information on the transportation modes used by smartphone users is very challenging due to the variety of the data (accelerometers, magnetometers, gyroscopes, proximity sensors, etc.), the standardization issue of datasets and the pertinence of learning methods for that purpose. Reviewing the latest progress on TMD systems is important to inform readers about recent datasets used in detection, best practices for classification issues and the remaining challenges that still impact detection performance. Existing TMD review papers so far offer overviews of applications and algorithms without tackling the specific issues faced with real-world data collection and classification. Compared to these works, the proposed review provides some novelties such as an in-depth analysis of the current state-of-the-art techniques in TMD systems, relying on recent references and focusing particularly on the major existing problems, and an evaluation of existing methodologies for detecting travel modes using smartphone IMUs (including dataset structures, sensor data types, feature extraction, etc.). This review can help researchers focus their efforts on the main problems and challenges identified.
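The processing pipeline surveyed by this review segments raw IMU time series into windows and extracts features before classification (see Figure 2 below). Here is a minimal sketch of that segmentation and feature-extraction step; the window length, overlap, sampling rate, and feature set are illustrative assumptions rather than choices made by the review.

```python
import numpy as np

def window_features(signal, fs=50, win_s=2.0, overlap=0.5):
    """Slice a 1-D accelerometer-magnitude signal into overlapping windows
    and compute simple time-domain features per window."""
    win = int(win_s * fs)
    step = int(win * (1 - overlap))
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), w.min(), w.max(),
                      np.abs(np.diff(w)).mean()])   # mean absolute change (jerk proxy)
    return np.array(feats)                          # (n_windows, n_features)

# Toy signal: 10 s of noisy "walking" accelerometer magnitude at 50 Hz.
t = np.linspace(0, 10, 500)
acc = 1.0 + 0.3 * np.sin(2 * np.pi * 2 * t) + 0.05 * np.random.randn(t.size)
print(window_features(acc).shape)   # (9, 5): 9 windows, 5 features each
```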
Figures:
Figure 1: Processing pipeline for predicting the transportation modes.
Figure 2: Transforming time series (raw sensor data) into feature space through segmentation (window partitioning in red) and computation of features (feature extraction (FE)) [35].
Figures 3–6: Resultant acceleration in Tram, Walk, Car, and Motorcycle, respectively [31].
Figure 7: Sensor placement for the PERSCIDO dataset [23].
Figure 8: Sensor placement for the SHL dataset [27].
Figure 9: Android applications: (a) Phyphox, (b) Physics Toolbox Suite and (c) Sensor Logger.
24 pages, 9386 KiB  
Article
Toward Improving Human Training by Combining Wearable Full-Body IoT Sensors and Machine Learning
by Nazia Akter, Andreea Molnar and Dimitrios Georgakopoulos
Sensors 2024, 24(22), 7351; https://doi.org/10.3390/s24227351 - 18 Nov 2024
Abstract
This paper proposes DigitalUpSkilling, a novel IoT- and AI-based framework for improving and personalising the training of workers who are involved in physical-labour-intensive jobs. DigitalUpSkilling uses wearable IoT sensors to observe how individuals perform work activities. These sensor observations are continuously processed to synthesise an avatar-like kinematic model for each worker being trained, referred to as the worker's digital twin. The framework incorporates novel work activity recognition using generative adversarial network (GAN) and machine learning (ML) models for recognising the types and sequences of work activities by analysing an individual's kinematic model. Finally, skill proficiency ML models are proposed to evaluate each trainee's proficiency in work activities and the overall task. To illustrate DigitalUpSkilling from wearable IoT-sensor-driven kinematic models to GAN-ML models for work activity recognition and skill proficiency assessment, the paper presents a comprehensive study on how specific meat processing activities in a real-world work environment can be recognised and assessed. In the study, DigitalUpSkilling achieved 99% accuracy in recognising specific work activities performed by meat workers. The study also presents an evaluation of worker proficiency by comparing kinematic data from trainees performing work activities. The proposed DigitalUpSkilling framework lays the foundation for next-generation digital personalised training.
(This article belongs to the Special Issue Wearable and Mobile Sensors and Data Processing—2nd Edition)
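The abstract describes recognizing work activities from kinematic features, and the figure captions below compare the GAN against SMOTE and ENN for handling imbalanced classes. As a hedged sketch of that general recipe (not the paper's GAN), here is minority-class oversampling with SMOTE followed by a standard classifier; the feature dimensions, class sizes, and toy data are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE   # stand-in for the paper's GAN augmentation

# Toy kinematic features: two activities ("boning"=0, "slicing"=1), imbalanced.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (500, 6)), rng.normal(1.5, 1.0, (60, 6))])
y = np.array([0] * 500 + [1] * 60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # synthesize minority samples

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```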
Figures:
Figure 1: DigitalUpSkilling framework.
Figure 2: Hybrid GAN-ML activity classification.
Figure 3: Skill proficiency assessment.
Figure 4: (a) Placement of sensors; (b) sensors and straps; (c) alignment of sensors with the participant's movements.
Figure 5: Work environment for the data collection: (a) boning area; (b) slicing area.
Figure 6: Dataflow of the study.
Figure 7: (a) Worker performing boning; (b) worker's real-time digital twin; (c) digital twins showing body movements along with real-time graphs of the joints' movements.
Figure 8: Comparison of the error rates of the different ML models.
Figure 9: Confusion matrices: (a) boning; (b) slicing with pitch and roll from right-hand sensors.
Figure 10: Distribution of the activity classification: (a) boning; (b) slicing.
Figure 11: Accuracy of the GAN for different percentages of synthetic data: (a) boning; (b) slicing.
Figure 12: Accuracy of the GAN with different percentages of synthetic data (circled area showing drop in accuracy): (a) boning; (b) slicing.
Figure 13: Classification accuracy with the GAN, SMOTE, and ENN (circled area showing improvement in accuracy): (a) boning; (b) slicing.
Figure 14: Distribution of right-hand pitch and roll mean (in degrees).
Figure 15: Comparison of the engagement in boning (W1: Worker 1; W2: Worker 2).
Figure 16: Comparison of the engagement in slicing.
Figures 17 and 18: Comparisons of the accelerations of the right hand.
Figure 19: Comparisons of abduction, rotation, and flexion of the right shoulder during boning activities: (a) worker 1; (b) worker 2.
12 pages, 9300 KiB  
Article
Field Experiments of Distributed Acoustic Sensing Measurements
by Haiyan Shang, Lin Zhang and Shaoyi Chen
Photonics 2024, 11(11), 1083; https://doi.org/10.3390/photonics11111083 - 18 Nov 2024
Abstract
Modern large bridges and tunnels represent important nodes in transportation arteries and have a significant impact on the development of transportation. The health and safety monitoring of these structures has always been a significant concern and relies on various types of sensors. Distributed acoustic sensing (DAS) with telecommunication fibers is an emerging technology in the research areas of sensing and communication. DAS provides an effective and low-cost approach for the detection of various resources and seismic activities. In this study, field experiments using DAS on the Hong Kong–Zhuhai–Macao Bridge are elucidated, studying vehicle trajectories, earthquakes, and other activities. The basic signal-processing methods of filtering and normalization are adopted for analyzing the data obtained with DAS. With the proposed DAS technology, activities on shore, vehicle trajectories on bridges and in tunnels during both day and night, and microseisms within 200 km were successfully detected. Enabled by DAS technology and mass fiber networks, further studies on sensing and communication systems for the monitoring of bridge and tunnel engineering are expected to provide future insights.
(This article belongs to the Section Lasers, Light Sources and Sensors)
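The abstract states that basic filtering and normalization were applied to the DAS data. Below is a minimal sketch of such preprocessing for multichannel strain-rate traces; the pass band, filter order, and toy signal are assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_das(traces, fs, band=(1.0, 20.0)):
    """Band-pass filter and per-channel normalize DAS strain-rate data.

    traces : (n_channels, n_samples) array
    fs     : sampling rate in Hz
    band   : pass band in Hz (illustrative choice)
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, traces, axis=1)          # zero-phase filtering
    peak = np.abs(filtered).max(axis=1, keepdims=True)
    return filtered / np.where(peak == 0, 1, peak)     # scale each channel to [-1, 1]

# Toy data: 8 channels, 10 s at 100 Hz, a 5 Hz "vehicle" tone plus noise.
fs = 100
t = np.arange(0, 10, 1 / fs)
traces = 0.5 * np.sin(2 * np.pi * 5 * t) + 0.2 * np.random.randn(8, t.size)
print(preprocess_das(traces, fs).shape)
```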
Figures:
Figure 1: Map of the DAS test with the optical fiber cables along the Hong Kong–Zhuhai–Macao Bridge in the Guangdong–Hong Kong–Macao Greater Bay Area, China. The inset shows an example of the fiber optic cabling along the bridge corridor.
Figure 2: The original wave swarm within 21 min after UTC 2021-08-06 08:10:06, in the sea area near the Hong Kong–Zhuhai–Macao Bridge Port. The horizontal axis represents measurement time in seconds (s); the vertical axis runs along the fiber from Zhuhai Port (bottom) to Hong Kong Port (top), with fiber length in meters (m). Gray dashed lines mark the coastal, bridge, and ocean sections.
Figure 3: Strain-rate data in the first 2.8 km section. (a) The original waterfall plot, with the 0–500 m section marked by a red dashed box; (b) the calculated f-k spectrum; (c) a zoomed-in view of the 0–500 m range in (a).
Figure 4: Examples of the original channel wave-time and spectrum-frequency plots at fiber distances of 210 m, 270 m, and 280 m. The vertical axis represents signal strength; the horizontal axes represent time and frequency, respectively. Insets provide enlarged details.
Figure 5: Signal output after filtering and normalization, measured during the day.
Figure 6: Examples of the results for the Yangxi earthquake swarm recorded at UTC 2021-08-11 17:16:36. Solid vertical lines mark three microseisms: ML = 1.1 at 17:18:11 (white), 2.6 at 17:18:17 (red), and 1.2 at 17:19:33 (orange).
Figure 7: Recorded results for the bridge section. (a) Waterfall plot of the bridge-section data; the rectangular box marks the vibration signal observed after spectral filtering. (b) Examples of the recorded vibration signals.
Figure 8: Data with and without microseism swarms in the last ocean section. Original plots (a) without and (b) with microseisms; signal output (c) without and (d) with microseisms after filtering and normalization. Solid vertical lines mark the times of three microseisms.
33 pages, 11481 KiB  
Article
Establishing Lightweight and Robust Prediction Models for Solar Power Forecasting Using Numerical–Categorical Radial Basis Function Deep Neural Networks
by Chee-Hoe Loh, Yi-Chung Chen, Chwen-Tzeng Su and Heng-Yi Su
Appl. Sci. 2024, 14(22), 10625; https://doi.org/10.3390/app142210625 - 18 Nov 2024
Abstract
As green energy technology develops, so too grows research interest in topics such as solar power forecasting. The output of solar power generation is uncontrollable, which makes accurate prediction of output an important task in the management of power grids. Despite a plethora of theoretical models, most frameworks encounter problems in practice because they assume that received data is error-free, which is unlikely, as this type of data is gathered by outdoor sensors. We thus designed a robust solar power forecasting model and methodology based on the concept of ensembling, with three key design elements. First, as models established using the ensembling concept typically have high computational costs, we pruned the deep learning model architecture to reduce the size of the model. Second, the mediation model often used for pruning is not suitable for solar power forecasting problems, so we designed a numerical–categorical radial basis function deep neural network (NC-RBF-DNN) to replace the mediation model. Third, existing pruning methods can only establish one model at a time, but the ensembling concept involves establishing multiple sub-models simultaneously. We therefore designed a factor combination search algorithm, which can identify the most suitable factor combinations for the sub-models of ensemble models using very few experiments, thereby ensuring that we can establish the target ensemble model with the smallest architecture and minimal error. Experiments using a dataset from real-world solar power plants verified that the proposed method could build the target ensemble model within ten attempts. Furthermore, despite considerable error in the model inputs (two inputs contained 10% error), the predicted NRMSE of our model is still over 10 times better than that of a recent model.
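The core ingredient named in the abstract is a radial basis function layer applied to numerical and categorical inputs. The sketch below shows only that transform, Gaussian RBF activations over concatenated numerical and one-hot categorical features; the centers, kernel width, and toy data are assumptions, not the paper's trained values.

```python
import numpy as np

def rbf_layer(x, centers, gamma=1.0):
    """Gaussian RBF activations: phi_j(x) = exp(-gamma * ||x - c_j||^2).

    x       : (n_samples, n_features) inputs (numerical features; one-hot
              encoded categorical features can be concatenated beforehand)
    centers : (n_units, n_features) kernel centers
    """
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)                  # (n_samples, n_units)

# Toy: 4 samples with 2 numerical features + one 3-category one-hot feature.
num = np.random.randn(4, 2)
cat = np.eye(3)[[0, 2, 1, 0]]                   # one-hot encoding of categories
x = np.hstack([num, cat])
centers = np.random.randn(5, x.shape[1])        # 5 RBF units (assumed)
print(rbf_layer(x, centers, gamma=0.5).shape)   # (4, 5)
```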
Figures:
Figure 1: Three existing methods for pruning deep learning model architecture: (1) pruning model weights, (2) pruning entire hidden layers, and (3) pruning unnecessary input factors.
Figure 2: Flow chart of factor reduction for deep learning architecture pruning.
Figure 3: Flow chart of the proposed methodology.
Figure 4: Distribution of weather data: (a) normal distribution; (b) data concentrated at the minimum-value end; (c) data concentrated at the minimum- and maximum-value ends.
Figure 5: Flow chart of existing lightweight deep learning models during online applications.
Figure 6: Architecture of the proposed NC-RBF-DNN.
Figure 7: Influence of RBF on numerical values: (a) Scenario 1 when different x values produce different probabilities; (b) Scenario 2 when different x values produce different probabilities; (c) Scenario 1 when different x values produce near-zero probabilities; (d) Scenario 2 when different x values produce near-zero probabilities.
Figure 8: Influence of RBF on categorical values: panels (a)–(d) as in Figure 7.
Figure 9: Discussion of RBF layer outputs: (a) influencing final model outputs; (b) Scenario 1 when not influencing final model outputs; (c) Scenario 2 when not influencing final model outputs.
Figure 10: Importance ranking of the 16 factors obtained by NC-RBF-DNN and Pearson correlation.
Figure 11: Comparison of the first 200 normalized values of relative humidity at 1000 mbar, 2 m temperature, and surface solar radiation in the target dataset.
Figure 12: Comparison of errors resulting from modeling with the top n and last n factors: (a) lightweight NC-RBF-DNN; (b) random forest.
Figure 13: Errors resulting from modeling with the top n factors.
Figure 14: Number of combinations that need to be checked for a near-optimal solution: (a) lightweight NC-RBF-DNN; (b) random forest.
Figure 15: Prediction errors resulting from the top combinations obtained using the proposed method with different numbers of factors per combination: (a) lightweight NC-RBF-DNN; (b) random forest.
Figure 16: Factors chosen in the near-optimal solution with different numbers of factors per combination, where the darker the color, the more often a factor is selected: (a) lightweight NC-RBF-DNN; (b) random forest.
Figure 17: Impact of one input error on the output error of the lightweight NC-RBF-DNN model. (a) Total column ice water, (b) 10 m V wind component, (c) surface solar radiation down, (d) total precipitation.
Figure 18: Impact of one input error on the output error of the random forest model. (a) 2 m temperature, (b) surface solar radiation down.
Figure 19: Impact of two input errors on the output error of the lightweight NC-RBF-DNN model. Surface solar radiation down vs. (a) total column liquid water, (b) total column ice water, (c) surface pressure, (d) relative humidity at 1000 mbar, (e) 10 m V wind component, (f) total precipitation.
Figure 20: Impact of two input errors on the output error of the random forest model. Surface solar radiation down vs. (a) total column liquid water, (b) surface pressure, (c) relative humidity at 1000 mbar, (d) total cloud cover, (e) 2 m temperature.
Figure 21: Comparison of NRMSE between the ensemble lightweight NC-RBF-DNN and previous models [19] when one input error is present. (a) Total column ice water, (b) 10 m V wind component, (c) surface solar radiation down, (d) total precipitation.
Figure 22: Comparison of NRMSE between the ensemble random forest and previous models [19] when one input error is present. (a) 2 m temperature, (b) surface solar radiation down.
Figure 23: Comparison of NRMSE between the ensemble lightweight NC-RBF-DNN and previous models when two input errors are present. Surface solar radiation down vs. (a) total column liquid water, (b) total column ice water, (c) surface pressure, (d) relative humidity at 1000 mbar, (e) 10 m V wind component, (f) total precipitation.
Figure 24: Comparison of NRMSE between the ensemble random forest and previous models when two input errors are present. Surface solar radiation down vs. (a) total column liquid water, (b) surface pressure, (c) relative humidity at 1000 mbar, (d) total cloud cover, (e) 2 m temperature.
16 pages, 3869 KiB  
Article
A Polarization-Insensitive and Highly Sensitive THz Metamaterial Multi-Band Perfect Absorber
by Gang Tao, Qian Zhao, Qianju Song, Zao Yi, Yougen Yi and Qingdong Zeng
Micromachines 2024, 15(11), 1388; https://doi.org/10.3390/mi15111388 - 16 Nov 2024
Abstract
In this article, we present a terahertz (THz) metamaterial absorber that combines two tunable materials: Dirac semimetals and vanadium dioxide. Compared to other absorbers, which are currently either non-adjustable or offer only a single adjustment method, our absorber is superior because it has two tuning modes with maximum adjustment ranges of 80.7% and 0.288 THz. The device contains four flawless absorption peaks (M1, M2, M3, and M4) spanning the frequency range of 2.0 THz to 6.0 THz, all with absorption rates greater than 99%. Calculation shows that the relative impedance of the device matches that of free space, resulting in perfect absorption. In addition, our absorber exhibits excellent polarization insensitivity while being highly sensitive to changes in the environmental refractive index, with a maximum refractive index sensitivity of 716 GHz/RIU (gigahertz per refractive index unit). In summary, the presented terahertz metamaterial absorber offers four perfect absorption peaks, high sensitivity, and polarization stability, suggesting applications in electromagnetic wave modulation, novel sensing, and switching.
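As context for the 716 GHz/RIU figure, refractive-index sensitivity is the resonance-frequency shift per unit change in the ambient refractive index. The numbers below are hypothetical, chosen only to reproduce the reported maximum.

```python
# Refractive-index sensitivity S = delta_f / delta_n (GHz per RIU).
# Hypothetical shift: if a peak moves 0.1432 THz when n changes by 0.2,
# S = 143.2 GHz / 0.2 RIU = 716 GHz/RIU, matching the reported maximum.
delta_f_ghz = 0.1432e3   # resonance shift in GHz (assumed)
delta_n = 0.2            # refractive-index change in RIU (assumed)
print(f"S = {delta_f_ghz / delta_n:.0f} GHz/RIU")
```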
21 pages, 2882 KiB  
Review
Gold Nanoprobes for Robust Colorimetric Detection of Nucleic Acid Sequences Related to Disease Diagnostics
by Maria Enea, Andreia Leite, Ricardo Franco and Eulália Pereira
Nanomaterials 2024, 14(22), 1833; https://doi.org/10.3390/nano14221833 - 16 Nov 2024
Abstract
Gold nanoparticles (AuNPs) are highly attractive for applications in the field of biosensing, particularly for colorimetric nucleic acid detection. Their unique optical properties, which are highly sensitive to changes in their environment, make them ideal candidates for developing simple, rapid, and cost-effective assays. When functionalized with oligonucleotides (Au-nanoprobes), they can undergo aggregation or dispersion in the presence of complementary sequences, leading to distinct color changes that serve as a visual signal for detection. Aggregation-based assays offer significant advantages over other homogeneous assays, such as fluorescence-based methods, namely, label-free protocols, rapid interactions in homogeneous solutions, and detection by the naked eye or using low-cost instruments. Despite promising results, the application of Au-nanoprobe-based colorimetric assays in complex biological matrices faces several challenges. The most significant relate to the colloidal stability and oligonucleotide functionalization of the Au-nanoprobes, but also to the mode of detection. The type of functionalization method, the type of spacer, the oligo–AuNP ratio, and changes in pH, temperature, or ionic strength influence the Au-nanoprobe colloidal stability and thus the performance of the assay. This review elucidates the characteristics of Au-nanoprobes that are determinant for colorimetric AuNP-based nucleic acid detection and how they influence the sensitivity and specificity of the colorimetric assay. These characteristics are fundamental to developing low-cost, robust biomedical sensors that perform effectively in biological fluids.
(This article belongs to the Special Issue Noble Metal-Based Nanostructures: Optical Properties and Applications)
Figures:
Figure 1: Timeline of AuNPs use for nucleic acid detection.
Figure 2: Dependence of the LSPR on spherical gold nanoparticle diameter and aggregation state.
Figure 3: Colorimetric detection methods using spherical AuNPs. (Top panel) Cross-linking assay: a color change occurs as nucleic acid strands hybridize with complementary sequences, reducing the interparticle distance and turning the solution blue (positive test); in the absence of complementary sequences, the solution stays red (negative test). (Middle panel) Non-cross-linking assay: increased ionic strength induces AuNP aggregation, giving a blue solution (negative test); when complementary targets are present, the solution stays red (positive test). (Bottom panel) Colorimetric assay using unmodified AuNPs: in the absence of complementary sequences, only single-stranded DNA (ssDNA) is present, stabilizing AuNPs against salt-induced aggregation, and the solution stays red (negative result); when hybridization occurs with a complementary sequence, double-stranded DNA (dsDNA) forms and aggregation occurs (blue solution, positive result). UV/vis spectra and Nanoparticle Tracking Analysis (NTA) profiles are shown, with blue lines for aggregated AuNP samples and red lines for non-aggregated ones; positive (green check) and negative (red cross) results are indicated for each test.
Figure 4: Published successful functionalization methods of AuNPs with HS-oligos, resulting in Au-nanoprobes.
Figure 5: Examples of Au nanoparticle interaction with (i) ssDNA, (ii) PolyA-ssDNA and PolyT-ssDNA, (iii) PEG-ssDNA, and (iv) thiolated-(CH2)6-ssDNA.
32 pages, 3323 KiB  
Systematic Review
Artificial Intelligence Applied to Support Agronomic Decisions for the Automatic Aerial Analysis Images Captured by UAV: A Systematic Review
by Josef Augusto Oberdan Souza Silva, Vilson Soares de Siqueira, Marcio Mesquita, Luís Sérgio Rodrigues Vale, Jhon Lennon Bezerra da Silva, Marcos Vinícius da Silva, João Paulo Barcelos Lemos, Lorena Nunes Lacerda, Rhuanito Soranz Ferrarezi and Henrique Fonseca Elias de Oliveira
Agronomy 2024, 14(11), 2697; https://doi.org/10.3390/agronomy14112697 - 15 Nov 2024
Abstract
Integrating advanced technologies such as artificial intelligence (AI) with traditional agricultural practices has changed how activities are developed in agriculture, with the aim of automating manual processes and improving the efficiency and quality of farming decisions. With the advent of deep learning models such as the convolutional neural network (CNN) and You Only Look Once (YOLO), many studies have emerged that develop solutions to agronomic problems and exploit the potential of this technology. This systematic literature review presents an in-depth investigation of the application of AI in supporting the management of weeds, plant nutrition, water, pests, and diseases. The review was conducted using the PRISMA methodology and guidelines. Data from different papers indicated that the main research interests comprise five groups: (a) type of agronomic problem; (b) type of sensor; (c) dataset treatment; (d) evaluation metrics and quantification; and (e) AI technique. The inclusion (I) and exclusion (E) criteria adopted in this study were: (I1) articles that applied AI techniques for agricultural analysis; (I2) complete articles written in English; (I3) articles from specialized scientific journals; (E1) articles that did not describe the type of agrarian analysis used; (E2) articles that did not specify the AI technique used or that were incomplete or abstract-only; (E3) articles that did not present substantial experimental results. The articles were searched on the official pages of the main scientific databases: ACM, IEEE, ScienceDirect, MDPI, and Web of Science. The papers were categorized and grouped to show the main contributions of the literature to supporting agricultural decisions using AI. This study found that AI methods perform best in supporting weed detection, classification of plant diseases, and estimation of agricultural yield in crops when using images captured by Unmanned Aerial Vehicles (UAVs). Furthermore, CNN and YOLO, as well as their variations, present the best results for all groups presented. This review also points out the limitations and potential challenges of working with deep machine learning models, aiming to contribute to knowledge systematization and to benefit researchers and professionals regarding AI applications in mitigating agronomic problems.
(This article belongs to the Section Precision and Digital Agriculture)
Figures:
Figure 1: Flowchart of the systematic review selection steps, following the PRISMA 2020 statement from Page et al. [25].
Figure 2: Flowchart of the systematic literature review data extraction and sequence highlights, adapted from Siqueira et al. [23]. Data extraction steps: (a) articles divided by the type of agronomic problem each proposed to solve; (b) articles, by type of agronomic problem, that used sensors to acquire the dataset; (c) articles, by type of agronomic problem, that used image improvement techniques on the dataset; (d) number of articles that used evaluation metrics; (e) the main machine learning models used by each article in this study.
Figure 3: Example of data output after training the YOLOv7 model for weed segmentation in commercial crops.
Figure 4: Number of articles and timeline of publications per type of agronomic problem.
Figure 5: Number of articles published and scientific platforms per type of agronomic problem.
Figure 6: Number of articles per country included in this SLR.
28 pages, 11153 KiB  
Article
Forward Fall Detection Using Inertial Data and Machine Learning
by Cristian Tufisi, Zeno-Iosif Praisach, Gilbert-Rainer Gillich, Andrade Ionuț Bichescu and Teodora-Liliana Heler
Appl. Sci. 2024, 14(22), 10552; https://doi.org/10.3390/app142210552 - 15 Nov 2024
Abstract
Fall risk assessment is becoming an important concern, with the realization that falls, and more importantly fainting occurrences, in most cases require immediate medical attention and can pose huge health risks, as well as financial and social burdens. The development of an accurate inertial sensor-based fall risk assessment tool combined with machine learning algorithms could significantly advance healthcare. This research investigates the development of a machine learning approach for falling and fainting detection using wearable sensors, with an emphasis on forward falls. In the current paper we address the lack of inertial time-series data for differentiating the forward fall event from normal activities, as such data are difficult to obtain from real subjects. To solve this problem, we proposed a forward dynamics method to generate the necessary training data using the OpenSim software, version 4.5. To develop a model as close to the real world as possible, anthropometric data taken from the literature were used. The raw X- and Y-axis acceleration data were generated using OpenSim, and ML fall prediction methods were trained. The machine learning (ML) accuracy was validated by testing with data acquired from six volunteers, considering the forward fall type.
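Among the classifiers used here is an LSTM trained on simulated X/Y acceleration windows. A minimal PyTorch sketch of such a model and one training step follows; the window length, class set, network size, and random stand-in data are assumptions, not the OpenSim-generated dataset.

```python
import torch
import torch.nn as nn

class FallLSTM(nn.Module):
    """Minimal LSTM classifier for windows of X/Y acceleration (2 channels)."""
    def __init__(self, hidden=32, n_classes=3):   # e.g., walk / damped fall / fall (assumed)
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, time, 2)
        _, (h, _) = self.lstm(x)         # h: (1, batch, hidden), last hidden state
        return self.head(h[-1])          # logits: (batch, n_classes)

# Toy training step on random stand-ins for simulated acceleration windows.
model = FallLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 100, 2)             # 16 windows, 100 time steps, X/Y accel
y = torch.randint(0, 3, (16,))
opt.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
print(float(loss))
```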
Figures:
Figure 1: Connecting the Raspberry Pi Pico and MPU-6050 sensor.
Figure 2: Developed IMU sensor: (a) CAD model of the developed IMU; (b) IMU sensor.
Figure 3: Setup for gait acceleration measurement.
Figure 4: Recorded normalized acceleration values for the gait event of Person 1.
Figure 5: Flowchart for developing the digital model.
Figure 6: Developed model: (a) schematic representation of the segmented model; (b) OpenSim digital model.
Figure 7: Topology view of the digital model.
Figure 8: Damped forward fall simulation.
Figure 9: Training data acquired through simulation for the damped forward fall: (a) recorded speed on the X and Y axes [m/s]; (b) recorded acceleration on the X and Y axes.
Figure 10: Forward fall simulation.
Figure 11: Training data acquired through simulation for the forward fall: (a) recorded speed on the X and Y axes [m/s]; (b) recorded acceleration on the X and Y axes.
Figure 12: Plotted RMSE curve.
Figure 13: Plotted loss curve.
Figure 14: Testing performance of the trained LSTM network.
Figure 15: Training performance of the trained LSTM network: (a) confusion matrix; (b) true positive (TPR) and false negative (FNR) rates.
Figure 16: Confusion matrix for the trained fine KNN.
Figure 17: Forward fall simulation of Subject 3.
Figure 18: Training data acquired through experimental measurements of fall events for Subject 1: (a) recorded acceleration on the X and Y axes [m/s²] for the damped forward fall; (b) recorded acceleration on the X and Y axes [m/s²] for the undamped forward fall.
Figure 19: Comparison of the recorded Y-axis normalized acceleration values for the damped forward fall between simulated and experimental measurements for the five subjects.
Figures 20–25: LSTM network fall event predictions for Subjects 1–6: (a) predictions for the damped forward fall; (b) predictions for the undamped forward fall.
Figure 26: LSTM network prediction for normal walking of the six participants.
Figure 27: LSTM network prediction for the sitting-in-bed event for Subject 1.
Figures 28–33: KNN network predictions for Subjects 1–6.
18 pages, 2082 KiB  
Systematic Review
The Use of Wearable Sensors and Machine Learning Methods to Estimate Biomechanical Characteristics During Standing Posture or Locomotion: A Systematic Review
by Isabelle J. Museck, Daniel L. Brinton and Jesse C. Dean
Sensors 2024, 24(22), 7280; https://doi.org/10.3390/s24227280 - 14 Nov 2024
Abstract
Balance deficits are present in a variety of clinical populations and can negatively impact quality of life. The integration of wearable sensors and machine learning (ML) technology provides unique opportunities to quantify biomechanical characteristics related to balance outside of a laboratory setting. This article provides a general overview of recent developments in using wearable sensors and ML to estimate or predict biomechanical characteristics such as center of pressure (CoP) and center of mass (CoM) motion. This systematic review was conducted according to PRISMA guidelines. The Scopus, PubMed, CINAHL, Trip Pro, Cochrane, and OTseeker databases were searched for publications on the use of wearable sensors combined with ML to predict biomechanical characteristics. Fourteen publications met the inclusion criteria and were included in this review. From each publication, information on study characteristics, testing conditions, ML models applied, estimated biomechanical characteristics, and sensor positions was extracted. Additionally, the study type, level of evidence, and Downs and Black scale score were reported to evaluate methodological quality and bias. Most studies tested subjects during walking and utilized some type of neural network (NN) ML model to estimate biomechanical characteristics. Many of the studies focused on minimizing the necessary number of sensors and placed them on areas near or below the waist. Nearly all studies reporting RMSE and correlation coefficients had values <15% and >0.85, respectively, indicating strong ML model estimation accuracy. Overall, this review can help guide the future development of ML algorithms and wearable sensor technologies to estimate postural mechanics.
(This article belongs to the Section Wearables)
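The review summarizes estimation accuracy with RMSE below 15% and correlation coefficients above 0.85. Expressing RMSE as a percentage implies normalization; normalizing by the range of the reference signal is one common convention, assumed in the sketch below along with the toy CoM-like signal.

```python
import numpy as np
from scipy.stats import pearsonr

def nrmse_percent(y_true, y_pred):
    """RMSE normalized by the range of the reference signal, in percent."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100 * rmse / (y_true.max() - y_true.min())

# Toy CoM-displacement estimate vs. reference.
t = np.linspace(0, 10, 500)
ref = np.sin(t)
est = ref + 0.05 * np.random.randn(t.size)
print(f"NRMSE = {nrmse_percent(ref, est):.1f}%  r = {pearsonr(ref, est)[0]:.3f}")
```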
Figures:
Figure 1: Pyramid for artificial intelligence scientific evidence [24]. Adapted from "The artificial intelligence evidence-based medicine pyramid," by Bellini, V., et al., 2023, World J Crit Care Med, 12(2): p. 89–91. Copyright 2023. Reprinted with permission.
Figure 2: PRISMA flow diagram.
Figure 3: Summary of the different ML models applied in the reviewed studies.
Figure 4: Sensor placement locations and the percentage of reviewed studies that consider each position (percentages are based on counts of studies including that sensor location and may not add up to 100%).
Figure 5: Wearable sensor characteristics.
8 pages, 3903 KiB  
Communication
Trace Acetylene Gas Detection Based on a Miniaturized Y-Sphere Coupled Photoacoustic Sensor
by Xiaohong Chen, Sen Wang, Dongming Li, Zhao Shi and Qiang Liang
Sensors 2024, 24(22), 7274; https://doi.org/10.3390/s24227274 - 14 Nov 2024
Abstract
In this work, a miniaturized Y-sphere coupled photoacoustic (YSCPA) sensor is proposed for trace C2H2 gas detection. The cavity volume of the designed YSCPA sensor is about 0.7 mL. A finite element method (FEM) analysis comparing the YSCPA sensor with a T-type PA sensor indicates that the first-order resonance frequency (FORF) of the proposed YSCPA sensor is reduced by half while the PA signal is improved by a factor of 3. C2H2 is employed as the target gas to test the sensor's performance. Experimental results show a gas response time of 26 s. The minimum detection limit (MDL) reaches 189 ppb at a lock-in integration time of 1 s. Extending the lock-in integration time to 100 s reduces the MDL to 18.1 ppb. The designed YSCPA sensor offers small size, low gas consumption, a simple structure, and high sensitivity, and is expected to be an effective solution for rapid, real-time monitoring of C2H2 gas dissolved in transformer oil. Full article
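The improvement of the MDL with lock-in integration time is consistent with white-noise averaging, where the detection limit scales roughly as 1/sqrt(tau). The sketch below is only a back-of-the-envelope consistency check against the two reported values, not the authors' Allan-variance analysis.

```python
import math

def mdl_white_noise(mdl_1s_ppb: float, tau_s: float) -> float:
    """Predicted MDL at integration time tau, assuming white-noise
    1/sqrt(tau) averaging of the detection limit."""
    return mdl_1s_ppb / math.sqrt(tau_s)

mdl_1s = 189.0  # ppb, reported at a 1 s lock-in integration time
predicted_100s = mdl_white_noise(mdl_1s, 100.0)

print(f"Predicted MDL at 100 s: {predicted_100s:.1f} ppb")  # ~18.9 ppb
print("Reported MDL at 100 s: 18.1 ppb")
# The close agreement suggests the system stays white-noise limited up to
# ~100 s, which is what an Allan-variance plot (Figure 11) is used to verify.
```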
(This article belongs to the Section Optical Sensors)
Figures:
Figure 1. (a) Structural schematic of the YSCPA sensor. (b) Structural mechanical sketch of the YSCPA sensor. (c) Physical drawing of the YSCPA sensor.
Figure 2. Simulated amplitude–frequency response curves: (a) the YSCPA sensor; (b) the T-type and Y-type PA sensors.
Figure 3. Modal analysis of the PA pressure fields: (a) the YSCPA sensor; (b) the T-type and Y-type PA sensors.
Figure 4. (a) Sketch of the experimental test system with the YSCPA sensor. (b) Absorption lines of C2H2 (1 ppm) and water vapor (1000 ppm).
Figure 5. Frequency–response curve of the YSCPA sensor with 2000 ppm C2H2.
Figure 6. Variation of the PA signal with modulation current.
Figure 7. Obtained 2f signal corresponding to different concentrations.
Figure 8. Fitting curves between different C2H2 concentrations and the 2f peak signal.
Figure 9. Gas response time of the YSCPA sensor.
Figure 10. Noise level of the YSCPA sensor-based trace C2H2 detection system.
Figure 11. Allan variance of the YSCPA sensor.
25 pages, 8370 KiB  
Article
The Analysis of the ZnO/Por-Si Hierarchical Surface by Studying Fractal Properties with High Accuracy and the Behavior of the EPR Spectra Components in the Ordering of Structure
by Tatyana Seredavina, Rashid Zhapakov, Danatbek Murzalinov, Yulia Spivak, Nurzhan Ussipov, Tatyana Chepushtanova, Aslan Bolysbay, Kulzira Mamyrbayeva, Yerik Merkibayev, Vyacheslav Moshnikov, Aliya Altmyshbayeva and Azamat Tulegenov
Processes 2024, 12(11), 2541; https://doi.org/10.3390/pr12112541 - 14 Nov 2024
Viewed by 344
Abstract
A hierarchical surface, containing objects of different sizes, creates local fields and thereby gives rise to a large number of effects. Micropores within macropores, as well as nanoclusters of the substance, were detected on the surface of ZnO/Por-Si samples by scanning electron and atomic force microscopy. An identical fractal dimension was determined for all levels of the hierarchy of these structures, which is associated with an identical response to external excitation. Photoluminescence studies showed the presence of localized levels in the band gap capable of capturing both electrons and holes, which enables charge transitions between energy bands. Decomposition of the electron paramagnetic resonance (EPR) signal into components revealed various types of interaction between paramagnetic particles, including the hyperfine structure of the spectrum. Ordering of the material's structure as a result of sequential annealing in the range from 300 to 500 °C was revealed in the EPR spectrum. This fact, together with the photo- and gas sensitivity of all sample types studied, confirms the promise of these structures for use as sensors. Full article
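The fractal dimension extracted from the log N(δ) versus log δ dependence (Figures 8 and 10) is typically estimated by box counting. Below is a minimal sketch, assuming a 2D binary pore mask as input; the random mask and the grid sizes are illustrative placeholders, not the authors' SEM/AFM data.

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray, scales=(2, 4, 8, 16, 32)) -> float:
    """Estimate the fractal dimension of a 2D binary mask.

    N(delta) counts boxes of side delta that contain at least one
    foreground pixel; the dimension is the slope of
    log N(delta) versus log(1/delta).
    """
    counts = []
    for delta in scales:
        h, w = mask.shape
        # Trim so the image tiles exactly into delta x delta boxes.
        trimmed = mask[: h - h % delta, : w - w % delta]
        boxes = trimmed.reshape(trimmed.shape[0] // delta, delta,
                                trimmed.shape[1] // delta, delta)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return float(slope)

# Illustrative input: a random binary "pore mask"
# (in practice, a thresholded SEM or AFM image would be used).
rng = np.random.default_rng(0)
mask = rng.random((256, 256)) < 0.3
print(f"Estimated fractal dimension: {box_counting_dimension(mask):.2f}")
```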
(This article belongs to the Section Materials Processes)
Figures:
Figure 1. Images of the sample surface without deposited ZnO layers: (a) SEM image taken at 12° to the horizontal axis, magnification ×550; (b) SEM image taken at 12° to the horizontal axis, magnification ×1200; (c) optical microscope image of the ground side of a silicon wafer.
Figure 2. SEM image of a sample without deposited ZnO, taken perpendicular to the surface.
Figure 3. Schematic representation of the porous layer structure of the sample without deposited ZnO layers.
Figure 4. (a) SEM image of a macropore of a sample with 20 ZnO layers; (b) height distribution of structures at the macropore boundary, obtained with Gwyddion v2.64.
Figure 5. (a) SEM image of a macropore of a sample with 25 ZnO layers; (b) height distribution of structures at the macropore boundary, obtained with Gwyddion v2.64.
Figure 6. Microscopy images of the sample with 25 ZnO layers: (a) SEM image of the macroporous structure; (b) SEM image of the surface inside the macropores; (c) AFM image of the microporous structure; (d) AFM image of nanocrystals formed between micropores, processed in Gwyddion v2.64.
Figure 7. AFM images of the microporous structure of the sample with 25 ZnO layers: (a) 10 × 10 µm; (b) 200 nm resolution.
Figure 8. Dependence of the logarithm of the number of pores N(δ) on the scale δ for (a) the macroporous level and (b) the microporous level of the surface hierarchy.
Figure 9. Percentage error matrix for determining the number of pores in porous silicon using the YOLOv8 neural network.
Figure 10. (a) AFM image of nanoclusters located between micropores; (b) dependence of the logarithm of the number of pores N(δ) on the scale δ for the nanoscale level of the surface hierarchy.
Figure 11. Photoluminescence spectrum, decomposed into Gaussians, for a porous silicon sample without ZnO.
Figure 12. Photoluminescence spectrum decomposed into Gaussians for samples with (a) 20 ZnO layers and (b) 25 ZnO layers.
Figure 13. Comparison of photoluminescence peak intensities for samples with 20 and 25 ZnO layers.
Figure 14. Dependence of sample resistance on the number of deposited ZnO layers.
Figure 15. EPR spectrum of the sample with 25 ZnO layers before annealing.
Figure 16. Comparison of EPR spectra at the extreme points of signal saturation (P1 = 1 mW, P2 = 7.4 mW).
Figure 17. Signal parameters of the right-doublet components at microwave powers from 3.4 to 6.6 mW: (a) signal intensity versus microwave power; (b) signal width versus microwave power.
Figure 18. EPR spectrum of the sample without ZnO deposition, obtained by subtracting the spectra at 5.8 mW and 5.4 mW: 1, signal in the middle of the magnetic field sweep; 2, left doublet signal; 3, right doublet signal.
Figure 19. Intensity of the fourth spectral component versus sequentially increased microwave power for a sample with 25 ZnO layers.
Figure 20. Comparison of the intensity of the fourth spectral component versus sequentially increased microwave power for samples: 1, 25 ZnO layers; 2, without ZnO deposition.
Figure 21. EPR spectrum decomposed into components for a sample with 25 ZnO layers after annealing at (a) 300 °C, (b) 400 °C, and (c) 500 °C.
41 pages, 6420 KiB  
Article
Analyzing Autonomous Vehicle Collision Types to Support Sustainable Transportation Systems: A Machine Learning and Association Rules Approach
by Ehsan Kohanpour, Seyed Rasoul Davoodi and Khaled Shaaban
Sustainability 2024, 16(22), 9893; https://doi.org/10.3390/su16229893 - 13 Nov 2024
Viewed by 460
Abstract
The increasing presence of autonomous vehicles (AVs) in transportation, driven by advances in AI and robotics, requires a strong focus on safety in mixed-traffic environments to promote sustainable transportation systems. This study analyzes AV crashes in California using advanced machine learning to identify patterns among various crash factors. The main objective is to explore AV crash mechanisms by extracting association rules and developing a decision tree model to understand interactions between pre-crash conditions, driving states, crash types, severity, locations, and other variables. A multi-faceted approach combining statistical analysis, data mining, and machine learning was used to model crash types. The SMOTE method addressed data imbalance; CART, Apriori, RF, and XGB were applied for modeling, with SHAP and Pearson's test used for interpretation and analysis. Findings reveal that rear-end crashes are the most common, making up over 50% of incidents. Side crashes at night are also frequent, while angular and head-on crashes tend to be more severe. The study identifies high-risk locations, such as complex unsignalized intersections, and highlights the need for improved AV sensor technology, AV-infrastructure coordination, and driver training. Technological advancements such as V2V and V2I communication are suggested to significantly reduce the number and severity of specific crash types, thereby enhancing the overall safety and sustainability of transportation systems. Full article
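As a hedged illustration of the modeling pipeline described above (class rebalancing with SMOTE followed by a tree-based classifier), the sketch below uses scikit-learn and imbalanced-learn on placeholder arrays. The features, class proportions, and hyperparameters are hypothetical stand-ins, not the CA DMV dataset or the authors' exact configuration.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Placeholder data: 1,000 crashes, 6 encoded features (e.g., driving mode,
# intersection type, lighting), and an imbalanced collision-type label
# where class 0 (rear-end) dominates, as in the reported statistics.
rng = np.random.default_rng(42)
X = rng.random((1000, 6))
y = rng.choice([0, 1, 2, 3], size=1000, p=[0.55, 0.25, 0.12, 0.08])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Oversample minority collision types on the training split only,
# so the test set keeps its original class distribution.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_res, y_res)
print(classification_report(y_test, clf.predict(X_test)))
```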
Figures:
Figure 1. Conceptual framework: process from crash data extraction to modeling.
Figure 2. Heat map of AV crashes in the test areas.
Figure 3. Sample OL-316 AV collision report form provided by the CA DMV: (a) first page; (b) second page; (c) third page.
Figure 4. Word cloud of points of interest with the highest number of crashes.
Figure 5. Descriptive statistics of CA DMV data as of 31 December 2023.
Figure 6. Descriptive statistics of CA DMV data: (a) types of ADS disengagement; (b) type of intersection at the collision site; (c) intersections with traffic signals; (d) types of AV collisions; (e) AV driving mode; (f) collision severity.
Figure 7. Classification and regression decision tree for the collision type variable.
Figure 8. Association rules bubble chart.
Figure 9. Variable importance for collision type using the XGB, CART, and RF algorithms.
Figure 10. Feature importance with SHAP: (a) impact on model output; (b) average impact on model output.
14 pages, 5445 KiB  
Article
Project Report: Thermal Performance of FIRSTLIFE House
by Jan Tywoniak, Zdenko Malík, Kamil Staněk and Kateřina Sojková
Buildings 2024, 14(11), 3600; https://doi.org/10.3390/buildings14113600 - 13 Nov 2024
Viewed by 250
Abstract
The paper deals with selected thermal properties of a small building that was built for the international student competition Solar Decathlon 2021/2022 and is now part of the Living Lab in Wuppertal. It summarizes the essential information about the overall design of this wooden building, whose construction and technologies correspond to the passive building standard. Built-in sensors and other equipment enable long-term monitoring of thermal parameters; part of the information comes from the building operation control system. The thermal transmittance of the perimeter wall matches the calculated expectation well, even over a short measurement period and without perfectly steady-state boundary conditions: the (positive) difference between calculated and measured values did not exceed 0.015 W/(m2K). It was shown that even for such a small building with a very small heat demand, the heat transfer coefficient can alternatively be estimated from a co-heating test (measured electric power of a fan heater) and from the energy delivered to the underfloor heating (calorimeter in the heating circuit). Differences between the two measurement types and the calculation were within ±10%. In the last section, the dynamic response test is briefly described: measured indoor air temperature curves under periodic dynamic loads (use of the fan heater) are compared with simulation results. The simulation model, which works with lumped parameters for each element of the building envelope, replicated the measured situation well, and its use does not require special knowledge from the user. In the studied case, the differences between measured and simulated air temperatures were less than 1 K once the first two to three days of the test period are discounted due to the large thermal inertia. Finally, the measurement campaign program for the next period is outlined. Full article
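The co-heating estimate of the heat transfer coefficient described above amounts to dividing the average heating power by the average indoor-outdoor temperature difference over a quasi-steady period. A minimal sketch follows, assuming negligible solar and internal gains and using made-up hourly readings rather than the measured Wuppertal data.

```python
import numpy as np

def heat_transfer_coefficient(power_w: np.ndarray,
                              t_indoor_c: np.ndarray,
                              t_outdoor_c: np.ndarray) -> float:
    """Estimate H_T [W/K] as mean heating power divided by the mean
    indoor-outdoor temperature difference, assuming quasi-steady
    conditions and negligible solar and internal gains."""
    delta_t = t_indoor_c - t_outdoor_c
    return float(power_w.mean() / delta_t.mean())

# Illustrative hourly readings over three days (72 h), not measured data.
rng = np.random.default_rng(1)
power = 450.0 + rng.normal(0.0, 20.0, 72)   # fan heater electric power [W]
t_in = 21.0 + rng.normal(0.0, 0.2, 72)      # interior air temperature [degC]
t_out = 3.0 + rng.normal(0.0, 1.0, 72)      # exterior air temperature [degC]

h_t = heat_transfer_coefficient(power, t_in, t_out)
print(f"H_T ~ {h_t:.1f} W/K")  # compare with the calculated value within +/-10%
```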
(This article belongs to the Special Issue Constructions in Europe: Current Issues and Future Challenges)
Figures:
Figure 1. General view from the north-east (photo: Sigurd Steinprinz).
Figure 2. View from above (photo: Sigurd Steinprinz).
Figure 3. Scheme of measuring devices (A: wall sensors for temperature control; B, C: sensor setups in the external wall; D: electric fan heater; E: tripods for indoor air quality monitoring; F: meteorological station on the roof; G: calorimeter in the underfloor heating circuit).
Figure 4. Illustrative selection of temperature data recorded in the external wall (setup B) during the first heating tests with the fan heater.
Figure 5. Heat flow sensor recordings during the first heating tests with the fan heater.
Figure 6. Heating test with underfloor heating (blue: interior air temperature; brown: exterior air temperature; green: delivered energy; for A, B, C see Table 4).
Figure 7. Heat transfer coefficient H_T [W/K] of the HDU: comparison of results. The marked area corresponds to the calculated value (Table 2) extended by ±10%.
Figure 8. Dynamic response test, 24–30 January 2024: comparison of measured and simulated interior air temperature.
Figure A1. Construction detail (east façade) and structure compositions.
Figure A2. HDU floor plan.