Search Results (22,327)

Search Parameters:
Keywords = features detection

21 pages, 1294 KiB  
Article
Integrative Stacking Machine Learning Model for Small Cell Lung Cancer Prediction Using Metabolomics Profiling
by Md. Shaheenur Islam Sumon, Marwan Malluhi, Noushin Anan, Mohannad Natheef AbuHaweeleh, Hubert Krzyslak, Semir Vranic, Muhammad E. H. Chowdhury and Shona Pedersen
Cancers 2024, 16(24), 4225; https://doi.org/10.3390/cancers16244225 - 18 Dec 2024
Abstract
Background: Small cell lung cancer (SCLC) is an extremely aggressive form of lung cancer, characterized by rapid progression and poor survival rates. Despite the importance of early diagnosis, current diagnostic techniques are invasive and restricted. Methods: This study presents a novel stacking-based ensemble machine learning approach for classifying small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC) using metabolomics data. The analysis included 191 SCLC cases, 173 NSCLC cases, and 97 healthy controls. Feature selection techniques identified significant metabolites, with positive ions proving more relevant. Results: For multi-class classification (control, SCLC, NSCLC), the stacking ensemble achieved 85.03% accuracy and an AUC of 92.47 using a Support Vector Machine (SVM). Binary classification (SCLC vs. NSCLC) further improved performance, with ExtraTreesClassifier reaching 88.19% accuracy and an AUC of 92.65. SHapley Additive exPlanations (SHAP) analysis revealed key metabolites such as benzoic acid, DL-lactate, and L-arginine as significant predictors. Conclusions: The stacking ensemble approach effectively leverages multiple classifiers to enhance overall predictive performance. The proposed model captures the complementary strengths of different classifiers, enhancing the detection of SCLC and NSCLC. This work accentuates the potential of combining metabolomics with advanced machine learning for non-invasive early lung cancer subtype detection, offering an alternative to conventional biopsy methods.
(This article belongs to the Collection Diagnosis and Treatment of Primary and Secondary Lung Cancers)
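The stacking design described in the abstract above maps naturally onto scikit-learn. The following is a minimal sketch, assuming a scikit-learn StackingClassifier with an SVM combiner; the base learners, hyperparameters, and placeholder data are illustrative assumptions, not the authors' published configuration.

```python
# Hedged sketch of a stacking ensemble for 3-class metabolomics data
# (control vs. SCLC vs. NSCLC); estimators and parameters are illustrative.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier, StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(461, 50))    # 461 samples x 50 metabolite features (placeholder data)
y = rng.integers(0, 3, size=461)  # 0 = control, 1 = SCLC, 2 = NSCLC

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("et", ExtraTreesClassifier(n_estimators=200, random_state=0)),
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
]
# The meta-learner combines the base learners' out-of-fold class probabilities.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=SVC(probability=True, random_state=0),
    stack_method="predict_proba",
    cv=5,
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))
```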
21 pages, 1706 KiB  
Article
Intelligent Recognition of Road Internal Void Using Ground-Penetrating Radar
by Qian Kan, Xing Liu, Anxin Meng and Li Yu
Appl. Sci. 2024, 14(24), 11848; https://doi.org/10.3390/app142411848 - 18 Dec 2024
Abstract
Internal road voids can lead to decreased load-bearing capacity, which may result in sudden road collapse, posing threats to traffic safety. Three-dimensional ground-penetrating radar (3D GPR) detects internal road structures by transmitting high-frequency electromagnetic waves into the ground and receiving reflected waves. However, due to noise interference during detection, accurately identifying void areas from GPR-collected images remains a significant challenge. To detect and identify internal road void areas more accurately, this study proposes an intelligent recognition method based on 3D GPR. First, extensive data on internal road voids were collected using 3D GPR, and the GPR echo characteristics of void areas were analyzed. To address the poor quality of GPR images, a GPR image enhancement model integrating multi-frequency information was proposed by combining the Unet model, a Multi-Head Cross Attention mechanism, and a diffusion model. Finally, the intelligent recognition model and enhanced GPR images were used to achieve intelligent and accurate recognition of internal road voids, followed by engineering validation. The results demonstrate that the proposed road internal void image enhancement model achieves significant improvements in both visual quality and quantitative evaluation metrics, while providing more effective void features for intelligent recognition models. This study offers technical support for precise decision making in road maintenance and for ensuring safe road operations.
(This article belongs to the Special Issue Ground Penetrating Radar: Data, Imaging, and Signal Analysis)
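The enhancement model above combines a Unet, multi-head cross-attention, and a diffusion model; the full pipeline is beyond the scope of a listing, but the cross-attention fusion idea can be sketched. Below is a hedged PyTorch illustration, assuming tokenized feature maps from two frequency bands; the module name, dimensions, and residual design are assumptions, not the paper's implementation.

```python
# Hedged sketch: fusing two frequency bands of GPR features with
# multi-head cross-attention (torch.nn.MultiheadAttention).
import torch
import torch.nn as nn

class CrossFrequencyFusion(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, low_band, high_band):
        # Queries from one band attend to keys/values from the other, so the
        # fused representation borrows detail from the second frequency band.
        fused, _ = self.attn(query=low_band, key=high_band, value=high_band)
        return self.norm(low_band + fused)  # residual connection

tokens_low = torch.randn(2, 256, 64)   # (batch, H*W tokens, channels)
tokens_high = torch.randn(2, 256, 64)
print(CrossFrequencyFusion()(tokens_low, tokens_high).shape)  # torch.Size([2, 256, 64])
```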
32 pages, 4714 KiB  
Article
Application and Analysis of the MFF-YOLOv7 Model in Underwater Sonar Image Target Detection
by Kun Zheng, Haoshan Liang, Hongwei Zhao, Zhe Chen, Guohao Xie, Liguo Li, Jinghua Lu and Zhangda Long
J. Mar. Sci. Eng. 2024, 12(12), 2326; https://doi.org/10.3390/jmse12122326 - 18 Dec 2024
Abstract
The need for precise identification of underwater sonar image targets is growing in areas such as marine resource exploitation, subsea construction, and ocean ecosystem surveillance. Nevertheless, conventional image recognition algorithms encounter several obstacles, including intricate underwater settings, poor-quality sonar image data, and limited sample quantities, which hinder accurate identification. This study seeks to improve underwater sonar image target recognition by employing deep learning techniques and developing the Multi-Gradient Feature Fusion YOLOv7 model (MFF-YOLOv7). The model incorporates the Multi-Scale Information Fusion Module (MIFM) as a replacement for YOLOv7's SPPCSPC, substitutes the Conv of CBS following ELAN with RFAConv, and integrates the SCSA mechanism at the three junctions where the backbone links to the head, enhancing target recognition accuracy. Trials were conducted on datasets such as URPC, SCTD, and UATD, encompassing comparative studies of attention mechanisms, ablation tests, and evaluations against other leading algorithms. The findings indicate that the MFF-YOLOv7 model substantially surpasses other models across various metrics, demonstrates superior underwater target detection capabilities, exhibits enhanced generalization potential, and offers a more dependable and precise solution for underwater target identification.
(This article belongs to the Special Issue Application of Deep Learning in Underwater Image Processing)
17 pages, 3787 KiB  
Article
Direct On-Chip Diagnostics of Streptococcus bovis/Streptococcus equinus Complex in Bovine Mastitis Using Bioinformatics-Driven Portable qPCR
by Jaewook Kim, Eiseul Kim, Seung-Min Yang, Si Hong Park and Hae-Yeong Kim
Biomolecules 2024, 14(12), 1624; https://doi.org/10.3390/biom14121624 - 18 Dec 2024
Abstract
This study introduces an innovative on-site diagnostic method for rapidly detecting the Streptococcus bovis/Streptococcus equinus complex (SBSEC), crucial for livestock health and food safety. Through a comprehensive genomic analysis of 206 genomes, this study identified genetic markers that improved classification and addressed misclassifications, particularly in genomes labeled S. equinus and S. lutetiensis. These markers were integrated into a portable quantitative polymerase chain reaction (qPCR) assay that can detect SBSEC species with high sensitivity (down to 10^1 or 10^0 colony-forming units/mL). The portable system, featuring a flat chip and compact equipment, allows immediate diagnosis within 30 min. The diagnostic method was validated in field conditions directly from cattle udders, farm environments, and dairy products. Among the 100 samples, 51 tested positive for bacteria associated with mastitis. The performance of this portable qPCR was comparable to laboratory methods, offering a reliable alternative to whole-genome sequencing for early detection in clinical, agricultural, and environmental settings.
Show Figures

Figure 1. Overview of the portable qPCR system and its components: the device (LCD display, heating plate, handling groove), the microfluidic chip, thermal cycling profiles, the real-time amplification screen, amplification curves, and melting curve analysis.
Figure 2. Pangenome analysis of the SBSEC: phylogenetic tree and genome clustering (with misclassified genomes highlighted), gene presence-absence matrix of 24,117 gene clusters, conserved- and new-gene curves, and a breakdown into core (119), soft-core (222), shell (3772), and cloud (20,004) genes.
Figure 3. Specificity analysis of portable qPCR for SBSEC species detection; no amplification in nontarget and negative control samples.
Figure 4. Limit of detection (LOD) for SBSEC using portable qPCR, in pure culture and in spiked food samples.
Figure 5. Standard curves for portable qPCR quantification of SBSEC species, in pure culture and in spiked food samples.
Figure 6. Field diagnostic process of SBSEC using portable qPCR: farm sampling, on-site DNA extraction in under 5 min, and on-site analysis within 20 min.
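Figure 5 above reports standard curves (Ct versus the logarithm of template concentration). For orientation, here is a small sketch of the usual linear fit and the amplification efficiency derived from its slope; the dilution series and Ct values are invented for illustration.

```python
# Hedged sketch: qPCR standard-curve fit and amplification efficiency.
import numpy as np

log10_conc = np.array([6, 5, 4, 3, 2, 1], dtype=float)  # log10 CFU/mL of serial dilutions
ct = np.array([14.1, 17.5, 20.9, 24.4, 27.8, 31.2])     # placeholder Ct values

slope, intercept = np.polyfit(log10_conc, ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0                  # standard qPCR efficiency formula
# A slope near -3.32 corresponds to ~100% amplification efficiency.
print(f"slope={slope:.2f}, efficiency={efficiency:.1%}")
```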
23 pages, 1250 KiB  
Article
Evaluation of Thermal Liquid Biopsy Analysis of Saliva and Blood Plasma Specimens as a Novel Diagnostic Modality in Head and Neck Cancer
by Gabriela Schneider, Alagammai Kaliappan, Nathan Joos, Laura M. Dooley, Brian S. Shumway, Jonathan B. Chaires, Wolfgang Zacharias, Jeffrey M. Bumpous and Nichola C. Garbett
Cancers 2024, 16(24), 4220; https://doi.org/10.3390/cancers16244220 - 18 Dec 2024
Abstract
Background: Over the past decade, saliva-based liquid biopsies have emerged as promising tools for the early diagnosis, prognosis, and monitoring of cancer, particularly in high-risk populations. However, challenges persist because of low concentrations and variable modifications of biomarkers linked to tumor development when compared to normal salivary components. Methods: This study explores the application of differential scanning calorimetry (DSC)-based thermal liquid biopsy (TLB) for analyzing saliva and blood plasma samples from head and neck cancer (HNC) patients. Results: Our research identified an effective saliva processing method via high-speed centrifugation and ultrafiltration, resulting in reliable TLB data. Notably, we recorded unique TLB profiles for saliva from 48 HNC patients and 21 controls, revealing distinct differences in thermal transition features that corresponded to salivary protein denaturation. These results indicated the potential of saliva TLB profiles in differentiating healthy individuals from HNC patients and identifying tumor characteristics. In contrast, TLB profiles for blood plasma samples exhibited smaller differences between HNC patients and had less utility for differentiation within HNC. Conclusions: Our findings support the feasibility of saliva-based TLB for HNC diagnostics, with further refinement in sample collection and the incorporation of additional patient variables anticipated to enhance accuracy, ultimately advancing non-invasive diagnostic strategies for HNC detection and monitoring.
21 pages, 7934 KiB  
Article
Improved You Only Look Once v.8 Model Based on Deep Learning: Precision Detection and Recognition of Fresh Leaves from Yunnan Large-Leaf Tea Tree
by Chun Wang, Hongxu Li, Xiujuan Deng, Ying Liu, Tianyu Wu, Weihao Liu, Rui Xiao, Zuzhen Wang and Baijuan Wang
Agriculture 2024, 14(12), 2324; https://doi.org/10.3390/agriculture14122324 - 18 Dec 2024
Abstract
Yunnan Province, China, known for its superior ecological environment and diverse climate conditions, is home to a rich resource of tea-plant varieties. However, the subtle differences in shape, color, and size among the fresh leaves of different tea-plant varieties pose significant challenges for their identification and detection. This study proposes an improved YOLOv8 model based on a dataset of fresh leaves from five tea-plant varieties among Yunnan large-leaf tea trees. Dynamic Upsampling replaces the UpSample module in the original YOLOv8, reducing the data volume in the training process. The Efficient Pyramid Squeeze Attention Network is integrated into the backbone of the YOLOv8 network to boost the network's capability to handle multi-scale spatial information. To improve model performance and reduce the number of redundant features within the network, a Spatial and Channel Reconstruction Convolution module is introduced. Lastly, Inner-SIoU is adopted to reduce network loss and accelerate the convergence of regression. Experimental results indicate that the improved YOLOv8 model achieves precision, recall, and mAP of 88.4%, 89.9%, and 94.8%, representing improvements of 7.1, 3.9, and 3.4 percentage points over the original model. The proposed model not only identifies fresh leaves from different tea-plant varieties but also achieves graded recognition, effectively addressing the strong subjectivity of manual identification, the long training times of traditional deep learning models, and high hardware costs. It establishes a robust technical foundation for the intelligent and refined harvesting of tea in Yunnan's tea gardens.
Show Figures

Figure 1. Sample images of tea leaves from the five tea-plant varieties: (A) Changning Daye Tea; (B) Fo Xiang No. 3; (C) Xiang Gui Yin Hao; (D) Yun Kang No. 10; (E) Yun Shan No. 1.
Figure 2. Examples of augmented tea-leaf dataset samples.
Figure 3. Structure of the improved YOLOv8 network, with the added modules marked.
Figure 4. Dynamic Upsampling (DySample) structure and its sampling point generator.
Figure 5. Pyramid squeeze attention (PSA) module structure, including the SEWeight module.
Figure 6. SPC implementation process.
Figure 7. Spatial Reconstruction Unit (SRU) and Channel Reconstruction Unit (CRU) structure.
Figure 8. Inner-IoU structure: target box, anchor box, inner target box, and inner anchor box.
Figure 9. Loss curves of the improved YOLOv8 and the original YOLOv8.
Figure 10. Detection results before and after the improvement of YOLOv8 for the five varieties.
Figure 11. Precision, recall, and mAP50 curves for different YOLOv8 improvements during training (D: DySample; E: EPSANet; S: SCConv; I: Inner-SIoU).
Figure 12. Visual heat maps for different YOLOv8 improvements across the five varieties.
Figure 13. Identification results of different models.
Figure 14. Recognition results of the improved YOLOv8 model under normal, over-dark, and overexposed illumination.
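As a rough illustration of the Inner-IoU idea referenced in the abstract and in Figure 8 (an IoU computed over auxiliary boxes scaled about their centres), here is a hedged PyTorch sketch. It omits the angle and distance terms of SIoU, so it is not the paper's exact Inner-SIoU loss; the scaling ratio is an assumption.

```python
# Hedged sketch of an "inner" IoU: shrink both boxes about their centres
# by a ratio, then compute the ordinary IoU of the shrunken boxes.
import torch

def inner_iou(box1, box2, ratio=0.75, eps=1e-7):
    """IoU between auxiliary inner boxes; boxes are (x1, y1, x2, y2)."""
    def shrink(b):
        cx, cy = (b[..., 0] + b[..., 2]) / 2, (b[..., 1] + b[..., 3]) / 2
        w, h = (b[..., 2] - b[..., 0]) * ratio, (b[..., 3] - b[..., 1]) * ratio
        return torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=-1)

    b1, b2 = shrink(box1), shrink(box2)
    lt = torch.maximum(b1[..., :2], b2[..., :2])  # top-left of intersection
    rb = torch.minimum(b1[..., 2:], b2[..., 2:])  # bottom-right of intersection
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    area1 = (b1[..., 2] - b1[..., 0]) * (b1[..., 3] - b1[..., 1])
    area2 = (b2[..., 2] - b2[..., 0]) * (b2[..., 3] - b2[..., 1])
    return inter / (area1 + area2 - inter + eps)

print(inner_iou(torch.tensor([0., 0., 10., 10.]), torch.tensor([2., 2., 12., 12.])))
```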
18 pages, 1133 KiB  
Article
PMDRSpell: Dynamic Residual Chinese Spelling Check Method Guided by Phonological and Morphological Knowledge
by Guanguang Chang, Yangsen Zhang, Youren Yu and Jiayuan Song
Electronics 2024, 13(24), 4989; https://doi.org/10.3390/electronics13244989 - 18 Dec 2024
Abstract
Since the errors in Chinese Spell Correction (CSC) involve phonetically or morphologically confusing Chinese characters, mainstream models have made numerous attempts to fuse phonological and morphological knowledge. We observe that in erroneous sentences where the vast majority of Chinese characters are correctly written, mainstream models may unintentionally increase the difficulty of predicting these correct characters when integrating multi-modal knowledge across all characters. Additionally, these models often overlook the potential relationship between the phonological and morphological modalities of a Chinese character when utilizing multi-modal information. In this paper, we propose an end-to-end model called PMDRSpell, which models erroneous Chinese characters in sentences using their multi-modal knowledge and reduces the use of multi-modal information for correct Chinese characters. It also uncovers the relationship between phonological and morphological features based on the characteristics of phonograms, enhancing the similarity between similar Chinese characters. Specifically, coarse-grained and hierarchical detection is first employed to localize and mask error locations within sentences, using the original embedding information as residual features. Next, correlation information in the phonological and morphological modalities of the erroneous characters is extracted to construct new representational features, which are then used to update the erroneous character information within the residual features. Finally, the masked sentences are predicted using the MLM model and classified to generate correct sentences by combining the residual features with the updated multi-modal information. Our model effectively reduces interference from correct Chinese characters during detection and leverages multi-modal information to accurately correct erroneous characters. In comparison experiments with recent state-of-the-art models, PMDRSpell outperforms the strongest baseline in error-correction F1 on Sighan14 and Sighan15 by 1.2 and 1.0 percentage points, respectively.
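The mask-then-predict step at the core of this pipeline can be illustrated with an off-the-shelf masked language model. Below is a hedged sketch using the Hugging Face fill-mask pipeline with the generic bert-base-chinese checkpoint; the paper's hierarchical detector, residual features, and phonological-morphological fusion are not reproduced, and the example sentence is invented.

```python
# Hedged illustration of the mask-then-predict step: a detected error
# position is replaced with [MASK] and an MLM proposes corrections.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-chinese")
sentence = "我今天很高[MASK]。"  # suppose the detector flagged the fifth character
for cand in fill(sentence)[:3]:
    print(cand["token_str"], round(cand["score"], 3))
```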
23 pages, 13973 KiB  
Article
Joint Fault Diagnosis of IGBT and Current Sensor in LLC Resonant Converter Module Based on Reduced Order Interval Sliding Mode Observer
by Xi Zha, Wei Feng, Xianfeng Zhang, Zhonghua Cao and Xinyang Chen
Sensors 2024, 24(24), 8077; https://doi.org/10.3390/s24248077 - 18 Dec 2024
Abstract
LLC resonant converters have emerged as essential components in DC charging station modules, thanks to their outstanding performance attributes such as high power density, efficiency, and compact size. The stability of these converters is crucial for vehicle endurance and passenger experience, making reliability a top priority. However, malfunctions in the switching transistor or current sensor can hinder the converter’s ability to maintain a resonant state and stable output voltage, leading to a notable reduction in system efficiency and output capability. This article proposes a fault diagnosis strategy for LLC resonant converters utilizing a reduced-order interval sliding mode observer. Initially, an augmented generalized system for the LLC resonant converter is developed to convert current sensor faults into generalized state vectors. Next, the application of matrix transformations plays a critical role in decoupling open-circuit faults from the inverter system’s state and current sensor faults. To achieve accurate estimation of phase currents and detection of current sensor faults, a reduced-order interval sliding mode observer has been designed. Building upon the estimation results generated by this observer, a diagnostic algorithm featuring adaptive thresholds has been introduced. This innovative algorithm effectively differentiates between current sensor faults and open switch faults, enhancing fault detection accuracy. Furthermore, it is capable of localizing faulty power switches and estimating various types of current sensor faults, thereby providing valuable insights for maintenance and repair. The robustness and effectiveness of the proposed fault diagnosis algorithm have been validated through experimental results and comparisons with existing methods, confirming its practical applicability in real-world inverter systems.
Show Figures

Figure 1. Topology of the charging module.
Figure 2. LLC resonant converter circuit topology.
Figure 3. Equivalent circuit of the LLC resonant converter.
Figure 4. Phase-k current flow path under normal working conditions.
Figure 5. Phase-k current flow path under an S_k1 open-circuit fault.
Figure 6. Fault diagnosis process.
Figure 7. Hardware-in-the-loop experimental device.
Figure 8. Diagnosis results for an open-circuit fault of power switch S_a2.
Figure 9. Drift-fault diagnosis results for the DC-side current sensor.
Figure 10. Offset-fault diagnosis results for the DC-side current sensor.
Figure 11. Robustness verification for an S_b2 open-circuit fault under DC voltage fluctuations.
Figure 12. Gain-fault robustness verification under sudden changes in DC-side load parameters.
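For orientation, the switching structure common to sliding mode observers can be written compactly. The display below is the generic full-order form for a linear plant with output injection, a sketch only; the paper's reduced-order interval observer adds order reduction, the augmented sensor-fault state, and interval bounds on top of this structure.

```latex
% Generic sliding mode observer for \dot{x} = Ax + Bu, y = Cx:
% a Luenberger correction plus a discontinuous switching term that drives
% the output estimation error to the sliding surface.
\dot{\hat{x}} = A\hat{x} + Bu + L\,(y - C\hat{x}) + K\,\operatorname{sign}(y - C\hat{x})
```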
24 pages, 21931 KiB  
Article
Evaluating and Enhancing Face Anti-Spoofing Algorithms for Light Makeup: A General Detection Approach
by Zhimao Lai, Yang Guo, Yongjian Hu, Wenkang Su and Renhai Feng
Sensors 2024, 24(24), 8075; https://doi.org/10.3390/s24248075 - 18 Dec 2024
Abstract
Makeup modifies facial textures and colors, impacting the precision of face anti-spoofing systems. Many individuals opt for light makeup in their daily lives, which generally does not hinder face identity recognition. However, current research in face anti-spoofing often neglects the influence of light makeup on facial feature recognition, notably the absence of publicly accessible datasets featuring light makeup faces. If these instances are incorrectly flagged as fraudulent by face anti-spoofing systems, it could lead to user inconvenience. In response, we develop a face anti-spoofing database that includes light makeup faces and establish a criterion for determining light makeup to select appropriate data. Building on this foundation, we assess multiple established face anti-spoofing algorithms using the newly created database. Our findings reveal that the majority of these algorithms experience a decrease in performance when faced with light makeup faces. Consequently, this paper introduces a general face anti-spoofing algorithm specifically designed for light makeup faces, which includes a makeup augmentation module, a batch channel normalization module, a backbone network updated via the Exponential Moving Average (EMA) method, an asymmetric virtual triplet loss module, and a nearest neighbor supervised contrastive module. The experimental outcomes confirm that the proposed algorithm exhibits superior detection capabilities when handling light makeup faces.
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1. Confidence scores between input bare-faced and makeup face images.
Figure 2. Scatter plots of the LI_1 values for light makeup and heavy makeup faces.
Figure 3. Scatter plots of the LI_2 values for light makeup and heavy makeup images.
Figure 4. Examples of the light-makeup judgment; in each group, the first two images are bare-faced and the third is the makeup face to be evaluated.
Figure 5. Example reference makeup face images.
Figure 6. Example triplet: original bare-faced image, reference makeup face image, and generated light makeup face image.
Figure 7. Before-and-after samples of makeup transfer for every database.
Figure 8. Framework of the proposed method.
Figure 9. Structure of the makeup augmentation module.
Figure 10. Reference makeup image screening standard.
Figure 11. Poisson fusion area.
Figure 12. Batch channel normalization module.
Figure 13. Process of the improved nearest neighbor supervised contrastive learning.
Figure 14. t-SNE visualization of feature separation by the proposed algorithm.
Figure 15. Impact of the asymmetric virtual triplet loss on feature separation.
Figure 16. Supervised contrastive learning vs. nearest neighbor supervised contrastive learning.
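One concrete ingredient of the algorithm above, the EMA-updated backbone, is simple enough to sketch. Below is a hedged PyTorch version; the decay value and the stand-in module are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of an EMA-updated backbone: after each optimizer step the
# EMA copy's weights are moved toward the live model's weights.
import copy
import torch

@torch.no_grad()
def ema_update(ema_model, model, decay=0.999):
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1.0 - decay)
    for b_ema, b in zip(ema_model.buffers(), model.buffers()):
        b_ema.copy_(b)  # buffers (e.g., BatchNorm stats) are copied directly

backbone = torch.nn.Linear(8, 2)       # stand-in for the real backbone
ema_backbone = copy.deepcopy(backbone)
# ... after each training step:
ema_update(ema_backbone, backbone)
```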
16 pages, 3143 KiB  
Article
DGA Domain Detection Based on Transformer and Rapid Selective Kernel Network
by Jisheng Tang, Yiling Guan, Shenghui Zhao, Huibin Wang and Yinong Chen
Electronics 2024, 13(24), 4982; https://doi.org/10.3390/electronics13244982 - 18 Dec 2024
Abstract
Botnets pose a significant challenge in network security by leveraging Domain Generation Algorithms (DGA) to evade traditional security measures. Extracting DGA domain samples is inherently complex, and current DGA detection models often struggle to capture domain features effectively when training data are limited. This limitation results in suboptimal detection performance and an imbalance between model accuracy and complexity. To address these challenges, this paper introduces a novel multi-scale feature fusion model that integrates the Transformer architecture with the Rapid Selective Kernel Network (R-SKNet). The proposed model employs the Transformer encoder to couple individual domain characters with the multiple types of relationships present across the whole domain string. The paper integrates R-SKNet into DGA detection and develops an efficient channel attention (ECA) module; by enhancing the branch information guidance in the SKNet architecture, the approach achieves adaptive receptive field selection, multi-scale feature capture, and lightweight yet efficient multi-scale convolution. Moreover, an improved Feature Pyramid Network (FPN) architecture, termed EFAM, is used to adjust channel weights for outputs at different stages of the backbone network, achieving multi-scale feature fusion. Experimental results demonstrate that, in tasks with limited training samples, the proposed method achieves lower computational complexity and higher detection accuracy than mainstream detection models.
Show Figures

Figure 1. Overall framework.
Figure 2. Sample domain lengths.
Figure 3. Transformer encoder module.
Figure 4. R-SK convolution structure.
Figure 5. ECA module.
Figure 6. Band matrix.
Figure 7. EFAM structure.
Figure 8. Binary classification results and model parameter comparison.
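The efficient channel attention (ECA) idea referenced above has a well-known published form (Wang et al., CVPR 2020): global average pooling followed by a small 1-D convolution across the channel dimension. The sketch below implements that generic form; the paper's R-SKNet integration is not reproduced, and the kernel size is an assumption.

```python
# Hedged sketch of a generic ECA block: squeeze, local cross-channel
# interaction via Conv1d, then sigmoid gating of the input channels.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                         # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                    # squeeze to (B, C)
        w = self.conv(w.unsqueeze(1)).squeeze(1)  # 1-D conv over channels
        return x * torch.sigmoid(w)[:, :, None, None]

print(ECA()(torch.randn(2, 64, 8, 8)).shape)      # torch.Size([2, 64, 8, 8])
```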
37 pages, 6293 KiB  
Article
KidneyNet: A Novel CNN-Based Technique for the Automated Diagnosis of Chronic Kidney Diseases from CT Scans
by Saleh Naif Almuayqil, Sameh Abd El-Ghany, A. A. Abd El-Aziz and Mohammed Elmogy
Electronics 2024, 13(24), 4981; https://doi.org/10.3390/electronics13244981 - 18 Dec 2024
Abstract
This study presents KidneyNet, an innovative computer-aided diagnosis (CAD) system designed to identify chronic kidney diseases (CKDs), such as kidney stones, cysts, and tumors, in CT scans. KidneyNet utilizes a convolutional neural network (CNN) structure consisting of eight convolutional layers, three pooling layers, a flattening layer, and two fully connected layers. Small filters enhance computational efficiency by reducing the number of parameters and minimizing the risk of overfitting compared to larger filters. The model captures more complex and abstract features as data move through the layers. The initial layers identify basic patterns, while the deeper layers focus on more intricate representations. KidneyNet aims to enhance the efficiency and accuracy of kidney disease diagnosis. Additionally, the model incorporates the gradient-weighted class activation mapping (Grad-CAM) algorithm, which helps to pinpoint affected areas in the scans. This feature improves interpretability, allowing clinicians to identify which regions the model deemed significant for detecting abnormalities such as tumors, cysts, or stones. Through extensive testing on a CT kidney dataset, KidneyNet demonstrated impressive performance metrics, with 99.88% accuracy, 99.92% specificity, 99.76% sensitivity, 99.58% precision, and an F1 score of 99.67%, outperforming existing models. This approach alleviates the diagnostic burden on radiologists and promotes early detection, potentially saving lives. This study highlights the critical role of advanced imaging analysis in addressing kidney conditions and emphasizes KidneyNet’s capability to deliver precise and cost-effective diagnoses.
(This article belongs to the Special Issue AI-Driven Digital Image Processing: Latest Advances and Prospects)
Show Figures

Figure 1. Example CT scans from the CT kidney dataset: (A) normal, (B) cyst, (C) tumor, and (D) stone.
Figure 2. The proposed KidneyNet model architecture.
Figure 3. The KidneyNet architecture.
Figure 4. VGG19 architecture.
Figure 5. EfficientNet-B1 architecture.
Figure 6. Xception architecture.
Figure 7. Grad-CAM localization for scans from the CT kidney dataset.
Figure 8. Loss vs. epoch for the five DL models for CKD classification.
Figure 9. Accuracy of the five DL models for CKD classification.
Figure 10. ROC curves of the five DL models for CKD classification.
Figure 11. Confusion matrices of the five DL models for CKD classification.
Figure 12. KidneyNet accuracies with different optimizers and learning-rate values.
Figure 13. Clinical variations related to kidney conditions, including cysts, stones, and tumors.
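The abstract pins down KidneyNet's layout: eight convolutional layers, three pooling layers, a flattening layer, and two fully connected layers. The PyTorch sketch below matches that layer count, assuming a 128 x 128 single-channel input; channel widths and activations are illustrative assumptions, not the published configuration.

```python
# Hedged sketch of a KidneyNet-style CNN for 4-class kidney CT
# classification (normal / cyst / tumor / stone).
import torch
import torch.nn as nn

def conv_block(cin, cout, pool=False):
    layers = [nn.Conv2d(cin, cout, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
    if pool:
        layers.append(nn.MaxPool2d(2))
    return layers

model = nn.Sequential(                      # 8 convs, 3 pools in total
    *conv_block(1, 16), *conv_block(16, 16, pool=True),
    *conv_block(16, 32), *conv_block(32, 32, pool=True),
    *conv_block(32, 64), *conv_block(64, 64), *conv_block(64, 64),
    *conv_block(64, 128, pool=True),
    nn.Flatten(),                           # flattening layer
    nn.Linear(128 * 16 * 16, 256), nn.ReLU(inplace=True),
    nn.Linear(256, 4),                      # two fully connected layers
)
print(model(torch.randn(1, 1, 128, 128)).shape)  # torch.Size([1, 4])
```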
26 pages, 2762 KiB  
Article
Uncovering the Diagnostic Power of Radiomic Feature Significance in Automated Lung Cancer Detection: An Integrative Analysis of Texture, Shape, and Intensity Contributions
by Sotiris Raptis, Christos Ilioudis and Kiki Theodorou
BioMedInformatics 2024, 4(4), 2400-2425; https://doi.org/10.3390/biomedinformatics4040129 - 18 Dec 2024
Abstract
Background: Lung cancer remains the leading cause of cancer death worldwide, and early detection contributes substantially to patient survival. Standard diagnostic methods are insensitive, especially in the early stages. This paper discusses radiomic features that can improve diagnostic accuracy in automated lung cancer detection, considering the important feature categories of texture, shape, and intensity extracted from CT DICOM images. Methods: We developed and compared the performance of two machine learning models, a DenseNet-201 CNN and XGBoost, trained on radiomic features to distinguish malignant tumors from benign ones. Feature importance was analyzed using SHAP and permutation importance techniques, which enhance both the global and case-specific interpretability of the models. Results: Features reflecting tumor heterogeneity and morphology, including GLCM Entropy, shape compactness, and surface-area-to-volume ratio, performed strongly in diagnosis, with DenseNet-201 producing an accuracy of 92.4% and XGBoost 89.7%. The feature interpretability analysis supports their potential for early detection and for boosting diagnostic confidence. Conclusions: This work identifies the most important radiomic features and quantifies their diagnostic significance through a feature selection process backed by stability analysis, providing a blueprint for feature-driven model interpretability in clinical applications. Radiomic features have great value in the automated diagnosis of lung cancer, especially when combined with machine learning models; this may improve early detection and open personalized diagnostic strategies for precision oncology.
Show Figures

Graphical abstract.
Figure 1. Overview of the radiomics workflow used in this study.
Figure 2. Distribution of radiomic feature categories extracted in this study.
Figure 3. Distribution of radiomic features based on ICC values.
Figure 4. SHAP summary plot of the global impact of selected radiomic features on model predictions.
Figure 5. SHAP dependence plot for first-order mean intensity.
Figure 6. Permutation importance scores of radiomic features.
Figure 7. SHAP dependence plot for GLCM Entropy.
Figure 8. Trade-offs between model interpretability and diagnostic performance.
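The SHAP-based feature ranking described above follows a standard pattern for tree models. Below is a hedged sketch using xgboost and the shap package; the data, feature names, and model settings are synthetic stand-ins for the radiomic pipeline, not the study's actual setup.

```python
# Hedged sketch: train a gradient-boosted classifier on (synthetic)
# radiomic features, then rank features by mean absolute SHAP value.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # label depends on two features
names = ["glcm_entropy", "compactness", "surface_to_volume", "mean_intensity"]

model = xgb.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)
ranking = np.abs(shap_values).mean(axis=0)     # global importance per feature
for name, score in sorted(zip(names, ranking), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```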
22 pages, 5498 KiB  
Article
Small-Sample Target Detection Across Domains Based on Supervision and Distillation
by Fusheng Sun, Jianli Jia, Xie Han, Liqun Kuang and Huiyan Han
Electronics 2024, 13(24), 4975; https://doi.org/10.3390/electronics13244975 - 18 Dec 2024
Abstract
To address the issues of significant object discrepancies, low similarity, and image noise interference between source and target domains in object detection, we propose a supervised learning approach combined with knowledge distillation. Initially, student and teacher models are jointly trained through supervised and distillation-based approaches, iteratively refining the inter-model weights to mitigate overfitting. Secondly, a combined convolutional module is integrated into the feature extraction network of the student model to minimize redundant computation; an explicit visual center module is embedded within the feature pyramid network to bolster feature representation; and a spatial grouping enhancement module is incorporated into the region proposal network to mitigate the adverse effects of noise on the outcomes. Ultimately, the model undergoes a comprehensive optimization process that leverages the loss functions originating from both the supervised and knowledge distillation phases. The experimental results demonstrate that this strategy significantly boosts classification and identification accuracy on cross-domain datasets; compared with TFA (Task-agnostic Fine-tuning and Adapter), CD-FSOD (Cross-Domain Few-Shot Object Detection), and DeFRCN (Decoupled Faster R-CNN for Few-Shot Object Detection), it increases detection accuracy by 1.67% and 1.87% in the 1-shot and 5-shot settings, respectively.
Show Figures

Figure 1. Overview of methods.
Figure 2. Student model architecture.
Figure 3. Schematic of the PConv operation.
Figure 4. Convolutional structure comparison.
Figure 5. The EVC module.
Figure 6. Lightweight MLP module.
Figure 7. The LVC module.
Figure 8. Lightweight SGE module.
Figure 9. Prediction results.
Figure 10. ArTaxOr dataset prediction results.
Figure 11. UODD dataset results.
Figure 12. DIOR dataset prediction results.
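The distillation half of the joint objective above typically reduces to a soft-label loss between teacher and student logits. Below is a hedged sketch of that standard term (Hinton et al., 2015); the paper's full joint supervised-plus-distillation loss and its module changes are not reproduced here.

```python
# Hedged sketch: temperature-scaled KL divergence between the student's
# and teacher's softened class distributions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # T*T keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

print(distillation_loss(torch.randn(4, 10), torch.randn(4, 10)))
```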
30 pages, 5847 KiB  
Article
Unified Detection and Feature Extraction of Ships in Satellite Images
by Kristian Aalling Sørensen, Peder Heiselberg and Henning Heiselberg
Remote Sens. 2024, 16(24), 4719; https://doi.org/10.3390/rs16244719 - 17 Dec 2024
Abstract
The increasing importance of maritime surveillance, particularly in monitoring dark ships, highlights the need for advanced detection models that go beyond simple ship localisation. Current approaches largely focus on either detection or feature extraction, leaving a gap in unified methods capable of providing detailed ship characteristics. This study addresses this gap by developing a unified model for ship detection and characterisation from Synthetic Aperture Radar images, estimating features such as true length, true breadth, and heading. The model is designed to detect ships of varying sizes while simultaneously estimating their characteristics, and experimental results show high detection accuracy, with a recall of 87.7% and an F1-score of 93.5%. The model also effectively estimates ship dimensions, with mean errors of 1.4 ± 16.2 m for length and 1.5 ± 4.5 m for breadth. Estimating the heading proved challenging for smaller ships but was accurate for larger ships; 50% of the heading estimates were within 15 degrees of error. This unified approach offers practical benefits for maritime operations, especially in situations where both ship detection and detailed information are needed, such as predicting future ship positions or identifying ships.
Show Figures

Figure 1. Ship detection on an image chip: outputs of current models (confidence and bounding box) vs. our model (probability, true length, true breadth, and heading).
Figure 2. Uncalibrated Sentinel-1 IW GRD image chips showing how speckle, sea state, and side-lobe effects change a ship's appearance across acquisitions.
Figure 3. OpenSARship Sentinel-1 IW GRD subset examples, each annotated with one ship.
Figure 4. Detection head of the unified model: one branch for ship detection and discrimination, one for inferring ship characteristics.
Figure 5. Illustration of the target from a single scale; each cell in the final feature map carries nine features, so nine targets are generated per cell.
Figure 6. Classification loss for training and validation.
Figure 7. F1-score and recall vs. ship length.
Figure 8. Mean detection probability relative to true average speed and length.
Figure 9. VH polarisation of a Sentinel-1 IW GRD image acquired near the Faroe Islands.
Figure 10. Ship detection probabilities on a subset of a new Sentinel-1 image, before and after a 0.5 threshold.
Figure 11. The detected ship from a new, unseen Sentinel-1 IW GRD image.
Figure 12. Estimated ship length and breadth.
Figure 13. Estimated standard deviation of a single sample, overlaid on the original 2 × 128 × 128 input.
Figure 14. Heading error in degrees: errors centre around 0 degrees with a secondary mode near 180 degrees; 50% are within 15 degrees.
Figure 15. True vs. estimated headings, coloured by true ship length; small ships often show a ±180-degree offset.
Figure A1. Overall model architecture.
Figure A2. Backbone of the model.
Figure A3. Neck of the model.
Figure A4. The BottleneckCSP module.
Figure A5. Regular 3 × 3 convolution vs. a 3 × 3 atrous convolution with rate 2.
Figure A6. The regular bottleneck layer.
Figure A7. Detection-head module for each ship feature characterisation.
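Since headings wrap around the circle (and Figure 15 shows an explicit ±180-degree ambiguity for small ships), the error metric deserves to be stated precisely. Below is a small hedged sketch, assuming heading error is measured as the smallest absolute angular difference; this is a common convention, not necessarily the paper's exact definition.

```python
# Hedged sketch: circular heading error in degrees, in [0, 180].
import numpy as np

def heading_error(true_deg, pred_deg):
    """Smallest absolute angular difference between two headings."""
    return np.abs((pred_deg - true_deg + 180.0) % 360.0 - 180.0)

print(heading_error(10.0, 350.0))  # 20.0
print(heading_error(0.0, 185.0))   # 175.0
```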
29 pages, 3178 KiB  
Article
Lighting the Path: Raman Spectroscopy’s Journey Through the Microbial Maze
by Markus Salbreiter, Sandra Baaba Frempong, Sabrina Even, Annette Wagenhaus, Sophie Girnus, Petra Rösch and Jürgen Popp
Molecules 2024, 29(24), 5956; https://doi.org/10.3390/molecules29245956 - 17 Dec 2024
Abstract
The rapid and precise identification of microorganisms is essential in environmental science, pharmaceuticals, food safety, and medical diagnostics. Raman spectroscopy, valued for its ability to provide detailed chemical and structural information, has gained significant traction in these fields, especially with the adoption of various excitation wavelengths and tailored optical setups. The choice of wavelength and setup in Raman spectroscopy is influenced by factors such as applicability, cost, and whether bulk or single-cell analysis is performed, each impacting sensitivity and specificity in bacterial detection. In this study, we investigate the potential of different excitation wavelengths for bacterial identification, utilizing a mock culture composed of six bacterial species: three Gram-positive (S. warneri, S. cohnii, and E. malodoratus) and three Gram-negative (P. stutzeri, K. terrigena, and E. coli). To improve bacterial classification, we applied machine learning models to analyze and extract unique spectral features from Raman data. The results indicate that the choice of excitation wavelength significantly influences the bacterial spectra obtained, thereby impacting the accuracy and effectiveness of the subsequent classification results.
Show Figures

Graphical abstract.
Figure 1. Mean UV resonance Raman spectra of S. warneri, S. cohnii, P. stutzeri, K. terrigena, E. malodoratus, and E. coli measured at 229, 244, and 257 nm.
Figure 2. PCA-LDA classification of the six bacteria measured at 229, 244, and 257 nm.
Figure 3. Mean Raman spectra from a Raman microscope with 532 nm excitation and the corresponding PCA-LDA classification of the six species.
Figure 4. Raman spectra at 785 nm excitation of an E. coli colony and agar, before and after baseline correction.
Figure 5. Mean Raman spectra from a Raman fiber probe with 785 nm excitation and the corresponding PCA-LDA classification of the six species.
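The PCA-LDA chain used throughout the classification figures above is compact enough to sketch. Below is a hedged scikit-learn version; the synthetic spectra, component count, and injected class offset are illustrative only.

```python
# Hedged sketch of a PCA-LDA pipeline: compress spectra with PCA,
# then classify species with linear discriminant analysis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 600))  # 120 spectra x 600 wavenumber bins
y = np.repeat(np.arange(6), 20)  # six bacterial species, 20 spectra each
X = X + y[:, None] * 0.05        # inject a weak class-dependent offset

pca_lda = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print("CV accuracy:", cross_val_score(pca_lda, X, y, cv=5).mean())
```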