Search Results (2,028)

Search Parameters:
Keywords = hyper-parameter optimization

38 pages, 3147 KiB  
Article
A Risk-Optimized Framework for Data-Driven IPO Underperformance Prediction in Complex Financial Systems
by Mazin Alahmadi
Systems 2025, 13(3), 179; https://doi.org/10.3390/systems13030179 - 6 Mar 2025
Abstract
Accurate predictions of Initial Public Offerings (IPOs) aftermarket performance are essential for making informed investment decisions in the financial sector. This paper attempts to predict IPO short-term underperformance during a month post-listing. The current research landscape lacks modern models that address the needs of small and imbalanced datasets relevant to emerging markets, as well as the risk preferences of investors. To fill this gap, we present a practical framework utilizing tree-based ensemble learning, including Bagging Classifier (BC), Random Forest (RF), AdaBoost (Ada), Gradient Boosting (GB), XGBoost (XG), Stacking Classifier (SC), and Extra Trees (ET), with Decision Tree (DT) as a base estimator. The framework leverages data-driven methodologies to optimize decision-making in complex financial systems, integrating ANOVA F-value for feature selection, Randomized Search for hyperparameter optimization, and SMOTE for class balance. The framework’s effectiveness is assessed using a hand-collected dataset that includes features from both pre-IPO prospectus and firm-specific financial data. We thoroughly evaluate the results using single-split evaluation and 10-fold cross-validation analysis. For the single-split validation, ET achieves the highest accuracy of 86%, while for the 10-fold validation, BC achieves the highest accuracy of 70%. Additionally, we compare the results of the proposed framework with deep-learning models such as MLP, TabNet, and ANN to assess their effectiveness in handling IPO underperformance predictions. These results demonstrate the framework’s capability to enable robust data-driven decision-making processes in complex and dynamic financial environments, even with limited and imbalanced datasets. The framework also proposes a dynamic methodology named Investor Preference Prediction Framework (IPPF) to match tree-based ensemble models to investors’ risk preferences when predicting IPO underperformance. 
It concludes that different models may be suitable for various risk profiles. For the dataset at hand, ET and Ada are more appropriate for risk-averse investors, while BC is suitable for risk-tolerant investors. The results underscore the framework’s importance in improving IPO underperformance predictions, which can better inform investment strategies and decision-making processes. Full article
(This article belongs to the Special Issue Data-Driven Decision Making for Complex Systems)
Figures:
Figure 1: The Proposed Framework.
Figure 2: ROC curves of all the classifiers during testing.
Figure 3: Comparison with existing studies using the test dataset [37].
Figure 4: Representation of model selection adjusted for investor's risk level for single-split validation.
Figure 5: Representation of model selection adjusted for investor's risk level for 10-fold validation.
Figure 6: Robustness ratio curves for both single-split and 10-fold validations.
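The tuning pipeline the abstract describes (ANOVA F-value feature selection, Randomized Search over tree-based ensembles) can be sketched with scikit-learn. This is a minimal illustration, not the authors' code: the synthetic dataset, parameter ranges, and choice of Extra Trees are assumptions, and the SMOTE step (provided by the separate imbalanced-learn package) is omitted to keep the sketch self-contained.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline

# Synthetic imbalanced stand-in for the hand-collected IPO dataset.
X, y = make_classification(n_samples=300, n_features=20,
                           weights=[0.8, 0.2], random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(score_func=f_classif)),  # ANOVA F-value filter
    ("clf", ExtraTreesClassifier(random_state=0)),
])

# Randomized Search jointly over the feature count and tree hyperparameters;
# the Pipeline "step__param" syntax routes each name to the right step.
search = RandomizedSearchCV(
    pipe,
    param_distributions={
        "select__k": [5, 10, 15],
        "clf__n_estimators": [50, 100, 200],
        "clf__max_depth": [None, 4, 8],
    },
    n_iter=10, cv=5, scoring="accuracy", random_state=0,
)
search.fit(X, y)
```

In the paper's setting, a SMOTE step would sit between the selector and the classifier (via imbalanced-learn's pipeline) so that oversampling happens inside each cross-validation fold.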
20 pages, 3271 KiB  
Article
Fine-Tuned Machine Learning Classifiers for Diagnosing Parkinson’s Disease Using Vocal Characteristics: A Comparative Analysis
by Mehmet Meral, Ferdi Ozbilgin and Fatih Durmus
Diagnostics 2025, 15(5), 645; https://doi.org/10.3390/diagnostics15050645 - 6 Mar 2025
Abstract
Background/Objectives: This paper is significant in highlighting the importance of early and precise diagnosis of Parkinson’s Disease (PD) that affects both motor and non-motor functions to achieve better disease control and patient outcomes. This study seeks to assess the effectiveness of machine learning algorithms optimized to classify PD based on vocal characteristics to serve as a non-invasive and easily accessible diagnostic tool. Methods: This study used a publicly available dataset of vocal samples from 188 people with PD and 64 controls. Acoustic features like baseline characteristics, time-frequency components, Mel Frequency Cepstral Coefficients (MFCCs), and wavelet transform-based metrics were extracted and analyzed. The Chi-Square test was used for feature selection to determine the most important attributes that enhanced the accuracy of the classification. Six different machine learning classifiers, namely SVM, k-NN, DT, NN, Ensemble and Stacking models, were developed and optimized via Bayesian Optimization (BO), Grid Search (GS) and Random Search (RS). Accuracy, precision, recall, F1-score and AUC-ROC were used for evaluation. Results: It has been found that Stacking models, especially those fine-tuned via Grid Search, yielded the best performance with 92.07% accuracy and an F1-score of 0.95. In addition to that, the choice of relevant vocal features, in conjunction with the Chi-Square feature selection method, greatly enhanced the computational efficiency and classification performance. Conclusions: This study highlights the potential of combining advanced feature selection techniques with hyperparameter optimization strategies to enhance machine learning-based PD diagnosis using vocal characteristics. Ensemble models proved particularly effective in handling complex datasets, demonstrating robust diagnostic performance. 
Future research may focus on deep learning approaches and temporal feature integration to further improve diagnostic accuracy and scalability for clinical applications. Full article
Figures:
Figure 1: Gender distribution of samples in the dataset.
Figure 2: Proposed methodology.
Figure 3: Proposed Stacking Learning method.
Figure 4: ROC curves of machine learning classifiers optimized (a) with BO parameters, (b) with RS parameters, and (c) with GS parameters.
Figure 5: Comparison of AUC values across different models and optimization methods for PD classification.
Figure 6: SHAP summary plots for GS-Ensemble model: feature contributions to PD classification for (a) Class 0 and (b) Class 1.
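The pipeline this abstract outlines (Chi-Square feature selection feeding a Stacking model tuned by Grid Search) can be sketched with scikit-learn. The base learners, parameter grid, and synthetic data below are illustrative assumptions rather than the paper's configuration; a MinMaxScaler is added because the chi2 score function requires non-negative inputs.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=250, n_features=15, random_state=1)

# Stacking: base classifiers' predictions become the meta-learner's inputs.
stack = StackingClassifier(
    estimators=[("svm", SVC()),
                ("knn", KNeighborsClassifier()),
                ("dt", DecisionTreeClassifier(random_state=1))],
    final_estimator=LogisticRegression(),
)
pipe = Pipeline([
    ("scale", MinMaxScaler()),      # chi2 needs non-negative features
    ("select", SelectKBest(chi2)),  # Chi-Square feature selection
    ("stack", stack),
])

# Exhaustive Grid Search over the selector and one base learner's C.
grid = GridSearchCV(pipe,
                    {"select__k": [5, 10], "stack__svm__C": [0.1, 1.0]},
                    cv=3)
grid.fit(X, y)
```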
18 pages, 3748 KiB  
Article
A Comparative Study of Energy Management Strategies for Battery-Ultracapacitor Electric Vehicles Based on Different Deep Reinforcement Learning Methods
by Wenna Xu, Hao Huang, Chun Wang, Shuai Xia and Xinmei Gao
Energies 2025, 18(5), 1280; https://doi.org/10.3390/en18051280 - 5 Mar 2025
Abstract
An efficient energy management strategy (EMS) is crucial for the energy-saving and emission-reduction effects of electric vehicles. Research on deep reinforcement learning (DRL)-driven energy management systems (EMSs) has made significant strides in the global automotive industry. However, most scholars study only the impact of a single DRL algorithm on EMS performance, ignoring the potential improvement in optimization objectives that different DRL algorithms can offer under the same benchmark. This paper focuses on the control strategy of hybrid energy storage systems (HESSs) comprising lithium-ion batteries and ultracapacitors. Firstly, an equivalent model of the HESS is established based on dynamic experiments. Secondly, a regulated decision-making framework is constructed by uniformly setting the action space, state space, reward function, and hyperparameters of the agent for different DRL algorithms. To compare the control performances of the HESS under various EMSs, the regulation properties are analyzed with the standard driving cycle condition. Finally, the simulation results indicate that the EMS powered by a deep Q network (DQN) markedly diminishes the detrimental impact of peak current on the battery. Furthermore, the EMS based on a deep deterministic policy gradient (DDPG) reduces energy loss by 28.3%, and the economic efficiency of the EMS based on dynamic programming (DP) is improved to 0.7%. Full article
(This article belongs to the Section E: Electric Vehicles)
Figures:
Figure 1: The topology of HESS.
Figure 2: The model based on an equivalent circuit: (a) Battery; (b) Ultracapacitor.
Figure 3: The results of HPPC and UDDS experiments: (a,b) Battery; (c,d) Ultracapacitor.
Figure 4: The results of precision validation: (a) Battery; (b) Ultracapacitor.
Figure 5: The structure of reinforcement learning.
Figure 6: DQN-based EMS optimization control framework.
Figure 7: DDPG-based EMS optimization control framework.
Figure 8: Driving cycle of UDDS: (a) the velocity of UDDS; (b) the required power of UDDS.
Figure 9: The comparison results of battery and ultracapacitor under different EMSs: (a) battery SOC_bat; (b) ultracapacitor SOC_uc; (c) battery current; (d) ultracapacitor current; (e) battery power; and (f) ultracapacitor power.
Figure 10: Comparison of energy loss in DRL-EMSs.
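The DQN-based EMS mentioned above learns a value function by bootstrapping on the Bellman target r + γ·max_a′ Q(s′, a′). A toy tabular sketch of that update rule follows; the environment, reward shaping, and state space are invented stand-ins for illustration only, not the paper's lithium-ion/ultracapacitor HESS model (a real DQN would replace the table with a neural network and add a replay buffer).

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 6, 3
gamma, alpha, epsilon = 0.95, 0.1, 0.1
Q = np.zeros((n_states, n_actions))  # tabular stand-in for the Q-network

def step(s, a):
    # Invented surrogate dynamics: the reward loosely mimics penalizing
    # large power-split actions (peak battery current) while rewarding
    # returning to a nominal state. Purely illustrative.
    s_next = (s + a) % n_states
    reward = -0.5 * abs(a - 1) + (1.0 if s_next == 0 else 0.0)
    return s_next, reward

s = 0
for _ in range(2000):
    # Epsilon-greedy action selection.
    a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    # DQN-style temporal-difference update toward r + gamma * max_a' Q(s', a').
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```

DDPG, the other method compared in the paper, differs mainly in outputting continuous actions via an actor network instead of the max over a discrete action set.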
19 pages, 959 KiB  
Article
Is Malware Detection Needed for Android TV?
by Gokhan Ozogur, Zeynep Gurkas-Aydin and Mehmet Ali Erturk
Appl. Sci. 2025, 15(5), 2802; https://doi.org/10.3390/app15052802 - 5 Mar 2025
Abstract
The smart TV ecosystem is rapidly expanding, allowing developers to publish their applications on TV markets to provide a wide array of services to TV users. However, this open nature can lead to significant cybersecurity concerns by bringing unauthorized access to home networks or leaking sensitive information. In this study, we focus on the security of Android TVs by developing a lightweight malware detection model specifically for these devices. We collected various Android TV applications from different markets and injected malicious payloads into benign applications to create Android TV malware, which is challenging to find on the market. We proposed a machine learning approach to detecting malware and evaluated our model. We compared the performance of nine classifiers and optimized the hyperparameters. Our findings indicated that the model performed well in rare malware cases on Android TVs. The most successful model classified malware with an F1-Score of 0.9789 in 0.1346 milliseconds per application. Full article
Figures:
Figure 1: Methodologies in our study.
Figure 2: VirusTotal scan results for a benign TV application.
Figure 3: VirusTotal scan results for a malicious TV application.
Figure 4: Average training time of models per application.
Figure 5: Average testing time of the models per application.
Figure 6: Confusion charts for classifiers.
Figure 7: ROC curves of the models.
Figure 8: Results of the models in testing with respect to the F1-Score.
Figure 9: Results of the models in testing with respect to the MCC.
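The model comparison described above (several classifiers ranked by F1-Score on imbalanced data) follows the usual cross-validated pattern, which can be sketched with scikit-learn. The three classifiers and the synthetic imbalanced dataset below are illustrative assumptions, not the nine models or features used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for app features, with malware as the rare class.
X, y = make_classification(n_samples=400, n_features=30,
                           weights=[0.9, 0.1], random_state=0)

models = {
    "rf": RandomForestClassifier(random_state=0),
    "lr": LogisticRegression(max_iter=1000),
    "nb": GaussianNB(),
}

# F1 is preferred over accuracy here because the positive (malware)
# class is rare; 5-fold cross-validation averages out split variance.
scores = {name: cross_val_score(m, X, y, cv=5, scoring="f1").mean()
          for name, m in models.items()}
best = max(scores, key=scores.get)
```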
28 pages, 9704 KiB  
Article
Hybrid Population Based Training–ResNet Framework for Traffic-Related PM2.5 Concentration Classification
by Afaq Khattak, Badr T. Alsulami and Caroline Mongina Matara
Atmosphere 2025, 16(3), 303; https://doi.org/10.3390/atmos16030303 - 5 Mar 2025
Abstract
Traffic emissions serve as one of the most significant sources of atmospheric PM2.5 pollution in developing countries, driven by the prevalence of aging vehicle fleets and the inadequacy of regulatory frameworks to mitigate emissions effectively. This study presents a Hybrid Population-Based Training (PBT)–ResNet framework for classifying traffic-related PM2.5 levels into hazardous exposure (HE) and acceptable exposure (AE), based on the World Health Organization (WHO) guidelines. The framework integrates ResNet architectures (ResNet18, ResNet34, and ResNet50) with PBT-driven hyperparameter optimization, using data from Open-Seneca sensors along the Nairobi Expressway, combined with meteorological and traffic data. First, analysis showed that the PBT-tuned ResNet34 was the most effective model, achieving a precision (0.988), recall (0.971), F1-Score (0.979), Matthews Correlation Coefficient (MCC) of 0.904, Geometric Mean (G-Mean) of 0.962, and Balanced Accuracy (BA) of 0.962, outperforming alternative models, including ResNet18, ResNet34, and baseline approaches such as Feedforward Neural Networks (FNN), Bidirectional Long Short-Term Memory (BiLSTM), Bidirectional Gated Recurrent Unit (BiGRU), and Gene Expression Programming (GEP). Subsequent feature importance analysis using a permutation-based strategy, along with SHAP analysis, revealed that humidity and hourly traffic volume were the most influential features. The findings indicated that medium to high humidity values were associated with an increased likelihood of HE, while medium to high traffic volumes similarly contributed to the occurrence of HE. Full article
(This article belongs to the Special Issue Recent Advances in Mobile Source Emissions (2nd Edition))
Figures:
Figure 1: Proposed Hybrid PBT-ResNet framework for the classification and prediction of PM2.5.
Figure 2: Strategic data collection sites along the Nairobi Expressway.
Figure 3: Observed PM2.5 (µg/m³) at various sites during different time periods along the Nairobi Expressway: (a–c) Sites 1, 2, and 3 between 23 and 29 August 2021; (d–f) Sites 1, 2, and 3 between 13 and 18 December 2021; (g–i) Sites 1, 2, and 3 between 21 and 27 March 2022.
Figure 4: Density plots of different input factors: (a) PM2.5 concentration; (b) relative humidity; (c) hourly traffic volume; (d) wind speed; (e) temperature; (f) mean vehicle speed.
Figure 5: Comparison of PM2.5 class distribution before and after applying SMOTE treatment: (a) class distribution in the original dataset; (b) class distribution in the SMOTE-treated dataset.
Figure 6: Accuracy and loss vs. epochs: (a) ResNet18; (b) ResNet34; (c) ResNet50; (d) FNN; (e) BiGRU; (f) BiLSTM.
Figure 7: Confusion matrix, ROC curve, and Precision–Recall curve for each model: (a–c) ResNet18; (d–f) ResNet34; (g–i) ResNet50; (j–l) FNN; (m–o) BiGRU; (p–r) BiLSTM; (s–u) GEP.
Figure 8: ResNet34 model interpretation: (a) permutation-based feature importance; (b) SHAP summary plot.
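Population-Based Training, the tuning method used above, alternates ordinary training with an exploit step (poorly performing workers copy the hyperparameters of well-performing ones) and an explore step (the copied hyperparameters are perturbed). A toy sketch of that loop on a synthetic one-dimensional objective follows; the objective, population size, and perturbation factors are invented for illustration and have nothing to do with the paper's ResNet training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for validation performance: best learning rate near 0.3.
def score(lr):
    return -(lr - 0.3) ** 2

pop = rng.uniform(0.01, 1.0, size=8)  # population of candidate learning rates
for _ in range(20):
    fitness = np.array([score(lr) for lr in pop])
    order = np.argsort(fitness)
    # Exploit: the two worst workers copy hyperparameters from the two best.
    pop[order[:2]] = pop[order[-2:]]
    # Explore: perturb the copies multiplicatively so the search continues.
    pop[order[:2]] *= rng.choice([0.8, 1.2], size=2)

best_lr = pop[int(np.argmax([score(lr) for lr in pop]))]
```

In real PBT (e.g. for the ResNet models here), each population member trains for a while between exploit/explore rounds, so hyperparameters can change mid-training rather than being fixed per run.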
31 pages, 4101 KiB  
Article
Fingerprint Classification Based on Multilayer Extreme Learning Machines
by Axel Quinteros and David Zabala-Blanco
Appl. Sci. 2025, 15(5), 2793; https://doi.org/10.3390/app15052793 - 5 Mar 2025
Abstract
Fingerprint recognition is one of the most effective and widely adopted methods for person identification. However, the computational time required for the querying of large databases is excessive. To address this, preprocessing steps such as classification are necessary to speed up the response time to a query. Fingerprints are typically categorized into five classes, though this classification is unbalanced. While advanced classification algorithms, including support vector machines (SVMs), multilayer perceptrons (MLPs), and convolutional neural networks (CNNs), have demonstrated near-perfect accuracy (approaching 100%), their high training times limit their widespread applicability across institutions. In this study, we introduce, for the first time, the use of a multilayer extreme learning machine (M-ELM) for fingerprint classification, aiming to improve training efficiency. A comparative analysis is conducted with CNNs and unbalanced extreme learning machines (W-ELMs), as these represent the most influential methodologies in the literature. The tests utilize a database generated by SFINGE software, which simulates realistic fingerprint distributions, with datasets comprising hundreds of thousands of samples. To optimize and simplify the M-ELM, widely recognized descriptors in the field—Capelli02, Liu10, and Hong08—are used as input features. This effectively reduces dimensionality while preserving the representativeness of the fingerprint information. A brute-force heuristic optimization approach is applied to determine the hyperparameters that maximize classification accuracy across different M-ELM configurations while avoiding excessive training times. A comparison is made with the aforementioned approaches in terms of accuracy, penetration rate, and computational cost. The results demonstrate that a two-layer hidden ELM achieves superior classification of both majority and minority fingerprint classes with remarkable computational efficiency. Full article
Figures:
Figure 1: General architecture of an original ELM.
Figure 2: Representative structure of a multilayer ELM.
Figure 3: General architecture of an ELM-AE. The colors indicate the type of neuron, following the same scheme as the original ELM.
Figure 4: Samples concerning fingerprint image quality: (a) default; (b) HQNoPert; (c) VQAndPert.
Figure 5: Accuracy in the training and validation phases vs. the number of hidden neurons of the original ELM, with Capelli02 as the descriptor.
Figure 6: Accuracy vs. the number of hidden neurons of the original ELM in the training and validation phases for the Hong08 descriptor.
Figure 7: Accuracy vs. the number of hidden neurons of the original ELM in the training and validation phases for the Liu10 descriptor.
Figure 8: Accuracy vs. the number of neurons of the two-hidden-layer ELM for the Capelli02 descriptor and the (a) default, (b) HQNoPert, and (c) VQAndPert databases.
Figure 9: Accuracy vs. the number of neurons of the two-hidden-layer ELM for the Hong08 descriptor and the (a) default, (b) HQNoPert, and (c) VQAndPert databases.
Figure 10: Accuracy vs. the number of neurons of the two-hidden-layer ELM for the Liu10 descriptor and the (a) default, (b) HQNoPert, and (c) VQAndPert databases.
Figure 11: Accuracy vs. the number of neurons of the three-hidden-layer ELM for the Capelli02 descriptor and the (a) default, (b) HQNoPert, and (c) VQAndPert databases.
Figure 12: Accuracy vs. the number of neurons of the three-hidden-layer ELM for the Hong08 descriptor and the (a) default, (b) HQNoPert, and (c) VQAndPert databases.
Figure 13: Accuracy vs. the number of neurons of the three-hidden-layer ELM for the Liu10 descriptor and the (a) default, (b) HQNoPert, and (c) VQAndPert databases.
Figure 14: Confusion matrices for the ELM-M2 and ELM-M3 models.
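The training efficiency claimed above comes from how an extreme learning machine works: the hidden-layer weights are random and never trained, and only the output weights are solved in closed form by least squares. A minimal single-hidden-layer sketch in NumPy on invented two-class data follows (the paper's multilayer variant stacks ELM autoencoders on top of this idea, which is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented, linearly separable two-class data (stand-in for descriptors
# such as Capelli02/Hong08/Liu10 feature vectors).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
T = np.stack([1 - y, y], axis=1)       # one-hot targets

n_hidden = 50
W = rng.normal(size=(4, n_hidden))     # random input weights, never trained
b = rng.normal(size=n_hidden)          # random biases, never trained
H = np.tanh(X @ W + b)                 # hidden-layer activations

# The only "training": output weights via the Moore-Penrose pseudoinverse,
# i.e. the least-squares solution of H @ beta = T.
beta = np.linalg.pinv(H) @ T

pred = np.argmax(H @ beta, axis=1)
acc = float((pred == y).mean())        # training accuracy
```

Because the expensive step is a single linear solve rather than iterative backpropagation, training scales far better than an MLP or CNN of similar width, which is the trade-off the paper exploits.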
25 pages, 20763 KiB  
Article
Research on Maneuvering Motion Prediction for Intelligent Ships Based on LSTM-Multi-Head Attention Model
by Dongyu Liu, Xiaopeng Gao, Cong Huo and Wentao Su
J. Mar. Sci. Eng. 2025, 13(3), 503; https://doi.org/10.3390/jmse13030503 - 5 Mar 2025
Abstract
In complex marine environments, accurate prediction of maneuvering motion is crucial for the precise control of intelligent ships. This study aims to enhance the predictive capabilities of maneuvering motion for intelligent ships in such environments. We propose a novel maneuvering motion prediction method based on Long Short-Term Memory (LSTM) and Multi-Head Attention Mechanisms (MHAM). To construct a foundational dataset, we integrate Computational Fluid Dynamics (CFD) numerical simulation technology to develop a mathematical model of actual ship maneuvering motions influenced by wind, waves, and currents. We simulate typical operating conditions to acquire relevant data. To emulate real marine environmental noise and data loss phenomena, we introduce Ornstein–Uhlenbeck (OU) noise and random occlusion noise into the data and apply the MaxAbsScaler method for dataset normalization. Subsequently, we develop a black-box model for intelligent ship maneuvering motion prediction based on LSTM networks and Multi-Head Attention Mechanisms. We conduct a comprehensive analysis and discussion of the model structure and hyperparameters, iteratively optimize the model, and compare the optimized model with standalone LSTM and MHAM approaches. Finally, we perform generalization testing on the optimized motion prediction model using test sets for zigzag and turning conditions. The results demonstrate that our proposed model significantly improves the accuracy of ship maneuvering predictions compared to standalone LSTM and MHAM algorithms and exhibits superior generalization performance. Full article
Figures:
Figure 1: Definition of the ship's coordinate system of motion.
Figure 2: Turning motion data collection: (a) u in still water; (b) v in still water; (c) r in still water; (d) u in wave environment; (e) v in wave environment; (f) r in wave environment.
Figure 3: Zigzag motion data collection: (a) u in still water; (b) v in still water; (c) r in still water; (d) u in wave environment; (e) v in wave environment; (f) r in wave environment.
Figure 4: Training, validation, and testing sets.
Figure 5: LSTM model unit structure.
Figure 6: Multi-Head Attention Mechanism structure.
Figure 7: LSTM-Multi-Head Attention-1 model framework.
Figure 8: LSTM-Multi-Head Attention-2 model framework.
Figure 9: LSTM-Multi-Head Attention-3 model framework.
Figure 10: Forecasting effects of the proposed models.
Figure 11: RMSE and loss curves of the proposed models.
Figure 12: Forecasting effects of models with different regularization methods.
Figure 13: RMSE and loss curves with different regularization methods.
Figure 14: Forecasting effects of models with different numbers of heads.
Figure 15: RMSE and loss curves with different numbers of heads.
Figure 16: Analysis of the impact of the number of neurons on model performance.
Figure 17: RMSE and loss curves with different numbers of neurons.
Figure 18: Forecasting effects of models with different training batch sizes.
Figure 19: RMSE and loss curves with different training batch sizes.
Figure 20: Analysis of the impact of sliding window size.
Figure 21: RMSE and loss curves with different sliding window sizes.
Figure 22: Comparison of prediction effects among LSTM, GRU, Multi-Head Attention, Transformer, and LSTM-Multi-Head Attention-2 models.
Figure 23: RMSE and loss curves of LSTM, GRU, Multi-Head Attention, Transformer, and LSTM-Multi-Head Attention-2 models.
Figure 24: Prediction of u, v, r, and heading for an 8-degree turning movement.
Figure 25: Prediction of u, v, r, and heading for a 15-degree turning movement.
Figure 26: Prediction of trajectory for 8-degree and 15-degree turning movements.
Figure 27: Prediction of u, v, r, and heading for 5°/5° zigzag.
Figure 28: The optimized forecasting effect.
15 pages, 4516 KiB  
Article
Optimization of Deep Learning Models for Enhanced Respiratory Signal Estimation Using Wearable Sensors
by Jiseon Kim and Jooyong Kim
Processes 2025, 13(3), 747; https://doi.org/10.3390/pr13030747 - 4 Mar 2025
Viewed by 192
Abstract
Measuring breathing changes during exercise is crucial for healthcare applications. This study used wearable capacitive sensors to capture abdominal motion and extract breathing patterns. Data preprocessing methods included filtering and normalization, followed by feature extraction for classification. Despite the growing interest in respiratory monitoring, research on deep learning-based analysis of breathing data remains limited. To address this research gap, we optimized CNN and ResNet models through systematic hyperparameter tuning, enhancing classification accuracy and robustness. The optimized ResNet outperformed the CNN in accuracy (0.96 vs. 0.87) and precision for Class 4 (0.8 vs. 0.6), demonstrating its capability to capture complex breathing patterns. These findings highlight the importance of hyperparameter optimization in respiratory monitoring and suggest ResNet as a promising tool for real-time assessment in medical applications. Full article
(This article belongs to the Special Issue Smart Wearable Technology: Thermal Management and Energy Applications)
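The systematic hyperparameter tuning described above can be sketched as a plain grid search over candidate configurations. The search space and the stand-in scoring function below are hypothetical illustrations, not the paper's actual grids or training loop:

```python
from itertools import product

# Hypothetical search space; the paper's actual grids are not given here.
search_space = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [16, 32],
    "num_filters": [16, 32, 64],
}

def validation_accuracy(cfg):
    """Stand-in for training a CNN/ResNet and returning validation accuracy.
    This synthetic score peaks at lr=1e-3, batch=32, filters=32."""
    score = 1.0
    score -= abs(cfg["learning_rate"] - 1e-3) * 100
    score -= abs(cfg["batch_size"] - 32) / 64
    score -= abs(cfg["num_filters"] - 32) / 128
    return score

def grid_search(space, score_fn):
    """Exhaustively evaluate every combination and keep the best one."""
    keys = list(space)
    best_cfg, best_score = None, float("-inf")
    for values in product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        s = score_fn(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

best_cfg, best_score = grid_search(search_space, validation_accuracy)
```

In practice `validation_accuracy` would train the CNN or ResNet with the given configuration and report its accuracy on a held-out validation split.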
Show Figures

Figure 1: A schematic of the proposed work.
Figure 2: Abdominal movement in response to breathing [23].
Figure 3: (a) Wearable sensors in the shape of a finished garment [23]; (b) the effect of single-ply and triple-ply thread length on the resistance value.
Figure 4: (a) Schematic of LCR meter [23]; (b) measurement of wearable sensors.
Figure 5: (a) CNN architecture; (b) ResNet architecture.
Figure 6: Breathing data under different conditions: (a) resting; (b) low intensity; (c) moderate intensity; and (d) high intensity.
Figure 7: (a) Training results window; (b) confusion matrix (CNN-1).
Figure 8: The results of O-CNN: (a) validation accuracy; (b) training loss.
Figure 9: (a) Training results window; (b) confusion matrix (ResNet-1).
Figure 10: The results of O-ResNet: (a) validation accuracy; (b) training loss.
Figure 11: Confusion matrix of O-CNN.
Figure 12: Confusion matrix of O-ResNet.
18 pages, 5447 KiB  
Article
Coupling Interpretable Feature Selection with Machine Learning for Evapotranspiration Gap Filling
by Lizheng Wang, Lixin Dong and Qiutong Zhang
Water 2025, 17(5), 748; https://doi.org/10.3390/w17050748 - 4 Mar 2025
Viewed by 207
Abstract
Evapotranspiration (ET) plays a pivotal role in linking the water and carbon cycles between the land and atmosphere, with latent heat flux (LE) representing the energy manifestation of ET. Due to adverse meteorological conditions, data quality filtering, and instrument malfunctions, LE measured by the eddy covariance (EC) is temporally discontinuous at the hourly and daily scales. Machine-learning (ML) models effectively capture the complex relationships between LE and its influencing factors, demonstrating superior performance in filling LE data gaps. However, the selection of features in ML models often relies on empirical knowledge, with identical features frequently used across stations, leading to reduced modeling accuracy. Therefore, this study proposes an LE gap-filling model (SHAP-AWF-BO-LightGBM) that combines the Shapley additive explanations adaptive weighted fusion method with the Bayesian optimization light gradient-boosting machine algorithm. This is tested using data from three stations in the Heihe River Basin, China, representing different plant functional types. For 30 min interval missing LE data, the RMSE ranges from 17.90 W/m2 to 20.17 W/m2, while the MAE ranges from 10.74 W/m2 to 14.04 W/m2. The SHAP-AWF method is used for feature selection. First, the importance of SHAP features from multiple ensemble-learning models is adaptively weighted as the basis for feature input into the BO-LightGBM algorithm, which enhances the interpretability and transparency of the model. Second, data redundancy and the cost of collecting other feature data during model training are reduced, improving model calculation efficiency (reducing the initial number of features of different stations from 42, 46, and 48 to 10, 15, and 8, respectively). 
Third, while preserving accuracy as far as possible, the gap-filling ratio for missing LE data at each station is improved, and the adaptability to using only automatic weather station observations is enhanced (improvements range from 7.46% to 11.67%). Simultaneously, the hyperparameters of the LightGBM algorithm are optimized using a Bayesian algorithm, further enhancing the accuracy of the model. This study provides a new approach and perspective for filling missing LE in EC measurements. Full article
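The SHAP adaptive weighted fusion (SHAP-AWF) idea, averaging per-model SHAP importances weighted by each model's skill before selecting features, can be sketched as follows. The feature names, importance values, and skill scores are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical mean-|SHAP| importances from three ensemble models
# over five candidate features; the paper's actual values differ.
features = ["Rn", "Ta", "RH", "Ws", "DOY"]
shap_imp = np.array([
    [0.40, 0.25, 0.20, 0.05, 0.10],   # model A (e.g., RF)
    [0.35, 0.30, 0.15, 0.10, 0.10],   # model B (e.g., XGBoost)
    [0.45, 0.20, 0.20, 0.05, 0.10],   # model C (e.g., LightGBM)
])
model_skill = np.array([0.80, 0.90, 0.85])   # e.g., validation R^2 per model

# Adaptive weighting: each model's vote is proportional to its skill,
# after normalizing every model's importance vector to sum to one.
weights = model_skill / model_skill.sum()
fused = weights @ (shap_imp / shap_imp.sum(axis=1, keepdims=True))

# Keep the k features with the largest fused importance.
k = 3
top_k = [features[i] for i in np.argsort(fused)[::-1][:k]]
```

The selected `top_k` features would then be the inputs to the downstream BO-LightGBM model.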
Show Figures

Figure 1: Location of eddy covariance instrument and automatic weather station in the study area.
Figure 2: Feature selection based on SHAP adaptive weighted fusion (SHAP-AWF).
Figure 3: Latent heat flux gap-filling technology route.
Figure 4: The accuracy of the BO-LightGBM model on the test set under different feature combinations at different stations (MAE and RMSE in W/m²).
Figure 5: Test set accuracy evaluation of the SHAP-AWF-BO-LightGBM model at different stations and seasons (spring is March to May, summer is June to August, autumn is September to November, and winter is December, January, and February; (a), (b), and (c) show different evaluation metrics).
Figure 6: Variation trend of evapotranspiration at different stations on an annual scale ((a) ET of different stations in different years; (b) average annual ET of different stations).
Figure 7: Contribution of the top 10 features at different stations (energy includes Rn, DSR, DLR, USR, ULR, G, etc.; temperature includes Ta, IRT, Ts, etc.; moisture includes Ms, RH, etc.; temporal information includes DOY, Month, etc. Other features also include energy, temperature, and moisture measured at different depths or heights; see the remainder of Table 2 minus Table 4 for details. The brackets in the station names indicate the vegetation function type and the initial number of features).
22 pages, 286 KiB  
Article
SHAP Informed Neural Network
by Jarrod Graham and Victor S. Sheng
Mathematics 2025, 13(5), 849; https://doi.org/10.3390/math13050849 - 4 Mar 2025
Viewed by 143
Abstract
In the context of neural network optimization, this study explores the performance and computational efficiency of learning rate adjustment strategies applied with Adam and SGD optimizers. Methods evaluated include exponential annealing, step decay, and SHAP-informed adjustments across three datasets: Breast Cancer, Diabetes, and California Housing. The SHAP-informed adjustments integrate feature importance metrics derived from cooperative game theory, either scaling the global learning rate or directly modifying gradients of first-layer parameters. A comprehensive grid search was conducted to optimize the hyperparameters, and performance was assessed using metrics such as test loss, RMSE, R2 score, accuracy, and training time. Results revealed that while step decay consistently delivered strong performance across datasets, SHAP-informed methods often demonstrated even higher accuracy and generalization, such as SHAP achieving the lowest test loss and RMSE on the California Housing dataset. However, the computational overhead of SHAP-based approaches was significant, particularly in targeted gradient adjustments. This study highlights the potential of SHAP-informed methods to guide optimization processes through feature-level insights, offering advantages in data with complex feature interactions. Despite computational challenges, these methods provide a foundation for exploring how feature importance can inform neural network training, presenting promising directions for future research on scalable and efficient optimization techniques. Full article
(This article belongs to the Special Issue Neural Networks and Their Applications)
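One way to read "SHAP-informed adjustments ... directly modifying gradients of first-layer parameters" is to rescale each input feature's gradient column by its normalized SHAP importance. The sketch below makes that idea concrete with hypothetical shapes and importance values; it is not the authors' exact update rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 input features, 3 hidden units.
W1 = rng.normal(size=(3, 4))            # first-layer weights (hidden x input)
grad_W1 = rng.normal(size=(3, 4))       # gradient from backprop
shap_importance = np.array([0.5, 0.3, 0.15, 0.05])  # per input feature

# Scale each input-feature column of the first-layer gradient so that
# more important features receive proportionally larger updates.
# Dividing by the mean keeps the average scale factor at 1.
scale = shap_importance / shap_importance.mean()
adjusted_grad = grad_W1 * scale[None, :]

lr = 0.01
W1_new = W1 - lr * adjusted_grad        # standard SGD step on scaled gradient
```

The alternative strategy mentioned in the abstract, scaling the global learning rate, would instead multiply `lr` by a single SHAP-derived factor and leave the gradients untouched.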
18 pages, 6652 KiB  
Article
Tensile Strength Predictive Modeling of Natural-Fiber-Reinforced Recycled Aggregate Concrete Using Explainable Gradient Boosting Models
by Celal Cakiroglu, Farnaz Ahadian, Gebrail Bekdaş and Zong Woo Geem
J. Compos. Sci. 2025, 9(3), 119; https://doi.org/10.3390/jcs9030119 - 4 Mar 2025
Viewed by 70
Abstract
Natural fiber composites have gained significant attention in recent years due to their environmental benefits and unique mechanical properties. These materials combine natural fibers with polymer matrices to create sustainable alternatives to traditional synthetic composites. In addition to natural fiber reinforcement, the usage of recycled aggregates in concrete has been proposed as a remedy to combat the rapidly increasing amount of construction and demolition waste in recent years. However, the accurate prediction of the structural performance metrics, such as tensile strength, remains a challenge for concrete composites reinforced with natural fibers and containing recycled aggregates. This study aims to develop predictive models of natural-fiber-reinforced recycled aggregate concrete based on experimental results collected from the literature. The models have been trained on a dataset consisting of 482 data points. Each data point consists of the amounts of cement, fine and coarse aggregate, water-to-binder ratio, percentages of recycled coarse aggregate and natural fiber, and the fiber length. The output feature of the dataset is the splitting tensile strength of the concrete. Extreme gradient boosting (XGBoost), light gradient boosting machine (LightGBM) and extra trees regressor models were trained to predict the tensile strength of the specimens. For optimum performance, the hyperparameters of these models were optimized using the blended search strategy (BlendSearch) and cost-related frugal optimization (CFO). The tensile strength could be predicted with a coefficient of determination greater than 0.95 by the XGBoost model. To make the predictive models accessible, an online graphical user interface was also made available on the Streamlit platform. A feature importance analysis was carried out using the Shapley additive explanations (SHAP) approach. Full article
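Gradient boosting, the family behind XGBoost and LightGBM, fits each new weak learner to the residuals of the current ensemble. A minimal sketch with regression stumps on synthetic 1-D data (the data and models are illustrative stand-ins, not the paper's dataset):

```python
import numpy as np

# Toy 1-D regression: x = fiber fraction proxy, y = tensile strength proxy.
x = np.linspace(0.0, 2.0, 40)
y = 3.0 + 1.5 * (x > 1.0)          # a step the stumps can recover exactly

def fit_stump(x, residual):
    """Best single-split regression stump by squared error."""
    best = None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= t, left.mean(), right.mean())
        sse = ((residual - pred) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    return best[1:]  # (threshold, left_value, right_value)

def boost(x, y, n_rounds=20, lr=0.5):
    """Start from the mean; each round fits a stump to the residuals."""
    pred = np.full_like(y, y.mean())
    stumps = []
    for _ in range(n_rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= t, lv, rv)
        stumps.append((t, lv, rv))
    return pred, stumps

pred, stumps = boost(x, y)
rmse = float(np.sqrt(((y - pred) ** 2).mean()))
```

Real libraries add regularization, column subsampling, and histogram-based split finding on top of this residual-fitting loop.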
Show Figures

Figure 1: (a) Coir [30], (b) ramie [16], (c) jute [31] fibers.
Figure 2: Distribution of the input and output features.
Figure 3: Parallel coordinates plot of the dataset.
Figure 4: Isolation of data points.
Figure 5: Predictive model development and interpretation.
Figure 6: Explained variance ratios.
Figure 7: Outliers for (a) contamination = 0.1, (b) contamination = 0.06, (c) contamination = 0.02, (d) contamination = 0.01.
Figure 8: Model performances with respect to contamination.
Figure 9: Extra trees model performance fluctuations on the test set.
Figure 10: Hyperparameter optimization steps.
Figure 11: Predicted and true values for (a) extra trees, (b) LightGBM, (c) XGBoost.
Figure 12: Online graphical user interface.
Figure 13: SHAP feature importances.
Figure 14: SHAP summary plot.
Figure 15: SHAP heatmap plot.
18 pages, 992 KiB  
Article
Baby Cry Classification Using Structure-Tuned Artificial Neural Networks with Data Augmentation and MFCC Features
by Tayyip Ozcan and Hafize Gungor
Appl. Sci. 2025, 15(5), 2648; https://doi.org/10.3390/app15052648 - 1 Mar 2025
Viewed by 307
Abstract
Babies express their needs, such as hunger, discomfort, or sleeplessness, by crying. However, understanding these cries correctly can be challenging for parents. This can delay the baby’s needs, increase parents’ stress levels, and negatively affect the baby’s development. In this paper, an integrated system for the classification of baby sounds is proposed. The proposed method includes data augmentation, feature extraction, hyperparameter tuning, and model training steps. In the first step, various data augmentation techniques were applied to increase the training data’s diversity and strengthen the model’s generalization capacity. The MFCC (Mel-Frequency Cepstral Coefficients) method was used in the second step to extract meaningful and distinctive features from the sound data. MFCC represents sound signals based on the frequencies the human ear perceives and provides a strong basis for classification. The obtained features were classified with an artificial neural network (ANN) model with optimized hyperparameters. The hyperparameter optimization of the model was performed using the grid search algorithm, and the most appropriate parameters were determined. The training, validation, and test data sets were separated at 75%, 10%, and 15% ratios, respectively. The model’s performance was tested on mixed sounds. The test results were analyzed, and the proposed method showed the highest performance, with a 90% accuracy rate. In the comparison study with an artificial neural network (ANN) on the Donate a Cry data set, the F1 score was reported as 46.99% and the test accuracy as 85.93%. In this paper, additional techniques such as data augmentation, hyperparameter tuning, and MFCC feature extraction allowed the model accuracy to reach 90%. The proposed method offers an effective solution for classifying baby sounds and brings a new approach to this field. Full article
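The MFCC feature extraction used above follows the standard chain: framing, windowing, power spectrum, mel filterbank, log compression, and a DCT. A compact NumPy sketch of that chain (simplified, with no pre-emphasis or liftering, so it is not a drop-in for library implementations):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=8000, n_fft=256, hop=128, n_mels=20, n_ceps=13):
    """Minimal MFCC sketch: frame -> window -> power spectrum ->
    mel filterbank -> log -> DCT-II."""
    # Frame the signal and apply a Hamming window.
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # Triangular mel filterbank with edges evenly spaced on the mel scale.
    mels = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fbank[m - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[m - 1, k] = (r - k) / max(r - c, 1)

    log_mel = np.log(power @ fbank.T + 1e-10)

    # DCT-II to decorrelate the log filterbank energies.
    n = np.arange(n_mels)
    dct = np.cos(np.pi / n_mels * (n[None, :] + 0.5) * np.arange(n_ceps)[:, None])
    return log_mel @ dct.T

sig = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)  # 1 s, 440 Hz tone
feats = mfcc(sig)
```

Each row of `feats` is one frame's cepstral coefficient vector, which would be the input representation for the ANN classifier.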
Show Figures

Figure 1: Mel spectrogram example for each class.
Figure 2: Block diagram of the MFCC computation.
Figure 3: The architecture of the proposed method.
Figure 4: Structure-tuned ANN for the proposed method.
Figure 5: Confusion matrix for the proposed method.
Figure 6: Learning curves for ANN.
Figure 7: ROC curve for ANN.
27 pages, 4500 KiB  
Article
Low Capillary Elastic Flow Model Optimization Using the Lattice Boltzmann Method and Non-Dominated Sorting Genetic Algorithm
by Yaqi Hou, Wei Zhang, Jiahua Hu, Feiyu Gao and Xuexue Zong
Micromachines 2025, 16(3), 298; https://doi.org/10.3390/mi16030298 - 28 Feb 2025
Viewed by 297
Abstract
In simulations of elastic flow using the lattice Boltzmann method (LBM), the steady-state behavior of the flow at low capillary numbers is typically poor and prone to the formation of bubbles with inhomogeneous lengths. This phenomenon undermines the precise control of heat transfer, mass transfer, and reactions within microchannels and microreactors. This paper establishes an LBM multiphase flow model enhanced by machine learning. The hyperparameters of the machine learning model are optimized using the particle swarm algorithm. In contrast, the non-dominated sorting genetic algorithm (NSGA-II) is incorporated to optimize bubble lengths and stability. This results in a coupled multiphase flow numerical simulation model that integrates LBM, machine learning, and the particle swarm algorithm. Using this model, we investigate the influence of elastic flow parameters on bubble length and stability in a T-shaped microchannel. The simulation results demonstrate that the proposed LBM multiphase flow model can effectively predict bubble elongation rates under complex conditions. Furthermore, multi-objective optimization determines the optimal gas–liquid two-phase inlet flow rate relationship, significantly mitigating elastic flow instability at low capillary numbers. This approach enhances the controllability of the elastic flow process and improves the efficiency of mass and heat transfer. Full article
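The particle swarm step at the core of the hyperparameter search can be sketched in a few lines. The inertia and acceleration coefficients below are conventional defaults, and the sphere-like objective is a stand-in for the real simulation-driven fitness:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, seed=0,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()                # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive pull (pbest) + social pull (gbest).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# Sanity check on a convex surrogate of a hyperparameter landscape.
best_x, best_f = pso(lambda p: float(((p - 1.0) ** 2).sum()), dim=3)
```

In the paper's setting, `objective` would wrap an LBM simulation (or its machine-learned surrogate) and return the quantity being minimized.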
Show Figures

Figure 1: Microchannel model.
Figure 2: Pressure distribution along the droplet centerline.
Figure 3: Verification of Laplace's law.
Figure 4: Contact angle verification.
Figure 5: Verification of thermodynamic consistency.
Figure 6: Bubble formation process (red: liquid phase; blue: gas phase): (a–c) gradual expansion of the gas phase; (d–f) collapse of the bubble within the main channel; (g,h) bubble break-up at the junction.
Figure 7: Machine learning modeling flowchart.
Figure 8: Schematic diagram of the length of the bubbles (red: liquid phase; blue: gas phase).
Figure 9: Algorithm flowchart.
Figure 10: Correlation between predicted and true values of the training set using three different models: (a) PSO-SVR, R² = 0.62; (b) PSO-BP, R² = 0.80; (c) PSO-RF, R² = 0.85.
Figure 11: Correlation between predicted and true values of the test set using three different models: (a) PSO-SVR, R² = 0.61; (b) PSO-BP, R² = 0.81; (c) PSO-RF, R² = 0.84.
Figure 12: Variation of bubble length with stochastic model parameters: (a) first, (b) second, and (c) third set of simulation results (red: liquid phase; blue: gas phase).
Figure 13: Comparison of the performance of the three machine learning models.
Figure 14: Pareto frontier curve.
Figure 15: Optimal solution set validation: (a) fourth and (b) fifth sets of simulation results; (c,d) non-optimized results (red: liquid phase; blue: gas phase).
Figure 16: Variation of bubble length with stochastic model parameters (red: liquid phase; blue: gas phase).
17 pages, 1928 KiB  
Article
Enhancing Travel Time Prediction for Intelligent Transportation Systems: A High-Resolution Origin–Destination-Based Approach with Multi-Dimensional Features
by Chaoyang Shi, Waner Zou, Yafei Wang, Zhewen Zhu, Tengda Chen, Yunfei Zhang and Ni Wang
Sustainability 2025, 17(5), 2111; https://doi.org/10.3390/su17052111 - 28 Feb 2025
Viewed by 239
Abstract
Accurate travel time prediction is essential for improving urban mobility, traffic management, and ride-hailing services. Traditional link- and path-based models face limitations due to data sparsity, segmentation errors, and computational inefficiencies. This study introduces an origin–destination (OD)-based travel time prediction framework leveraging high-resolution ride-hailing trajectory data. Unlike previous works, our approach systematically integrates spatiotemporal, quantified weather metrics and driver behavior clustering to enhance predictive accuracy. The proposed model employs a Back Propagation Neural Network (BPNN), which dynamically adjusts hyperparameters to improve generalization and mitigate overfitting. Empirical validation using ride-hailing data from Xi’an, China, demonstrates superior predictive performance, particularly for medium-range trips, achieving an RMSE of 202.89 s and a MAPE of 16.52%. Comprehensive ablation studies highlight the incremental benefits of incorporating spatiotemporal, weather, and behavioral features, showcasing their contributions to reducing prediction errors. While the model excels in moderate-speed scenarios, it exhibits limitations in short trips and low-speed cases due to data imbalance. Future research will enhance model robustness through data augmentation, real-time traffic integration, and scenario-specific adaptations. This study provides a scalable and adaptable travel time prediction framework, offering valuable insights for urban traffic management, dynamic route optimization, and sustainable mobility solutions within ITS. Full article
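A Back Propagation Neural Network of the kind used here is a feed-forward network trained by gradient descent on a squared-error loss. A minimal one-hidden-layer sketch on synthetic OD-style features (the feature semantics, target scale, and network size are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy surrogate of normalized OD features -> normalized travel time.
X = rng.uniform(0, 1, (256, 3))        # e.g., distance, hour, weather index
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2]

# One hidden layer with tanh units, trained by plain full-batch backprop.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)           # forward pass
    pred = (h @ W2 + b2).ravel()
    err = (pred - y) / len(y)          # d(mean squared error / 2)/d(pred)
    # Backward pass.
    gW2 = h.T @ err[:, None]
    gb2 = err.sum(keepdims=True)
    dh = err[:, None] @ W2.T * (1 - h ** 2)
    gW1 = X.T @ dh
    gb1 = dh.sum(axis=0)
    # Gradient descent updates.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse = float(np.sqrt(((pred - y) ** 2).mean()))
```

The paper's model additionally tunes hyperparameters such as hidden-layer width dynamically; this sketch fixes them for brevity.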
Show Figures

Figure 1: Spatial information of the study area: (a) location of Xi'an; (b) the central area of Xi'an; (c) the road network of the study area.
Figure 2: Overview of the proposed methodology.
Figure 3: Structure of the BPNN model.
Figure 4: Performance of the proposed method: (a) CDF of the absolute percentage error; (b) influence of travel distance on MAPE; (c) influence of travel time on MAPE; (d) influence of travel speed on MAPE.
Figure 5: MAPE distribution with travel time and distance.
21 pages, 2600 KiB  
Article
A Particle Swarm Optimization-Based Ensemble Broad Learning System for Intelligent Fault Diagnosis in Safety-Critical Energy Systems with High-Dimensional Small Samples
by Jiasheng Yan, Yang Sui and Tao Dai
Mathematics 2025, 13(5), 797; https://doi.org/10.3390/math13050797 - 27 Feb 2025
Viewed by 213
Abstract
Intelligent fault diagnosis (IFD) plays a crucial role in reducing maintenance costs and enhancing the reliability of safety-critical energy systems (SCESs). In recent years, deep learning-based IFD methods have achieved high fault diagnosis accuracy by extracting implicit higher-order correlations between features. However, the excessively long training time of deep learning models conflicts with the requirements of real-time analysis for IFD, hindering their further application in practical industrial environments. To address the aforementioned challenge, this paper proposes an innovative IFD method for SCESs that combines the particle swarm optimization (PSO) algorithm and the ensemble broad learning system (EBLS). Specifically, the broad learning system (BLS), known for its low time complexity and high classification accuracy, is adopted as an alternative to deep learning for fault diagnosis in SCESs. Furthermore, EBLS is designed to enhance model stability and classification accuracy with high-dimensional small samples by incorporating the random forest (RF) algorithm and an ensemble strategy into the traditional BLS framework. In order to reduce the computational cost of the EBLS, which is constrained by the selection of its hyperparameters, the PSO algorithm is employed to optimize the hyperparameters of the EBLS. Finally, the model is validated through simulated data from a complex nuclear power plant (NPP). Numerical experiments reveal that the proposed method significantly improved the diagnostic efficiency while maintaining high accuracy. In summary, the proposed approach shows great promise for boosting the capabilities of IFD models for SCESs. Full article
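A broad learning system avoids iterative deep training by mapping inputs through random feature and enhancement nodes and solving the output weights in closed form with ridge regression. A minimal sketch on toy fault-classification data (node counts, regularization, and the data itself are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def bls_fit(X, Y, n_feature=40, n_enhance=60, reg=1e-3):
    """Broad learning system sketch: random linear feature nodes,
    tanh enhancement nodes, and a closed-form ridge readout."""
    Wf = rng.normal(size=(X.shape[1], n_feature))
    Z = X @ Wf                                   # feature nodes
    We = rng.normal(size=(n_feature, n_enhance))
    H = np.tanh(Z @ We)                          # enhancement nodes
    A = np.hstack([Z, H])                        # broad layer
    # Ridge solution W = (A^T A + reg*I)^{-1} A^T Y, no backprop needed.
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W

def bls_predict(X, Wf, We, W):
    Z = X @ Wf
    A = np.hstack([Z, np.tanh(Z @ We)])
    return A @ W

# Toy multi-class fault data: 3 well-separated Gaussian blobs, one-hot targets.
n = 90
X = np.vstack([rng.normal(c, 0.3, (n // 3, 4)) for c in (-2.0, 0.0, 2.0)])
labels = np.repeat(np.arange(3), n // 3)
Y = np.eye(3)[labels]

params = bls_fit(X, Y)
acc = float((bls_predict(X, *params).argmax(axis=1) == labels).mean())
```

The EBLS of the paper wraps several such learners (plus an RF component) in an ensemble, and PSO tunes quantities like the node counts and regularization that are fixed here.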
Show Figures

Figure 1: Architecture of BLS.
Figure 2: Computational flowchart of BLS.
Figure 3: Architecture of the EBLS.
Figure 4: Flowchart of the PSO-EBLS.
Figure 5: Personal computer transient analyzer.