
Search Results (2,023)

Search Parameters:
Keywords = XGBoost model

32 pages, 2254 KiB  
Article
Relationship Between Spatial Form, Functional Distribution, and Vitality of Railway Station Areas Under Station-City Synergetic Development: A Case Study of Four Special-Grade Stations in Beijing
by Yuhan Sun, Bo Wan and Qiang Sheng
Sustainability 2024, 16(22), 10102; https://doi.org/10.3390/su162210102 - 19 Nov 2024
Abstract
The integration of railway stations into urban environments necessitates a detailed examination of their vitality and influencing factors. This study assesses urban vitality around four major railway stations in Beijing utilizing a variety of analytical models including Ordinary Least Squares, Geographically Weighted Regression, Multi-Scale Geographically Weighted Regression, and machine learning approaches such as XGBoost 2.0.3, Random Forest 1.4.1.post1, and LightGBM 4.3.0. These analyses are grounded in Baidu heatmaps and examine relationships with spatial form, functional distribution, and spatial configuration. The results indicate significant associations between urban vitality and variables such as commercial density, average number of floors, integration, residential density, and housing prices, particularly in predicting weekday vitality. The MGWR model demonstrates enhanced fit and robustness, explaining 84.8% of the variability in vitality, while the Random Forest model displays the highest stability among the machine learning options, accounting for 76.9% of vitality variation. The integration of SHAP values with MGWR coefficients identifies commercial density as the most critical predictor, with the average number of floors and residential density also being key. These findings offer important insights for spatial planning in areas surrounding railway stations. Full article
(This article belongs to the Special Issue Urban Planning and Built Environment)
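The SHAP-based ranking described in this abstract can be illustrated with a close, model-agnostic cousin: permutation importance, where shuffling one predictor and measuring the rise in error scores that predictor's contribution. The feature names, weights, and linear "model" below are invented for the sketch and are not the study's fitted models.

```python
import random

# Rank predictors of urban vitality by permutation importance: shuffle one
# feature column and measure how much the model's MSE increases.
FEATURES = ["commercial_density", "avg_floors", "residential_density"]
WEIGHTS = {"commercial_density": 0.6, "avg_floors": 0.3, "residential_density": 0.1}

def model(row):
    # Hypothetical fitted model: vitality as a weighted sum of features.
    return sum(WEIGHTS[f] * row[f] for f in FEATURES)

def mse(rows, targets):
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, feature, seed=0):
    col = [r[feature] for r in rows]
    random.Random(seed).shuffle(col)
    shuffled = [dict(r, **{feature: v}) for r, v in zip(rows, col)]
    return mse(shuffled, targets) - mse(rows, targets)

rng = random.Random(1)
rows = [{f: rng.random() for f in FEATURES} for _ in range(200)]
targets = [model(r) for r in rows]  # noise-free targets keep the sketch simple
scores = {f: permutation_importance(rows, targets, f) for f in FEATURES}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
```

On this toy data the heaviest-weighted feature tops the ranking, mirroring how the study singles out commercial density as the most critical predictor; the paper instead combines SHAP values with MGWR coefficients, but the rank-by-contribution idea is the same.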
25 pages, 5341 KiB  
Article
Artificial Intelligence Optimization for User Prediction and Efficient Energy Distribution in Electric Vehicle Smart Charging Systems
by Siow Jat Shern, Md Tanjil Sarker, Mohammed Hussein Saleh Mohammed Haram, Gobbi Ramasamy, Siva Priya Thiagarajah and Fahmid Al Farid
Energies 2024, 17(22), 5772; https://doi.org/10.3390/en17225772 - 19 Nov 2024
Abstract
This paper presents an advanced AI-based optimization framework for Electric Vehicle (EV) smart charging systems, focusing on efficient energy distribution to meet dynamic user demand. The study leverages machine learning models such as Random Forest, Support Vector Regression (SVR), Gradient Boosting Regressor, XGBoost, LightGBM, and Long Short-Term Memory (LSTM) to forecast user demand and optimize energy allocation. Among the models, XGBoost demonstrated superior predictive performance, achieving the lowest Mean Absolute Error (MAE) and Root Mean Square Error (RMSE), making it the most effective for real-time user demand prediction in smart charging scenarios. The framework introduces proportional and priority-based allocation strategies to distribute available energy effectively, with a focus on minimizing energy shortfalls and balancing supply with user demand. The XGBoost model reduced prediction error by 15% compared to the other models, significantly improving the station’s ability to meet user demand efficiently. The proposed AI framework enhances charging station operations, supports grid stability, and promotes sustainability in the context of increasing EV adoption. Full article
Show Figures

Figure 1. Diagram for the AI-based EV smart charging system.
Figure 2. Performance analysis of the six machine learning models.
Figure 3. Future user prediction for the six models (left, top to bottom: Random Forest, Gradient Boosting Regressor, LightGBM; right, top to bottom: SVR, XGBoost, LSTM).
Figure 4. Actual and predicted user counts for a smart EV charging station over 30 days.
Figure 5. Actual and predicted energy usage (in kWh) for a smart EV charging station.
Figure 6. Energy usage (in kWh) for a set of users.
Figure 7. Scatter plots of user kWh prediction for the six models (left, top to bottom: Random Forest, Gradient Boosting Regressor, LightGBM; right, top to bottom: SVR, XGBoost, LSTM).
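The two energy-distribution strategies named in the abstract can be sketched directly: proportional allocation splits the available supply pro rata to demand, while priority-based allocation serves higher-priority users first. The user names and demand figures below are invented, and a real scheduler would add constraints this sketch omits (charger power limits, time windows).

```python
# Proportional allocation: everyone gets supply * (own demand / total demand)
# when supply is short; priority allocation: fill demands in priority order.
def proportional_allocation(supply_kwh, demands):
    total = sum(demands.values())
    if total <= supply_kwh:
        return dict(demands)  # enough energy: everyone fully served
    return {u: supply_kwh * d / total for u, d in demands.items()}

def priority_allocation(supply_kwh, demands, priority):
    # priority: user ids ordered from most to least important
    alloc = {u: 0.0 for u in demands}
    remaining = supply_kwh
    for u in priority:
        alloc[u] = min(demands[u], remaining)
        remaining -= alloc[u]
    return alloc

demands = {"ev1": 30.0, "ev2": 50.0, "ev3": 20.0}  # kWh requested (invented)
print(proportional_allocation(80.0, demands))  # ev1: 24.0, ev2: 40.0, ev3: 16.0
print(priority_allocation(80.0, demands, ["ev2", "ev1", "ev3"]))  # ev3 gets 0.0
```

Note the trade-off the paper weighs: proportional allocation spreads the shortfall across all users, while priority allocation concentrates it on the lowest-priority ones.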
12 pages, 2388 KiB  
Article
Analyzing the Relationship Between COVID-19 and Sociodemographic and Environmental Factors: A Case Study in Toronto
by Brian Anlan Yu and Qinmin Vivian Hu
Electronics 2024, 13(22), 4524; https://doi.org/10.3390/electronics13224524 - 18 Nov 2024
Abstract
COVID-19 has disproportionately impacted communities based on sociodemographic and environmental factors. Previous studies have largely focused on traditional statistical models to investigate these disparities with limited attention to within-city variations. This research addresses this gap by employing advanced machine learning models to predict COVID-19 case counts at the neighborhood level within Toronto. Using algorithms such as Support Vector Regression, Random Forest, Gradient Boosting, and XGBoost, along with SHAP (SHapley Additive exPlanations) analysis, we identify key factors impacting COVID-19 transmission, including air pollution, socioeconomic status, and racialized group membership. Our results demonstrate that sociodemographic factors significantly influence sporadic cases, while environmental factors, particularly air pollutants, are critical in outbreak cases. This study highlights the value of machine learning in understanding complex interactions between risk factors with implications for targeted public health interventions to mitigate COVID-19 disparities. Full article
Show Figures

Figure 1. Comparison of different predictive models (MSE).
Figure 2. Comparison of different predictive models (R²).
Figure 3. Feature importance.
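The model comparison in this study's figures rests on two metrics, MSE and R², which reduce to a few lines each. The neighbourhood case counts and predictions below are invented; a mean-only predictor scores R² = 0 by construction, which is why R² is a useful baseline-relative measure here.

```python
# Mean squared error and coefficient of determination, as used to compare
# the regression models in the study.
def mse(y, yhat):
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def r2(y, yhat):
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

cases = [10, 12, 14, 16]               # neighbourhood case counts (invented)
pred_close = [10.5, 11.5, 14.5, 15.5]  # a reasonably accurate model
pred_mean = [13, 13, 13, 13]           # always predicts the mean of cases
print(mse(cases, pred_close), r2(cases, pred_close))  # 0.25 0.95
print(mse(cases, pred_mean), r2(cases, pred_mean))    # 5.0 0.0
```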
26 pages, 1044 KiB  
Article
PredXGBR: A Machine Learning Framework for Short-Term Electrical Load Prediction
by Rifat Zabin, Khandaker Foysal Haque and Ahmed Abdelgawad
Electronics 2024, 13(22), 4521; https://doi.org/10.3390/electronics13224521 - 18 Nov 2024
Abstract
The growing demand for consumer-end electrical load is driving the need for smarter management of power sector utilities. In today’s technologically advanced society, efficient energy usage is critical, leaving no room for waste. To prevent both electricity shortage and wastage, electrical load forecasting becomes the most convenient way out. However, the conventional and probabilistic methods are less adaptive to the acute, micro, and unusual changes in the demand trend. With the recent development of artificial intelligence (AI), machine learning (ML) has become the most popular choice due to its higher accuracy based on time-, demand-, and trend-based feature extractions. Thus, we propose an Extreme Gradient Boosting (XGBoost) regression-based model—PredXGBR-1, which employs short-term lag features to predict hourly load demand. The novelty of PredXGBR-1 lies in its focus on short-term lag autocorrelations to enhance adaptability to micro-trends and demand fluctuations. Validation across five datasets, representing electrical load in the eastern and western USA over a 20-year period, shows that PredXGBR-1 outperforms a long-term feature-based XGBoost model, PredXGBR-2, and state-of-the-art recurrent neural network (RNN) and long short-term memory (LSTM) models. Specifically, PredXGBR-1 achieves a mean absolute percentage error (MAPE) between 0.98 and 1.2% and an R2 value of 0.99, significantly surpassing PredXGBR-2’s R2 of 0.61 and delivering up to 86.8% improvement in MAPE compared to LSTM models. These results confirm the superior performance of PredXGBR-1 in accurately forecasting short-term load demand. Full article
Show Figures

Figure 1. Main steps of ARIMA and SVM.
Figure 2. Main steps of RNN and LSTM.
Figure 3. Working principle of the proposed PredXGBR-1 model. The model iteratively refines its prediction by minimizing residuals using successive regression trees; each new tree improves on its predecessor by learning from the residuals.
Figure 4. The original data along with the trend, periodic, and residual patterns of electrical load consumption for the PJM and Dayton datasets.
Figure 5. Heatmaps of different temporal features of the PJM dataset.
Figure 6. Heatmaps of different temporal features of the Dayton dataset.
Figure 7. Comparative analysis of the MAPE and R² values of the proposed approach, PredXGBR-1.
Figure 8. Generalization performance of PredXGBR-1 compared with two of the best-performing models, SVM and TCN; models are trained on one dataset and tested on the others.
Figure 9. Comparative analysis of the computational complexity (FLOPS) and inference time of PredXGBR-1 (Model 1).
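The short-term lag features PredXGBR-1 is built on pair each hourly observation with the loads of the preceding hours, and accuracy is scored with MAPE as in the abstract. The window size and toy load series below are illustrative, not the paper's configuration.

```python
# Build (previous n_lags loads, current load) pairs for supervised training,
# and score forecasts with mean absolute percentage error (MAPE, in percent).
def lag_features(series, n_lags):
    return [(series[t - n_lags:t], series[t])
            for t in range(n_lags, len(series))]

def mape(y, yhat):
    return 100 * sum(abs(a - b) / abs(a) for a, b in zip(y, yhat)) / len(y)

hourly_load = [50, 52, 55, 53, 51, 54, 58, 60]  # MW, invented
rows = lag_features(hourly_load, n_lags=3)
print(rows[0])                 # ([50, 52, 55], 53)
print(mape([4, 4], [3, 5]))    # 25.0
```

Each `(lags, target)` row would then feed a gradient-boosted regressor; the point of short lags, per the abstract, is sensitivity to micro-trends that long-horizon features smooth away.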
14 pages, 237 KiB  
Article
Predictive Analytics for Thyroid Cancer Recurrence: A Machine Learning Approach
by Elizabeth Clark, Samantha Price, Theresa Lucena, Bailey Haberlein, Abdullah Wahbeh and Raed Seetan
Knowledge 2024, 4(4), 557-570; https://doi.org/10.3390/knowledge4040029 (registering DOI) - 18 Nov 2024
Abstract
Differentiated thyroid cancer (DTC), comprising papillary and follicular thyroid cancers, is the most prevalent type of thyroid malignancy. Accurate prediction of DTC is crucial for improving patient outcomes. Machine learning (ML) offers a promising approach to analyze risk factors and predict cancer recurrence. In this study, we aimed to develop predictive models to identify patients at an elevated risk of DTC recurrence based on 16 risk factors. We developed six ML models and applied them to a DTC dataset. We evaluated the ML models using Synthetic Minority Over-Sampling Technique (SMOTE) and with hyperparameter tuning. We measured the models’ performance using precision, recall, F1 score, and accuracy. Results showed that Random Forest consistently outperformed the other investigated models (KNN, SVM, Decision Tree, AdaBoost, and XGBoost) across all scenarios, demonstrating high accuracy and balanced precision and recall. The application of SMOTE improved model performance, and hyperparameter tuning enhanced overall model effectiveness. Full article
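The SMOTE step the study applies can be sketched as interpolation between real minority-class samples. This is a bare sketch (random partner instead of k-nearest neighbours), not imbalanced-learn's SMOTE, and the recurrence-case feature vectors are invented.

```python
import random

# Synthesize minority-class samples on the line segments between two real
# minority points: new = a + lam * (b - a), lam in [0, 1).
def smote_like(minority, n_new, seed=0):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)  # two distinct real samples
        lam = rng.random()              # interpolation factor
        synthetic.append(tuple(x + lam * (y - x) for x, y in zip(a, b)))
    return synthetic

recurrence_cases = [(0.9, 0.2), (0.8, 0.35), (0.95, 0.3)]  # invented features
new_points = smote_like(recurrence_cases, n_new=5)
print(len(new_points))  # 5 synthetic samples on segments between real cases
```

Because each synthetic point is a convex combination of real ones, it stays inside the minority class's feature range, which is what lets SMOTE rebalance classes without inventing outliers.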
19 pages, 7362 KiB  
Article
Highly Efficient JR Optimization Technique for Solving Prediction Problem of Soil Organic Carbon on Large Scale
by Harsh Vazirani, Xiaofeng Wu, Anurag Srivastava, Debajyoti Dhar and Divyansh Pathak
Sensors 2024, 24(22), 7317; https://doi.org/10.3390/s24227317 - 15 Nov 2024
Abstract
We utilized remote sensing and ground cover data to predict soil organic carbon (SOC) content across a vast geographic region. Employing a combination of machine learning and deep learning techniques, we developed a novel data fusion approach that integrated Digital Elevation Model (DEM) data, MODIS satellite imagery, WOSIS soil profile data, and CHELSA environmental data. This combined dataset, named GeoBlendMDWC, was specifically designed for SOC prediction. The primary aim of this research is to develop and evaluate a novel optimization algorithm for accurate SOC prediction by leveraging multi-source environmental data. Specifically, this study aims to (1) create an integrated dataset combining remote sensing and ground data for comprehensive SOC analysis, (2) develop a new optimization technique that enhances both machine learning and deep learning model performance, and (3) evaluate the algorithm’s efficiency and accuracy against established optimization methods like Jaya and GridSearchCV. This study focused on India, Australia, and South Africa, countries known for their significant agricultural activities. The models evaluated included XGBoost Regression, LightGBM, Gradient Boosting Regression (GBR), Random Forest Regression, Decision Tree Regression, and a Multilayer Perceptron (MLP) model. Our research demonstrated that the proposed optimization algorithm consistently outperformed existing methods in terms of execution time and performance. It achieved results comparable to GridSearchCV, reaching an R2 of 90.16, which was a significant improvement over the base XGBoost model’s R2 of 79.08. In deep learning optimization, it significantly outperformed the Jaya algorithm, achieving an R2 of 61.34 compared to Jaya’s 30.04. Moreover, it was 20–30 times faster than GridSearchCV. Given its speed and accuracy, this algorithm can be applied to real-time data processing in remote sensing satellites. This advanced methodology will greatly benefit the agriculture and farming sectors by providing precise SOC predictions. Full article
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1. Soil organic carbon content by region in tons per hectare (ton/ha) in India [34], Australia [35], and Africa [36], respectively.
Figure 2. Research workflow.
Figure 3. Flowchart of the optimization algorithm.
Figure 4. Correlation matrix for the combined Indian–Australian–African dataset (Pearson correlation).
Figure 5. Average execution time comparison (in milliseconds) between the machine learning models under different optimization techniques.
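The paper benchmarks its optimizer against GridSearchCV and Jaya. As a neutral stand-in for that family of hyperparameter searches (this is not the paper's JR algorithm), here is a random-search tuner over a toy two-parameter objective; the search space and "validation loss" are invented for illustration.

```python
import random

# Random search: sample hyperparameters uniformly from the space and keep
# the best-scoring configuration. Much cheaper than an exhaustive grid.
def random_search(objective, space, n_iter=200, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_iter):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy validation loss, minimized near learning_rate=0.1, max_depth=6.
def loss(p):
    return (p["learning_rate"] - 0.1) ** 2 + 0.01 * (p["max_depth"] - 6) ** 2

space = {"learning_rate": (0.01, 0.5), "max_depth": (2, 12)}
best, score = random_search(loss, space)
print(round(best["learning_rate"], 2), round(score, 4))
```

The speed argument in the abstract comes from exactly this shape of trade-off: a guided or sampled search evaluates far fewer configurations than GridSearchCV while landing near the same optimum.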
20 pages, 7030 KiB  
Article
Identification of Exploited Unreliable Account Passwords in the Information Infrastructure Using Machine Learning Methods
by Mikhail Rusanov, Mikhail Babenko, Maria Lapina and Mohammad Sajid
Big Data Cogn. Comput. 2024, 8(11), 159; https://doi.org/10.3390/bdcc8110159 - 15 Nov 2024
Abstract
Accounts are an integral part of most modern information systems and provide their owners with the ability to authenticate within the system. This paper presents an analysis of existing methods for detecting simple account passwords in automated systems, listing their advantages and disadvantages. A method was developed to detect simple exploitable passwords that administrators can use to supplement other existing methods and increase the overall security of automated systems against threats from accounts potentially compromised by attackers. The method is based on the analysis of commands executed in automated or manual modes with credentials indicated in plain text. Minimum password strength requirements are provided based on the security level. A special case was considered in which all passwords analyzed in this way were found explicitly in the system logs. We developed a unified definition for classifying passwords as simple or strong, and machine learning technology for this classification. The method offers flexible adaptation to a specific system, taking into account the significance level of the information being processed and the adopted password policy, expressed in the possibility of retraining the machine learning model. The experimental method, which uses an ensemble of decision trees to classify passwords as strong or potentially compromised based on flexible password strength criteria, showed strong results. The performance of the method is also compared against other machine learning algorithms, specifically XGBoost, Random Forest, and Naive Bayes. The presented approach also solves the problem of detecting events related to the use and storage of credentials in plain text. We used a dataset of approximately 770,000 passwords, allowing the machine learning model to accurately classify 98% of the passwords by their significance levels. Full article
Show Figures

Figure 1. Examples of considered simple and strong passwords.
Figure 2. The general scheme of the model for identifying exploited simple passwords.
Figure 3. Correlation matrix.
Figure 4. Correlation matrix after feature selection.
Figure 5. Distribution of passwords by the number of unique characters: (a) simple passwords; (b) strong passwords.
Figure 6. Distribution of passwords by alphabet change frequency: (a) simple passwords; (b) strong passwords.
Figure 7. Correlation matrix after feature selection.
Figure 8. Initial preprocessing of strong passwords.
Figure 9. Calculation of the parameters and post-processing of strong passwords.
Figure 10. Processing of simple passwords.
Figure 11. The configuration of the “Decision Tree Learner” module.
Figure 12. ROC curve.
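Two of the password features visible in the figure captions (number of unique characters and "alphabet change frequency") can be sketched as follows. The exact definitions are our guess from the captions, not the paper's; a classifier such as the decision-tree ensemble would consume these numbers as inputs.

```python
# Extract simple strength features from a password: length, count of unique
# characters, and how often adjacent characters switch character class
# (our reading of "alphabet change frequency").
def char_class(c):
    if c.islower():
        return "lower"
    if c.isupper():
        return "upper"
    if c.isdigit():
        return "digit"
    return "symbol"

def features(password):
    changes = sum(1 for a, b in zip(password, password[1:])
                  if char_class(a) != char_class(b))
    return {
        "length": len(password),
        "unique_chars": len(set(password)),
        "class_changes": changes,
    }

print(features("password"))   # one character class, zero class transitions
print(features("pA5s!w0Rd"))  # many class transitions, all characters unique
```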
22 pages, 5382 KiB  
Article
Impact of Feature Selection Techniques on the Performance of Machine Learning Models for Depression Detection Using EEG Data
by Marwa Hassan and Naima Kaabouch
Appl. Sci. 2024, 14(22), 10532; https://doi.org/10.3390/app142210532 - 15 Nov 2024
Abstract
Major depressive disorder (MDD) poses a significant challenge in mental healthcare due to difficulties in accurate diagnosis and timely identification. This study explores the potential of machine learning models trained on EEG-based features for depression detection. Six models and six feature selection techniques were compared, highlighting the crucial role of feature selection in enhancing classifier performance. This study investigates six feature selection methods: Elastic Net, Mutual Information (MI), Chi-Square, Forward Feature Selection with Stochastic Gradient Descent (FFS-SGD), Support Vector Machine-based Recursive Feature Elimination (SVM-RFE), and Minimal-Redundancy-Maximal-Relevance (mRMR). These methods were combined with six diverse classifiers: Logistic Regression, Support Vector Machine (SVM), Random Forest, Extreme Gradient Boosting (XGBoost), Categorical Boosting (CatBoost), and Light Gradient Boosting Machine (LightGBM). The results demonstrate the substantial impact of feature selection on model performance. SVM-RFE with SVM achieved the highest accuracy (93.54%) and F1 score (95.29%), followed by Logistic Regression with an accuracy of 92.86% and F1 score of 94.84%. Elastic Net also delivered strong results, with SVM and Logistic Regression both achieving 90.47% accuracy. Other feature selection methods yielded lower performance, emphasizing the importance of selecting appropriate feature selection and machine learning algorithms. These findings suggest that careful selection and application of feature selection techniques can significantly enhance the accuracy of EEG-based depression detection. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
Show Figures

Figure 1. Standard 10–20 EEG setup consisting of 19 channels.
Figure 2. Topographical comparison of mean power spectral density (PSD) across frequency bands in healthy and depressed groups.
Figure 3. Top 100 features selected by Elastic Net, ranked by their coefficient magnitudes.
Figure 4. Mutual information values for the top 100 predictors and the target variable.
Figure 5. Top 100 features selected using the chi-square feature selection method.
Figure 6. Top 100 features selected using FFS based on the SGD algorithm.
Figure 7. Absolute coefficient values of the top 100 features selected using SVM-RFE.
Figure 8. Top 100 features selected using the mRMR feature selection method.
Figure 9. Algorithm performance metrics with and without different feature selection techniques.
Figure 10. ROC curves for the six models using features selected by SVM-RFE.
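All of the filter-style methods compared above share a score-then-keep-top-k shape. As an illustration (using absolute Pearson correlation as the score; the study's filters such as mutual information and chi-square differ in how they score, not in the selection shape), here is a minimal select-top-k on synthetic data:

```python
# Score each feature column by |Pearson correlation| with the label and
# keep the k best — the common skeleton of filter-based feature selection.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def select_top_k(columns, labels, k):
    scores = {name: abs(pearson(col, labels)) for name, col in columns.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

labels = [0, 0, 1, 1, 0, 1]  # depressed vs. healthy (invented)
columns = {
    "alpha_power": [1.0, 1.1, 3.0, 3.2, 0.9, 3.1],  # tracks the label
    "noise": [0.4, 2.5, 1.1, 0.2, 2.0, 0.7],        # unrelated
}
print(select_top_k(columns, labels, k=1))  # ['alpha_power']
```

Wrapper methods like SVM-RFE instead retrain the model repeatedly and drop the weakest features, which is more expensive but accounts for feature interactions, consistent with SVM-RFE's top accuracy in this study.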
14 pages, 1452 KiB  
Article
Online Prediction and Correction of Static Voltage Stability Index Based on Extreme Gradient Boosting Algorithm
by Huiling Qin, Shuang Li, Juncheng Zhang, Zhi Rao, Chengyu He, Zhijun Chen and Bo Li
Energies 2024, 17(22), 5710; https://doi.org/10.3390/en17225710 - 15 Nov 2024
Abstract
With the increasing integration of renewable energy sources into the power grid and the continuous expansion of grid infrastructure, real-time preventive control becomes crucial. This article proposes a real-time prediction and correction method based on the extreme gradient boosting (XGBoost) algorithm. The XGBoost algorithm is utilized to evaluate the real-time stability of grid static voltage, with the voltage stability L-index as the prediction target. A correction model is established with the objective of minimizing correction costs while considering the operational constraints of the grid. When the L-index exceeds the warning value, the XGBoost algorithm can obtain the importance of each feature of the system and calculate the sensitivity approximation of highly important characteristics. The model corrects these characteristics to maintain the system’s operation within a reasonably secure range. The methodology is demonstrated using the IEEE-14 and IEEE-118 systems. The results show that the XGBoost algorithm has higher prediction accuracy and computational efficiency in assessing the static voltage stability of the power grid. It is also shown that the proposed approach has the potential to greatly improve the operational dependability of the power grid. Full article
(This article belongs to the Section F1: Electrical Power System)
Show Figures

Figure 1. Correction control optimization process.
Figure 2. Illustration of the IEEE-14 system diagram.
Figure 3. Error distribution of different machine learning algorithms.
Figure 4. Feature parameter importance of IEEE-14 system nodes.
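The correction step described in the abstract can be sketched as a greedy loop: while the predicted L-index exceeds the warning value, nudge the control feature with the best sensitivity-to-cost ratio. The index value, sensitivities, and costs below are invented; the paper derives feature importances and sensitivity approximations from the trained XGBoost model and solves a constrained cost-minimization rather than this greedy scheme.

```python
# Greedy correction: repeatedly adjust the control action that buys the
# largest index reduction per unit of cost, until the index is safe.
def correct(index, warning, sensitivities, costs, step=1.0):
    adjustments = {f: 0.0 for f in sensitivities}
    while index > warning:
        f = max(sensitivities, key=lambda k: sensitivities[k] / costs[k])
        adjustments[f] += step
        index -= sensitivities[f] * step
    return index, adjustments

sens = {"reactive_support": 0.05, "load_shed": 0.06}  # index drop per unit
cost = {"reactive_support": 1.0, "load_shed": 3.0}    # correction cost per unit
final, adj = correct(index=0.92, warning=0.80, sensitivities=sens, costs=cost)
print(final <= 0.80, adj)
```

In this toy case reactive support is cheaper per unit of index reduction, so the loop never resorts to load shedding, illustrating the cost-minimizing intent of the paper's correction model.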
27 pages, 3743 KiB  
Article
Performance Analysis and Improvement of Machine Learning with Various Feature Selection Methods for EEG-Based Emotion Classification
by Sherzod Abdumalikov, Jingeun Kim and Yourim Yoon
Appl. Sci. 2024, 14(22), 10511; https://doi.org/10.3390/app142210511 - 14 Nov 2024
Abstract
Emotion classification is a challenge in affective computing, with applications ranging from human–computer interaction to mental health monitoring. In this study, the classification of emotional states using electroencephalography (EEG) data was investigated. Specifically, we studied the efficacy of combining various feature selection methods with hyperparameter tuning of machine learning algorithms for accurate and robust emotion recognition. The following feature selection methods were explored: filter (SelectKBest with analysis of variance (ANOVA) F-test), embedded (least absolute shrinkage and selection operator (LASSO) tuned using Bayesian optimization (BO)), and wrapper (genetic algorithm (GA)) methods. We also executed hyperparameter tuning of the machine learning algorithms using BO, and assessed the performance of each method. Two EEG datasets, EEG Emotion and DEAP, containing 2548 and 160 features, respectively, were evaluated using random forest (RF), logistic regression, XGBoost, and support vector machine (SVM). For both datasets, all three feature selection methods consistently improved model accuracy. For the EEG Emotion dataset, RF with LASSO achieved the best result among all tested methods, increasing the accuracy from 98.78% to 99.39%. In the DEAP dataset experiment, XGBoost with GA showed the best result, increasing the accuracy by 1.59% and 2.84% for valence and arousal, respectively. These results also surpass those of previous methods in the literature. Full article
(This article belongs to the Special Issue Advances in Biosignal Processing)
Show Figures

Figure 1. EEG brainwave dataset training.
Figure 2. Flowchart of GA.
Figure 3. Violin plots of statistical features in the EEG Emotion dataset: (a) mean, (b) mean difference (between windows), (c) min, (d) min difference (between windows), (e) min difference (per quarter window), (f) max, (g) max difference (between windows), (h) max difference (per quarter window), (i) standard deviation, (j) standard deviation difference (between windows), (k) log, (l) correlation, (m) entropy, (n) FFT.
Figure 4. Violin plot of ten randomly selected features included in the DEAP dataset.
Figure 5. FFT-based frequency analysis of the EEG dataset: randomly selected FFT of a sample with (a) positive and (b) negative emotion levels; emotion level analysis of the DEAP dataset: (c) neutral labels from the EEG Emotion dataset, (d) valence level, and (e) arousal level from the DEAP dataset.
Figure 6. Graph comparing the four performance indicators of feature selection methods on the EEG Emotion dataset: (a) filter-based; (b) embedded-based; (c) wrapper-based.
Figure 7. Graph comparing the four performance indicators of feature selection methods on the DEAP dataset: (a) filter-based; (b) embedded-based; (c) wrapper-based.
Figure 8. Correlation heatmaps: (a) before feature selection, (b) after feature selection for the EEG Emotion dataset, (c) before feature selection for the DEAP dataset, (d) after feature selection for the valence label in the DEAP dataset, and (e) after feature selection for the arousal label in the DEAP dataset.
22 pages, 4646 KiB  
Article
Concrete Creep Prediction Based on Improved Machine Learning and Game Theory: Modeling and Analysis Methods
by Wenchao Li, Houmin Li, Cai Liu and Kai Min
Buildings 2024, 14(11), 3627; https://doi.org/10.3390/buildings14113627 - 14 Nov 2024
Viewed by 313
Abstract
Understanding the impact of creep on the long-term mechanical features of concrete is crucial, and constructing an accurate prediction model is the key to exploring the development of concrete creep under long-term loads. Therefore, in this study, three machine learning (ML) models, a [...] Read more.
Understanding the impact of creep on the long-term mechanical features of concrete is crucial, and constructing an accurate prediction model is the key to exploring the development of concrete creep under long-term loads. Therefore, in this study, three machine learning (ML) models, a Support Vector Machine (SVM), Random Forest (RF), and Extreme Gradient Boosting Machine (XGBoost), are constructed, and the Hybrid Snake Optimization Algorithm (HSOA) is proposed, which can reduce the risk of the ML model falling into the local optimum while improving its prediction performance. Simultaneously, the contributions of the input features are ranked, and the optimal model’s prediction outcomes are explained through SHapley Additive exPlanations (SHAP). The research results show that the optimized SVM, RF, and XGBoost models increase their accuracies on the test set by 9.927%, 9.58%, and 14.1%, respectively, and the XGBoost has the highest precision in forecasting the concrete creep. The verification results of four scenarios confirm that the optimized model can precisely capture the compliance changes in long-term creep, meeting the requirements for forecasting the nature of concrete creep. Full article
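The abstract above ranks input-feature contributions with SHAP on the trained HSOA-XGBoost model. As a rough, model-agnostic analogue of such a ranking, a permutation-style importance can be sketched in plain Python. Everything below is a hypothetical stand-in (toy model, feature grid, and values), not the paper's model or data:

```python
import random

# Hypothetical stand-in "model": creep compliance rises with loading time
# and stress, falls with compressive strength (NOT the paper's HSOA-XGBoost).
def toy_model(time_days, stress_mpa, strength_mpa):
    return 0.5 * time_days + 2.0 * stress_mpa - 0.1 * strength_mpa

def permutation_importance(model, X, y, n_features):
    """MSE increase when one feature column is shuffled: bigger = more important."""
    def mse(pred):
        return sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)
    base = mse([model(*row) for row in X])
    rng = random.Random(0)  # fixed seed for reproducibility
    importances = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)
        X_perm = [row[:j] + (col[i],) + row[j + 1:] for i, row in enumerate(X)]
        importances.append(mse([model(*row) for row in X_perm]) - base)
    return importances

# Small synthetic grid of (time, stress, strength) settings.
X = [(t, s, f) for t in (10.0, 100.0) for s in (5.0, 10.0) for f in (30.0, 60.0)]
y = [toy_model(*row) for row in X]
imp = permutation_importance(toy_model, X, y, 3)
ranking = sorted(range(3), key=lambda j: -imp[j])  # most important feature first
```

Unlike SHAP, this gives only a global score per feature, not per-sample attributions, but the resulting ranking serves the same interpretive purpose.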
(This article belongs to the Section Building Materials, and Repair & Renovation)
Show Figures

Figure 1
<p>Histogram of the distribution of input variables.</p>
Figure 2
<p>Indicators for Model Evaluation.</p>
Figure 3
<p>Radar charts of the training (<b>a</b>) and testing sets (<b>b</b>) for the HSOA-SVM, HSOA-RF, and HSOA-XGBoost models’ performance.</p>
Figure 4
<p>The regression results of the test sets for (<b>a</b>) SVM; (<b>b</b>) RF; (<b>c</b>) XGBoost; (<b>d</b>) HSOA-SVM; (<b>e</b>) HSOA-RF; and (<b>f</b>) HSOA-XGBoost.</p>
Figure 5
<p>Residual analysis plots for (<b>a</b>) HSOA-SVM; (<b>b</b>) HSOA-RF; and (<b>c</b>) HSOA-XGBoost models.</p>
Figure 6
<p>Residual distribution plot for HSOA-SVM, HSOA-RF, and HSOA-XGBoost.</p>
Figure 7
<p>Comparing the errors between the testing sets of (<b>a</b>) HSOA-SVM; (<b>b</b>) HSOA-RF; and (<b>c</b>) HSOA-XGBoost.</p>
Figure 8
<p>Aggregated HSOA-XGBoost-based concrete creep SHAP.</p>
Figure 9
<p>Global SHAP values using the HSOA-XGBoost model.</p>
Figure 10
<p>Results of importance analysis using SHAP. (<b>a</b>) Time since loading (days). (<b>b</b>) Cement content (Kg/m<sup>3</sup>). (<b>c</b>) Water–cement ratio. (<b>d</b>) Loading stress (MPa). (<b>e</b>) Compressive strength (MPa).</p>
Figure 11
<p>SHAP characteristic force diagram. (<b>a</b>) Scenario 1; (<b>b</b>) Scenario 2; (<b>c</b>) Scenario 3.</p>
Figure 12
<p>Four typical scenarios for predicting creep flexibility: (<b>a</b>) Scenario S1; (<b>b</b>) Scenario S2; (<b>c</b>) Scenario S3; and (<b>d</b>) Scenario S4.</p>
11 pages, 1745 KiB  
Article
Improved Cd Detection in Rice Grain Using LIBS with Husk-Based XGBoost Transfer Learning
by Weiping Xie, Jiang Xu, Lin Huang, Yuan Xu, Qi Wan, Yangfan Chen and Mingyin Yao
Agriculture 2024, 14(11), 2053; https://doi.org/10.3390/agriculture14112053 - 14 Nov 2024
Viewed by 281
Abstract
Cadmium (Cd) is a highly toxic metal that is difficult to completely eliminate from soil, despite advancements in modern agricultural and environmental technologies that have successfully reduced Cd levels. However, rice remains a key source of Cd exposure for humans. Even small amounts [...] Read more.
Cadmium (Cd) is a highly toxic metal that is difficult to completely eliminate from soil, despite advancements in modern agricultural and environmental technologies that have successfully reduced Cd levels. However, rice remains a key source of Cd exposure for humans. Even small amounts of Cd absorbed by rice can pose a potential health risk to the human body. Laser-induced breakdown spectroscopy (LIBS) offers simple sample preparation and fast analysis, which, combined with transfer learning, opens a route to rapid, real-time detection of low-level heavy metals in rice. In this work, 21 groups of naturally matured rice samples from potentially Cd-contaminated environments were collected. These samples were processed into rice husk, brown rice, and polished rice groups, and the reference Cd content was measured by ICP-MS. The XGBoost algorithm, known for its excellent performance in handling high-dimensional data and nonlinear relationships, was applied to construct both the XGBoost base model and the XGBoost-based transfer learning model to predict Cd content in brown rice and polished rice. By pre-training on rice husk source data, the XGBoost-based transfer learning model can learn from the abundant information available in rice husk to improve Cd quantification in rice grain. For brown rice, the XGBoost base model achieved R<sub>C</sub><sup>2</sup> of 0.9852 and R<sub>P</sub><sup>2</sup> of 0.8778, which were improved to 0.9885 and 0.9743, respectively, with the XGBoost-based transfer learning model. In the case of polished rice, the base model achieved R<sub>C</sub><sup>2</sup> of 0.9838 and R<sub>P</sub><sup>2</sup> of 0.8683, while the transfer learning model enhanced these to 0.9883 and 0.9699, respectively. The results indicate that the transfer learning method not only improves the detection capability for low Cd content in rice but also provides new insights for food safety detection. Full article
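The RC2 and RP2 values reported above are coefficients of determination (the C/P subscripts conventionally denote the calibration and prediction sets, though the paper should be consulted for its exact convention). Assuming the standard definition R² = 1 − SS<sub>res</sub>/SS<sub>tot</sub>, a minimal sketch with illustrative numbers:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Illustrative Cd contents (mg/kg), not the paper's measurements.
cd_true = [0.10, 0.25, 0.40, 0.85]
cd_mean = [sum(cd_true) / len(cd_true)] * len(cd_true)
# Perfect predictions give R^2 = 1; predicting the mean gives R^2 = 0.
```

Values such as R<sub>P</sub><sup>2</sup> = 0.97 therefore mean the model explains about 97% of the variance of the reference Cd content in the held-out samples.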
(This article belongs to the Section Digital Agriculture)
Show Figures

Figure 1
<p>Schematic diagram of LIBS experimental setup.</p>
Figure 2
<p>Schematic of the XGBoost structure used for modeling.</p>
Figure 3
<p>Schematic of transfer learning from the pre-trained rice husk XGBoost model to the target domain.</p>
Figure 4
<p>Typical LIBS spectrum of #19 rice sample.</p>
Figure 5
<p>Analytical curve of Cd content in brown rice. (<b>a</b>) XGBoost base model; (<b>b</b>) XGBoost-based transfer learning model.</p>
Figure 6
<p>Analytical curve of Cd content in polished rice. (<b>a</b>) XGBoost base model; (<b>b</b>) XGBoost-based transfer learning model.</p>
24 pages, 4650 KiB  
Article
Passenger Flow Prediction for Rail Transit Stations Based on an Improved SSA-LSTM Model
by Xing Zhao, Chenxi Li, Xueting Zou, Xiwang Du and Ahmed Ismail
Mathematics 2024, 12(22), 3556; https://doi.org/10.3390/math12223556 - 14 Nov 2024
Viewed by 328
Abstract
Accurate and timely passenger flow prediction is important for the successful deployment of rail transit intelligent operation. The Sparrow Search Algorithm (SSA) has been applied to the parameter optimization of a Long Short-Term Memory (LSTM) model. To solve the inherent weaknesses of SSA, this [...] Read more.
Accurate and timely passenger flow prediction is important for the successful deployment of rail transit intelligent operation. The Sparrow Search Algorithm (SSA) has been applied to the parameter optimization of a Long Short-Term Memory (LSTM) model. To address the inherent weaknesses of SSA, this paper proposes an improved SSA-LSTM model with optimization strategies including the Tent map and Levy flight, and applies it to the short-term prediction of boarding passenger flow at rail transit stations. For the passenger flow at four rail transit stations in Nanjing, China, the day of the week and rainfall are found to be the influencing factors with the highest correlation. On this basis, we apply the proposed SSA-LSTM and four baseline models to the short-term prediction, and carry out prediction experiments at different time granularities. According to the experimental results, the proposed SSA-LSTM model outperforms the Support Vector Regression (SVR) method, the eXtreme Gradient Boosting (XGBoost) model, the traditional LSTM model, and the improved LSTM model with the Whale Optimization Algorithm (WOA-LSTM) in passenger flow prediction. In addition, for most stations, the prediction accuracy of the proposed SSA-LSTM model is greater at a larger time granularity, but there are still exceptions. Full article
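Of the optimization strategies named in the abstract, the Tent map is a chaotic map commonly used to spread the initial population of a metaheuristic more evenly over the search space than uniform random sampling. A minimal sketch of Tent-map initialization (the seed value, bounds, and population size are illustrative, not the paper's settings):

```python
def tent_map_sequence(x0, n):
    """Classic tent map: x -> 2x if x < 0.5, else 2(1 - x).

    For a non-dyadic seed in (0, 1), short iterates stay in (0, 1).
    """
    xs, x = [], x0
    for _ in range(n):
        x = 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)
        xs.append(x)
    return xs

def tent_init_population(pop_size, lower, upper, x0=0.37):
    """Map chaotic values in (0, 1) onto the search interval [lower, upper]."""
    return [lower + c * (upper - lower) for c in tent_map_sequence(x0, pop_size)]

# Ten candidate sparrow positions in a 1-D search interval.
pop = tent_init_population(10, lower=-5.0, upper=5.0)
```

In the full SSA-LSTM pipeline, each such candidate would encode LSTM hyperparameters (e.g., hidden units, learning rate) rather than a single scalar, and Levy flight would then perturb positions during the search.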
Show Figures

Figure 1
<p>Architecture of an LSTM cell.</p>
Figure 2
<p>Framework of the proposed SSA-LSTM model.</p>
Figure 3
<p>Example of the optimization procedure for Tent-Levy-SSA.</p>
Figure 4
<p>Nanjing rail system and passenger flow heat map (in 2017).</p>
Figure 5
<p>Daily boarding passenger flow at the stations in October 2017.</p>
Figure 6
<p>Temporal patterns of boarding passenger flow on different days at the stations.</p>
Figure 7
<p>Prediction results of boarding passenger flow at stations with 10 min time granularity.</p>
46 pages, 4014 KiB  
Article
Robust Human Activity Recognition for Intelligent Transportation Systems Using Smartphone Sensors: A Position-Independent Approach
by John Benedict Lazaro Bernardo, Attaphongse Taparugssanagorn, Hiroyuki Miyazaki, Bipun Man Pati and Ukesh Thapa
Appl. Sci. 2024, 14(22), 10461; https://doi.org/10.3390/app142210461 - 13 Nov 2024
Viewed by 803
Abstract
This study explores Human Activity Recognition (HAR) using smartphone sensors to address the challenges posed by position-dependent datasets. We propose a position-independent system that leverages data from accelerometers, gyroscopes, linear accelerometers, and gravity sensors collected from smartphones placed either on the chest or [...] Read more.
This study explores Human Activity Recognition (HAR) using smartphone sensors to address the challenges posed by position-dependent datasets. We propose a position-independent system that leverages data from accelerometers, gyroscopes, linear accelerometers, and gravity sensors collected from smartphones placed either on the chest or in the left/right leg pocket. The performance of traditional machine learning algorithms (Decision Trees (DT), K-Nearest Neighbors (KNN), Random Forest (RF), Support Vector Classifier (SVC), and XGBoost) is compared against deep learning models (Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), Temporal Convolutional Networks (TCN), and Transformer models) under two sensor configurations. Our findings highlight that the Temporal Convolutional Network (TCN) model consistently outperforms other models, particularly in the four-sensor non-overlapping configuration, achieving the highest accuracy of 97.70%. Deep learning models such as LSTM, GRU, and Transformer also demonstrate strong performance, showcasing their effectiveness in capturing temporal dependencies in HAR tasks. Traditional machine learning models, including RF and XGBoost, provide reasonable performance but do not match the accuracy of deep learning models. Additionally, incorporating data from linear accelerometers and gravity sensors led to slight improvements over using accelerometer and gyroscope data alone. This research enhances the recognition of passenger behaviors for intelligent transportation systems, contributing to more efficient congestion management and emergency response strategies. Full article
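The two segmentation schemes compared in this study (non-overlapping vs. 50%-overlapping windows) can be sketched as follows; the window length and toy signal below are illustrative, not the paper's actual configuration:

```python
def sliding_windows(samples, window_len, overlap=0.5):
    """Split a sample stream into fixed-length windows.

    overlap=0.0 gives non-overlapping segments;
    overlap=0.5 advances by half a window (50% overlap).
    """
    step = max(1, int(window_len * (1.0 - overlap)))
    return [samples[i:i + window_len]
            for i in range(0, len(samples) - window_len + 1, step)]

signal = list(range(100))  # stand-in for one smartphone sensor axis
no_overlap = sliding_windows(signal, 20, overlap=0.0)    # 5 windows
half_overlap = sliding_windows(signal, 20, overlap=0.5)  # 9 windows
```

Overlapping windows nearly double the number of training segments from the same recording, which is one reason the overlapping configurations are evaluated separately above.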
Show Figures

Figure 1
<p>Running Activity Accelerometer Data: acceleration values along the x, y, and z axes recorded between 80 and 85 s.</p>
Figure 2
<p>Running Activity Gyroscope Data: angular velocity along the x, y, and z axes recorded between 80 and 85 s.</p>
Figure 3
<p>Running Activity Linear Accelerometer Data: linear acceleration values along the x, y, and z axes recorded between 80 and 85 s.</p>
Figure 4
<p>Running Activity Gravity Sensor Data: gravitational acceleration values along the x, y, and z axes recorded between 80 and 85 s.</p>
Figure 5
<p>Methodological framework for assessing machine learning and deep learning techniques.</p>
Figure 6
<p>Architecture of a Gated Recurrent Unit (GRU) Network used in Activity Recognition. Adapted from [<a href="#B37-applsci-14-10461" class="html-bibr">37</a>], showing the flow through the reset and update gates, facilitating efficient sequential data processing.</p>
Figure 7
<p>Architecture of a Long Short-Term Memory (LSTM) Network utilized in Activity Recognition. Adapted from [<a href="#B38-applsci-14-10461" class="html-bibr">38</a>], showing the flow of information through the forget, input, and output gates to manage long-term dependencies in sequential data.</p>
Figure 8
<p>Architecture of a Temporal Convolutional Network (TCN) for Activity Recognition, adapted from [<a href="#B28-applsci-14-10461" class="html-bibr">28</a>]. Dilated causal convolutions capture long-term dependencies, with dropout layers to prevent overfitting.</p>
Figure 9
<p>Architecture of the Transformer Model used in Activity Recognition, illustrating the multi-head attention and feed-forward layers, adapted from [<a href="#B29-applsci-14-10461" class="html-bibr">29</a>]. The positional encoding enables handling of sequential data without recurrence.</p>
Figure 10
<p>Confusion Matrices for models using a two-sensor configuration with non-overlapping data segments: (<b>a</b>) DT, (<b>b</b>) KNN, (<b>c</b>) RF, (<b>d</b>) SVC, (<b>e</b>) XGBoost, (<b>f</b>) GRU, (<b>g</b>) LSTM, (<b>h</b>) TCN, (<b>i</b>) Transformer.</p>
Figure 11
<p>Confusion Matrices for models using a two-sensor configuration with 50% overlapping data segments: (<b>a</b>) DT, (<b>b</b>) KNN, (<b>c</b>) RF, (<b>d</b>) SVC, (<b>e</b>) XGBoost, (<b>f</b>) GRU, (<b>g</b>) LSTM, (<b>h</b>) TCN, (<b>i</b>) Transformer.</p>
Figure 12
<p>Confusion Matrices for models using a four-sensor configuration with non-overlapping data segments: (<b>a</b>) DT, (<b>b</b>) KNN, (<b>c</b>) RF, (<b>d</b>) SVC, (<b>e</b>) XGBoost, (<b>f</b>) GRU, (<b>g</b>) LSTM, (<b>h</b>) TCN, (<b>i</b>) Transformer.</p>
Figure 13
<p>Confusion Matrices for models using a four-sensor configuration with 50% overlapping data segments: (<b>a</b>) DT, (<b>b</b>) KNN, (<b>c</b>) RF, (<b>d</b>) SVC, (<b>e</b>) XGBoost, (<b>f</b>) GRU, (<b>g</b>) LSTM, (<b>h</b>) TCN, (<b>i</b>) Transformer.</p>
23 pages, 10028 KiB  
Article
A New Frontier in Wind Shear Intensity Forecasting: Stacked Temporal Convolutional Networks and Tree-Based Models Framework
by Afaq Khattak, Jianping Zhang, Pak-wai Chan, Feng Chen and Abdulrazak H. Almaliki
Atmosphere 2024, 15(11), 1369; https://doi.org/10.3390/atmos15111369 - 13 Nov 2024
Viewed by 415
Abstract
Wind shear presents a considerable hazard to aviation safety, especially during the critical phases of takeoff and landing. Accurate forecasting of wind shear events is essential to mitigate these risks and improve both flight safety and operational efficiency. This paper introduces a hybrid [...] Read more.
Wind shear presents a considerable hazard to aviation safety, especially during the critical phases of takeoff and landing. Accurate forecasting of wind shear events is essential to mitigate these risks and improve both flight safety and operational efficiency. This paper introduces a hybrid Temporal Convolutional Networks and Tree-Based Models (TCNs-TBMs) framework specifically designed for time series modeling and the prediction of wind shear intensity. The framework utilizes the ability of TCNs to capture intricate temporal patterns and integrates it with the predictive strengths of TBMs, such as Extreme Gradient Boosting (XGBoost), Random Forest (RF), and Categorical Boosting (CatBoost), resulting in robust forecast. To ensure optimal performance, hyperparameter tuning was performed using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), enhancing predictive accuracy. The effectiveness of the framework is validated through comparative analyses with standalone machine learning models such as XGBoost, RF, and CatBoost. The proposed TCN-XGBoost model outperformed these alternatives, achieving a lower Root Mean Squared Error (RMSE: 1.95 for training, 1.97 for testing), Mean Absolute Error (MAE: 1.41 for training, 1.39 for testing), and Mean Absolute Percentage Error (MAPE: 7.90% for training, 7.89% for testing). Furthermore, the uncertainty analysis demonstrated the model’s reliability, with a lower mean uncertainty (7.14 × 10−8) and standard deviation of uncertainty (6.48 × 10−8) compared to other models. These results highlight the potential of the TCNs-TBMs framework to significantly enhance the accuracy of wind shear intensity predictions, emphasizing the value of advanced time series modeling techniques for risk management and decision-making in the aviation industry. This study highlights the framework’s broader applicability to other meteorological forecasting tasks, contributing to aviation safety worldwide. Full article
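The RMSE, MAE, and MAPE figures quoted above follow their standard definitions; a minimal implementation (the observed/predicted values below are illustrative, not the paper's wind shear data):

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Squared Error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent (assumes no zero targets)."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative wind shear intensities (knots).
obs = [10.0, 20.0, 40.0]
pred = [12.0, 18.0, 44.0]
```

RMSE penalizes large misses more heavily than MAE, while MAPE expresses error relative to the observed magnitude, which is why all three are reported together when comparing the TCN-TBM variants.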
(This article belongs to the Section Meteorology)
Show Figures

Figure 1
<p>Illustration of intense wind shear effect on approaching aircraft.</p>
Figure 2
<p>Stacked TCN-TBM framework for the time series prediction of wind shear intensity.</p>
Figure 3
<p>Lantau Island located south of HKIA.</p>
Figure 4
<p>Infrastructure surrounding and inside HKIA.</p>
Figure 5
<p>Diagram depicting a Doppler LiDAR scan along the 3-degree fixed glide path for the western approach to the north runway at HKIA.</p>
Figure 6
<p>Doppler LiDAR velocity plots for different runways at 0145 UTC, 27 August 2017.</p>
Figure 7
<p>Encounter locations of wind shear events in the runway vicinity.</p>
Figure 8
<p>Wind shear intensity over time at HKIA.</p>
Figure 9
<p>Seasonality trends in the wind shear intensity at HKIA.</p>
Figure 10
<p>Wind shear intensity over time, with training data (1 January 2017–31 July 2021) in pink and testing data (1 August 2021–31 December 2021) in purple.</p>
Figure 11
<p>Comparison of actual vs. predicted values for different time series models: (<b>a</b>) TCN-XGBoost based on training data; (<b>b</b>) TCN-XGBoost based on testing data; (<b>c</b>) TCN-RF based on training data; (<b>d</b>) TCN-RF based on testing data; (<b>e</b>) TCN-CatBoost based on training data; (<b>f</b>) TCN-CatBoost based on testing data; (<b>g</b>) XGBoost based on training data; (<b>h</b>) XGBoost based on testing data; (<b>i</b>) CatBoost based on training data; (<b>j</b>) CatBoost based on testing data; (<b>k</b>) RF based on training data; (<b>l</b>) RF based on testing data.</p>
Figure 12
<p>Uncertainty plot for different deep learning and machine learning models: (<b>a</b>) TCN-XGBoost; (<b>b</b>) TCN-RF; (<b>c</b>) TCN-CatBoost; (<b>d</b>) XGBoost; (<b>e</b>) RF; (<b>f</b>) CatBoost.</p>