

Search Results (9,936)

Search Parameters:
Keywords = e-learning

9 pages, 302 KiB  
Article
Prevalence and Risk Factors of Mobile Screen Dependence in Arab Women Screened with Psychological Stress: A Cross-Talk with Demographics and Insomnia
by Omar Gammoh, Abdelrahim Alqudah, Mariam Al-Ameri, Bilal Sayaheen, Mervat Alsous, Deniz Al-Tawalbeh, Mo’en Alnasraween, Batoul Al. Muhaissen, Alaa A. A. Aljabali, Sireen Abdul Rahim Shilbayeh and Esam Qnais
J. Clin. Med. 2025, 14(5), 1463; https://doi.org/10.3390/jcm14051463 - 21 Feb 2025
Abstract
Background/Objectives: The current study aims to investigate the rate and the factors associated with mobile screen dependence as a coping mechanism among women residing in Jordan and screened for stress, with a focus on demographics and insomnia. Methods: This cross-sectional study with predefined inclusion criteria used validated tools to assess stress, anxiety, and insomnia. Results: The data analyzed from 431 women showed that 265 (61.5%) were ≤25 years old, 352 (81.7%) received a university education, and 201 (46.6%) were current students. In addition, 207 (48.0%) reported a dependence on mobile screens for coping, 107 (24.8%) reported severe anxiety, and 180 (41.7%) reported severe insomnia. The multivariable regression analysis revealed that mobile screen dependence—as a personal coping choice—was significantly associated with “students” (OR = 1.75, 95% CI = 1.19–2.57, p = 0.004) and “severe insomnia” (OR = 1.07, 95% CI = 1.07–2.32, p = 0.02). Conclusions: We report that a high rate of mobile dependence is associated with students and insomnia. Prompt action should be taken to raise awareness regarding the proper coping mechanisms in this population.
(This article belongs to the Special Issue Effect of Long-Term Insomnia on Mental Health)
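The odds ratios above come from a multivariable logistic regression: each fitted coefficient is exponentiated, and the 95% CI comes from exponentiating the coefficient ± 1.96 standard errors. A minimal sketch, using a hypothetical coefficient and standard error chosen to land near the reported "students" result (not the study's actual fitted values):

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a (1 - alpha) confidence interval."""
    or_ = math.exp(beta)
    lo = math.exp(beta - z * se)
    hi = math.exp(beta + z * se)
    return or_, lo, hi

# Hypothetical values: beta = 0.56, SE = 0.196
or_, lo, hi = odds_ratio_ci(0.56, 0.196)
print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")  # OR = 1.75, 95% CI = 1.19-2.57
```

Because the CI is computed on the log-odds scale and then exponentiated, it is asymmetric around the OR, as in the values quoted in the abstract.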
21 pages, 2658 KiB  
Article
Effect of a Plant-Based Nootropic Supplement on Perceptual Decision-Making and Brain Network Interdependencies: A Randomised, Double-Blinded, and Placebo-Controlled Study
by David O’Reilly, Joshua Bolam, Ioannis Delis and Andrea Utley
Brain Sci. 2025, 15(3), 226; https://doi.org/10.3390/brainsci15030226 - 21 Feb 2025
Abstract
Background: Natural nootropic compounds are evidenced to restore brain function in clinical and older populations and are purported to enhance cognitive abilities in healthy cohorts. This study aimed to provide neurocomputational insight into the discrepancies between the remarkable self-reports and growing interest in nootropics among healthy adults and the inconclusive performance-enhancing effects found in the literature. Methods: Towards this end, we devised a randomised, double-blinded, and placebo-controlled study where participants performed a visual categorisation task prior to and following 60 days of supplementation with a plant-based nootropic, while electroencephalographic (EEG) signals were concurrently captured. Results: We found that although no improvements in choice accuracy or reaction times were observed, the application of multivariate information-theoretic measures to the EEG source space showed broadband increases in similar and complementary interdependencies across brain networks of various spatial scales. These changes not only resulted in localised increases in the redundancy among brain network interactions but also more significant and widespread increases in synergy, especially within the delta frequency band. Conclusions: Our findings suggest that natural nootropics can improve overall brain network cohesion and energetic efficiency, computationally demonstrating the beneficial effects of natural nootropics on brain health. However, these effects could not be related to enhanced rapid perceptual decision-making performance in a healthy adult sample. Future research investigating these specific compounds as cognitive enhancers in healthy populations should focus on complex cognition in deliberative tasks (e.g., creativity, learning) and over longer supplementation durations. Clinical trials registration number: NCT06689644.
(This article belongs to the Section Neurotechnology and Neuroimaging)
17 pages, 2876 KiB  
Article
Investigation of the Wheat Production Dynamics Under Climate Change via Machine Learning Models
by Ayca Nur Sahin Demirel
Sustainability 2025, 17(5), 1832; https://doi.org/10.3390/su17051832 - 21 Feb 2025
Abstract
This study employs two distinct machine learning (ML) methodologies to investigate the impact of 12 key climatic variables on wheat production efficiency, a crucial component of the global and Turkish agricultural economy. Neural network (NN) and eXtreme Gradient Boosting (XGBoost) algorithms are utilised to model wheat production performance using climate variable data, including greenhouse gases, from 1990 to 2024. The models incorporate a total of 21 independent variables, comprising 9 climatic variables (18 in total, counting daytime and nighttime values separately) and 3 distinct greenhouse gas variables. Wheat production efficiency analyses indicate that between 2005 and 2024, Turkey’s wheat cultivation area decreased, while production efficiency increased. ML analyses reveal that greenhouse gases are the most influential variables in wheat production. XGBoost identified four variables associated with wheat production, whereas the neural network determined that five variables affect wheat production. While the influence of greenhouse gases was observed in both ML models, it was concluded that nighttime humidity, daytime 10 m v-wind, and daytime 2 m temperature may be additional climatic factors that will impact wheat production in the future. This study elucidates the complex relationship between climate change and wheat production in Turkey. The findings emphasise the potential of the dual influence of climatic factors and greenhouse gases for predicting wheat yields and for informing agricultural producers about such next-generation practices.
Figure 1. Machine learning diagrams for climate change–wheat production yield: (a) XGBoost, (b) neural network. Both diagrams show the stages of the machine learning process in RStudio software after the dataset upload.
Figure 2. Turkey’s wheat production performances. The peak of each column indicates the area of wheat cultivation (yellow), production amount (pink), and wheat yield (turquoise) in the indicated year.
Figure 3. The changes in climate factors. The y-axes use the climatic-variable (CV) abbreviations: CV-1, 10 m u-wind (m s⁻¹); CV-2, 10 m v-wind (m s⁻¹); CV-3, 2 m temperature (K); CV-4, soil moisture (%); CV-5, total precipitation (m); CV-6, humidity (kgs m⁻²); CV-7, surface pressure (Pa); CV-8, solar radiation (Ws m⁻²); CV-9, thermal radiation (Ws m⁻²). The three graphs on the bottom line show the changes in greenhouse gases over the given years.
Figure 4. 1:1 graphs for model success: (a) XGBoost, (b) neural network. Each compares prediction values with observation values; black dots indicate prediction–observation pairs, and the red line is the 1:1 line indicating a perfect prediction for the machine learning model.
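XGBoost, the better-performing of the two models above, is a gradient-boosting method: an ensemble built by repeatedly fitting small trees to the current residuals. A minimal sketch of that core idea with depth-1 trees (stumps) on synthetic data; this is a toy illustration, not the paper's model, data, or the XGBoost library itself:

```python
import numpy as np

def fit_stump(X, r):
    """Find the (feature, threshold) split minimizing squared error
    against residuals r; return a predict function for that stump."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or (~left).all():
                continue  # degenerate split
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, lv, rv)
    _, j, t, lv, rv = best
    return lambda X: np.where(X[:, j] <= t, lv, rv)

def gradient_boost(X, y, n_rounds=50, lr=0.3):
    """Boost squared-error residuals with stumps; return final predictions."""
    pred = np.full(len(y), y.mean())
    for _ in range(n_rounds):
        stump = fit_stump(X, y - pred)
        pred = pred + lr * stump(X)
    return pred

# Synthetic data: the target depends on the first feature only
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 3))
y = (X[:, 0] > 0).astype(float) * 2.0
pred = gradient_boost(X, y)
print("training MSE:", ((pred - y) ** 2).mean())
```

The real XGBoost adds regularized deeper trees, second-order gradients, and column subsampling, but the residual-fitting loop above is the underlying mechanism.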
20 pages, 5144 KiB  
Article
Multi-Scale Channel Mixing Convolutional Network and Enhanced Residual Shrinkage Network for Rolling Bearing Fault Diagnosis
by Xiaoxu Li, Jiaming Chen, Jianqiang Wang, Jixuan Wang, Jiahao Wang, Xiaotao Li and Yingnan Kan
Electronics 2025, 14(5), 855; https://doi.org/10.3390/electronics14050855 - 21 Feb 2025
Abstract
Rolling bearing vibration signals in rotating machinery exhibit complex nonlinear and multi-scale features with redundant information interference. To address these challenges, this paper presents a multi-scale channel mixing convolutional network (MSCMN) and an enhanced deep residual shrinkage network (eDRSN) for improved feature learning and fault diagnosis accuracy in industrial settings. The MSCMN, applied in the initial and intermediate network layers, extracts multi-scale features from vibration signals, providing detailed information. By incorporating 1 × 1 convolutional blocks, the MSCMN mixes and reduces the feature dimensions, generating attention weights to suppress the interference from redundant information. Due to the high noise and nonlinear nature of industrial vibration signals, traditional linear layer representation is often inadequate. Thus, we propose an eDRSN with a Kolmogorov–Arnold Network–linear layer (KANLinear), which combines linear transformations with B-spline interpolation to capture both linear and nonlinear features, thereby enhancing threshold learning. Experiments on datasets from Case Western Reserve University and our laboratory validated the efficacy of the MSCMN-eDRSN model, which demonstrated improved diagnostic accuracy and robustness under noisy, real-world conditions.
(This article belongs to the Section Artificial Intelligence)
Figure 1. Structure of MSCMN.
Figure 2. Structure of eRSBU.
Figure 3. Framework of the proposed MSCMN-eDRSN.
Figure 4. CWRU bearing test stand.
Figure 5. Diagnostic accuracy of different models under (a) Gaussian noise and (b) Laplace noise.
Figure 6. Confusion matrix of the proposed model in Gaussian noise environments: (a) SNR = −8 dB; (b) SNR = 5 dB.
Figure 7. Confusion matrix of the proposed model in Laplace noise environments: (a) SNR = −8 dB; (b) SNR = 5 dB.
Figure 8. Laboratory bearing test stand.
Figure 9. Test accuracy of different models against Gaussian noise with SNR = −8 dB.
Figure 10. Test accuracy of different models against Laplace noise with SNR = −8 dB.
Figure 11. Confusion matrices for different models under Gaussian noise with SNR = −8 dB: (a) MSCNN; (b) DRSN-CS; (c) DRSN–Transformer; (d) proposed model.
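The deep residual shrinkage network named in the abstract above is built around soft thresholding, which zeroes out small, noise-like feature activations while shrinking the larger ones. A minimal sketch of that operation (in the eDRSN the threshold is learned per channel; here it is fixed by hand):

```python
import numpy as np

def soft_threshold(x, tau):
    """Shrinkage operator: sign(x) * max(|x| - tau, 0).
    Activations with magnitude below tau are zeroed (treated as noise);
    the rest are shrunk toward zero by tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.array([-2.0, -0.3, 0.1, 0.8, 3.0])
print(soft_threshold(x, 0.5))  # [-1.5  0.   0.   0.3  2.5]
```

In a residual shrinkage block this operator sits inside the residual branch, so the network learns how aggressively to denoise each channel rather than using a hand-tuned tau as above.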
13 pages, 1752 KiB  
Article
The Role of Baseline Total Kidney Volume Growth Rate in Predicting Tolvaptan Efficacy for ADPKD Patients: A Feasibility Study
by Hreedi Dev, Zhongxiu Hu, Jon D. Blumenfeld, Arman Sharbatdaran, Yelynn Kim, Chenglin Zhu, Daniil Shimonov, James M. Chevalier, Stephanie Donahue, Alan Wu, Arindam RoyChoudhury, Xinzi He and Martin R. Prince
J. Clin. Med. 2025, 14(5), 1449; https://doi.org/10.3390/jcm14051449 - 21 Feb 2025
Abstract
Background/Objectives: Although tolvaptan efficacy in ADPKD has been demonstrated in randomized clinical trials, there is no definitive method for assessing its efficacy in the individual patient in the clinical setting. In this exploratory feasibility study, we report a method to quantify the change in total kidney volume (TKV) growth rate to retrospectively evaluate tolvaptan efficacy for individual patients. Treatment-related changes in estimated glomerular filtration rate (eGFR) are also assessed. Methods: MRI scans covering at least 1 year prior to and during treatment with tolvaptan were performed, with deep learning facilitated kidney segmentation and fitting multiple imaging timepoints to exponential growth in 32 ADPKD patients. Clustering analysis differentiated tolvaptan treatment “responders” and “non-responders” based upon the magnitude of change in TKV growth rate. Differences in rate of eGFR decline, urine osmolality, and other parameters were compared between responders and non-responders. Results: Eighteen (56%) tolvaptan responders (mean age 42 ± 8 years) were identified by k-means clustering, with an absolute reduction in annual TKV growth rate of >2% (mean = −5.1% ± 2.5% per year). Thirteen (44%) non-responders were identified, with <1% absolute reduction in annual TKV growth rate (mean = +2.4% ± 2.7% per year) during tolvaptan treatment. Compared to non-responders, tolvaptan responders had significantly higher mean TKV growth rates prior to tolvaptan treatment (7.1% ± 3.6% per year vs. 3.7% ± 2.4% per year; p = 0.003) and higher median pretreatment spot urine osmolality (Uosm, 393 mOsm/kg vs. 194 mOsm/kg, p = 0.03), confirmed by multivariate analysis. Mean annual rate of eGFR decline was less in responders than in non-responders (−0.25 ± 0.04, CI: [−0.27, −0.23] mL/min/1.73 m² per year vs. −0.40 ± 0.06, CI: [−0.43, −0.37] mL/min/1.73 m² per year, p = 0.036). Conclusions: In this feasibility study designed to assess predictors of tolvaptan treatment efficacy in individual patients with ADPKD, we found that high pretreatment levels of annual TKV growth rate and higher pretreatment spot urine osmolality were associated with a responder phenotype.
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
Figure 1. Patient flowchart.
Figure 2. Clustering analysis of the absolute change in TKV growth rate before and during tolvaptan treatment.
Figure 3. Change in TKV growth rate from before to during tolvaptan for the entire cohort (A, grey), responders (B, blue), and non-responders (C, red). All responders (n = 18) had at least a 2% absolute reduction in TKV growth rate, while all non-responders (n = 14) had less than a 1% absolute reduction.
Figure 4. Change in TKV growth rate during tolvaptan treatment in a responder (A, 3.4% to −5.6% absolute change per year) and a non-responder (B, 3.9% to 9.2%). The vertical grey dashed line marks tolvaptan initiation. The pre-tolvaptan TKV growth rate (black dashed line) and during-tolvaptan TKV growth rate (blue dashed line) were calculated by two-parameter least-squares fitting of TKV measured on MRIs acquired before and after tolvaptan initiation, respectively. Blue, orange, green, and red lines correspond to Mayo Imaging Classification trajectories, with annual TKV growth rates in the legend.
Figure 5. Trajectory of estimated glomerular filtration rate (eGFR) in 32 subjects. ΔeGFR measurements of responders (blue dots) and non-responders (red dots) every 4 months from tolvaptan initiation to month 48, overlaid with fitted trendlines for all patients (black), responders (blue), and non-responders (red). The rate of eGFR decline is −0.25 ± 0.04 (CI: −0.27 to −0.23) mL/min/1.73 m² per year in responders and −0.40 ± 0.06 (CI: −0.43 to −0.37) mL/min/1.73 m² per year in non-responders, 60% greater than in responders.
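The study above fits multiple imaging timepoints to exponential growth to obtain an annual TKV growth rate per patient. A minimal sketch of such a fit, using a least-squares line through log(TKV) on synthetic timepoints (not patient data; the study's two-parameter fit may differ in detail):

```python
import numpy as np

def annual_growth_rate(years, tkv_ml):
    """Fit TKV(t) = TKV0 * exp(k * t) by least squares on log(TKV);
    return the annual growth rate as a percentage, 100 * (exp(k) - 1)."""
    k, _log_tkv0 = np.polyfit(years, np.log(tkv_ml), 1)
    return 100.0 * (np.exp(k) - 1.0)

# Synthetic example: a kidney growing ~5% per year from 1200 mL
years = np.array([0.0, 1.0, 2.1, 3.0])
tkv = 1200.0 * 1.05 ** years
print(f"{annual_growth_rate(years, tkv):.1f}% per year")  # 5.0% per year
```

Comparing this rate computed on pre-treatment scans against the rate on during-treatment scans gives the "absolute change in TKV growth rate" used to cluster responders and non-responders.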
22 pages, 2622 KiB  
Article
Machine Learning with Evolutionary Parameter Tuning for Singing Registers Classification
by Tales Boratto, Gabriel de Oliveira Costa, Alexsandro Meireles, Anna Klara Sá Teles Rocha Alves, Camila M. Saporetti, Matteo Bodini, Alexandre Cury and Leonardo Goliatt
Signals 2025, 6(1), 9; https://doi.org/10.3390/signals6010009 - 21 Feb 2025
Abstract
Behind human voice production, a complex biological mechanism generates and modulates sound. Recent research has explored machine-learning (ML) techniques to analyze singing-voice characteristics. However, the classification efficiency reported in such research works suggests the possibility of improvement. In addition, there is also scope for further improvement through the application of still under-utilized optimization techniques. Thus, the present article proposes a novel approach that leverages the Differential Evolution (DE) algorithm to optimize hyperparameters within three selected ML models, with the aim of classifying singing-voice registers (i.e., chest, mixed, and head registers). To develop the present study, a dataset of 350 audio files encompassing the three aforementioned registers was constructed. Then, the TSFEL Python library was employed to extract 14 temporal features from the audio signals for subsequent classification by the employed ML models. The obtained findings demonstrated that the Extreme Gradient Boosting model, optimized with DE, achieved an average classification accuracy of 97.60%, thus indicating the efficacy of the proposed approach for singing-voice register classification.
Figure 1. The source-filter model of speech production: the glottis as the excitation source and the vocal tract (nasal and oral cavities) as the filter, with temporal and spectral representations of the source, vocal tract, and resulting speech signal. Reprinted from Almaghrabi et al. [2], Copyright (2023), with permission from Elsevier.
Figure 2. A typical tensile stress–strain curve for the vocal fold along the anterior–posterior direction, measured during loading and unloading at a frequency of 1 Hz. The slope of the tangent line (dashed) to the stress–strain curve indicates the tangent stiffness. Due to the viscous nature of the vocal folds, the stress is generally higher during loading than unloading. The curve was derived by averaging data over 30 cycles following a 10-cycle preconditioning. Reprinted with permission from Zhang [3]. Copyright 2016, Acoustical Society of America.
Figure 3. Muscles involved in voice production and control: (a) superior view of the vocal folds, including the cartilaginous framework and laryngeal muscles; (b) medial view of the cricoarytenoid joint, formed between the arytenoid and cricoid cartilages; (c) posterolateral view of the cricothyroid joint, formed by the thyroid and cricoid cartilages. The arrows in (b,c) show the possible directions of movement of the arytenoid and cricoid cartilages due to activation of the LCA and CT muscles, respectively. Reprinted with permission from Zhang [3]. Copyright 2016, Acoustical Society of America.
Figure 4. Architecture of an MLP with two hidden layers of 5 neurons each; the activation function is φ. The diagram shows the flow of information from the input layer through the two hidden layers to the output layer, highlighting the fully connected structure of the network.
Figure 5. Flowchart of the methodological process used.
Figure 6. Average confusion matrices for each evaluated ML model: (a) SVC, (b) MLP, (c) XGB. The classes refer to chest, mixed, and head voices, respectively.
Figure 7. Distribution of internal parameters for the SVC model: (a) parameter C, (b) parameter γ, (c) kernel type.
Figure 8. Distribution of internal parameters for the MLP model: (a) number of neurons, (b) parameter α, (c) number of layers, (d) parameter φ, (e) solver type.
Figure 9. Distribution of internal parameters for the XGB model: (a) parameter α, (b) parameter λ, (c) learning rate, (d) number of estimators.
Figure 10. Feature-importance (SHAP) scores for classification of the test datasets for each ML model: (a) SVC, (b) MLP, (c) XGB.
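Differential Evolution, used above to tune the ML hyperparameters, maintains a population of candidate parameter vectors and improves it through mutation (differences of random members) and crossover. A minimal rand/1/bin sketch minimizing a toy quadratic as a stand-in for a cross-validation-error surface (not the study's implementation, which optimizes SVC/MLP/XGB hyperparameters):

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """Minimal rand/1/bin Differential Evolution minimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct members other than i
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one mutant gene
            cross = rng.random(len(bounds)) < CR
            cross[rng.integers(len(bounds))] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection
            ft = f(trial)
            if ft < fit[i]:
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

# Toy objective with its minimum at (3, -2)
obj = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 2.0) ** 2
best_x, best_f = differential_evolution(obj, [(-10, 10), (-10, 10)])
print(best_x, best_f)
```

In hyperparameter tuning, `f` would instead train a model with the candidate hyperparameters and return its cross-validation error, which is exactly why a population-based, derivative-free method like DE fits the problem.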
20 pages, 3066 KiB  
Article
GeNetFormer: Transformer-Based Framework for Gene Expression Prediction in Breast Cancer
by Oumeima Thaalbi and Moulay A. Akhloufi
AI 2025, 6(3), 43; https://doi.org/10.3390/ai6030043 - 21 Feb 2025
Abstract
Background: Histopathological images are often used to diagnose breast cancer and have shown high accuracy in classifying cancer subtypes. Prediction of gene expression from whole-slide images and spatial transcriptomics data is important for cancer treatment in general and breast cancer in particular. This topic has been a challenge in numerous studies. Method: In this study, we present a deep learning framework called GeNetFormer. We evaluated eight advanced transformer models, including EfficientFormer, FasterViT, BEiT v2, and Swin Transformer v2, and tested their performance in predicting gene expression using the STNet dataset. This dataset contains 68 H&E-stained histology images and transcriptomics data from different types of breast cancer. We followed a detailed process to prepare the data, including filtering genes and spots, normalizing stain colors, and creating smaller image patches for training. The models were trained to predict the expression of 250 genes using different image sizes and loss functions. GeNetFormer achieved the best performance using the MSELoss function and a resolution of 256 × 256 while integrating EfficientFormer. Results: It predicted nine out of the top ten genes with a higher Pearson Correlation Coefficient (PCC) than the retrained ST-Net method. For the cancer biomarker genes DDX5 and XBP1, the PCC values were 0.7450 and 0.7203, respectively, versus 0.6713 and 0.7320 for ST-Net. Our method also gave better predictions for other genes such as FASN (0.7018 vs. 0.6968) and ERBB2 (0.6241 vs. 0.6211). Conclusions: Our results show that GeNetFormer improves on models such as ST-Net and demonstrate how transformer architectures can analyze spatial transcriptomics data to advance cancer research.
(This article belongs to the Section Medical & Healthcare AI)
Figure 1. STNet dataset images: (a) original whole-slide images; (b) stain-normalized images using the Vahadane method [15].
Figure 2. Overview of the GeNetFormer framework (integrating EfficientFormer, the best-performing model) for predicting gene expression from WSIs. (A) Data preparation: WSIs were stain-normalized and patches were extracted. (B) Model training: patches were fed into the integrated network comprising multiple stages of MB4D and MB3D blocks, with intermediate layers (i)–(iv) representing the hierarchical progression of feature extraction, ending with a fully connected (FC) layer producing 250 outputs corresponding to gene expressions. (C) Model evaluation: the model’s predictions of an individual gene were evaluated using WSI samples: (a) test samples; (b) binary designation of tumor (black) and normal (white) regions; (c) ground truth; (d) model predictions of an individual gene; (e) overlay of predictions on the test sample.
Figure 3. Visualization of individual gene expression predictions by the GeNetFormer framework: (a) original test sample; (b) binary labels of tumor (black) and normal (white) regions; (c) ground truth; (d) model predictions for DDX5 (PCC = 0.7450), XBP1 (PCC = 0.7203), and FASN (PCC = 0.7018); (e) overlay of the predictions on the original test sample.
Figure 4. PCC value distribution for the 250 genes predicted by GeNetFormer.
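The Pearson Correlation Coefficient (PCC) used above to score each gene compares predicted and observed expression across spots. A minimal sketch of the computation on synthetic vectors (not the STNet data):

```python
import numpy as np

def pearson_corr(pred, truth):
    """Pearson correlation between predicted and observed values:
    covariance of the centered vectors over the product of their norms."""
    p = pred - pred.mean()
    t = truth - truth.mean()
    return float((p @ t) / np.sqrt((p @ p) * (t @ t)))

truth = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
pred = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
print(round(pearson_corr(pred, truth), 4))
```

Because PCC is invariant to the scale and offset of the predictions, it rewards models that recover the spatial pattern of expression even when absolute counts are off, which is why it is a common metric in this task.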
12 pages, 3796 KiB  
Commentary
Student Use of Digital Patient Cases May Improve Performance in a Pharmacy Cardiovascular Therapeutics Course
by Paul J. Wong, Noam Morningstar-Kywi, Rory E. Kim and Tien M. H. Ng
Pharmacy 2025, 13(2), 31; https://doi.org/10.3390/pharmacy13020031 - 21 Feb 2025
Abstract
The use of digital patient cases (eCases) is associated with student-perceived improvements in learning. However, novel instructional tools must demonstrate measurable student benefits to justify ongoing use. This research sought to identify the impact of eCases on student performance in a PharmD cardiovascular course. Optional eCases for hypertension (HTN), venous thromboembolism (VTE), and acute heart failure (AHF) were incorporated into the course. Performance on the exams and the course overall was compared between student cohorts based on eCase use. Aggregated data were analyzed by year. Additional analysis was performed for scores on exam items related to eCase content. From 2020 to 2022, a total of 322/562 students (57.3%) used any eCase. While there were no differences in 2020 and 2021, eCase users in 2022 had significantly higher course (83.6% vs. 79.7%, p = 0.002) and final exam scores (75.0% vs. 67.7%, p < 0.001) compared with non-users. VTE eCase users had higher scores on VTE exam items compared to non-users, but only in 2021. AHF eCase users received higher scores on AHF exam items compared to non-users in 2021 and 2022. Among certain cohorts, student eCase use was associated with improved performance, and the use of certain eCases showed differences in content-specific performance. The eCase is a promising instructional tool that warrants further investigation to determine the best design elements for maximal effectiveness.
(This article belongs to the Section Pharmacy Education and Student/Practitioner Training)
Figure 1. Basic diagrammatic representation of student progression through an eCase.
Figure A1. The initial patient scenario provided to the student.
Figure A2. The example patient’s profile.
Figure A3. The initial decision point for adjusting the patient’s therapeutic regimen.
Figure A4. A subsequent decision point for adjusting the patient’s therapeutic regimen.
Figure A5. Feedback provided to the student after successful and unsuccessful completion of the eCase.
24 pages, 9588 KiB  
Article
Evapotranspiration Partitioning for Croplands Based on Eddy Covariance Measurements and Machine Learning Models
by Jie Zhang, Shanshan Yang, Jingwen Wang, Ruiyun Zeng, Sha Zhang, Yun Bai and Jiahua Zhang
Agronomy 2025, 15(3), 512; https://doi.org/10.3390/agronomy15030512 - 20 Feb 2025
Abstract
Accurately partitioning evapotranspiration (ET) of cropland into productive plant transpiration (T) and non-productive soil evaporation (E) is important for improving crop water use efficiency. Many methods, including machine learning methods, have been developed for ET partitioning. However, the applicability of machine learning models in cropland ET partitioning with diverse crop rotations is not clear. In this study, machine learning models are used to predict E, and T is obtained by calculating the difference between ET and E, leading to the derivation of the ratio of transpiration to evapotranspiration (T/ET). We evaluated six machine learning models (i.e., artificial neural networks (ANN), extremely randomized trees (ExtraTrees), gradient boosting decision tree (GBDT), light gradient boosting machine (LightGBM), random forest (RF), and extreme gradient boosting (XGBoost)) on partitioning ET at 16 cropland flux sites during the period from 2000 to 2020. The evaluation results showed that the XGBoost model had the best performance (R = 0.88, RMSE = 6.87 W/m2, NSE = 0.77, and MAE = 3.41 W/m2) when considering the meteorological data, ecosystem sensible heat flux, ecosystem respiration, soil water content, and remote sensing vegetation indices as input variables. Due to the unavailability of observed E or T data at the 16 cropland sites, we used three other widely used ET partitioning methods to indirectly validate the accuracy of our ET partitioning results based on XGBoost. The results showed that our T estimation results were highly consistent with their T estimation results (R = 0.83–0.91). Moreover, based on the XGBoost model and the three other ET partitioning methods, we estimated the ratio of transpiration to evapotranspiration (T/ET) for different crops. On average, maize had the highest T/ET of 0.619 ± 0.119, followed by soybean (0.618 ± 0.085), winter wheat (0.614 ± 0.08), and sugar beet (0.611 ± 0.065). 
Lower T/ET was found for paddy rice (0.505 ± 0.055), winter barley (0.590 ± 0.058), potato (0.540 ± 0.088), and rapeseed (0.522 ± 0.107). These results suggest that machine learning models are straightforward to apply to cropland T/ET estimation under different crop rotations and reveal clear differences in water use among crops, which is crucial for the sustainability of water resources and for improving cropland water use efficiency. Full article
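The partitioning idea described above, training a model to predict soil evaporation E from environmental drivers and then obtaining transpiration by difference (T = ET − E), can be sketched as follows. This is a hedged illustration on synthetic data: a scikit-learn `RandomForestRegressor` stands in for the paper's XGBoost model, and the four input columns are invented stand-ins for the real drivers (meteorology, soil water content, respiration, vegetation indices):

```python
# Hedged sketch of indirect ET partitioning: predict E, then T = ET - E.
# Synthetic data only; not the flux-tower measurements used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500
X = rng.uniform(0, 1, size=(n, 4))   # stand-ins for radiation, SWC, respiration, NDVI
E = 5 + 20 * X[:, 1] + 5 * X[:, 0] + rng.normal(0, 0.5, n)   # synthetic soil evaporation
ET = E + 10 + 40 * X[:, 3] + rng.normal(0, 0.5, n)           # synthetic total ET

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:400], E[:400])            # learn E from the drivers
E_pred = model.predict(X[400:])
T_pred = ET[400:] - E_pred             # transpiration by difference: T = ET - E
t_over_et = T_pred / ET[400:]          # the T/ET ratio analyzed per crop in the paper
print(f"mean predicted T/ET = {t_over_et.mean():.2f}")
```

In the paper this ratio is then aggregated per crop and cross-checked against independent partitioning methods (Z16, N18, Y22).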
(This article belongs to the Special Issue Advanced Machine Learning in Agriculture)
Figure 1: The spatial distribution of the 16 eddy covariance flux sites of cropland used in this study. (b,c) are detailed views of the two black boxes in (a). The base map is the world map from the Köppen–Geiger Climate Classification (www.gloh2o.org/koppen, accessed on 10 May 2024).
Figure 2: (a) R, (b) RMSE, (c) NSE, and (d) MAE of ANN, ExtraTrees, GBDT, LightGBM, RF, and XGBoost across eight experiments.
Figure 3: Performance of ANN, ExtraTrees, GBDT, LightGBM, RF, and XGBoost in the prediction of soil evaporation in the A8 experiment for all cropland sites. The solid black line represents the 1:1 line, and the dashed red line is the fitted line.
Figure 4: Performance of the XGBoost model when meteorological features, sensible heat flux, ecosystem respiration, soil water content, and vegetation indices (A8) are input at each site. The solid black line represents the 1:1 line, and the dashed red line is the fitted line.
Figure 5: Comparison of estimated daily T of the X24 method with three other methods: (a) T_X24 compared to T_Z16, (b) T_X24 compared to T_N18, (c) T_X24 compared to T_Y22, and (d) T_X24 compared to T_Mean. T_Mean is the mean of the T estimated by the other three methods (Z16, N18, and Y22).
Figure 6: Comparison of the estimated daily T using the Z16, N18, Y22, and X24 methods at (a) DE-Kli, maize planted from 23 April to 2 October 2007; (b) DE-Rus, sugar beet planted from 27 March to 1 October 2014; (c) US-Twt, paddy rice planted from 2 April to 20 September 2013; and (d) FR-Gri, winter wheat planted before 15 July 2006, and winter barley planted after 4 October 2006.
Figure 7: The multi-year mean T/ET for different crops based on four ET partitioning methods (Z16, N18, Y22, and X24). Error bars represent ±1 standard error.
Figure 8: Scatter plots of predicted and observed soil evaporation using four different depths of SWC: (a) TIME + SWC1, (b) TIME + SWC2, (c) TIME + SWC3, (d) TIME + SWC4, (e) TIME + SWC1 + SWC2, (f) TIME + SWC1 + SWC2 + SWC3, and (g) TIME + SWC1 + SWC2 + SWC3 + SWC4. The solid black line represents the 1:1 line, and the dashed red line is the fitted line.
Figure 9: SHAP values of the model input variables in the prediction of soil evaporation. (a) The mean absolute SHAP value across 16 sites for each input variable, with a dot representing a flux site; (b) the SHAP summary plot of the input variables from all sites, with a dot representing a sample. The SHAP contribution (%) in (a) is calculated as the ratio of the SHAP value of each variable to the sum of all absolute SHAP values.
13 pages, 5199 KiB  
Article
Deep Learning-Based Mapping of Textile Stretch Sensors to Surface Electromyography Signals: Multilayer Perceptron, Convolutional Neural Network, and Residual Network Models
by Gyubin Lee, Sangun Kim, Ji-seon Kim and Jooyong Kim
Processes 2025, 13(3), 601; https://doi.org/10.3390/pr13030601 - 20 Feb 2025
Abstract
This study evaluates the mapping accuracy between textile stretch sensor data and surface electromyography (sEMG) signals using Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), and Residual Network (ResNet) models. Data from the forearm, biceps brachii, and triceps brachii were analyzed using Root Mean Square Error (RMSE) and R2 as performance metrics. ResNet achieved the lowest RMSE (e.g., 0.1285 for biceps brachii) and highest R2 (0.8372), outperforming CNN (RMSE: 0.1455; R2: 0.7639) and MLP (RMSE: 0.1789; R2: 0.6722). The residual learning framework of ResNet effectively handles nonlinear patterns and noise, enabling more accurate predictions even for low-variability datasets like the triceps brachii. CNN showed moderate improvement over MLP by learning temporal features but struggled with low-variability datasets. MLP, as the baseline model, demonstrated the highest RMSE and lowest R2, highlighting its limitations in capturing complex relationships. These results suggest the potential reliability of ResNet for mapping textile stretch sensor data to sEMG signals, showing promising performance within the scope of this study. Future research could explore broader applications across different sensor configurations and activities to further validate these findings. Full article
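The comparison above scores each model with RMSE and R². A minimal numpy sketch of both metrics on hypothetical predicted vs. measured values (not the study's sensor or sEMG data):

```python
# Hedged sketch of the two performance metrics used to compare MLP, CNN, and ResNet.
import numpy as np

def rmse(y_true, y_pred):
    # root mean square error between measured and predicted signals
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    # coefficient of determination: 1 - (residual SS / total SS)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

y_true = np.array([0.10, 0.25, 0.40, 0.55, 0.70])  # hypothetical normalized sEMG
y_pred = np.array([0.12, 0.22, 0.43, 0.50, 0.74])  # hypothetical model output
print(f"RMSE = {rmse(y_true, y_pred):.4f}, R^2 = {r2(y_true, y_pred):.4f}")
```

Lower RMSE and higher R² together indicate the closer fit that the paper reports for ResNet over CNN and MLP.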
(This article belongs to the Special Issue Research on Intelligent Fault Diagnosis Based on Neural Network)
Figure 1: Dip-coating process for fabricating textile stretch sensors.
Figure 2: Placement of textile stretch sensors: (a) inner side and (b) outer side of the arm sleeve.
Figure 3: Response of textile stretch sensors during arm movements: (a) extension and relaxation; (b) flexion and stretching.
Figure 4: MLP architecture with input, hidden, and output layers.
Figure 5: 1D-CNN architecture with input, convolutional, and regression layers.
Figure 6: ResNet architecture with input, convolutional, shortcut, and regression layers.
Figure 7: Data collected from textile stretch sensors for each muscle group.
Figure 8: Data collected from sEMG for each muscle group.
Figure 9: Training and validation RMSE: (a) MLP, (b) CNN, and (c) ResNet.
Figure 10: Comparison of predicted and target values for biceps brachii: (a) MLP, (b) CNN, and (c) ResNet.
Figure 11: Comparison of predicted and target values for forearm muscles: (a) MLP, (b) CNN, and (c) ResNet.
Figure 12: Comparison of predicted and target values for triceps brachii: (a) MLP, (b) CNN, and (c) ResNet.
17 pages, 4798 KiB  
Article
Deep Learning-Based Algorithm for Road Defect Detection
by Shaoxiang Li and Dexiang Zhang
Sensors 2025, 25(5), 1287; https://doi.org/10.3390/s25051287 - 20 Feb 2025
Abstract
With the increasing demand for road defect detection, existing deep learning methods have made significant progress in terms of accuracy and speed. However, challenges remain, such as insufficient detection precision for road defect recognition and issues of missed or false detections in complex backgrounds. These issues reduce detection reliability and hinder real-world deployment. To address these challenges, this paper proposes an improved YOLOv8-based model, RepGD-YOLOV8W. First, it replaces the C2f module in the GD mechanism with the improved C2f module based on RepViTBlock to construct the Rep-GD module. This improvement not only maintains high detection accuracy but also significantly enhances computational efficiency. Subsequently, the Rep-GD module was used to replace the traditional neck part of the model, thereby improving multi-scale feature fusion, particularly for detecting small targets (e.g., cracks) and large targets (e.g., potholes) in complex backgrounds. Additionally, the introduction of the Wise-IoU loss function further optimized the bounding box regression task, enhancing the model’s stability and generalization. Experimental results demonstrate that the improved RepGD-YOLOV8W model achieved a 2.4% increase in mAP50 on the RDD2022 dataset. Compared with other mainstream methods, this model exhibits greater robustness and flexibility in handling road defects of various scales. Full article
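The Wise-IoU loss mentioned above builds on the standard intersection-over-union between predicted and ground-truth boxes. A minimal sketch of plain IoU for axis-aligned boxes in (x1, y1, x2, y2) form; this is illustrative only, since Wise-IoU additionally applies a dynamic focusing weight that is not reproduced here:

```python
# Hedged sketch: plain IoU for two axis-aligned bounding boxes.
# Wise-IoU (used in the paper) adds a dynamic focusing mechanism on top of this.
def iou(box_a, box_b):
    # intersection rectangle corners
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)  # intersection over union

pred, truth = (10, 10, 50, 50), (30, 30, 70, 70)  # hypothetical defect boxes
print(f"IoU = {iou(pred, truth):.3f}")
```

During training, 1 − IoU (suitably weighted) serves as the regression loss that pulls predicted boxes toward the ground truth.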
(This article belongs to the Section Fault Diagnosis & Sensors)
Figure 1: Low-stage collection-distribution branches.
Figure 2: High-stage collection-distribution branches.
Figure 3: Improved C2f module diagram.
Figure 4: Four typical damage types.
Figure 5: mAP@50 comparison chart.
Figure 6: Precision comparison chart.
Figure 7: Recall comparison chart.
Figure 8: RepGD-YOLOV8W PR curves.
Figure 9: YOLOv8n PR curves.
Figure 10: YOLOv8 detection results.
Figure 11: RepGD-YOLOV8W test results.
170 KiB  
Proceeding Paper
Cómo Entrenar tu Dragón: A European Credit Transfer System Module to Develop Critical Artificial Intelligence Literacy in a PGCERT Programme for New Higher Education Lecturers
by Mari Cruz García Vallejo
Proceedings 2025, 114(1), 2; https://doi.org/10.3390/proceedings2025114002 - 19 Feb 2025
Abstract
This paper summarizes the findings and main conclusions from the first delivery of the module “CETD23: Cómo entrenar a tu dragón: la inteligencia artificial generativa como herramienta para mejorar el aprendizaje en entornos online e híbridos”. This is an optional module accredited through the ECTS (European Credit Transfer System) and delivered as part of the “Plan de Formación de Docencia y Personal Investigador 2021–2025” of the Universidad de Las Palmas de Gran Canaria (ULPGC). The Plan de Formación is a development programme offered by Spanish universities to new and existing teaching staff, aimed at improving the quality of their teaching practices in line with Aneca’s Docencia regulations (like the PGCERT and PGCAPT programmes in the UK). The aim of module CETD23 is to explore the use of Generative AI (GenAI) to enhance learning and teaching and to build the AI literacy of ULPGC’s teaching staff. The module received high student satisfaction, with an average score of 4.84 on the Likert scale, and achieved a 100% completion rate for the final summative project. The final conclusions highlight the need for universities to establish reglamentos (policies and guidance) on how to use GenAI to enhance learning and assessment, as well as to involve students as equal partners in the design and assessment of methods that use AI. Full article
40 pages, 9921 KiB  
Article
Geoinformatics and Machine Learning for Shoreline Change Monitoring: A 35-Year Analysis of Coastal Erosion in the Upper Gulf of Thailand
by Chakrit Chawalit, Wuttichai Boonpook, Asamaporn Sitthi, Kritanai Torsri, Daroonwan Kamthonkiat, Yumin Tan, Apised Suwansaard and Attawut Nardkulpat
ISPRS Int. J. Geo-Inf. 2025, 14(2), 94; https://doi.org/10.3390/ijgi14020094 - 19 Feb 2025
Abstract
Coastal erosion is a critical environmental challenge in the Upper Gulf of Thailand, driven by both natural processes and human activities. This study analyzes 35 years (1988–2023) of shoreline changes using geoinformatics, machine learning algorithms (Random Forest, Support Vector Machine, Maximum Likelihood, Minimum Distance), and the Digital Shoreline Analysis System (DSAS). The results show that the Random Forest algorithm, utilizing spectral bands and indices (NDVI, NDWI, MNDWI, SAVI), achieved the highest classification accuracy (98.17%) and a Kappa coefficient of 0.9432, enabling reliable delineation of land and water boundaries. The extracted annual shorelines were validated with high accuracy, yielding RMSE values of 13.59 m (2018) and 8.90 m (2023). The DSAS analysis identified significant spatial and temporal variations in shoreline erosion and accretion. Between 1988 and 2006, the most intense erosion occurred in regions 4 and 5, influenced by sea-level rise, strong monsoonal currents, and human activities. However, from 2006 to 2018, erosion rates declined significantly, attributed to coastal protection structures and mangrove restoration. The period 2018–2023 exhibited a combination of erosion and accretion, reflecting dynamic sediment transport processes and the impact of coastal management measures. Over time, erosion rates declined due to the implementation of protective structures (e.g., bamboo fences, rock revetments) and the natural expansion of mangrove forests. However, localized erosion remains persistent in low-lying, vulnerable areas, exacerbated by tidal forces, rising sea levels, and seasonal monsoons. Anthropogenic activities, including urban development, mangrove deforestation, and aquaculture expansion, continue to destabilize shorelines. The findings underscore the importance of sustainable coastal management strategies, such as mangrove restoration, soft engineering coastal protection, and integrated land-use planning. 
This study demonstrates the effectiveness of combining machine learning and geoinformatics for shoreline monitoring and provides valuable insights for coastal erosion mitigation and enhancing coastal resilience in the Upper Gulf of Thailand. Full article
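The land/water classification above uses spectral indices (NDVI, NDWI, MNDWI, SAVI) as features. A minimal sketch of how the first three are computed per pixel from band reflectances; the reflectance values here are invented stand-ins, not the study's Landsat imagery:

```python
# Hedged sketch: spectral indices used as classification features.
# Synthetic reflectances for two pixels: one vegetated, one open water.
import numpy as np

green = np.array([0.10, 0.08])
red   = np.array([0.08, 0.06])
nir   = np.array([0.40, 0.05])   # high NIR for vegetation, low for water
swir  = np.array([0.20, 0.02])

ndvi  = (nir - red) / (nir + red)      # high for vegetation
ndwi  = (green - nir) / (green + nir)  # high for open water
mndwi = (green - swir) / (green + swir)  # sharpens the water boundary

print(ndvi.round(3), ndwi.round(3), mndwi.round(3))
```

Feeding such indices alongside the raw bands is what let the Random Forest classifier delineate the shoreline, after which DSAS measures its displacement between dates.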
Figure 1: Map of the study area in the Upper Gulf of Thailand, which is divided into six regions based on physical characteristics.
Figure 2: Workflow of the research methodology used for shoreline change analysis in the Upper Gulf of Thailand.
Figure 3: Comparison of the performance of classification algorithms, including Minimum Distance, Maximum Likelihood Classifier, Support Vector Machine, and Random Forest, in overall accuracy and Cohen’s Kappa Coefficient.
Figure 4: Classification results using four ML methods: Random Forest (a), Support Vector Machine (b), Maximum Likelihood Classifier (c), and Minimum Distance (d), for the Upper Gulf of Thailand. Each classification result illustrates the boundary between land and water in sample areas, including beach (aA,bA,cA,dA), mangrove forest (aB,bB,cB,dB), coastal fishing areas (aC,bC,cC,dC), shoreline protection structures (aD,bD,cD,dD), and steep cliffs (aF,bF,cF,dF).
Figure 5: Overall accuracy and Cohen’s Kappa Coefficient for the Random Forest classification method applied to 65 satellite images from 1988 to 2023.
Figure 6: Overlay of the extracted shorelines from seven time periods (1988, 1994, 2000, 2006, 2011, 2018, and 2023) in the Upper Gulf of Thailand. (A) represents shoreline changes at the Klong Yi San Kao estuary, (B) at Pak Thalenai, (C) at the mangrove area in Bang Krachao, (D) at Khlong Nang Hong, and (E) at Khun Samut Chin.
Figure 7: Assessment of annual shoreline extraction compared to the reference shorelines in 2018 (a) and 2023 (b) in the Upper Gulf of Thailand. Shoreline locations in 2018: (aA) Hua Hin Beach, (aB) Chaosamran Beach, (aC) Pak Thale Nok, (aD) Bang Khun Thian, (aE) Bang Pu, (aF) Udom Bay, (aG) Na Chom Thian Beach, and (aH) Bang Sare. Shoreline locations in 2023: (bA) Hua Hin Beach, (bB) Chaosamran Beach, (bC) Bang Tabun estuary, (bD) Bang Khun Thian, (bE) Udom Bay, (bF) Jomtien Beach, (bG) Na Chom Thian Beach, and (bH) Bang Sare.
Figure 8: Results of shoreline change analysis using the Digital Shoreline Analysis System (DSAS) for the Upper Gulf of Thailand.
Figure 9: Trends in global mean sea level and average temperature, along with mean sea level, average temperature, and accumulated shoreline erosion in the Upper Gulf of Thailand.
Figure 10: Correlation analyses between sea level, temperature, and coastal erosion: (a) global mean sea level vs. global average temperature; (b) coastal erosion in the Upper Gulf of Thailand vs. global mean temperature; (c) coastal erosion in the Upper Gulf of Thailand vs. global mean sea level; (d) mean sea level vs. mean temperature in the Upper Gulf of Thailand; (e) coastal erosion vs. mean temperature in the Upper Gulf of Thailand; (f) coastal erosion vs. mean sea level in the Upper Gulf of Thailand.
Figure 11: Shoreline changes over six time periods from Hua Hin District to the Laem Phak Bia region. (A) represents shoreline changes in the northern part of Cha-Am Beach, and (B) represents shoreline changes in Bang Kao Beach.
Figure 12: Shoreline change analysis: sample of shoreline changes (a) over six time periods in Saphan Pla Cha-am (c), and sample of shoreline changes (b) over six time periods in the coastal area of Bang Kao Subdistrict, Cha-am District, Phetchaburi (d).
Figure 13: Shoreline changes over six time periods from Laem Phak Bia to the Mae Klong River. (A) represents shoreline changes at the Klong Yi San Kao estuary, and (B) represents shoreline changes at Pak Thalenai.
Figure 14: Shoreline change analysis: sample of shoreline changes (a) over six time periods in the coastal area between the Mae Klong estuary and the Khlong Bang Tabun estuary (c), and sample of shoreline changes (b) over six time periods in the Pak Thale Conservation Area, Pak Thale Subdistrict, Ban Laem District, Phetchaburi Province (d).
Figure 15: Shoreline changes over six time periods from the Mae Klong River to the Tha Chin River. (A) represents shoreline changes at the mangrove area in Bang Krachao, and (B) represents shoreline changes at the Ao Mahachai Mangrove Forest Study Centre.
Figure 16: Shoreline change analysis: sample of shoreline changes (a) over six time periods in the coastal area of Bang Phraek Subdistrict, Mueang District, Samut Sakhon Province (c), and sample of shoreline changes (b) over six time periods in the coastal area of the Ao Mahachai Mangrove Forest Natural Education Center, Bang Phraek Subdistrict, Mueang District, Samut Sakhon Province (d).
Figure 17: Shoreline changes over six time periods from the Tha Chin River to the Chao Phraya River. (A) represents shoreline changes at Khun Samut Chin, and (B) represents shoreline changes at the Tha Chin estuary.
Figure 18: Shoreline change analysis: sample of shoreline changes (a) over six time periods in the coastal area of Ban Khun Samut Chin, Laem Fa Pha Subdistrict, Phra Samut Chedi District, Samut Prakan Province (c), and sample of shoreline changes (b) over six time periods in the coastal area of the Marine and Coastal Resources Office, Mueang District, Samut Sakhon Province (d).
Figure 19: Shoreline changes over six time periods from the Chao Phraya River to the Bang Pakong River. (A) represents shoreline changes at Khlong Nang Hong, and (B) represents shoreline changes at Bang Pu Mai.
Figure 20: Shoreline change analysis: sample of shoreline changes (a) over six time periods in the coastal area of Khlong Dan Subdistrict, Bang Bo District, Samut Prakan Province (c), and sample of shoreline changes (b) over six time periods in the coastal area of Bang Pu Subdistrict, Mueang District, Samut Prakan Province (d).
Figure 21: Shoreline changes over six time periods from the Bang Pakong River to Sattahip District. (A) represents shoreline changes at the Bang Pakong estuary, and (B) represents shoreline changes at Laem Chabang Port.
Figure 22: Shoreline change analysis: sample of shoreline changes (a) over six time periods in the coastal area of the Bang Pakong estuary, Khlong Tamhru Subdistrict, Mueang District, Chonburi Province (c), and sample of shoreline changes (b) over six time periods in the Laem Chabang coastal area, Thung Sukhla Subdistrict, Sri Racha District, Chonburi Province (d).
12 pages, 3152 KiB  
Article
High-Precision Phenotyping in Soybeans: Applying Multispectral Variables Acquired at Different Phenological Stages
by Celí Santana Silva, Dthenifer Cordeiro Santana, Fábio Henrique Rojo Baio, Ana Carina da Silva Cândido Seron, Rita de Cássia Félix Alvarez, Larissa Pereira Ribeiro Teodoro, Carlos Antônio da Silva Junior and Paulo Eduardo Teodoro
AgriEngineering 2025, 7(2), 47; https://doi.org/10.3390/agriengineering7020047 - 19 Feb 2025
Abstract
Soybean stands out as the most economically important oilseed in the world. Remote sensing techniques and precision agriculture are being studied in different agricultural regions as technological systems aimed at increasing productivity while potentially reducing costs. Machine learning (ML) methods, together with the market availability of remotely piloted aircraft over the past decade, have facilitated the processing of remote sensing data. The objective of this work was to evaluate the best ML and input configurations for classifying agronomic variables at different phenological stages. The spectral variables were obtained at three phenological stages of soybean genotypes: V8 (45 days after emergence, DAE), R1 (60 DAE), and R5 (80 DAE). A Sensefly eBee fixed-wing RPA equipped with the Parrot Sequoia multispectral sensor coupled to an RGB sensor was used. The Sequoia multispectral sensor with an RGB sensor acquired reflectance at wavelengths of blue (450 nm), green (550 nm), red (660 nm), red edge (735 nm), and near-infrared (790 nm). The following agronomic traits were evaluated: days to maturity, number of branches, productivity, plant height, height of the first pod insertion, and diameter of the main stem. The random forest (RF) model showed the greatest accuracy with data collected at the R5 stage, with accuracies close to 56 for the percentage of correct classifications (CC), close to 0.2 for Kappa, and above 0.55 for the F-score. Logistic regression (RL) and support vector machine (SVM) models performed better at the early reproductive stage R1, with accuracies above 55 for CC, close to 0.1 for Kappa, and close to 0.4 for the F-score. J48 performed better with data from the V8 stage, with accuracies above 50 for CC and close to 0.4 for the F-score.
This reinforces that using spectra specific to each model can enhance accuracy, optimizing the choice of model according to the phenological stage of the plants. Full article
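The three accuracy measures reported above (percentage of correct classifications, Kappa, F-score) can be computed with standard library calls. A minimal sketch on hypothetical cluster labels, not the study's genotype data:

```python
# Hedged sketch: the three classification metrics used to compare the ML models.
# Labels below are invented placeholders for genotype-cluster assignments.
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

y_true = [0, 0, 1, 1, 2, 2, 0, 1, 2, 0]  # hypothetical reference clusters
y_pred = [0, 1, 1, 1, 2, 0, 0, 1, 2, 0]  # hypothetical model predictions

cc = 100 * accuracy_score(y_true, y_pred)         # percentage of correct classifications
kappa = cohen_kappa_score(y_true, y_pred)         # chance-corrected agreement
f = f1_score(y_true, y_pred, average="weighted")  # weighted F-score across clusters
print(f"CC = {cc:.0f}%, Kappa = {kappa:.2f}, F-score = {f:.2f}")
```

Kappa discounts agreement expected by chance, which is why it can sit near 0.1-0.2 even when CC is above 50%, as in the results reported above.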
(This article belongs to the Special Issue The Future of Artificial Intelligence in Agriculture)
Figure 1: Summary chart of the processes carried out to obtain the results.
Figure 2: Principal component analysis (PCA) for the 32 soybean genotypes evaluated and clusters formed by the k-means algorithm based on the agronomic traits evaluated.
Figure 3: Boxplot with the means of each cluster formed for the agronomic variables days to maturity (DM), first pod height (FPH, cm), plant height (PH, cm), main stem diameter (SD, cm), number of branches (NB), and grain yield (GP, kg ha⁻¹) evaluated in 32 soybean genotypes.
Figure 4: Correlation network between the agronomic variables days to maturity (DM), grain yield (GP), number of branches (NB), plant height (PH), first pod height (FPH), main stem diameter (SD), and the spectral bands at wavelengths of blue (450 nm), green (550 nm), red (660 nm), red edge (735 nm), and near-infrared (790 nm) for the three evaluation periods (V8 (A), R1 (B), and R5 (C)). Green and red lines indicate positive and negative correlations, respectively. The magnitude of these relationships is indicated by the thickness of the lines, in which thicker lines represent correlations above 0.60.
Figure 5: Boxplot for the percentage of correct classification (CC) for the significant interaction between the machine learning models and the inputs tested. Means followed by the same uppercase letters for the different inputs and the same lowercase letters for the different ML algorithms do not differ by the Scott–Knott test at 5% probability.
Figure 6: Boxplot for the Kappa coefficient for the significant interaction between the machine learning models and the inputs tested. Means followed by the same uppercase letters for the different inputs and the same lowercase letters for the different ML algorithms do not differ by the Scott–Knott test at 5% probability.
Figure 7: Boxplot for the F-score for the significant interaction between the machine learning models and the inputs tested. Means followed by the same uppercase letters for the different inputs and the same lowercase letters for the different ML algorithms do not differ by the Scott–Knott test at 5% probability.
20 pages, 8363 KiB  
Article
Predicting Stress–Strain Curve with Confidence: Balance Between Data Minimization and Uncertainty Quantification by a Dual Bayesian Model
by Tianyi Li, Zhengyuan Chen, Zhen Zhang, Zhenhua Wei, Gan-Ji Zhong, Zhong-Ming Li and Han Liu
Polymers 2025, 17(4), 550; https://doi.org/10.3390/polym17040550 - 19 Feb 2025
Abstract
Driven by polymer processing–property data, machine learning (ML) presents an efficient paradigm in predicting the stress–strain curve. However, it is generally challenged by (i) the deficiency of training data, (ii) the one-to-many issue of processing–property relationship (i.e., aleatoric uncertainty), and (iii) the unawareness of model uncertainty (i.e., epistemic uncertainty). Here, leveraging a Bayesian neural network (BNN) and a recently proposed dual-architected model for curve prediction, we introduce a dual Bayesian model that enables accurate prediction of the stress–strain curve while distinguishing between aleatoric and epistemic uncertainty at each processing condition. The model is trained using a Taguchi array dataset that minimizes the data size while maximizing the representativeness of 27 samples in a 4D processing parameter space, significantly reducing data requirements. By incorporating hidden layers and output-distribution layers, the model quantifies both aleatoric and epistemic uncertainty, aligning with experimental data fluctuations, and provides a 95% confidence interval for stress–strain predictions at each processing condition. Overall, this study establishes an uncertainty-aware framework for curve property prediction with reliable, modest uncertainty at a small data size, thus balancing data minimization and uncertainty quantification. Full article
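The aleatoric/epistemic split described above can be sketched for a single processing condition: given an ensemble of predicted normal distributions (one per Bayesian weight sample), epistemic variance is the spread of the predicted means and aleatoric variance is the average predicted variance; their sum yields the 95% confidence interval. The numbers below are synthetic placeholders, not the paper's model outputs:

```python
# Hedged sketch of uncertainty decomposition at one processing condition.
# means/sigmas are hypothetical ensemble predictions of a curve feature (e.g., yield stress).
import numpy as np

means  = np.array([31.8, 32.1, 32.4, 31.9, 32.3])  # hypothetical predicted means (MPa)
sigmas = np.array([1.1, 1.0, 1.2, 1.1, 1.0])       # hypothetical predicted std devs

epistemic_var = means.var()            # disagreement among ensemble members (model uncertainty)
aleatoric_var = np.mean(sigmas ** 2)   # average predicted data noise (irreducible)
total_std = np.sqrt(epistemic_var + aleatoric_var)

mu = means.mean()
lo, hi = mu - 1.96 * total_std, mu + 1.96 * total_std  # ~95% confidence interval
print(f"95% CI: [{lo:.2f}, {hi:.2f}] MPa")
```

In this toy case the aleatoric term dominates, the regime the paper associates with the one-to-many processing-property relationship rather than with data scarcity.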
(This article belongs to the Special Issue Simulation and Calculation of Polymer Composite Materials)
Show Figures

Figure 1

Figure 1
<p>Schematic of the stress–strain curves illustrating the aleatoric uncertainty associated with polymer specimens prepared under identical injection molding conditions. The three-colored curves represent three separate specimens, highlighting the inherent variability in mechanical behavior, even when processed with the same parameters (i.e., mold temperature <span class="html-italic">T</span><sub>mold</sub>, packing pressure <span class="html-italic">P</span><sub>pack</sub>, injection pressure <span class="html-italic">P</span><sub>inject</sub>, and injection rate <span class="html-italic">R</span><sub>inject</sub>) due to the complex, black box nature of the polymer manufacturing process.</p>
Figure 2
<p>Dataset visualization of the stress–strain curves at different molding conditions. (<b>a</b>) Selected 27 molding conditions with the Taguchi method. Each condition has four tunable parameters: mold temperature <span class="html-italic">T</span><sub>mold</sub>, packing pressure <span class="html-italic">P</span><sub>pack</sub>, injection pressure <span class="html-italic">P</span><sub>inject</sub>, and injection rate <span class="html-italic">R</span><sub>inject</sub>. (<b>b</b>) Stress–strain curves of specimens prepared at <span class="html-italic">T</span><sub>mold</sub> = 80 °C, <span class="html-italic">P</span><sub>pack</sub> = 20 MPa, <span class="html-italic">P</span><sub>inject</sub> = 70 MPa, and <span class="html-italic">R</span><sub>inject</sub> = 24.858 cm<sup>3</sup>/s. (<b>c</b>) Stress–strain curves of specimens prepared at <span class="html-italic">T</span><sub>mold</sub> = 20 °C, <span class="html-italic">P</span><sub>pack</sub> = 20 MPa, <span class="html-italic">P</span><sub>inject</sub> = 30 MPa, and <span class="html-italic">R</span><sub>inject</sub> = 24.858 cm<sup>3</sup>/s. (<b>d</b>) Schematic illustrating three different stress–strain curve types: strain softening, steady flow, and strain hardening. The points indicate key curve features, including the linear limit point, maximum yielding point, strain softening inflection point, steady flow limit point, and fracture point. (<b>e</b>) Example of the curve type distribution at one molding condition. (<b>f</b>) Violin plot of the distribution of standardized curve features at one molding condition.</p>
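For concreteness, a 27-run, 4-factor, 3-level balanced design like the one in panel (a) can be enumerated with a standard mod-3 orthogonal construction. The level values below are read off the figure captions; the paper's actual Taguchi column assignment is not given, so this is an illustrative sketch rather than the published array.

```python
from itertools import product

# Factor levels inferred from the figure captions (assumption, not the
# paper's stated design table).
LEVELS = {
    "T_mold":   [20, 50, 80],              # mold temperature, deg C
    "P_pack":   [20, 50, 80],              # packing pressure, MPa
    "P_inject": [30, 50, 70],              # injection pressure, MPa
    "R_inject": [24.858, 41.43, 58.002],   # injection rate, cm^3/s
}

def taguchi_27():
    """27-run orthogonal design: the first three columns enumerate
    {0,1,2}^3 and the fourth is their sum mod 3, so every factor level
    appears equally often and column pairs stay balanced."""
    runs = []
    for a, b, c in product(range(3), repeat=3):
        d = (a + b + c) % 3
        runs.append({
            "T_mold":   LEVELS["T_mold"][a],
            "P_pack":   LEVELS["P_pack"][b],
            "P_inject": LEVELS["P_inject"][c],
            "R_inject": LEVELS["R_inject"][d],
        })
    return runs
```

Each of the three levels of every factor occurs in exactly 9 of the 27 runs, which is what keeps the small design representative of the 4D parameter space.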
Figure 3
<p>Schematic of the dual-distribution neural network (DNN). (<b>a</b>) DNN architecture built to predict the dual-distribution representation of curve variation, including (i) the categorical distribution of curve type and (ii) the approximately normal distribution of each curve feature, provided by a curve type classifier and a curve feature predictor, respectively. (<b>b</b>) Schematic of the curve feature predictor, which outputs each feature’s aleatoric uncertainty as a normal distribution represented by its mean <span class="html-italic">μ</span> and standard deviation <span class="html-italic">δ</span>. (<b>c</b>) Schematic of the reconstructed stress–strain curve and its aleatoric uncertainty based on the predicted curve type and feature distribution.</p>
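A feature head that emits a mean and standard deviation, as in panel (b), is commonly trained with the Gaussian negative log-likelihood, so the network learns to report its own aleatoric spread. The snippet below is a generic sketch of that loss (parameterized by log-variance for numerical stability); the caption does not specify the paper's exact loss, so treat this as the standard technique rather than the authors' implementation.

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Per-sample negative log-likelihood of y under N(mu, exp(log_var)).

    Minimizing this over (mu, log_var) outputs makes the head's predicted
    standard deviation track the observed scatter of the curve feature.
    """
    var = np.exp(log_var)
    return 0.5 * (np.log(2.0 * np.pi) + log_var + (y - mu) ** 2 / var)
```

When the residual `(y - mu)` grows, the loss can be reduced either by moving `mu` or by inflating `log_var`, which is exactly the trade-off that lets the predicted δ absorb irreducible data noise.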
Figure 4
<p>Prediction accuracy of the curve type classifier. (<b>a</b>) Confusion matrix of the training set. The dataset contains 27 molding conditions, wherein 25 conditions are selected as the training set, while the remaining 2 conditions serve as the test set. (<b>b</b>) Stress–strain curves in one test condition. (<b>c</b>) Predicted versus true categorical distribution of curve type in the test condition.</p>
Figure 5
<p>Prediction accuracy of the curve feature predictor. (<b>a</b>) Predicted versus true mean values for each curve feature, wherein the horizontal and vertical error bars represent the standard deviation of true versus predicted data, respectively. (<b>b</b>) Average calibration plot of observed versus predicted proportion in the <span class="html-italic">α</span>-prediction interval for each curve feature, wherein <span class="html-italic">α</span> ranges from 0% to 100% to indicate the data proportion falling within the <span class="html-italic">α</span>-prediction interval.</p>
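The calibration plot in panel (b) compares the nominal coverage α against the fraction of observations that actually fall inside the central α-prediction interval; a well-calibrated model tracks the diagonal. A minimal sketch, assuming Gaussian predictive distributions (the function name `observed_proportion` is illustrative, not from the paper):

```python
from statistics import NormalDist

def observed_proportion(y, mu, sigma, alpha):
    """Fraction of observations y inside the central alpha-prediction
    interval of N(mu, sigma). For a calibrated predictor this fraction
    should be close to alpha for every alpha in (0, 1)."""
    # Half-width of the central interval, via the inverse normal CDF.
    z = NormalDist().inv_cdf(0.5 + alpha / 2.0)
    inside = [abs(yi - mi) <= z * si for yi, mi, si in zip(y, mu, sigma)]
    return sum(inside) / len(inside)
```

Sweeping `alpha` from 0 to 1 and plotting the returned fraction against it reproduces the averaged calibration curve shown in the figure.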
Figure 6
<p>Stress–strain curve prediction using the DNN model. (<b>a</b>–<b>c</b>) Schematic illustrating the rules to reconstruct the expected stress–strain curve and its aleatoric uncertainty based on the dual distribution of curve types and features (see text for the details). (<b>d</b>–<b>f</b>) Examples of stress–strain curve prediction at different molding conditions, including (<b>d</b>) <span class="html-italic">T</span><sub>mold</sub> = 80 °C, <span class="html-italic">P</span><sub>pack</sub> = 80 MPa, <span class="html-italic">P</span><sub>inject</sub> = 30 MPa, and <span class="html-italic">R</span><sub>inject</sub> = 58.002 cm<sup>3</sup>/s; (<b>e</b>) <span class="html-italic">T</span><sub>mold</sub> = 50 °C, <span class="html-italic">P</span><sub>pack</sub> = 50 MPa, <span class="html-italic">P</span><sub>inject</sub> = 50 MPa, and <span class="html-italic">R</span><sub>inject</sub> = 58.002 cm<sup>3</sup>/s; and (<b>f</b>) <span class="html-italic">T</span><sub>mold</sub> = 20 °C, <span class="html-italic">P</span><sub>pack</sub> = 80 MPa, <span class="html-italic">P</span><sub>inject</sub> = 30 MPa, and <span class="html-italic">R</span><sub>inject</sub> = 41.43 cm<sup>3</sup>/s, wherein the shadow region represents the curve’s aleatoric uncertainty. Experimental data are also added as a reference. (<b>g</b>–<b>i</b>) Predicted curve type distributions at these molding conditions.</p>
Figure 7
<p>Schematic illustration of epistemic uncertainty induced by model deviation. The total uncertainty consists of aleatoric uncertainty (data deviation) and epistemic uncertainty (model deviation).</p>
Figure 8
<p>Uncertainty quantification by dual-distribution Bayesian network (DBN). (<b>a</b>) Schematic illustrating the working principle of a curve-type classifier using a Bayesian neural network (BNN), wherein the weights and biases in BNN neurons are sampled from their independent Gaussian distributions so that the BNN-based classifier is equivalent to multiple classifiers based on artificial neural networks (ANNs) under different settings of weights and biases. By statistical averaging, the mean <span class="html-italic">μ</span> and standard deviation <span class="html-italic">δ</span> of curve type probability are obtained. (<b>b</b>) BNN-based curve feature regressor, wherein the multiple mean <span class="html-italic">μ</span> and standard deviation <span class="html-italic">δ</span> of curve feature distribution are statistically averaged to obtain the expected mean <math display="inline"><semantics> <mrow> <mover accent="true"> <mrow> <mi>μ</mi> </mrow> <mo stretchy="false">¯</mo> </mover> </mrow> </semantics></math> and aleatoric uncertainty <math display="inline"><semantics> <mrow> <mover accent="true"> <mrow> <mi>δ</mi> </mrow> <mo stretchy="false">¯</mo> </mover> </mrow> </semantics></math>, and meanwhile, the standard deviation of <span class="html-italic">μ</span> and <span class="html-italic">δ</span> are computed as the epistemic uncertainty <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>δ</mi> </mrow> <mrow> <mi>μ</mi> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>δ</mi> </mrow> <mrow> <mi>δ</mi> </mrow> </msub> <mo>.</mo> </mrow> </semantics></math></p>
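The statistical averaging described above reduces to four moments over the Monte Carlo weight draws: the means of the sampled <span class="html-italic">μ</span> and <span class="html-italic">δ</span> give the expected prediction and aleatoric spread, while their standard deviations give the epistemic terms. A minimal numpy sketch of that bookkeeping (the function name is illustrative; the paper's code is not shown):

```python
import numpy as np

def average_draws(mu_draws, delta_draws):
    """Statistical averaging over stochastic BNN draws.

    mu_draws, delta_draws: shape (n_draws, n_features), one (mu, delta)
    pair per sampled weight setting.
    Returns (mu_bar, delta_bar, delta_mu, delta_delta):
      mu_bar      - expected predicted mean
      delta_bar   - expected aleatoric spread
      delta_mu    - epistemic spread of the means
      delta_delta - epistemic spread of the aleatoric terms
    """
    mu_draws = np.asarray(mu_draws, dtype=float)
    delta_draws = np.asarray(delta_draws, dtype=float)
    return (mu_draws.mean(axis=0), delta_draws.mean(axis=0),
            mu_draws.std(axis=0), delta_draws.std(axis=0))
```

If every draw agrees (identical rows), both epistemic terms vanish, which matches the intuition that model deviation disappears when the posterior over weights is sharp.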
Figure 9
<p>Prediction accuracy of the BNN-based curve type classifier. (<b>a</b>) Confusion matrix of the training set. The BNN-based classifier uses the same training scheme as the DNN model (see <a href="#polymers-17-00550-f004" class="html-fig">Figure 4</a>). (<b>b</b>) Stress–strain curves in one test condition. (<b>c</b>) Predicted versus true categorical distribution of curve type in the test condition. The classifier predicts a mean probability with an error bar for each curve type.</p>
Figure 10
<p>Prediction accuracy of the BNN-based curve feature predictor. (<b>a</b>) Predicted versus true mean values for each curve feature, wherein the horizontal and vertical error bars represent the variance of true versus predicted data, respectively. The predicted variance is the upper-bound variance computed by Equation (8). (<b>b</b>) Average calibration plot of observed versus predicted proportion in <span class="html-italic">α</span>-prediction interval for each curve feature, wherein <span class="html-italic">α</span> ranges from 0% to 100% to indicate the data proportion falling within the <span class="html-italic">α</span>-prediction interval.</p>
Figure 11
<p>Reconstruction of the stress–strain curve and its uncertainty using the DBN model. (<b>a</b>) Schematic illustrating the rules to reconstruct the expected stress–strain curve and its aleatoric and epistemic uncertainty based on the mean and variance of multiple dual-distribution outputs (see text for the details). (<b>b</b>–<b>d</b>) Examples of the reconstructed stress–strain curve and its aleatoric and epistemic uncertainty at different molding conditions, including (<b>b</b>) <span class="html-italic">T</span><sub>mold</sub> = 80 °C, <span class="html-italic">P</span><sub>pack</sub> = 80 MPa, <span class="html-italic">P</span><sub>inject</sub> = 30 MPa, and <span class="html-italic">R</span><sub>inject</sub> = 58.002 cm<sup>3</sup>/s; (<b>c</b>) <span class="html-italic">T</span><sub>mold</sub> = 50 °C, <span class="html-italic">P</span><sub>pack</sub> = 50 MPa, <span class="html-italic">P</span><sub>inject</sub> = 50 MPa, and <span class="html-italic">R</span><sub>inject</sub> = 58.002 cm<sup>3</sup>/s; and (<b>d</b>) <span class="html-italic">T</span><sub>mold</sub> = 20 °C, <span class="html-italic">P</span><sub>pack</sub> = 80 MPa, <span class="html-italic">P</span><sub>inject</sub> = 30 MPa, and <span class="html-italic">R</span><sub>inject</sub> = 41.43 cm<sup>3</sup>/s, wherein the shadow regions represent the epistemic uncertainty. Experimental data are also added as a reference. (<b>e</b>–<b>g</b>) Predicted curve type distributions at these molding conditions.</p>
Figure 12
<p>Maximum uncertainty quantification of stress–strain curve prediction using the DBN model. (<b>a</b>) Schematic illustrating the maximum uncertainty bounds (middle panel) attained by summing up all uncertainty sources (left panel), that is, a variance of <math display="inline"><semantics> <mrow> <mover accent="true"> <mrow> <mi>δ</mi> </mrow> <mo stretchy="false">¯</mo> </mover> <mo>+</mo> <msub> <mrow> <mi>δ</mi> </mrow> <mrow> <mi>μ</mi> </mrow> </msub> <mo>+</mo> <msub> <mrow> <mi>δ</mi> </mrow> <mrow> <mi>δ</mi> </mrow> </msub> </mrow> </semantics></math> (see Equation (8)) for a normal distribution at a 95% confidence interval. The predicted curve type distribution is provided in the right panel. (<b>b</b>–<b>d</b>) Examples of maximum uncertainty quantification at different molding conditions, including (<b>b</b>) <span class="html-italic">T</span><sub>mold</sub> = 80 °C, <span class="html-italic">P</span><sub>pack</sub> = 20 MPa, <span class="html-italic">P</span><sub>inject</sub> = 50 MPa, and <span class="html-italic">R</span><sub>inject</sub> = 58.002 cm<sup>3</sup>/s and (<b>c</b>) <span class="html-italic">T</span><sub>mold</sub> = 50 °C, <span class="html-italic">P</span><sub>pack</sub> = 20 MPa, <span class="html-italic">P</span><sub>inject</sub> = 70 MPa, and <span class="html-italic">R</span><sub>inject</sub> = 41.43 cm<sup>3</sup>/s and (<b>d</b>) <span class="html-italic">T</span><sub>mold</sub> = 20 °C, <span class="html-italic">P</span><sub>pack</sub> = 50 MPa, <span class="html-italic">P</span><sub>inject</sub> = 70 MPa, and <span class="html-italic">R</span><sub>inject</sub> = 41.43 cm<sup>3</sup>/s.</p>
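Under the caption's reading of Equation (8), all three spread terms are pooled into a single variance before taking the normal 95% interval. A sketch of that upper bound; note that whether Equation (8) pools variances or standard deviations is not recoverable from the caption alone, so the sum-as-variance reading here is an assumption:

```python
def max_confidence_band(mu_bar, delta_bar, delta_mu, delta_delta, z=1.96):
    """Upper-bound confidence band on a predicted stress value.

    Pools aleatoric (delta_bar) and epistemic (delta_mu, delta_delta)
    terms into one variance, then takes the z-interval of the resulting
    normal distribution (z = 1.96 for ~95% coverage).
    """
    total_std = (delta_bar + delta_mu + delta_delta) ** 0.5
    return mu_bar - z * total_std, mu_bar + z * total_std
```

Applying this at every strain point along the predicted curve traces out the widest shaded band shown in the figure.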