

Computers, Volume 13, Issue 9 (September 2024) – 29 articles

Cover Story: This research introduces a new approach for detecting mobile phone use by drivers, exploiting the capabilities of Kolmogorov–Arnold Networks (KANs) to improve road safety. We created a unique dataset of bus drivers covering two scenarios: driving without phone interaction and driving while on a phone call. A KAN-based network was developed for custom action recognition tailored to identifying drivers holding phones. We evaluated the performance of our system against convolutional neural network-based solutions and showed the differences in accuracy and robustness. The work has implications beyond enforcement, providing foundational technology for automating monitoring and improving safety protocols in the commercial and public transport sectors.
16 pages, 760 KiB  
Article
The Influence of National Digital Identities and National Profiling Systems on Accelerating the Processes of Digital Transformation: A Mixed Study Report
by Abdelrahman Ahmed Alhammadi, Saadat M. Alhashmi, Mohammad Lataifeh and John Lewis Rice
Computers 2024, 13(9), 243; https://doi.org/10.3390/computers13090243 - 23 Sep 2024
Viewed by 1004
Abstract
The United Arab Emirates (UAE) is a frontrunner in digitalising government services, demonstrating the successful implementation of National Digital Identity (NDI) systems. Unlike many developing nations with varying levels of success with electronic ID systems due to legal, socio-cultural, and ethical concerns, the UAE has seamlessly integrated digital identities into various sectors, including security, transportation, and more, through initiatives like UAE Pass. This study draws on the UAE’s functional digital ID systems, such as those utilised in the Dubai Smart City project, to highlight the potential efficiencies and productivity gains in public services while addressing the associated risks of cybersecurity and privacy. This paper provides a comprehensive understanding of the UAE’s NDI and its impact on the nation’s digital transformation agenda, offering a thorough analysis of the effectiveness and challenges of NDIs, explicitly focusing on the UAE’s approach. Full article
Show Figures
Figure 1: Conceptual model.
15 pages, 1681 KiB  
Article
Parallel Attention-Driven Model for Student Performance Evaluation
by Deborah Olaniyan, Julius Olaniyan, Ibidun Christiana Obagbuwa, Bukohwo Michael Esiefarienrhe and Olorunfemi Paul Bernard
Computers 2024, 13(9), 242; https://doi.org/10.3390/computers13090242 - 23 Sep 2024
Viewed by 749
Abstract
This study presents the development and evaluation of a Multi-Task Long Short-Term Memory (LSTM) model with an attention mechanism for predicting students’ academic performance. The research is motivated by the need for efficient tools to enhance student assessment and support tailored educational interventions. The model tackles two tasks: predicting overall performance (total score) as a regression task and classifying performance levels (remarks) as a classification task. By handling both tasks simultaneously, it improves computational efficiency and resource utilization. The dataset includes metrics such as Continuous Assessment, Practical Skills, Presentation Quality, Attendance, and Participation. The model achieved strong results, with a Mean Absolute Error (MAE) of 0.0249, Mean Squared Error (MSE) of 0.0012, and Root Mean Squared Error (RMSE) of 0.0346 for the regression task. For the classification task, it achieved perfect scores with an accuracy, precision, recall, and F1 score of 1.0. The attention mechanism enhanced performance by focusing on the most relevant features. This study demonstrates the effectiveness of the Multi-Task LSTM model with an attention mechanism in educational data analysis, offering a reliable and efficient tool for predicting student performance. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
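The multi-task setup described in the abstract can be pictured as a shared LSTM-with-attention encoder feeding two heads: a regression output for the total score and a softmax output for the remark class. The Keras sketch below is illustrative only; the layer sizes, sequence length, feature count, and number of remark classes are assumptions, not the authors' configuration.

```python
# Illustrative multi-task LSTM-with-attention sketch (not the authors' exact model).
# Assumptions: 5 input features per time step, 10 time steps, 3 remark classes.
from tensorflow.keras import layers, Model

n_steps, n_features, n_classes = 10, 5, 3

inputs = layers.Input(shape=(n_steps, n_features))
hidden = layers.LSTM(64, return_sequences=True)(inputs)

# Simple additive attention: score each time step, normalize, take a weighted sum.
scores = layers.Dense(1, activation="tanh")(hidden)       # (batch, steps, 1)
weights = layers.Softmax(axis=1)(scores)                   # attention weights over time steps
context = layers.Dot(axes=1)([weights, hidden])            # weighted sum -> (batch, 1, 64)
context = layers.Flatten()(context)

# Two task-specific heads share the attended representation.
total_score = layers.Dense(1, name="total_score")(context)                         # regression head
remark = layers.Dense(n_classes, activation="softmax", name="remark")(context)     # classification head

model = Model(inputs, [total_score, remark])
model.compile(optimizer="adam",
              loss={"total_score": "mse", "remark": "sparse_categorical_crossentropy"},
              metrics={"total_score": ["mae"], "remark": ["accuracy"]})
model.summary()
```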
Show Figures
Figure 1: Proposed framework.
Figure 2: Dataset sample.
Figure 3: Q-Q plots.
Figure 4: MLST-AM performance plot.
Figure 5: Training and validation accuracy and loss.
Figure 6: Confusion matrix for the classification.
17 pages, 2984 KiB  
Article
Educational Resource Private Cloud Platform Based on OpenStack
by Linchang Zhao, Guoqing Hu and Yongchi Xu
Computers 2024, 13(9), 241; https://doi.org/10.3390/computers13090241 - 23 Sep 2024
Viewed by 562
Abstract
With the rapid development of the education industry and the expansion of university enrollment, the original operation, maintenance, and management model for teaching resources, and its utilization efficiency, can no longer meet the demands of teachers and students for high-quality teaching resources. OpenStack and Ceph technologies provide a new solution for optimizing the utilization and management of educational resources. An educational resource private cloud platform built on these technologies enables the unified management and self-service use of the computing, storage, and network resources required for student learning and teacher instruction. It meets the flexible and efficient usage requirements for high-quality teaching resources, reduces the cost of informationization investment in universities, and improves the efficiency of teaching resource utilization. Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)
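For readers unfamiliar with how such a self-service platform is consumed programmatically, the sketch below uses the openstacksdk Python client to inventory compute resources and boot an instance. The cloud entry, image, flavor, and network names are placeholders, and the snippet is not taken from the paper.

```python
# Minimal openstacksdk sketch for self-service use of an OpenStack private cloud.
# All names (cloud entry, image, flavor, network) are placeholders.
import openstack

# Credentials are read from clouds.yaml under the entry named "edu-cloud" (assumed).
conn = openstack.connect(cloud="edu-cloud")

# Inventory of what the platform offers.
for flavor in conn.compute.flavors():
    print("flavor:", flavor.name)
for image in conn.image.images():
    print("image:", image.name)

# Boot a lab VM for a student (hypothetical resource names).
server = conn.create_server(
    name="student-lab-vm",
    image="ubuntu-22.04",   # assumed image name
    flavor="m1.small",      # assumed flavor name
    network="lab-net",      # assumed tenant network
    wait=True,
)
print("server status:", server.status)
```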
Show Figures
Figure 1: Schematic diagram of the Ceph distributed database structure.
Figure 2: Schematic diagram of the deployment architecture.
Figure 3: Schematic diagram of compute node server deployment.
Figure 4: Schematic diagram of data server deployment.
Figure 5: Schematic diagram of storage node server.
Figure 6: The HA implementation for proxy nodes.
Figure 7: The HA implementation for control nodes.
Figure 8: The HA implementation for data nodes.
Figure 9: The HA implementation for compute nodes.
Figure 10: The HA implementation for storage nodes.
Figure 11: The HA implementation for network nodes.
16 pages, 1575 KiB  
Article
A Secure and Verifiable Blockchain-Based Framework for Personal Data Validation
by Junyan Yu, Ximing Li and Yubin Guo
Computers 2024, 13(9), 240; https://doi.org/10.3390/computers13090240 - 23 Sep 2024
Viewed by 550
Abstract
The online services provided by Service Providers (SPs) have brought significant convenience to people’s lives. Nowadays, people have grown accustomed to obtaining diverse services via the Internet. However, some SPs utilize or even tamper with personal data without the awareness or authorization of the Data Provider (DP), a practice that seriously undermines the authenticity of the DP’s authorization and the integrity of personal data. To address this issue, we propose a Verifiable Authorization Information Management Scheme (VAIMS). During the authorization process, the authorization information and personal data fingerprints are uploaded to the blockchain for permanent record, and the SP then stores the authorization information and personal data. The DP generates corresponding authorization fingerprints based on the authorization information and stores them independently. Through the authorization information and authorization fingerprints on the chain, the DP can verify the authenticity of the authorization information stored by the SP at any time. Meanwhile, by leveraging the personal data fingerprints on the blockchain, the DP can check whether the personal data stored by the SP have been tampered with. Additionally, the scheme incorporates database technology to accelerate data queries. We implemented a VAIMS prototype on Ethereum, and experiments demonstrate that the scheme is effective. Full article
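The tamper check described above boils down to comparing a fingerprint of the data held by the SP against the fingerprint recorded on-chain. A minimal sketch of that comparison, assuming SHA-256 fingerprints over canonical JSON (the abstract does not specify the fingerprint function), could look like this:

```python
# Sketch of the off-chain side of a fingerprint check: hash the personal data
# held by the SP and compare it with the fingerprint recorded on the blockchain.
# SHA-256 over canonical JSON is an assumption, not the VAIMS specification.
import hashlib
import json

def fingerprint(record: dict) -> str:
    # Canonical JSON serialization so the same content always hashes identically.
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def is_untampered(sp_record: dict, onchain_fingerprint: str) -> bool:
    return fingerprint(sp_record) == onchain_fingerprint

# Usage (hypothetical data):
record = {"name": "Alice", "email": "alice@example.com", "consent": "marketing"}
fp = fingerprint(record)             # value the DP would have put on-chain
tampered = dict(record, email="eve@example.com")
print(is_untampered(record, fp))     # True
print(is_untampered(tampered, fp))   # False
```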
Show Figures
Figure 1: Authorization and use process of personal data.
Figure 2: Overview of the blockchain-based personal data authorisation management scheme.
Figure 3: Off-chain verifiable database.
Figure 4: Verification time and query time.
Figure 5: System performance under diverse workloads.
Figure 6: Performance comparison of VAIMS and Ethereum in querying authorized transactions.
Figure 7: Block query time comparison between Ethereum and VAIMS.
Figure 8: Transaction query time comparison between Ethereum and VAIMS.
25 pages, 896 KiB  
Article
Enhancing Fake News Detection with Word Embedding: A Machine Learning and Deep Learning Approach
by Mutaz A. B. Al-Tarawneh, Omar Al-irr, Khaled S. Al-Maaitah, Hassan Kanj and Wael Hosny Fouad Aly
Computers 2024, 13(9), 239; https://doi.org/10.3390/computers13090239 - 19 Sep 2024
Cited by 1 | Viewed by 2003
Abstract
The widespread dissemination of fake news on social media has necessitated the development of more sophisticated detection methods to maintain information integrity. This research systematically investigates the effectiveness of different word embedding techniques—TF-IDF, Word2Vec, and FastText—when applied to a variety of machine learning (ML) and deep learning (DL) models for fake news detection. Leveraging the TruthSeeker dataset, which includes a diverse set of labeled news articles and social media posts spanning over a decade, we evaluated the performance of classifiers such as Support Vector Machines (SVMs), Multilayer Perceptrons (MLPs), and Convolutional Neural Networks (CNNs). Our analysis demonstrates that SVMs using TF-IDF embeddings and CNNs employing TF-IDF embeddings achieve the highest overall performance in terms of accuracy, precision, recall, and F1 score. These results suggest that TF-IDF, with its capacity to highlight discriminative features in text, enhances the performance of models like SVMs, which are adept at handling sparse data representations. Additionally, CNNs benefit from TF-IDF by effectively capturing localized features and patterns within the textual data. In contrast, while Word2Vec and FastText embeddings capture semantic and syntactic nuances, they introduce complexities that may not always benefit traditional ML models like MLPs or SVMs, which could explain their relatively lower performance in some cases. This study emphasizes the importance of selecting appropriate embedding techniques based on the model architecture to maximize fake news detection performance. Future research should consider integrating contextual embeddings and exploring hybrid model architectures to further enhance detection capabilities. These findings contribute to the ongoing development of advanced computational tools for combating misinformation. Full article
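As a concrete illustration of the best-performing classical combination reported above (TF-IDF features with an SVM), the scikit-learn sketch below shows the general shape of such a pipeline. The toy texts, labels, and hyperparameters are placeholders rather than the study's actual setup on the TruthSeeker dataset.

```python
# Illustrative TF-IDF + linear SVM pipeline for fake-news classification.
# Toy data and hyperparameters are placeholders, not the paper's configuration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

texts = [
    "Scientists confirm new exoplanet discovery after telescope survey",
    "Celebrity secretly replaced by clone, anonymous source claims",
    "Government releases quarterly employment statistics",
    "Miracle fruit cures all diseases overnight, doctors hate it",
]
labels = [0, 1, 0, 1]  # 0 = real, 1 = fake (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=42, stratify=labels)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),  # sparse discriminative features
    ("svm", LinearSVC(C=1.0)),                                  # linear SVM on the sparse vectors
])
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test)))
```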
Show Figures
Figure 1: Machine learning model performance—TF-IDF. (a) Accuracy; (b) precision; (c) recall; (d) F-1 score.
Figure 2: Machine learning model performance—Word2Vec. (a) Accuracy; (b) precision; (c) recall; (d) F-1 score.
Figure 3: Machine learning model performance—FastText. (a) Accuracy; (b) precision; (c) recall; (d) F-1 score.
Figure 4: CNN model learning curves under TF-IDF. (a) CNN-1; (b) CNN-2; (c) CNN-3.
Figure 5: CNN model learning curves under Word2Vec. (a) CNN-1; (b) CNN-2; (c) CNN-3.
Figure 6: CNN model learning curves under FastText. (a) CNN-1; (b) CNN-2; (c) CNN-3.
Figure 7: Machine learning vs. deep learning performance comparison. (a) Accuracy; (b) precision; (c) recall; (d) F-1 score.
21 pages, 6438 KiB  
Article
Weighted Averages and Polynomial Interpolation for PM2.5 Time Series Forecasting
by Anibal Flores, Hugo Tito-Chura, Victor Yana-Mamani, Charles Rosado-Chavez and Alejandro Ecos-Espino
Computers 2024, 13(9), 238; https://doi.org/10.3390/computers13090238 - 18 Sep 2024
Viewed by 537
Abstract
This article describes a novel method for the multi-step forecasting of PM2.5 time series based on weighted averages and polynomial interpolation. Multi-step prediction models enable decision makers to build an understanding of longer future terms than one-step-ahead prediction models, allowing for more timely decision-making. As the cases for this study, hourly data from three environmental monitoring stations in Ilo City in Southern Peru were selected. The results show average RMSEs of between 1.60 and 9.40 µg/m³ and average MAPEs of between 17.69% and 28.91%. Comparing the results with those derived using the presently implemented benchmark models (such as LSTM, BiLSTM, GRU, BiGRU, and LSTM-ATT) over different prediction horizons, in the majority of environmental monitoring stations, the proposed model outperformed them by between 2.40% and 17.49% in terms of the average MAPE. It is concluded that the proposed model constitutes a good alternative for multi-step PM2.5 time series forecasting, presenting results similar or superior to those of the benchmark models. Aside from the good results, one of the main advantages of the proposed model is that it requires fewer data in comparison with the benchmark models. Full article
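The abstract does not spell out the exact equations, but the general idea of blending a weighted average of the most recent observations with a fitted polynomial term can be sketched as follows. The weights, polynomial degree, blending factor, and toy series below are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative multi-step forecast combining a weighted average of the last
# observations with a low-order polynomial extrapolation. Weights, degree, and
# data are illustrative assumptions, not the paper's actual method.
import numpy as np

def forecast(series: np.ndarray, horizon: int, weights=(0.5, 0.3, 0.2), degree=2):
    history = list(series)
    w = np.array(weights)
    # Polynomial fitted to the last 24 observations, used as a trend term.
    idx = np.arange(len(history))[-24:]
    coeffs = np.polyfit(idx, np.array(history)[-24:], degree)
    preds = []
    for step in range(horizon):
        wa = np.dot(w, history[-len(w):][::-1])            # weighted average of the latest values
        trend = np.polyval(coeffs, len(series) + step)      # polynomial interpolation/extrapolation term
        pred = 0.5 * wa + 0.5 * trend                       # simple blend of both components (assumed)
        preds.append(pred)
        history.append(pred)                                 # recursive multi-step forecasting
    return np.array(preds)

# Toy hourly PM2.5-like series with a daily cycle.
hours = np.arange(24 * 5)
pm25 = 20 + 10 * np.sin(2 * np.pi * hours / 24) + np.random.default_rng(0).normal(0, 1, hours.size)
print(forecast(pm25, horizon=24).round(2))
```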
Show Figures
Figure 1: The 20-day correlation of Pacocha station.
Figure 2: The 20-day correlation of Bolognesi station.
Figure 3: The 20-day correlation of Pardo station.
Figure 4: How matrix M is used to make predictions.
Figure 5: The 24 predicted hours with the weighted average equation.
Figure 6: Multi-step predictions of WA and WA + PI.
Figure 7: The WA + PI algorithm.
Figure 8: Web application for PM2.5 forecasting.
Figure 9: Architectures of the benchmark models: (a) LSTM, (b) GRU, (c) BiLSTM, (d) BiGRU, and (e) LSTM-ATT.
Figure 10: The 72 predicted hours for Pacocha station.
Figure 11: The 72 predicted hours for Bolognesi station.
Figure 12: The 72 predicted hours for Pardo station.
19 pages, 7837 KiB  
Article
Evaluating the Impact of Filtering Techniques on Deep Learning-Based Brain Tumour Segmentation
by Sofia Rosa, Verónica Vasconcelos and Pedro J. S. B. Caridade
Computers 2024, 13(9), 237; https://doi.org/10.3390/computers13090237 - 18 Sep 2024
Viewed by 693
Abstract
Gliomas are a common and aggressive kind of brain tumour; they are difficult to diagnose due to their infiltrative development, variable clinical presentation, and complex behaviour, making them an important focus in neuro-oncology. Segmentation of brain tumour images is critical for improving diagnosis, prognosis, and treatment options. Manually segmenting brain tumours is time-consuming and challenging. Automatic segmentation algorithms can significantly improve the accuracy and efficiency of tumour identification, thus improving treatment planning and outcomes. Deep learning-based tumour segmentation has shown significant advances in the last few years. This study evaluates the impact of four denoising filters, namely median, Gaussian, anisotropic diffusion, and bilateral, on tumour detection and segmentation. The U-Net architecture is applied for the segmentation of 3064 contrast-enhanced magnetic resonance images from 233 patients diagnosed with meningiomas, gliomas, and pituitary tumours. The results of this work demonstrate that bilateral filtering yields superior outcomes, proving to be a robust and computationally efficient approach to brain tumour segmentation. This method reduces the processing time by 12 epochs, which in turn contributes to lowering greenhouse gas emissions by optimizing computational resources and minimizing energy consumption. Full article
(This article belongs to the Special Issue Artificial Intelligence in Control)
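Three of the four denoising filters compared in the study are standard image-processing operations available directly in OpenCV; the sketch below shows how they are typically applied to a single slice. The kernel sizes and sigmas are arbitrary examples, not the values tuned in the paper, and anisotropic diffusion (the fourth filter) is omitted here because it requires an extra dependency such as opencv-contrib.

```python
# Illustrative application of three of the compared denoising filters to one
# image slice. Parameters are arbitrary examples, not the study's tuned values.
import cv2
import numpy as np

# Stand-in for a contrast-enhanced MRI slice (random image for illustration).
slice_img = (np.random.default_rng(0).random((256, 256)) * 255).astype(np.uint8)

median = cv2.medianBlur(slice_img, ksize=5)
gaussian = cv2.GaussianBlur(slice_img, ksize=(5, 5), sigmaX=1.5)
bilateral = cv2.bilateralFilter(slice_img, d=9, sigmaColor=75, sigmaSpace=75)

for name, img in [("median", median), ("gaussian", gaussian), ("bilateral", bilateral)]:
    diff = np.abs(img.astype(int) - slice_img.astype(int)).mean()
    print(name, "mean absolute difference:", round(float(diff), 2))
```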
Show Figures
Figure 1: CE-MRI example from the dataset employed in this study (a). Panels (b–e) show the absolute difference values after applying the (b) median filter, (c) Gaussian filter, (d) bilateral filter, and (e) anisotropic diffusion filter.
Figure 2: Training loss performance of the U-Net model considering the validation dataset for the different pre-processing images: (a) Gaussian, (b) anisotropic diffusion, and (c) bilateral filters. For the anisotropic diffusion filter, the number of iterations was fixed at n = 150, except for K = 5, n = 20, as indicated in the graph key.
Figure 3: Training loss performance of the U-Net model considering the validation dataset and filtered images.
Figure 4: Example of the segmentation process for the image with the highest value of the Jaccard index: (a) image after bilateral filtering of the original image; (b) original mask; (c) obtained mask; (d) superposition of the estimated mask contour on the image.
Figure 5: Example of typical segmentation failure due to a low-contrast image: (a) original image and (b) original mask.
15 pages, 683 KiB  
Article
Cross-Lingual Short-Text Semantic Similarity for Kannada–English Language Pair
by Muralikrishna S N, Raghurama Holla, Harivinod N and Raghavendra Ganiga
Computers 2024, 13(9), 236; https://doi.org/10.3390/computers13090236 - 18 Sep 2024
Viewed by 771
Abstract
Analyzing the semantic similarity of cross-lingual texts is a crucial part of natural language processing (NLP). The computation of semantic similarity is essential for a variety of tasks such as evaluating machine translation systems, quality checking human translation, information retrieval, plagiarism checks, etc. In this paper, we propose a method for measuring the semantic similarity of Kannada–English sentence pairs that uses embedding space alignment, lexical decomposition, word order, and a convolutional neural network. The proposed method achieves a maximum correlation of 83% with human annotations. Experiments on semantic matching and retrieval tasks resulted in promising results in terms of precision and recall. Full article
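At its core, the similarity score in such systems reduces to comparing sentence representations that have been mapped into a shared embedding space. The numpy sketch below illustrates that final comparison step with made-up vectors; the embedding-space alignment, lexical decomposition, word-order handling, and CNN components of the proposed method are not reproduced here.

```python
# Illustrative cosine similarity between a Kannada and an English sentence
# embedding assumed to already live in a shared (aligned) space.
# The vectors below are random stand-ins, not real embeddings.
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
kannada_sentence_vec = rng.normal(size=300)   # e.g. an aligned fastText-style vector (assumed)
english_sentence_vec = rng.normal(size=300)

print("semantic similarity:", round(cosine_similarity(kannada_sentence_vec, english_sentence_vec), 3))
```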
Show Figures
Figure 1: Cross-lingual semantic-similarity calculation in multilingual-answer paper evaluation.
Figure 2: Proposed architecture for semantic-similarity computation.
Figure 3: A detailed architecture for cross-lingual semantic-similarity computation.
Figure 4: A few sample texts from the dataset.
Figure 5: Cross-lingual STS performance.
Figure 6: A comparison of cross-lingual STS performance considering word order using positional embedding and DTW.
18 pages, 5532 KiB  
Article
Enhancing Solar Power Efficiency: Smart Metering and ANN-Based Production Forecasting
by Younes Ledmaoui, Asmaa El Fahli, Adila El Maghraoui, Abderahmane Hamdouchi, Mohamed El Aroussi, Rachid Saadane and Ahmed Chebak
Computers 2024, 13(9), 235; https://doi.org/10.3390/computers13090235 - 17 Sep 2024
Viewed by 912
Abstract
This paper presents a comprehensive and comparative study of solar energy forecasting in Morocco, utilizing four machine learning algorithms: Extreme Gradient Boosting (XGBoost), Gradient Boosting Machine (GBM), recurrent neural networks (RNNs), and artificial neural networks (ANNs). The study is conducted using a smart metering device designed for a photovoltaic system at an industrial site in Benguerir, Morocco. The smart metering device collects energy usage data from a submeter and transmits it to the cloud via an ESP-32 card, enhancing monitoring, efficiency, and energy utilization. Our methodology includes an analysis of solar resources, considering factors such as location, temperature, and irradiance levels, with PVSYST simulation software version 7.2, employed to evaluate system performance under varying conditions. Additionally, a data logger is developed to monitor solar panel energy production, securely storing data in the cloud while accurately measuring key parameters and transmitting them using reliable communication protocols. An intuitive web interface is also created for data visualization and analysis. The research demonstrates a holistic approach to smart metering devices for photovoltaic systems, contributing to sustainable energy utilization, smart grid development, and environmental conservation in Morocco. The performance analysis indicates that ANNs are the most effective predictive model for solar energy forecasting in similar scenarios, demonstrating the lowest RMSE and MAE values, along with the highest R2 value. Full article
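The models in the comparison are ranked using RMSE, MAE, and R²; the short scikit-learn sketch below shows how those three metrics are typically computed from predicted and measured energy values. The arrays are toy numbers, not the study's measurements.

```python
# Computing the three evaluation metrics used to compare the forecasting
# models (RMSE, MAE, R^2). The values below are toy numbers for illustration.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

measured = np.array([12.1, 15.4, 14.8, 16.2, 13.9])   # energy produced (toy values)
predicted = np.array([11.8, 15.9, 14.2, 16.5, 13.5])  # model output (toy values)

rmse = np.sqrt(mean_squared_error(measured, predicted))
mae = mean_absolute_error(measured, predicted)
r2 = r2_score(measured, predicted)
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}  R2={r2:.3f}")
```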
Show Figures
Figure 1: Flowchart of the study.
Figure 2: Simulation under PVSYST.
Figure 3: Key components of the system.
Figure 4: System schematic diagram (a), SmartMeter PCB (b), and real system implementation (c).
Figure 5: Methodology used to select the best prediction algorithm.
Figure 6: Correlation matrix.
Figure 7: ANN architecture.
Figure 8: Types of simple RNN architectures: (a) connections between hidden units, (b) connections from output to hidden units, and (c) connections that process the entire sequence to produce a single output [37].
Figure 9: General architecture of XGBoost [38].
Figure 10: General architecture of GBM.
Figure 11: 3D design of the installation.
Figure 12: Comparative performance results of the four ML algorithms.
Figure 13: Authentication page.
Figure 14: Data visualization.
Figure 15: Comparison between daily and expected PV generation.
17 pages, 3728 KiB  
Article
YOLOv8-Based Drone Detection: Performance Analysis and Optimization
by Betul Yilmaz and Ugurhan Kutbay
Computers 2024, 13(9), 234; https://doi.org/10.3390/computers13090234 - 17 Sep 2024
Viewed by 1602
Abstract
The extensive utilization of drones has led to numerous scenarios that encompass both advantageous and perilous outcomes. By using deep learning techniques, this study aimed to reduce the dangerous effects of drone use through early detection of drones. The purpose of this study is the evaluation of deep learning approaches such as pre-trained YOLOv8 drone detection for security issues. This study focuses on the YOLOv8 model to achieve optimal performance in object detection tasks using a publicly available dataset collected by Mehdi Özel for a UAV competition that is sourced from GitHub. These images are labeled using Roboflow, and the model is trained on Google Colab. YOLOv8, known for its advanced architecture, was selected due to its suitability for real-time detection applications and its ability to process complex visual data. Hyperparameter tuning and data augmentation techniques were applied to maximize the performance of the model. Basic hyperparameters such as learning rate, batch size, and optimization settings were optimized through iterative experiments to provide the best performance. In addition to hyperparameter tuning, various data augmentation strategies were used to increase the robustness and generalization ability of the model. Techniques such as rotation, scaling, flipping, and color adjustments were applied to the dataset to simulate different conditions and variations. Among the augmentation techniques applied to the specific dataset in this study, rotation was found to deliver the highest performance. Blurring and cropping methods were observed to follow closely behind. The combination of optimized hyperparameters and strategic data augmentation allowed YOLOv8 to achieve high detection accuracy and reliable performance on the publicly available dataset. This method demonstrates the effectiveness of YOLOv8 in real-world scenarios, while also highlighting the importance of hyperparameter tuning and data augmentation in increasing model capabilities. To enhance model performance, dataset augmentation techniques including rotation and blurring are implemented. Following these steps, a significant precision value of 0.946, a notable recall value of 0.9605, and a considerable precision–recall curve value of 0.978 are achieved, surpassing many popular models such as Mask CNN, CNN, and YOLOv5. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
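The training and augmentation workflow described above maps directly onto the ultralytics Python API; the sketch below shows a typical fine-tuning call with rotation, flip, and scaling augmentation enabled. The dataset YAML path and hyperparameter values are placeholders, not the exact settings tuned in the study.

```python
# Illustrative YOLOv8 fine-tuning run with the ultralytics API. The dataset
# YAML, epochs, and augmentation values are placeholders, not the paper's exact setup.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pre-trained checkpoint; the paper's variant may differ
results = model.train(
    data="drone_dataset.yaml",  # assumed dataset definition (train/val paths, class names)
    epochs=100,
    imgsz=640,
    batch=16,
    lr0=0.01,        # initial learning rate
    degrees=15.0,    # rotation augmentation (reported as the most effective technique)
    fliplr=0.5,      # horizontal flip probability
    scale=0.5,       # scaling augmentation
)

metrics = model.val()            # precision, recall, mAP on the validation split
print(metrics.box.map50)         # mAP@0.5
preds = model.predict("test_drone.jpg", conf=0.25)  # hypothetical test image
```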
Show Figures
Figure 1: YOLO family timeline.
Figure 2: Drone image samples from the dataset [24].
Figure 3: (a) The location distribution and (b) the size distribution of the drone objects within the dataset.
Figure 4: Precision–recall curves for the specific dataset at (a) 50, (b) 100, (c) 150, and (d) 200 epochs.
Figure 5: Precision–recall curves for the specific dataset at (a) 640 and (b) 800 image size.
Figure 6: YOLOv8 precision–recall curves for the (a) flip-, (b) rotation-, (c) crop-, (d) blurring-, and (e) gray-scale-augmented specific dataset.
Figure 7: YOLOv8 model results for the specific dataset.
Figure 8: Precision–recall curve for the specific dataset.
Figure 9: (a) F1–confidence, (b) recall–confidence, and (c) precision–confidence curves for the specific dataset.
Figure 10: Test results for the specific dataset.
Figure 11: YOLOv8 model results for the augmented dataset.
Figure 12: Precision–recall curve for the augmented dataset.
Figure 13: (a) F1–confidence, (b) recall–confidence, and (c) precision–confidence curves for the augmented specific dataset.
Figure 14: Augmented dataset result.
16 pages, 1081 KiB  
Article
Optimized Machine Learning Classifiers for Symptom-Based Disease Screening
by Auba Fuster-Palà, Francisco Luna-Perejón, Lourdes Miró-Amarante and Manuel Domínguez-Morales
Computers 2024, 13(9), 233; https://doi.org/10.3390/computers13090233 - 14 Sep 2024
Viewed by 1149
Abstract
This work presents a disease detection classifier based on symptoms encoded by their severity. This model is presented as part of the solution to the saturation of the healthcare system, aiding in the initial screening stage. An open-source dataset is used, which undergoes pre-processing and serves as the data source to train and test various machine learning models, including SVM, RFs, KNN, and ANNs. A three-phase optimization process is developed to obtain the best classifier: first, the dataset is pre-processed; secondly, a grid search is performed with several hyperparameter variations to each classifier; and, finally, the best models obtained are subjected to additional filtering processes. The best-results model, selected based on the performance and the execution time, is a KNN with 2 neighbors, which achieves an accuracy and F1 score of over 98%. These results demonstrate the effectiveness and improvement of the evaluated models compared to previous studies, particularly in terms of accuracy. Although the ANN model has a longer execution time compared to KNN, it is retained in this work due to its potential to handle more complex datasets in a real clinical context. Full article
(This article belongs to the Special Issue Future Systems Based on Healthcare 5.0 for Pandemic Preparedness 2024)
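The grid-search phase of the optimization process can be illustrated with scikit-learn; the snippet below searches over the number of neighbours and weighting scheme for a KNN classifier on synthetic symptom-severity data. The feature matrix, grid, and scoring choices are illustrative assumptions, not the study's dataset or full search space.

```python
# Illustrative grid search over KNN hyperparameters for symptom-based disease
# screening. Synthetic data stands in for the open-source symptom dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, f1_score

# Toy stand-in: 500 "patients", 20 symptom-severity features, 5 "diseases".
X, y = make_classification(n_samples=500, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

grid = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": [2, 3, 5, 7], "weights": ["uniform", "distance"]},
    scoring="f1_macro",
    cv=5,
)
grid.fit(X_train, y_train)

best = grid.best_estimator_
y_pred = best.predict(X_test)
print("best params:", grid.best_params_)
print("accuracy:", accuracy_score(y_test, y_pred))
print("macro F1:", f1_score(y_test, y_pred, average="macro"))
```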
Show Figures
Figure 1: Graphical diagram representing the machine learning algorithms considered in the study for screening system analysis, with colors highlighting key components. (a) Random Forest: green and blue nodes represent data points used in different decision trees, and the final result is determined by majority voting or averaging. (b) K-Nearest Neighbors (KNN): red triangles and blue squares represent different classes, with the green circle being the query point. (c) Support Vector Machine (SVM): yellow circles and blue squares indicate different classes, while the black line is the optimal hyperplane separating them, and the red circles are support vectors. (d) Neural Network (NN): green nodes represent input layers, yellow nodes represent hidden layers, and red nodes represent the output layer.
Figure 2: Graphical abstract of the full processing chain, showing the dataset split into training (green, 70%), evaluation (yellow, 15%), and testing (red, 15%) phases. Multiple algorithms are trained and evaluated using the training and evaluation sets, while the testing set is used for final model performance assessment.
Figure 3: Schematic representation of the steps followed for the pre-processing of the datasets.
Figure 4: Graphical representation and numerical data of the dataset split using hold-out into train, validation, and test subsets.
Figure 5: Confusion matrix for the final selected model (KNN with 2 neighbours).
20 pages, 2961 KiB  
Article
Leveraging Large Language Models with Chain-of-Thought and Prompt Engineering for Traffic Crash Severity Analysis and Inference
by Hao Zhen, Yucheng Shi, Yongcan Huang, Jidong J. Yang and Ninghao Liu
Computers 2024, 13(9), 232; https://doi.org/10.3390/computers13090232 - 14 Sep 2024
Viewed by 1371
Abstract
Harnessing the power of Large Language Models (LLMs), this study explores the use of three state-of-the-art LLMs, specifically GPT-3.5-turbo, LLaMA3-8B, and LLaMA3-70B, for crash severity analysis and inference, framing it as a classification task. We generate textual narratives from original traffic crash tabular data using a pre-built template infused with domain knowledge. Additionally, we incorporated Chain-of-Thought (CoT) reasoning to guide the LLMs in analyzing the crash causes and then inferring the severity. This study also examines the impact of prompt engineering specifically designed for crash severity inference. The LLMs were tasked with crash severity inference to (1) evaluate the models’ capabilities in crash severity analysis, (2) assess the effectiveness of CoT and domain-informed prompt engineering, and (3) examine the reasoning abilities within the CoT framework. Our results showed that LLaMA3-70B consistently outperformed the other models, particularly in zero-shot settings. The CoT and prompt engineering techniques significantly enhanced performance, improving logical reasoning and addressing alignment issues. Notably, the CoT offers valuable insights into LLMs’ reasoning processes, unleashing their capacity to consider diverse factors such as environmental conditions, driver behavior, and vehicle characteristics in severity analysis and inference. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
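The narrative-plus-prompt setup can be made concrete with a small prompt-building function; the template below is a hypothetical illustration of a zero-shot CoT prompt for severity inference, not the exact wording or template used in the study.

```python
# Hypothetical zero-shot Chain-of-Thought prompt builder for crash severity
# inference. The template wording and example record are illustrative only.
SEVERITY_LABELS = ["Minor or non-injury accident", "Serious injury accident", "Fatal accident"]

def crash_narrative(record: dict) -> str:
    # Turn one row of tabular crash data into a short textual narrative.
    return (f"A {record['vehicle_type']} crash occurred on a {record['road_condition']} road "
            f"at {record['time_of_day']} in {record['weather']} weather. "
            f"The driver was {record['driver_age']} years old and "
            f"{'was' if record['speeding'] else 'was not'} speeding.")

def build_cot_prompt(record: dict) -> str:
    return (
        "You are a traffic safety analyst.\n"
        f"Crash description: {crash_narrative(record)}\n"
        "Think step by step about the environmental conditions, driver behavior, "
        "and vehicle characteristics, then infer the crash severity.\n"
        f"Answer with exactly one of: {', '.join(SEVERITY_LABELS)}."
    )

example = {"vehicle_type": "passenger car", "road_condition": "wet", "time_of_day": "night",
           "weather": "rainy", "driver_age": 23, "speeding": True}
print(build_cot_prompt(example))  # this string would then be sent to GPT-3.5-turbo or LLaMA3
```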
Show Figures
Figure 1: Illustration of textual narrative generation.
Figure 2: Zero-shot (ZS).
Figure 3: Zero-shot with CoT (ZS_CoT).
Figure 4: Zero-shot with prompt engineering (ZS_PE).
Figure 5: Zero-shot with prompt engineering & CoT (ZS_PE_CoT).
Figure 6: Few shot (FS).
Figure 7: Exemplar responses of LLMs in different settings.
Figure 8: Effect of PE or CoT separately.
Figure 9: Performance comparison of models in ZS, ZS_PE, and ZS_PE_CoT.
Figure 10: Word cloud for correctly inferred "Minor or non-injury accident" in the ZS_CoT setting.
Figure 11: Word cloud for correctly inferred "Serious injury accident" in the ZS_CoT setting.
Figure 12: Word cloud for correctly inferred "Fatal accident" in the ZS_CoT setting.
Figure 13: Output examples for fatal accidents from LLaMA3-70B in the ZS_CoT setting.
13 pages, 3622 KiB  
Article
Assessing the Impact of Prolonged Sitting and Poor Posture on Lower Back Pain: A Photogrammetric and Machine Learning Approach
by Valentina Markova, Miroslav Markov, Zornica Petrova and Silviya Filkova
Computers 2024, 13(9), 231; https://doi.org/10.3390/computers13090231 - 14 Sep 2024
Viewed by 4323
Abstract
Prolonged static sitting at the workplace is considered one of the main risks for the development of musculoskeletal disorders (MSDs) and adverse health effects. Factors such as poor posture and extended sitting are perceived to be a reason for conditions such as lumbar discomfort and lower back pain (LBP), even though the scientific explanation of this relationship is still unclear and raises disputes in the scientific community. The current study focused on evaluating the relationship between LBP and prolonged sitting in poor posture using photogrammetric images, postural angle calculation, machine learning models, and questionnaire-based self-reports regarding the occurrence of LBP and similar symptoms among the participants. Machine learning models trained with this data are employed to recognize poor body postures. Two scenarios have been elaborated for modeling purposes: scenario 1, based on natural body posture tagged as correct and incorrect, and scenario 2, based on incorrect body postures, corrected additionally by the rehabilitator. The achieved accuracies of respectively 75.3% and 85% for both scenarios reveal the potential for future research in enhancing awareness and actively managing posture-related issues that elevate the likelihood of developing lower back pain symptoms. Full article
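The postural angles used as model features are computed from the photogrammetric marker coordinates; a minimal numpy sketch of one such angle (the angle at a middle marker formed by two neighbouring markers) is shown below. The coordinates are made up, and the paper defines seven specific angles from its own marker set.

```python
# Illustrative computation of a postural angle from three body-marker
# coordinates in a sagittal-plane photograph. Coordinates are made up;
# the study defines its own seven angles from its marker placement.
import numpy as np

def angle_at(vertex: np.ndarray, p1: np.ndarray, p2: np.ndarray) -> float:
    """Angle (degrees) at `vertex` formed by segments vertex->p1 and vertex->p2."""
    v1, v2 = p1 - vertex, p2 - vertex
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# Hypothetical 2D pixel coordinates of ear (tragus), C7, and greater trochanter markers.
tragus = np.array([410.0, 180.0])
c7 = np.array([395.0, 260.0])
trochanter = np.array([380.0, 620.0])

print("trunk-related angle at C7:", round(angle_at(c7, tragus, trochanter), 1), "degrees")
```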
Show Figures
Figure 1: Structure of the experimental workflow.
Figure 2: Participant's posture with body markers: natural sitting posture (a) and corrected sitting posture (b).
Figure 3: Postural angles (1–7) with focus on those related to higher risk of low back pain development.
Figure 4: Box plot representation of distributions of the postural angles (2, 6, 7) for 100 participants.
Figure 5: Accuracies of all ML methods achieved in both scenarios before and after fine-tuning of the hyperparameters.
Figure 6: Assessment of feature importance for prediction of incorrect posture with Random Forest classifier.
16 pages, 13238 KiB  
Article
Transfer of Periodic Phenomena in Multiphase Capillary Flows to a Quasi-Stationary Observation Using U-Net
by Bastian Oldach, Philipp Wintermeyer and Norbert Kockmann
Computers 2024, 13(9), 230; https://doi.org/10.3390/computers13090230 - 13 Sep 2024
Viewed by 542
Abstract
Miniaturization promotes efficiency and expands the exploration domain in scientific fields such as computer science, engineering, medicine, and biotechnology. In particular, the field of microfluidics is a flourishing technology, which deals with the manipulation of small volumes of liquid. Dispersed droplets or bubbles in a second immiscible liquid are of great interest for screening applications or chemical and biochemical reactions. However, since very small dimensions are characterized by phenomena that differ from those at macroscopic scales, a deep understanding of the physics is crucial for effective device design. Due to the small volumes in miniaturized systems, common measurement techniques are not applicable as they exceed the dimensions of the device by a multitude. Hence, image analysis is commonly chosen as a method to understand ongoing phenomena. Artificial Intelligence is now the state of the art for recognizing patterns in images or analyzing datasets that are too large for humans to handle. X-ray-based Computer Tomography adds a third dimension to images, which results in more information, but ultimately also in more complex image analysis. In this work, we present the application of the U-Net neural network to extract certain states during droplet formation in a capillary, which forms a constantly repeated process that is captured on tens of thousands of CT images. The experimental setup features a co-flow configuration based on 3D-printed capillaries with two different cross-sections, each with an inner diameter or edge length, respectively, of 1.6 mm. For droplet formation, water was dispersed in silicone oil. The classification into different droplet states allows for 3D reconstruction and a time-resolved 3D analysis of the present phenomena. The original U-Net was modified to process input images of a size of 688 × 432 pixels, while the encoder and decoder paths feature 23 convolutional layers. The U-Net consists of four max pooling layers and four upsampling layers. The training was performed on 90% and validated on 10% of a dataset containing 492 images showing different states of droplet formation. A mean Intersection over Union of 0.732 was achieved for a training of 50 epochs, which is considered a good performance. The presented U-Net needs 120 ms per image to process 60,000 images and categorize emerging droplets into 24 states at 905 angles. Once the model is trained sufficiently, it provides accurate segmentation for various flow conditions. The selected images are used for 3D reconstruction, enabling the 2D and 3D quantification of emerging droplets in capillaries with circular and square cross-sections. By applying this method, a temporal resolution of 25–40 ms was achieved. Droplets emerging in capillaries with a square cross-section become bigger under the same flow conditions in comparison to capillaries with a circular cross-section. The presented methodology is promising for other periodic phenomena in different scientific disciplines that focus on imaging techniques. Full article
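The segmentation quality in the study is reported as a mean Intersection over Union (IoU); a short numpy sketch of how IoU is computed for a pair of binary masks is given below. The masks are toy arrays, not CT data.

```python
# Computing Intersection over Union (IoU) for a predicted vs. ground-truth
# binary segmentation mask, as used to evaluate the U-Net. Toy masks only.
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection / union) if union else 1.0

truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1            # ground-truth droplet region
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 2:6] = 1             # slightly shifted prediction
print("IoU:", round(iou(pred, truth), 3))   # 0.6 for this toy example
```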
Show Figures
Figure 1: Principle sketch of how a periodic process like repeated slug formation can be classified according to the state of droplet formation. The left side shows a regular slug flow, which is typical for multiphase flows in capillaries, and the droplet formation mechanism with the steps from l1 to l5. On the right side, the repeated slug flow formation is temporally resolved for the steps l1 to l5 to obtain a series of stationary states that enable 3D analysis.
Figure 2: (a) The µ-CT used for the experiments and its surrounding peripherals. (b) A close-up view of the specimen chamber with an installed capillary under investigation.
Figure 3: (a) The U-Net architecture that was used for this work with an input image and the classification of the image as the output. (b) An example CT image of an emerging droplet, the human-labeled ground truth, and the output of the U-Net that was trained for 50 epochs. (c) A sketch of the ARM. The datasets are fed to the U-Net and are classified according to the defined states. Projection images are acquired at each angular position from 0 to 227° in 0.25° increments. The U-Net is applied to select one image for each angular position that captures a desired droplet state. The selected projection images are then used to reconstruct a 3D volume for image analysis.
Figure 4: The training accuracy (dotted gray line) and validation accuracy (solid black line) for 50 epochs can be tracked on the left Y-axis. The corresponding IoU (red markers) is given on the right Y-axis for 1, 5, 10, 20, 30, and 50 epochs of training.
Figure 5: (a) The 2D droplet contours tracked over the 24 steps provided by the U-Net classification by plotting droplet radii over the droplet length. The diagram emphasizes the droplet evolution starting at the filling stage (dotted light-gray lines) and over the necking stage (dashed dark-gray line), until the droplet detaches (solid black line) in the circular capillary (top) with an inner diameter d_c,i of 1.6 mm and for the square capillary (bottom) with d_h = 1.6 mm. (b) The reconstructed 3D representation of the different droplet states in the circular (top) and square (bottom) capillary for the filling stage (left), necking stage (middle), and the detached droplet (right) for a constant Weber number We.
Figure A1: Comparison of the input image, the ground truth, and the U-Net output for 1, 5, 10, 20, 30, and 50 epochs of training.
19 pages, 1495 KiB  
Article
Deep Learning for Predicting Attrition Rate in Open and Distance Learning (ODL) Institutions
by Juliana Ngozi Ndunagu, David Opeoluwa Oyewola, Farida Shehu Garki, Jude Chukwuma Onyeakazi, Christiana Uchenna Ezeanya and Elochukwu Ukwandu
Computers 2024, 13(9), 229; https://doi.org/10.3390/computers13090229 - 11 Sep 2024
Viewed by 746
Abstract
Student enrollment is a vital aspect of educational institutions, encompassing active, registered and graduate students. All the same, some students fail to engage with their studies after admission and drop out along the line; this is known as attrition. The student attrition rate is acknowledged as the most complicated and significant problem facing educational systems and is caused by institutional and non-institutional challenges. In this study, the researchers utilized a dataset obtained from the National Open University of Nigeria (NOUN) from 2012 to 2022, which included comprehensive information about students enrolled in various programs at the university who were inactive and had dropped out. The researchers used deep learning techniques, such as the Long Short-Term Memory (LSTM) model and compared their performance with the One-Dimensional Convolutional Neural Network (1DCNN) model. The results of this study revealed that the LSTM model achieved overall accuracy of 57.29% on the training data, while the 1DCNN model exhibited lower accuracy of 49.91% on the training data. The LSTM indicated a superior correct classification rate compared to the 1DCNN model. Full article
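As a companion to the LSTM sketched for an earlier article in this issue, the 1DCNN side of the comparison can be illustrated with a compact Keras model; the input shape, layer sizes, and class labels below are assumptions, not the authors' architecture or feature set.

```python
# Illustrative 1D CNN classifier of the kind compared against the LSTM for
# attrition prediction. Input shape and layer sizes are assumptions only.
from tensorflow.keras import layers, models

n_steps, n_features, n_classes = 12, 8, 2   # e.g. 12 time steps of 8 engagement features (assumed)

model = models.Sequential([
    layers.Input(shape=(n_steps, n_features)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),   # active vs. dropped-out (assumed classes)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```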
Show Figures
Figure 1: A typical DL workflow to solve real-world problems [7].
Figure 2: Students' readiness to complete their study.
Figure 3: Difficulty in understanding the course material.
Figure 4: Institutional challenge dataset: frustration in accessing information from NOUN.
Figure 5: Institutional challenge dataset—use of social network platforms.
Figure 6: Institutional challenge dataset—academic performance.
Figure 7: Institutional challenge dataset—inadequate communication.
Figure 8: Non-institutional challenge dataset—family challenges.
Figure 9: Non-institutional challenge dataset—financial reasons.
Figure 10: Non-institutional dataset—sickness.
Figure 11: Correlation matrix between different factors that may affect student attrition in the institutional challenge dataset.
Figure 12: Correlation matrix between different factors that may affect student attrition in the non-institutional challenge dataset.
Figure 13: Training and validation accuracy of LSTM model in the institutional challenge dataset.
Figure 14: Training and validation loss of LSTM model in the institutional challenge dataset.
Figure 15: Training and validation accuracy of CNN model in the institutional challenge dataset.
Figure 16: Training and validation loss of CNN model in the institutional challenge dataset.
Figure 17: Training and validation accuracy of LSTM model in the non-institutional challenge dataset.
Figure 18: Training and validation loss of LSTM model in the non-institutional challenge dataset.
Figure 19: Training and validation accuracy of CNN model in the non-institutional challenge dataset.
Figure 20: Training and validation loss of CNN model in the non-institutional challenge dataset.
17 pages, 1343 KiB  
Review
The State of the Art of Digital Twins in Health—A Quick Review of the Literature
by Leonardo El-Warrak and Claudio M. de Farias
Computers 2024, 13(9), 228; https://doi.org/10.3390/computers13090228 - 11 Sep 2024
Viewed by 1776
Abstract
A digital twin can be understood as a representation of a real asset, in other words, a virtual replica of a physical object, process or even a system. Virtual models can integrate with all the latest technologies, such as the Internet of Things (IoT), cloud computing, and artificial intelligence (AI). Digital twins have applications in a wide range of sectors, from manufacturing and engineering to healthcare. They have been used in managing healthcare facilities, streamlining care processes, personalizing treatments, and enhancing patient recovery. By analysing data from sensors and other sources, healthcare professionals can develop virtual models of patients, organs, and human systems, experimenting with various strategies to identify the most effective approach. This approach can lead to more targeted and efficient therapies while reducing the risk of collateral effects. Digital twin technology can also be used to generate a virtual replica of a hospital to review operational strategies, capabilities, personnel, and care models to identify areas for improvement, predict future challenges, and optimize organizational strategies. The potential impact of this tool on our society and its well-being is quite significant. This article explores how digital twins are being used in healthcare. This article also introduces some discussions on the impact of this use and future research and technology development projections for the use of digital twins in the healthcare sector. Full article
Show Figures
Figure 1: PICO Strategy.
Figure 2: PRISMA flow diagram of the selection process.
Figure 3: Main axes.
13 pages, 2937 KiB  
Article
An Unsupervised Approach for Treatment Effectiveness Monitoring Using Curvature Learning
by Hersh Sagreiya, Isabelle Durot and Alireza Akhbardeh
Computers 2024, 13(9), 227; https://doi.org/10.3390/computers13090227 - 9 Sep 2024
Viewed by 746
Abstract
Contrast-enhanced ultrasound could assess whether cancer chemotherapeutic agents work in days, rather than waiting 2–3 months, as is typical using the Response Evaluation Criteria in Solid Tumors (RECIST), therefore avoiding toxic side effects and expensive, ineffective therapy. A total of 40 mice were implanted with human colon cancer cells: treatment-sensitive mice in control (n = 10, receiving saline) and treated (n = 10, receiving bevacizumab) groups and treatment-resistant mice in control (n = 10) and treated (n = 10) groups. Each mouse was imaged using 3D dynamic contrast-enhanced ultrasound with Definity microbubbles. Curvature learning, an unsupervised learning approach, quantized pixels into three classes—blue, yellow, and red—representing normal, intermediate, and high cancer probability, both at baseline and after treatment. Next, a curvature learning score was calculated for each mouse using statistical measures representing variations in these three color classes across each frame from cine ultrasound images obtained during contrast administration on a given day (intra-day variability) and between pre- and post-treatment days (inter-day variability). A Wilcoxon rank-sum test compared score distributions between treated, treatment-sensitive mice and all others. There was a statistically significant difference in tumor score between the treated, treatment-sensitive group (n = 10) and all others (n = 30) (p = 0.0051). Curvature learning successfully identified treatment response, detecting changes in tumor perfusion before changes in tumor size. A similar technique could be developed for humans. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
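The core statistic reported above — a Wilcoxon rank-sum test comparing curvature learning score (CLS) distributions between the treated, treatment-sensitive mice (n = 10) and all other mice (n = 30) — follows the pattern in the short Python sketch below. The score values are synthetic placeholders, not the study's data; only the test itself mirrors the abstract.

# Minimal sketch of the reported group comparison using a Wilcoxon rank-sum test.
# The CLS values below are random placeholders standing in for the study's scores.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
cls_treated_sensitive = rng.normal(loc=0.8, scale=0.1, size=10)  # hypothetical, n = 10
cls_all_others = rng.normal(loc=0.5, scale=0.1, size=30)         # hypothetical, n = 30

statistic, p_value = ranksums(cls_treated_sensitive, cls_all_others)
print(f"rank-sum statistic = {statistic:.3f}, p = {p_value:.4f}")
# A p-value below 0.05 (the paper reports p = 0.0051 on its own data) indicates a
# significant difference between the two score distributions.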
Show Figures
Graphical abstract
Figure 1: Three-dimensional dynamic contrast-enhanced ultrasound (DCE-US) imaging protocol for both treatment-sensitive (LS174T) and treatment-resistant (CT26) tumors. Ten days after tumor cell injection, 3D DCE-US scans (arrows) were performed prior to treatment (baseline scan at day 0) and at subsequent days 1, 3, 7, and 10 after treatment using either bevacizumab (treated mice) or saline (control mice). All mice were sacrificed on day 10, and tumors were excised for histologic analysis.
Figure 2: (A) Data processing block diagram to calculate curvature learning (CLS) score, a treatment effectiveness measure. (B) The results generated by the proposed curvature learning algorithm, including the fused heat map (or “embedded image”), the scattergram in multiple color-coded clusters, the three-class scattergram, and finally the percentage of pixels in each of the three classes (blue, yellow, and red). (C) The chart on the left demonstrates a characteristic example of variation in the yellow class on different days (pre-treatment day 0 and post-treatment days 1, 3, 7, and 10). Statistical features from different days were used to calculate intra-day curvature learning scores and the inter-day curvature learning score, resulting in a final curvature learning score representing an overall measure of treatment effectiveness.
Figure 3: Curvature learning algorithm results for a typical treatment-sensitive mouse treated with the chemotherapeutic agent bevacizumab (LSBolusAV) on days 0 (baseline), 3, and 10. This mouse has a cell line that is expected to respond to treatment. In the fused heat map, dark blue represents normal tissue, and the progressive color change from dark blue to cyan, green, yellow, orange, and red represents increasing suspicion for tumor. There is a progressive decrease in color classes more suspicious for tumor (yellow, orange, and red) with time. The scattergrams in the middle two columns demonstrate quantization into three color classes: blue, yellow, and red. In the percentage bar plots on the right, there is a decrease in the more suspicious yellow and red color classes with time.
Figure 4: Curvature learning algorithm results for a typical treatment-resistant, untreated (CTBolusCTRL) mouse on days 0 (baseline), 3, and 10. This mouse is not expected to respond to treatment. Pixels representing normal tissue are dark blue. There is a progressive increase in color classes more suspicious for tumor (yellow and red) with time both in the fused heat map on the left and the percentage bar plots on the right.
Figure 5: Box-and-whisker plot comparing the distribution of curvature learning scores (CLSs) between the four groups of ten mice: treatment-resistant, untreated (CTBolusCTRL); treatment-resistant, treated (CTBolusAV); treatment-sensitive, untreated (LSBolusCTRL); treatment-sensitive, treated (LSBolusAV). The horizontal red lines represent the median, the blue lines indicate the upper and lower quartiles, and the black lines at the ends denote the minimum and maximum values. This figure compares mice pre-treatment and three days after treatment. Only treatment-sensitive mice treated with bevacizumab were expected to respond to treatment, and this group shows a different distribution compared to the other groups.
Figure 6: Performance and capability of the proposed method for early diagnosis using only the first day after treatment: Difference in intra-day curvature learning scores pre-treatment and one day after treatment for the following groups: treatment-resistant, untreated (CTBolusCTRL); treatment-resistant, treated (CTBolusAV); treatment-sensitive, untreated (LSBolusCTRL); treatment-sensitive, treated (LSBolusAV). The horizontal red lines represent the median, the blue lines indicate the upper and lower quartiles, and the black lines at the ends denote the minimum and maximum values. There was a statistically significant difference in the final curvature learning score, encompassing differences in intra-day curvature learning scores pre-treatment and one day after treatment, between the treated, treatment-sensitive group and all other groups (p = 0.0051).
17 pages, 9168 KiB  
Article
An Integrated Software-Defined Networking–Network Function Virtualization Architecture for 5G RAN–Multi-Access Edge Computing Slice Management in the Internet of Industrial Things
by Francesco Chiti, Simone Morosi and Claudio Bartoli
Computers 2024, 13(9), 226; https://doi.org/10.3390/computers13090226 - 9 Sep 2024
Viewed by 1001
Abstract
The Internet of Things (IoT), namely, the set of intelligent devices equipped with sensors and actuators and capable of connecting to the Internet, has now become an integral part of the most competitive industries, as it enables optimization of production processes and reduction [...] Read more.
The Internet of Things (IoT), namely, the set of intelligent devices equipped with sensors and actuators and capable of connecting to the Internet, has now become an integral part of the most competitive industries, as it enables optimization of production processes and reduction in operating costs and maintenance time, together with improving the quality of products and services. More specifically, the term Industrial Internet of Things (IIoT) identifies the system which consists of advanced Internet-connected equipment and analytics platforms specialized for industrial activities, where IIoT devices range from small environmental sensors to complex industrial robots. This paper presents an integrated high-level SDN-NFV architecture enabling clusters of smart devices to interconnect and manage the exchange of data with distributed control processes and databases. In particular, it is focused on 5G RAN-MEC slice management in the IIoT context. The proposed system is emulated by means of two distinct real-time frameworks, demonstrating improvements in connectivity, energy efficiency, end-to-end latency and throughput. In addition, its scalability, modularity and flexibility are assessed, making this framework suitable for testing additional, more advanced applications. Full article
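Figure 8 below shows an MQTT publisher/subscriber pair exchanging kiln-temperature samples. As a rough sketch of that interaction pattern only — the broker address and payload are assumptions, and the paho-mqtt 1.x client API is assumed rather than the paper's actual client code — the exchange could look like this in Python:

# Hedged sketch of the MQTT publish/subscribe interaction illustrated in Figure 8,
# written against the paho-mqtt 1.x API. Broker host and the temperature value are
# illustrative assumptions; only the topic name comes from the figure caption.
import paho.mqtt.client as mqtt

BROKER_HOST = "localhost"   # assumed broker address
TOPIC = "Kiln Temperature"  # topic name as it appears in Figure 8

def on_message(client, userdata, msg):
    # Subscriber side: print each temperature sample as it arrives.
    print(f"{msg.topic}: {msg.payload.decode()}")

subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER_HOST, 1883)
subscriber.subscribe(TOPIC)
subscriber.loop_start()

publisher = mqtt.Client()
publisher.connect(BROKER_HOST, 1883)
publisher.publish(TOPIC, payload="842.5")  # one hypothetical kiln temperature reading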
Show Figures
Figure 1: Proposed network architecture.
Figure 2: Radio Access Network and Edge Computing slice integration and orchestration.
Figure 3: (Virtual) Switch configuration and control via OpenFlow interface.
Figure 4: The 5G simulated network architecture compliant with 3GPP (TR 21.915).
Figure 5: Proposed and developed overall emulation architecture integrating Simu5G (5G RAN), Mininet (Cloud/Fog/Edge) and wide area wireless sensor networks.
Figure 6: The 5G-IIoT service-based adopted architecture with a possible example of network deployment via Network Slicing.
Figure 7: Network Slicing management scheme applied at the RAN level by exploiting the Carrier Aggregation module provided in Simu5G.
Figure 8: In the upper half of the image, a Client MQTT (Publisher) publishes values related to the topic “Kiln Temperature”, while in the lower half, another Client MQTT (Subscriber) subscribes to the same topic and receives the temperature values.
Figure 9: The 5G-IIoT deployment scenarios for performance evaluation, where multiple base stations (gNbs) are considered in order to facilitate RAN Slicing.
Figure 10: Normalized residual battery level of the mobile robot as a function of time.
Figure 11: Round-Trip Time comparison for the cases of vertical approach (Slicing) and horizontal approach (one size fits all).
22 pages, 1904 KiB  
Article
SLACPSS: Secure Lightweight Authentication for Cyber–Physical–Social Systems
by Ahmed Zedaan M. Abed, Tamer Abdelkader and Mohamed Hashem
Computers 2024, 13(9), 225; https://doi.org/10.3390/computers13090225 - 9 Sep 2024
Viewed by 1158
Abstract
The concept of Cyber–Physical–Social Systems (CPSSs) has emerged as a response to the need to understand the interaction between Cyber–Physical Systems (CPSs) and humans. This shift from CPSs to CPSSs is primarily due to the widespread use of sensor-equipped smart devices that are [...] Read more.
The concept of Cyber–Physical–Social Systems (CPSSs) has emerged as a response to the need to understand the interaction between Cyber–Physical Systems (CPSs) and humans. This shift from CPSs to CPSSs is primarily due to the widespread use of sensor-equipped smart devices that are closely connected to users. CPSSs have been a topic of interest for more than ten years, gaining increasing attention in recent years. The inclusion of human elements in CPS research has presented new challenges, particularly in understanding human dynamics, which adds complexity that has yet to be fully explored. CPSSs are a base class and consist of three basic components (cyberspace, physical space, and social space). We map the components of the metaverse with that of a CPSS, and we show that the metaverse is an implementation of a Cyber–Physical–Social System (CPSS). The metaverse is made up of computer systems with many elements, such as artificial intelligence, computer vision, image processing, mixed reality, augmented reality, and extended reality. It also comprises physical systems, controlled objects, and human interaction. The identification process in CPSSs suffers from weak security, and the authentication problem requires heavy computation. Therefore, we propose a new protocol for secure lightweight authentication in Cyber–Physical–Social Systems (SLACPSSs) to offer secure communication between platform servers and users as well as secure interactions between avatars. We perform a security analysis and compare the proposed protocol to the related previous ones. The analysis shows that the proposed protocol is lightweight and secure. Full article
Show Figures
Figure 1: A Cyber–Physical–Social System (CPSS), the image was taken from [8].
Figure 2: The metaverse as a Cyber–Physical–Social System.
Figure 3: Security threats to the metaverse [27].
Figure 4: System model for metaverse as a Cyber–Physical–Social System.
Figure 5: Authentication message transition.
21 pages, 431 KiB  
Article
Application of Proximal Policy Optimization for Resource Orchestration in Serverless Edge Computing
by Mauro Femminella and Gianluca Reali
Computers 2024, 13(9), 224; https://doi.org/10.3390/computers13090224 - 6 Sep 2024
Viewed by 838
Abstract
Serverless computing is a new cloud computing model suitable for providing services in both large cloud and edge clusters. In edge clusters, the autoscaling functions play a key role on serverless platforms as the dynamic scaling of function instances can lead to reduced [...] Read more.
Serverless computing is a new cloud computing model suitable for providing services in both large cloud and edge clusters. In edge clusters, the autoscaling functions play a key role on serverless platforms as the dynamic scaling of function instances can lead to reduced latency and efficient resource usage, both typical requirements of edge-hosted services. However, a badly configured scaling function can introduce unexpected latency due to so-called “cold start” events or service request losses. In this work, we focus on the optimization of resource-based autoscaling on OpenFaaS, the most-adopted open-source Kubernetes-based serverless platform, leveraging real-world serverless traffic traces. We resort to the reinforcement learning algorithm named Proximal Policy Optimization to dynamically configure the value of the Kubernetes Horizontal Pod Autoscaler, trained on real traffic. This was accomplished via a state space model able to take into account resource consumption, performance values, and time of day. In addition, the reward function definition promotes Service-Level Agreement (SLA) compliance. We evaluate the proposed agent, comparing its performance in terms of average latency, CPU usage, memory usage, and loss percentage with respect to the baseline system. The experimental results show the benefits provided by the proposed agent, obtaining a service time within the SLA while limiting resource consumption and service loss. Full article
(This article belongs to the Special Issue Advances in High-Performance Switching and Routing)
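The reward design described above — promoting Service-Level Agreement compliance while discouraging request losses and excess resource consumption — can be illustrated with a minimal scalar reward of the following shape. The SLA target and weights are assumptions for illustration; the paper's actual reward function may combine these terms differently.

# Illustrative SLA-aware reward for a PPO agent that tunes the Kubernetes HPA threshold.
# The SLA target and penalty weights are assumed values, not the paper's definition.
SLA_LATENCY_S = 0.5       # assumed latency target from the Service-Level Agreement
W_LOSS, W_CPU = 2.0, 0.5  # assumed penalty weights for losses and CPU consumption

def reward(latency_s: float, loss_fraction: float, cpu_cores_used: float) -> float:
    sla_term = 1.0 if latency_s <= SLA_LATENCY_S else -1.0
    return sla_term - W_LOSS * loss_fraction - W_CPU * cpu_cores_used

# Example step: latency within the SLA, 2% of requests lost, 1.2 CPU cores consumed.
print(reward(0.35, 0.02, 1.2))  # -> 0.36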
Show Figures
Figure 1: General model of a controlled serverless computing cluster.
Figure 2: Performance of the baseline system without the reinforcement learning in terms of (a) service latency, (b) resource utilization (CPU), and (c) fraction of lost requests.
Figure 3: Performance of PPO-driven serverless edge system in terms of service latency as a function of the value of CPU limits for both (a) training and (b) test.
Figure 4: Performance of PPO-driven serverless edge system in terms of fraction of lost service requests as a function of the value of CPU limits for both (a) training and (b) test.
Figure 5: Average value of the HPA threshold in the PPO-driven serverless edge system as a function of the value of CPU limits for both (a) training and (b) test.
Figure 6: Performance of PPO-driven serverless edge system in terms of CPU utilization efficiency as a function of the value of CPU limits for both (a) training and (b) test.
Figure 7: Boxplot of service latency for PPO-driven serverless edge system (CPU limits set to 500 m, Δ_S = 30 s) as a function of the discount factor γ in the test phase.
Figure 8: CPU utilization efficiency for PPO-driven serverless edge system (CPU limits set to 500 m, Δ_S = 30 s) as a function of the discount factor γ for both training and test phases.
30 pages, 5636 KiB  
Review
A Survey of Blockchain Applicability, Challenges, and Key Threats
by Catalin Daniel Morar and Daniela Elena Popescu
Computers 2024, 13(9), 223; https://doi.org/10.3390/computers13090223 - 6 Sep 2024
Viewed by 1976
Abstract
With its decentralized, immutable, and consensus-based validation features, blockchain technology has grown from early financial applications to a variety of different sectors. This paper aims to outline various applications of the blockchain, and systematically identify general challenges and key threats regarding its adoption. [...] Read more.
With its decentralized, immutable, and consensus-based validation features, blockchain technology has grown from early financial applications to a variety of different sectors. This paper aims to outline various applications of the blockchain, and systematically identify general challenges and key threats regarding its adoption. The challenges are organized into even broader groups, to allow a clear overview and identification of interconnected issues. Potential solutions are introduced into the discussion, addressing their possible ways of mitigating these challenges and their forward-looking effects in fostering the adoption of blockchain technology. The paper also highlights some potential directions for future research that may overcome these challenges to unlock further applications. More generally, the article attempts to describe the potential transformational implications of blockchain technology, through the manner in which it may contribute to the advancement of a diversity of industries. Full article
Show Figures
Figure 1: Evolution of blockchain-related papers published over time. Source: own illustration, based on the findings related to article [2].
Figure 2: Resource-identification methodology.
Figure 3: Components of the model proposed in the article [19].
Figure 4: Components of the model proposed in the article [20].
Figure 5: Components of the model proposed in the article [25].
Figure 6: Components of the model proposed in the article [23].
Figure 7: Components of the model proposed in the article [35].
Figure 8: Components of the model proposed in the article [38].
Figure 9: The architecture of the model proposed in the article [39].
Figure 10: Components of the model proposed in the article [21].
Figure 11: The percent of highlighted challenges in the education domain. Own illustration, based on the results from [40].
Figure 12: The types of IoD blockchain-powered schemes described in [33].
Figure 13: Sharding technique.
Figure 14: Verification tools for smart contracts.
Figure 15: Oracle communication.
24 pages, 1191 KiB  
Article
Usability Heuristics for Metaverse
by Khalil Omar, Hussam Fakhouri, Jamal Zraqou and Jorge Marx Gómez
Computers 2024, 13(9), 222; https://doi.org/10.3390/computers13090222 - 6 Sep 2024
Cited by 1 | Viewed by 1028
Abstract
The inclusion of usability heuristics into the metaverse is aimed at solving the unique issues raised by virtual reality (VR), augmented reality (AR), and mixed reality (MR) environments. This research points out the usability challenges of metaverse user interfaces (UIs), such as information [...] Read more.
The inclusion of usability heuristics into the metaverse is aimed at solving the unique issues raised by virtual reality (VR), augmented reality (AR), and mixed reality (MR) environments. This research points out the usability challenges of metaverse user interfaces (UIs), such as information overloading, complex navigation, and the need for intuitive control mechanisms in these immersive spaces. By adapting the existing usability models to suit the metaverse context, this study presents a detailed list of heuristics and sub-heuristics that are designed to improve the overall usability of metaverse UIs. These heuristics are essential when it comes to creating user-friendly, inclusive, and captivating virtual environments (VEs) that take care of the needs of three-dimensional interactions, social dynamics demands, and integration with digital–physical worlds. It should be noted that these heuristics have to keep up with new technological advancements, as well as changing expectations from users, hence ensuring a positive user experience (UX) within the metaverse. Full article
Show Figures
Figure 1: Methodology followed to develop usability heuristics and guidelines for metaverse UIs.
Figure 2: Mind-map diagram for the usability heuristics and sub-heuristics for metaverse UIs.
29 pages, 1466 KiB  
Article
Teach Programming Using Task-Driven Case Studies: Pedagogical Approach, Guidelines, and Implementation
by Jaroslav Porubän, Milan Nosál’, Matúš Sulír and Sergej Chodarev
Computers 2024, 13(9), 221; https://doi.org/10.3390/computers13090221 - 5 Sep 2024
Viewed by 737
Abstract
Despite the effort invested to improve the teaching of programming, students often face problems with understanding its principles when using traditional learning approaches. This paper presents a novel teaching method for programming, combining the task-driven methodology and the case study approach. This method [...] Read more.
Despite the effort invested to improve the teaching of programming, students often face problems with understanding its principles when using traditional learning approaches. This paper presents a novel teaching method for programming, combining the task-driven methodology and the case study approach. This method is called a task-driven case study. The case study aspect should provide a real-world context for the examples used to explain the required knowledge. The tasks guide students during the course to ensure that they will not fall into bad practices. We provide reasoning for using the combination of these two methodologies and define the essential properties of our method. Using a specific example of the Minesweeper case study from the Java technologies course, the readers are guided through the process of the case study selection, solution implementation, study guide writing, and course execution. The teachers’ and students’ experiences with this approach, including its advantages and potential drawbacks, are also summarized. Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)
Show Figures
Figure 1: The relationship between goals and tasks in case studies.
Figure 2: The task-driven case study lifecycle.
Figure 3: Our case study solution (left) compared with Microsoft Minesweeper (right).
Figure 4: The projection of code division to goals using TODO comments in Minesweeper.
Figure 5: Relation between implementation tasks and learning goals in the Minesweeper case study.
Figure 6: Alignment of lectures and practical lessons in the Java technologies course.
Figure 7: An excerpt from a study guide demonstrating the relationship between goals and tasks.
Figure 8: A snapshot of the Minesweeper class diagram in the fifth module.
21 pages, 3534 KiB  
Article
Digital Genome and Self-Regulating Distributed Software Applications with Associative Memory and Event-Driven History
by Rao Mikkilineni, W. Patrick Kelly and Gideon Crawley
Computers 2024, 13(9), 220; https://doi.org/10.3390/computers13090220 - 5 Sep 2024
Cited by 1 | Viewed by 759
Abstract
Biological systems have a unique ability inherited through their genome. It allows them to build, operate, and manage a society of cells with complex organizational structures, where autonomous components execute specific tasks and collaborate in groups to fulfill systemic goals with shared knowledge. [...] Read more.
Biological systems have a unique ability inherited through their genome. It allows them to build, operate, and manage a society of cells with complex organizational structures, where autonomous components execute specific tasks and collaborate in groups to fulfill systemic goals with shared knowledge. The system receives information from various senses, makes sense of what is being observed, and acts using its experience while the observations are still in progress. We use the General Theory of Information (GTI) to implement a digital genome, specifying the operational processes that design, deploy, operate, and manage a cloud-agnostic distributed application that is independent of IaaS and PaaS infrastructure, which provides the resources required to execute the software components. The digital genome specifies the functional and non-functional requirements that define the goals and best-practice policies to evolve the system using associative memory and event-driven interaction history to maintain stability and safety while achieving the system’s objectives. We demonstrate a structural machine, cognizing oracles, and knowledge structures derived from GTI used for designing, deploying, operating, and managing a distributed video streaming application with autopoietic self-regulation that maintains structural stability and communication among distributed components with shared knowledge while maintaining expected behaviors dictated by functional requirements. Full article
Show Figures
Figure 1: Current state of the art of information processing structures.
Figure 2: According to the General Theory of Information [29], information is the bridge between the material, mental, and digital worlds.
Figure 3: The digital genome implementation integrates symbolic and sub-symbolic computing.
Figure 4: Two information processing structures.
Figure 5: Digital genome-driven distributed application with associative memory and event-driven interaction history. Both associative memory and event history shown as networks are described in detail in the video https://triadicautomata.com/digital-genome-vod-presentation/ (accessed on 30 August 2024).
Figure 6: Schema-based service architecture with various components. Both associative memory and event history shown as networks are described in detail in the video https://triadicautomata.com/digital-genome-vod-presentation/ (accessed on 30 August 2024).
Figure 7: Deployment of the VoD service using Cloud resources. An implementation is presented in the video mentioned in this paper. The video referred to above describes the details of the implementation and various modules.
19 pages, 675 KiB  
Review
Predicting Student Performance in Introductory Programming Courses
by João P. J. Pires, Fernanda Brito Correia, Anabela Gomes, Ana Rosa Borges and Jorge Bernardino
Computers 2024, 13(9), 219; https://doi.org/10.3390/computers13090219 - 5 Sep 2024
Viewed by 753
Abstract
The importance of accurately predicting student performance in education, especially in the challenging curricular unit of Introductory Programming, cannot be overstated. As institutions struggle with high failure rates and look for solutions to improve the learning experience, the need for effective prediction methods [...] Read more.
The importance of accurately predicting student performance in education, especially in the challenging curricular unit of Introductory Programming, cannot be overstated. As institutions struggle with high failure rates and look for solutions to improve the learning experience, the need for effective prediction methods becomes critical. This study aims to conduct a systematic review of the literature on methods for predicting student performance in higher education, specifically in Introductory Programming, focusing on machine learning algorithms. Through this study, we not only present different applicable algorithms but also evaluate their performance, using identified metrics and considering the applicability in the educational context, specifically in higher education and in Introductory Programming. The results obtained through this study allowed us to identify trends in the literature, such as which machine learning algorithms were most applied in the context of predicting students’ performance in Introductory Programming in higher education, as well as which evaluation metrics and datasets are usually used. Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)
Show Figures
Figure 1: Literature review process steps.
Figure 2: Accuracy of the most-used models. The references used are: Jamjoom et al. 2021 [28], Sivasakthi et al. 2022 [29], Ahadi et al. 2015 [30], Khan et al. 2019 [32], Veersamy et al. 2020 [33], Sunday et al. 2020 [36] and Sivasakthi 2017 [37].
18 pages, 5905 KiB  
Article
Detection of Bus Driver Mobile Phone Usage Using Kolmogorov-Arnold Networks
by János Hollósi, Áron Ballagi, Gábor Kovács, Szabolcs Fischer and Viktor Nagy
Computers 2024, 13(9), 218; https://doi.org/10.3390/computers13090218 - 3 Sep 2024
Viewed by 872
Abstract
This research introduces a new approach for detecting mobile phone use by drivers, exploiting the capabilities of Kolmogorov-Arnold Networks (KAN) to improve road safety and comply with regulations prohibiting phone use while driving. To address the lack of available data for this specific [...] Read more.
This research introduces a new approach for detecting mobile phone use by drivers, exploiting the capabilities of Kolmogorov-Arnold Networks (KAN) to improve road safety and comply with regulations prohibiting phone use while driving. To address the lack of available data for this specific task, a unique dataset was constructed consisting of images of bus drivers in two scenarios: driving without phone interaction and driving while on a phone call. This dataset provides the basis for the current research. Different KAN-based networks were developed for custom action recognition tailored to the nuanced task of identifying drivers holding phones. The system’s performance was evaluated against convolutional neural network-based solutions, and differences in accuracy and robustness were observed. The aim was to propose an appropriate solution for professional Driver Monitoring Systems (DMS) in research and development and to investigate the efficiency of KAN solutions for this specific sub-task. The implications of this work extend beyond enforcement, providing a foundational technology for automating monitoring and improving safety protocols in the commercial and public transport sectors. In conclusion, this study demonstrates the efficacy of KAN network layers in neural network designs for driver monitoring applications. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
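As background for the Kolmogorov-Arnold Networks compared in this paper: KAN layers are motivated by the Kolmogorov–Arnold representation theorem, which states that any continuous function of n variables on a bounded domain can be written as a superposition of continuous univariate functions,

f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q \left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right),

so a KAN replaces the fixed scalar weights on the edges of an MLP with learnable univariate functions (typically splines). This is general background on the network family, not a description of the specific LinKAN and ConvKANNet architectures evaluated in the paper.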
Show Figures
Figure 1: Architecture of the LinNet network.
Figure 2: Architecture of the LinNet L network.
Figure 3: Architecture of the LinKAN network.
Figure 4: Architecture of the KAN network.
Figure 5: Architecture of the ConvNet network.
Figure 6: Architecture of the ConvKANNet network.
Figure 7: Sample images from the dataset without phone use.
Figure 8: Sample images from the dataset when using the phone.
Figure 9: Distribution of images in the dataset by bus driver.
Figure 10: Loss and accuracy in the training process without the use of any transformations.
Figure 11: Loss and accuracy in the training process with random image flips.
Figure 12: Loss and accuracy in the training process with random image inverting.
Figure 13: Loss and accuracy in the training process with random image flips and inverting.
Figure 14: Comparison of best accuracies.
Figure 15: Changes in the efficiency of networks under different transformations.
Figure 16: The highest rate of efficiency degradation during the transformations (green: KAN-based solutions, blue: MLP-based solutions).
21 pages, 3689 KiB  
Article
Introducing HeliEns: A Novel Hybrid Ensemble Learning Algorithm for Early Diagnosis of Helicobacter pylori Infection
by Sultan Noman Qasem
Computers 2024, 13(9), 217; https://doi.org/10.3390/computers13090217 - 2 Sep 2024
Viewed by 844
Abstract
The Gram-negative bacterium Helicobacter pylori (H. pylori) infects the human stomach and is a major cause of gastritis, peptic ulcers, and gastric cancer. With over 50% of the global population affected, early and accurate diagnosis of H. pylori infection is crucial for effective [...] Read more.
The Gram-negative bacterium Helicobacter pylori (H. pylori) infects the human stomach and is a major cause of gastritis, peptic ulcers, and gastric cancer. With over 50% of the global population affected, early and accurate diagnosis of H. pylori infection is crucial for effective treatment and prevention of severe complications. Traditional diagnostic methods, such as endoscopy with biopsy, serology, urea breath tests, and stool antigen tests, are often invasive, costly, and can lack precision. Recent advancements in machine learning (ML) and quantum machine learning (QML) offer promising non-invasive alternatives capable of analyzing complex datasets to identify patterns not easily discernible by human analysis. This research aims to develop and evaluate HeliEns, a novel quantum hybrid ensemble learning algorithm designed for the early and accurate diagnosis of H. pylori infection. HeliEns combines the strengths of multiple quantum machine learning models, specifically Quantum K-Nearest Neighbors (QKNN), Quantum Naive Bayes (QNB), and Quantum Logistic Regression (QLR), to enhance diagnostic accuracy and reliability. The development of HeliEns involved rigorous data preprocessing steps, including data cleaning, encoding of categorical variables, and feature scaling, to ensure the dataset’s suitability for quantum machine learning algorithms. Individual models (QKNN, QNB, and QLR) were trained and evaluated using metrics such as accuracy, precision, recall, and F1-score. The ensemble model was then constructed by integrating these quantum models using a hybrid approach that leverages their diverse strengths. The HeliEns model demonstrated superior performance compared to individual models, achieving an accuracy of 94%, precision of 97%, recall of 92%, and an F1-score of 94% in detecting H. pylori infection. The quantum ensemble approach effectively mitigated the limitations of individual models, providing a robust and reliable diagnostic tool. HeliEns significantly improved diagnostic accuracy and reliability for early H. pylori detection. The integration of multiple quantum ML algorithms within the HeliEns framework enhanced overall model performance. The non-invasive nature of the HeliEns model offers a cost-effective and user-friendly alternative to traditional diagnostic methods. This research underscores the transformative potential of quantum machine learning in healthcare, particularly in enhancing diagnostic efficiency and patient outcomes. HeliEns represents a significant advancement in the early diagnosis of H. pylori infection, leveraging quantum machine learning to provide a non-invasive, accurate, and reliable diagnostic tool. This research highlights the importance of QML-driven solutions in healthcare and sets the stage for future research to further refine and validate the HeliEns model in real-world clinical settings. Full article
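The ensemble construction described above — three base classifiers whose predictions are combined and then scored with accuracy, precision, recall, and F1 — has a straightforward classical analogue that may help place the quantum variant. The sketch below uses scikit-learn with synthetic data; it is not the quantum (QKNN/QNB/QLR) implementation, and the numbers it prints have no relation to the paper's results.

# Classical analogue of the HeliEns ensemble idea: KNN, naive Bayes, and logistic
# regression combined by majority voting, evaluated with the same four metrics.
# Synthetic data stands in for the (unavailable) H. pylori dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(estimators=[
    ("knn", KNeighborsClassifier()),
    ("nb", GaussianNB()),
    ("lr", LogisticRegression(max_iter=1000)),
], voting="hard")
ensemble.fit(X_train, y_train)
y_pred = ensemble.predict(X_test)

for name, metric in [("accuracy", accuracy_score), ("precision", precision_score),
                     ("recall", recall_score), ("f1", f1_score)]:
    print(name, round(metric(y_test, y_pred), 3))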
Show Figures
Figure 1: Architecture of H. pylori detection.
Figure 2: Model architecture.
Figure 3: Mathematical visualization of proposed model.
Figure 4: Pairwise scatter plot of encoded features.
Figure 5: KNN decision boundary.
Figure 6: LR decision boundary.
Figure 7: NB decision boundary.
Figure 8: Ensemble model decision boundary.
15 pages, 2504 KiB  
Article
Research on Identification of Critical Quality Features of Machining Processes Based on Complex Networks and Entropy-CRITIC Methods
by Dongyue Qu, Wenchao Liang, Yuting Zhang, Chaoyun Gu, Guangyu Zhou and Yong Zhan
Computers 2024, 13(9), 216; https://doi.org/10.3390/computers13090216 - 30 Aug 2024
Viewed by 691
Abstract
Aiming at the difficulty in effectively identifying critical quality features in the complex machining process, this paper proposes a critical quality feature recognition method based on a machining process network. Firstly, the machining process network model is constructed based on the complex network [...] Read more.
Aiming at the difficulty in effectively identifying critical quality features in the complex machining process, this paper proposes a critical quality feature recognition method based on a machining process network. Firstly, the machining process network model is constructed based on the complex network theory. The LeaderRank algorithm is used to identify the critical processes in the machining process. Secondly, the Entropy-CRITIC method is used to calculate the weight of the quality features of the critical processes, and the critical quality features of the critical processes are determined according to weight ranking results. Finally, the feasibility and effectiveness of the method are verified by taking the medium-speed marine diesel engine coupling rod machining as an example. The results show that the method can still effectively identify the critical quality features in the case of small sample data and provide support for machining process optimization and quality control, thus improving product consistency, reliability, and machining efficiency. Full article
(This article belongs to the Topic Innovation, Communication and Engineering)
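The Entropy and CRITIC weighting steps named in the abstract can be sketched numerically as below. The quality-feature matrix is a made-up placeholder, and the final fusion of the two weight vectors (a normalized product) is one common convention rather than necessarily the exact combination used in the paper.

# Hedged sketch of entropy weighting and CRITIC weighting over a small matrix of
# quality-feature samples (rows = machined parts, columns = quality features).
import numpy as np

X = np.array([[0.020, 1.20, 7.5],
              [0.030, 1.10, 7.8],
              [0.025, 1.30, 7.6],
              [0.021, 1.15, 7.4]])  # hypothetical measurements of three features

# Min-max normalize each feature column to [0, 1].
P = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)

# Entropy weights: features whose values are more dispersed carry more information.
F = P / (P.sum(axis=0) + 1e-12)
entropy = -(F * np.log(F + 1e-12)).sum(axis=0) / np.log(len(X))
w_entropy = (1 - entropy) / (1 - entropy).sum()

# CRITIC weights: combine each feature's standard deviation with its conflict
# (one minus correlation) against the other features.
std = P.std(axis=0, ddof=1)
conflict = (1 - np.corrcoef(P, rowvar=False)).sum(axis=0)
w_critic = std * conflict / (std * conflict).sum()

# Simple fusion of the two weight vectors (an assumption, not the paper's formula).
w = w_entropy * w_critic
w /= w.sum()
print("Entropy weights: ", np.round(w_entropy, 3))
print("CRITIC weights:  ", np.round(w_critic, 3))
print("Combined weights:", np.round(w, 3))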
Show Figures
Figure 1: Processing sub-network.
Figure 2: Flow chart of critical process identification based on improved LR algorithm.
Figure 3: Connecting rod model of marine diesel engine.
Figure 4: Visual network model of diesel engine connecting rod processing technology.
Figure 5: Ranking of critical quality features of connecting rod: (a) Entropy weight method; (b) CRITIC method; (c) Entropy-CRITIC method.
14 pages, 1893 KiB  
Article
A Study of a Drawing Exactness Assessment Method Using Localized Normalized Cross-Correlations in a Portrait Drawing Learning Assistant System
by Yue Zhang, Zitong Kong, Nobuo Funabiki and Chen-Chien Hsu
Computers 2024, 13(9), 215; https://doi.org/10.3390/computers13090215 - 23 Aug 2024
Viewed by 583
Abstract
Nowadays, portrait drawing has gained significance in cultivating painting skills and human sentiments. In practice, novices often struggle with this art form without proper guidance from professionals, since they lack understanding of the proportions and structures of facial features. To solve this limitation, [...] Read more.
Nowadays, portrait drawing has gained significance in cultivating painting skills and human sentiments. In practice, novices often struggle with this art form without proper guidance from professionals, since they lack understanding of the proportions and structures of facial features. To solve this limitation, we have developed a Portrait Drawing Learning Assistant System (PDLAS) to assist novices in learning portrait drawing. The PDLAS provides auxiliary lines as references for facial features that are extracted by applying OpenPose and OpenCV libraries to a face photo image of the target. A learner can draw a portrait on an iPad using drawing software where the auxiliary lines appear on a different layer to the portrait. However, in the current implementation, the PDLAS does not offer a function to assess the exactness of the drawing result for feedback to the learner. In this paper, we present a drawing exactness assessment method using a Localized Normalized Cross-Correlation (NCC) algorithm in the PDLAS. NCC gives a similarity score between the original face photo and drawing result images by calculating the correlation of the brightness distributions. For precise feedback, the method calculates the NCC for each face component by extracting the bounding box. In addition, in this paper, we improve the auxiliary lines for the nose. For evaluations, we asked students at Okayama University, Japan, to draw portraits using the PDLAS, and applied the proposed method to their drawing results, where the application results validated the effectiveness by suggesting improvements in drawing components. The system usability was also confirmed through a questionnaire with a SUS score. The main finding of this research is that the implementation of the NCC algorithm within the PDLAS significantly enhances the accuracy of novice portrait drawings by providing detailed feedback on specific facial features, proving the system’s efficacy in art education and training. Full article
(This article belongs to the Special Issue Smart Learning Environments)
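The Localized NCC assessment described above — a normalized cross-correlation computed inside a bounding box around each facial component — reduces to a few lines of NumPy. The bounding box below is a placeholder; in the PDLAS the boxes would be derived from the OpenPose facial keypoints.

# Hedged sketch of a localized Normalized Cross-Correlation (NCC) score between a
# reference photo and a drawing, restricted to one facial component's bounding box.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    # Normalized cross-correlation of two equally sized grayscale patches.
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def localized_ncc(photo: np.ndarray, drawing: np.ndarray, box: tuple) -> float:
    # NCC restricted to a component bounding box given as (x, y, width, height).
    x, y, w, h = box
    return ncc(photo[y:y + h, x:x + w], drawing[y:y + h, x:x + w])

# Toy example: identical patches score 1.0; the 'nose' box coordinates are assumed.
rng = np.random.default_rng(1)
photo = rng.integers(0, 256, size=(256, 256))
drawing = photo.copy()
print(localized_ncc(photo, drawing, (100, 120, 60, 50)))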
Show Figures
Figure 1: Seventy keypoints for facial features in OpenPose.
Figure 2: Auxiliary lines by OpenPose and OpenCV.
Figure 3: Complete auxiliary lines example.
Figure 4: Auxiliary line generation example for eyeglass.
Figure 5: Drawing result of User 1. (Reproduced with permission from Yu H.)
Figure 6: Drawing result of User 7. (Reproduced with permission from Qi H.)
Figure 7: Auxiliary lines before improvement.
Figure 8: Improved auxiliary lines.