Search Results (583)

Search Parameters:
Keywords = graphical processing units

40 pages, 1079 KiB  
Article
Context-Adaptable Deployment of FastSLAM 2.0 on Graphic Processing Unit with Unknown Data Association
by Jessica Giovagnola, Manuel Pegalajar Cuéllar and Diego Pedro Morales Santos
Appl. Sci. 2024, 14(23), 11466; https://doi.org/10.3390/app142311466 - 9 Dec 2024
Viewed by 560
Abstract
Simultaneous Localization and Mapping (SLAM) algorithms are crucial for enabling agents to estimate their position in unknown environments. In autonomous navigation systems, these algorithms need to operate in real-time on devices with limited resources, emphasizing the importance of reducing complexity and ensuring efficient performance. While SLAM solutions aim at ensuring accurate and timely localization and mapping, one of their main limitations is their computational complexity. In this scenario, particle filter-based approaches such as FastSLAM 2.0 can significantly benefit from parallel programming due to their modular construction. The parallelization process involves identifying the parameters affecting the computational complexity in order to distribute the computation among single multiprocessors as efficiently as possible. However, the computational complexity of methodologies such as FastSLAM 2.0 can depend on multiple parameters whose values may, in turn, depend on each specific use case scenario (i.e., the context), leading to multiple possible parallelization designs. Furthermore, the features of the hardware architecture in use can significantly influence the performance in terms of latency. Therefore, the selection of the optimal parallelization modality still needs to be empirically determined. This may involve redesigning the parallel algorithm depending on the context and the hardware architecture. In this paper, we propose a CUDA-based adaptable design for FastSLAM 2.0 on GPU, in combination with an evaluation methodology that enables the assessment of the optimal parallelization modality based on the context and the hardware architecture without the need for the creation of separate designs. The proposed implementation includes the parallelization of all the functional blocks of the FastSLAM 2.0 pipeline. Additionally, we contribute a parallelized design of the data association step through the Joint Compatibility Branch and Bound (JCBB) method. Multiple resampling algorithms are also included to accommodate the needs of a wide variety of navigation scenarios. Full article
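The per-particle independence described above is what makes FastSLAM 2.0 amenable to GPU parallelization. As a rough, hypothetical illustration (not the authors' CUDA code), the sketch below vectorizes the sampling of a noisy velocity motion model across all particles at once; the same loop-free structure maps naturally onto one CUDA thread per particle. The particle count, motion model, and noise levels are illustrative assumptions.

```python
import numpy as np

def predict_particles(particles, v, omega, dt, rng, noise_std=(0.05, 0.02)):
    """Sample the motion model for all particles in one vectorized step.

    particles: (N, 3) array of [x, y, theta] poses. Each particle is updated
    independently, so the same structure can be mapped to one GPU thread per
    particle in a CUDA implementation.
    """
    n = particles.shape[0]
    v_n = v + rng.normal(0.0, noise_std[0], n)       # noisy linear velocity
    w_n = omega + rng.normal(0.0, noise_std[1], n)   # noisy angular velocity
    theta = particles[:, 2]
    particles[:, 0] += v_n * dt * np.cos(theta)
    particles[:, 1] += v_n * dt * np.sin(theta)
    particles[:, 2] = (theta + w_n * dt + np.pi) % (2 * np.pi) - np.pi
    return particles

rng = np.random.default_rng(0)
particles = np.zeros((1024, 3))                      # hypothetical particle set
particles = predict_particles(particles, v=1.0, omega=0.1, dt=0.1, rng=rng)
```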
Figures: (1) FastSLAM 2.0 pipeline; (2) observation model (graphical representation); (3) hardware–software architecture schema; (4) simulation environment schema; (5) functional blocks partitioning schema; (6) detailed heterogeneous architecture pipeline; (7) data association pipeline; (8–16) elapsed times for particle initialization, particle prediction, Mahalanobis distance, problem preparation, branch and bound, proposal adjustment, landmark estimation, and traditional and alternative resampling methods.
17 pages, 3121 KiB  
Article
Real-Time Radar Classification Based on Software-Defined Radio Platforms: Enhancing Processing Speed and Accuracy with Graphics Processing Unit Acceleration
by Seckin Oncu, Mehmet Karakaya, Yaser Dalveren, Ali Kara and Mohammad Derawi
Sensors 2024, 24(23), 7776; https://doi.org/10.3390/s24237776 - 4 Dec 2024
Viewed by 502
Abstract
This paper presents a comprehensive evaluation of real-time radar classification using software-defined radio (SDR) platforms. The transition from analog to digital technologies, facilitated by SDR, has revolutionized radio systems, offering unprecedented flexibility and reconfigurability through software-based operations. This advancement complements the role of radar signal parameters, encapsulated in the pulse description words (PDWs), which play a pivotal role in electronic support measure (ESM) systems, enabling the detection and classification of threat radars. This study proposes an SDR-based radar classification system that achieves real-time operation with enhanced processing speed. Employing the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm as a robust classifier, the system harnesses Graphical Processing Unit (GPU) parallelization for efficient radio frequency (RF) parameter extraction. The experimental results highlight the efficiency of this approach, demonstrating a notable improvement in processing speed while operating at a sampling rate of up to 200 MSps and achieving an accuracy of 89.7% for real-time radar classification. Full article
(This article belongs to the Section Radar Sensors)
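For readers unfamiliar with the clustering step, the minimal sketch below shows how DBSCAN can group pulse description word (PDW) parameters such as pulse width, RF, and amplitude into emitter clusters, in the spirit of the PW-RF and PW-PA plots listed below. The synthetic PDW values, the scaling choice, and the DBSCAN parameters (eps, min_samples) are illustrative assumptions, not the paper's settings, and the GPU-parallelized parameter extraction is not reproduced.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical PDWs: pulse width (us), RF (MHz), pulse amplitude (dB) per pulse,
# drawn from two synthetic emitters.
rng = np.random.default_rng(1)
pdws = np.vstack([
    rng.normal([1.0, 3000.0, -20.0], [0.05, 5.0, 1.0], size=(200, 3)),   # radar A
    rng.normal([10.0, 9400.0, -35.0], [0.5, 10.0, 1.5], size=(200, 3)),  # radar B
])

# Standardize the features so eps works on comparable scales, then cluster.
X = StandardScaler().fit_transform(pdws)
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(X)
print(np.unique(labels, return_counts=True))   # cluster ids per emitter, -1 = noise
```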
Figures: (1) functional diagram of an ESM system; (2) functional structure of an SDR receiver; (3) IQ implementation of DCR; (4) radar pulse and some basic parameters; (5) experimental setup; (6) frequency measurement results of the test scenario; (7) flowchart of the proposed radar classification algorithm; (8) clusters in the PW-RF plane; (9) clusters in the PW-PA domain.
23 pages, 8542 KiB  
Article
Graphics Processing Unit-Accelerated Propeller Computational Fluid Dynamics Using AmgX: Performance Analysis Across Mesh Types and Hardware Configurations
by Yue Zhu, Jin Gan, Yongshui Lin and Weiguo Wu
J. Mar. Sci. Eng. 2024, 12(12), 2134; https://doi.org/10.3390/jmse12122134 - 22 Nov 2024
Viewed by 493
Abstract
Computational fluid dynamics (CFD) has become increasingly prevalent in marine and offshore engineering, with enhancing simulation efficiency emerging as a critical challenge. This study systematically evaluates the application of graphics processing unit (GPU) acceleration technology in CFD simulation of propeller open water performance. Numerical simulations of the VP1304 propeller model were performed using OpenFOAM v2312 integrated with the NVIDIA AmgX library. The research compared GPU acceleration performance against conventional CPU methods across various hardware configurations and mesh types (tetrahedral, hexahedral-dominant, and polyhedral). Results demonstrate that GPU acceleration significantly improved computational efficiency, with tetrahedral meshes achieving over 400% speedup in a 4-GPU configuration, while polyhedral meshes reached over 500% speedup with a fixed mesh count. Among the mesh types, hexahedral-dominant meshes performed best in capturing flow field details. The study also found that GPU acceleration does not compromise simulation accuracy, but its effectiveness is closely related to mesh type and hardware configuration. Notably, GPUs demonstrate more significant advantages when handling large-scale problems. These findings have important practical implications for improving propeller design processes and shortening product development cycles. Full article
(This article belongs to the Section Ocean Engineering)
Figures: (1) GPU-accelerated CFD simulation process in OpenFOAM; (2) geometric model of propeller VP1304, front and side views; (3) numerical simulation domain for open water performance of the VP1304 propeller; (4) details of CFD mesh refinement; (5) comparison of small, medium, and large computational domain dimensions; (6) tetrahedral, hex-dominant, and polyhedral mesh types; (7) open water performance from simulations on different hardware platforms versus experimental data; (8) open water performance for different grid types (base size 4.5 mm) versus experimental data; (9–11) pressure, vorticity, and velocity distributions for each mesh type; (12–13) simulation time and speedup factor versus number of CPU cores for fixed mesh size (4.5 mm) and fixed mesh count (3.3 million elements); (14–15) simulation time and speedup factor versus number of GPUs for the same two cases; (16–17) speedup of different numbers of GPUs relative to a 32-core CPU at consistent mesh size and at consistent mesh count.
20 pages, 5217 KiB  
Article
A Real-Time Signal Measurement System Using FPGA-Based Deep Learning Accelerators and Microwave Photonic
by Longlong Zhang, Tong Zhou, Jie Yang, Yin Li, Zhiwen Zhang, Xiang Hu and Yuanxi Peng
Remote Sens. 2024, 16(23), 4358; https://doi.org/10.3390/rs16234358 - 22 Nov 2024
Viewed by 551
Abstract
Deep learning techniques have been widely investigated as an effective method for signal measurement in recent years. However, most existing deep learning-based methods are still difficult to deploy on embedded platforms and perform poorly in real-time applications. To address this, this paper develops two accelerators, as the core of the signal measurement system, for intelligent signal processing. Firstly, by introducing the idea of an automated framework, we propose a minimal deep neural network (DNN)-based hardware structure, which automatically maps algorithms to hardware modules, supports configurable parameters, and has the advantage of low latency, with an average inference time of only 3.5 μs. Subsequently, another accelerator is designed with an efficient hardware structure for the long short-term memory (LSTM) + DNN model, demonstrating outstanding performance with a classification accuracy of 98.82%, a mean absolute error (MAE) of 0.27°, and a root mean square error (RMSE) of 0.392° after model compression. Moreover, parallel optimization strategies are exploited to further reduce latency and support simultaneous frequency and direction measurement tasks. Finally, we test actual collected signal data on the XCVU13P field programmable gate array (FPGA). The results show that inference time is reduced by 28–31% for the DNN model and 71–73% for the LSTM + DNN model compared to running on a graphics processing unit (GPU). In addition, the parallel strategies further decrease the delay by 23.9% and 37.5% when processing continuous data. The FPGA-based and deep learning-assisted hardware accelerators significantly improve real-time performance and provide a promising solution for signal measurement. Full article
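The LSTM + DNN accelerator maps the standard LSTM cell recurrences onto hardware modules. As a point of reference only, a minimal NumPy sketch of one LSTM cell step is given below; the weight layout and dimensions are illustrative assumptions and this is not the paper's FPGA design.

```python
import numpy as np

def lstm_cell(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step: gates computed from the input x_t and the previous state.

    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) biases,
    stacked in [input, forget, cell, output] gate order.
    """
    H = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b
    i = 1.0 / (1.0 + np.exp(-z[0:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2 * H]))    # forget gate
    g = np.tanh(z[2 * H:3 * H])              # candidate cell state
    o = 1.0 / (1.0 + np.exp(-z[3 * H:4 * H]))  # output gate
    c = f * c_prev + i * g                   # new cell state
    h = o * np.tanh(c)                       # new hidden state
    return h, c

# Hypothetical sizes: 8 input features, 16 hidden units.
rng = np.random.default_rng(0)
D, H = 8, 16
h, c = lstm_cell(rng.normal(size=D), np.zeros(H), np.zeros(H),
                 rng.normal(size=(4 * H, D)), rng.normal(size=(4 * H, H)),
                 np.zeros(4 * H))
print(h.shape, c.shape)
```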
Figures: (1) microwave direction finding system with long-baseline array (DDMZM: dual-drive Mach Zehnder modulator; PD: photodetector; LNA: low noise amplifier; E_i: digitized envelope voltage); (2) the LSTM cell; (3) proposed architecture of the overall system; (4) framework from algorithm to hardware implementation based on the DNN model; (5) least complex hardware structure based on the DNN model; (6) hardware design of the intelligent processing module based on LSTM + DNN; (7) parallel strategies within the LSTM and FC layers; (8) coarse-grained inter-layer parallelism, original versus optimized latency; (9) task-level parallel strategy of the intelligent processing module; (10) loss and accuracy versus epoch for the proposed LSTM + DNN model; (11) DOA estimation results (actual DOA, estimated DOA, and errors) for the DNN and LSTM + DNN models; (12) utilized area of the compressed model for DOA (LSTM layer versus other layers); (13) latency comparison for processing multiple input data on FPGA for the DOA and IFM tasks.
20 pages, 3466 KiB  
Article
Symmetric Tridiagonal Eigenvalue Solver Across CPU Graphics Processing Unit (GPU) Nodes
by Erika Hernández-Rubio, Alberto Estrella-Cruz, Amilcar Meneses-Viveros, Jorge Alberto Rivera-Rivera, Liliana Ibeth Barbosa-Santillán and Sergio Víctor Chapa-Vergara
Appl. Sci. 2024, 14(22), 10716; https://doi.org/10.3390/app142210716 - 19 Nov 2024
Viewed by 581
Abstract
In this work, an improved and scalable implementation of Cuppen's algorithm for diagonalizing symmetric tridiagonal matrices is presented. This approach uses a hybrid-heterogeneous parallelization technique, taking advantage of GPU and CPU in a distributed hardware architecture. Cuppen's algorithm is a theoretical concept and a powerful tool in various scientific and engineering applications. It is a key player in matrix diagonalization, finding use in Density Functional Theory (DFT) and Spectral Clustering. This highly efficient and numerically stable algorithm computes eigenvalues and eigenvectors of symmetric tridiagonal matrices, making it a crucial component in many computational methods. One of the challenges in parallelizing algorithms for GPUs is their limited memory capacity. However, we overcome this limitation by utilizing multiple nodes with both CPUs and GPUs. This enables us to solve subproblems that fit within the memory of each device in parallel and subsequently combine these subproblems to obtain the complete solution. The hybrid-heterogeneous approach proposed in this work outperforms the state-of-the-art libraries and also maintains a high degree of accuracy in terms of orthogonality and quality of eigenvectors. Furthermore, the sequential version of the algorithm with our approach demonstrates superior performance and potential for practical use. The experiments carried out verify that the implementation scales by 2× when using two graphics cards in the same node. Notably, Symmetric Tridiagonal Eigenvalue Solvers are fundamental to solving more general eigenvalue problems. Additionally, the divide-and-conquer approach employed in this implementation can be extended to singular value solvers. Given the wide range of eigenvalue problems encountered in scientific and engineering domains, this work is essential in advancing computational methods for efficient and accurate matrix diagonalization. Full article
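Cuppen's method rests on a divide step that rewrites a symmetric tridiagonal matrix as two independent sub-blocks plus a rank-one correction, which is what allows the subproblems to be solved on separate CPU/GPU devices before being recombined. The NumPy/SciPy sketch below, a minimal illustration rather than the authors' implementation, verifies that identity numerically; the matrix size and split point are arbitrary assumptions, and the secular-equation recombination is not implemented (the full spectrum is checked directly with eigh_tridiagonal instead).

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(2)
n, m = 8, 4                       # matrix size and split point (assumed)
d = rng.normal(size=n)            # diagonal of T
e = rng.normal(size=n - 1)        # off-diagonal of T
beta = e[m - 1]                   # coupling element removed by the split

# Deflated sub-blocks: T1 and T2 with their facing corners adjusted by beta.
d1, d2 = d[:m].copy(), d[m:].copy()
d1[-1] -= beta
d2[0] -= beta

# Rank-one correction vector v = e_{m-1} + e_m.
v = np.zeros(n)
v[m - 1] = v[m] = 1.0

# Check T = blockdiag(T1, T2) + beta * v v^T.
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
T_split = np.zeros((n, n))
T_split[:m, :m] = np.diag(d1) + np.diag(e[:m - 1], 1) + np.diag(e[:m - 1], -1)
T_split[m:, m:] = np.diag(d2) + np.diag(e[m:], 1) + np.diag(e[m:], -1)
T_split += beta * np.outer(v, v)
assert np.allclose(T, T_split)

# Sub-spectra can be solved independently (on separate devices in the paper's
# setting); the full spectrum is printed alongside for reference.
w1, _ = eigh_tridiagonal(d1, e[:m - 1])
w2, _ = eigh_tridiagonal(d2, e[m:])
w_full, _ = eigh_tridiagonal(d, e)
print(np.sort(np.concatenate([w1, w2])))
print(w_full)
```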
Figures: (1) heterogeneous parallel architecture; (2) process flow; (3) time of the symmetric tridiagonal eigensystem; (4) eigenpair error ‖AQ − QΛ‖_F; (5) orthogonality error ‖QQᵀ − I‖_F.
34 pages, 1063 KiB  
Review
A Survey on Design Space Exploration Approaches for Approximate Computing Systems
by Sepide Saeedi, Ali Piri, Bastien Deveautour, Ian O’Connor, Alberto Bosio, Alessandro Savino and Stefano Di Carlo
Electronics 2024, 13(22), 4442; https://doi.org/10.3390/electronics13224442 - 13 Nov 2024
Viewed by 757
Abstract
Approximate Computing (AxC) has emerged as a promising paradigm to enhance performance and energy efficiency by allowing a controlled trade-off between accuracy and resource consumption. It is extensively adopted across various abstraction levels, from software to architecture and circuit levels, employing diverse methodologies. The primary objective of AxC is to reduce energy consumption for executing error-resilient applications, accepting controlled and inherently acceptable output quality degradation. However, harnessing AxC poses several challenges, including identifying segments within a design amenable to approximation and selecting suitable AxC techniques to fulfill accuracy and performance criteria. This survey provides a comprehensive review of recent methodologies proposed for performing Design Space Exploration (DSE) to find the most suitable AxC techniques, focusing on both hardware and software implementations. DSE is a crucial design process where system designs are modeled, evaluated, and optimized for various extra-functional system behaviors such as performance, power consumption, energy efficiency, and accuracy. A systematic literature review was conducted to identify papers that describe their DSE algorithms, excluding those relying on exhaustive search methods. This survey aims to detail the state-of-the-art DSE methodologies that efficiently select AxC techniques, offering insights into their applicability across different hardware platforms and use-case domains. For this purpose, papers were categorized based on the type of search algorithm used, with Machine Learning (ML) and Evolutionary Algorithms (EAs) being the predominant approaches. Further categorization is based on the target hardware, including Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), general-purpose Central Processing Units (CPUs), and Graphics Processing Units (GPUs). A notable observation was that most studies targeted image processing applications due to their tolerance for accuracy loss. By providing an overview of techniques and methods outlined in the existing literature pertaining to the DSE of AxC designs, this survey elucidates the current trends and challenges in optimizing approximate designs. Full article
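To make the DSE loop concrete, here is a toy sketch of a (1+1) evolutionary search over per-operator bit-widths, one of the simplest instances of the evolutionary-algorithm category surveyed here. The bit-width choices, cost models, and error bound are entirely hypothetical stand-ins for the circuit-level estimators and quality metrics a real flow would use.

```python
import random

BITWIDTHS = [4, 6, 8, 10, 12]            # toy per-operator precision choices

def evaluate(cfg):
    # Placeholder cost models: lower bit-widths save "energy" but add error.
    energy = sum(cfg)
    error = sum((12 - b) ** 2 for b in cfg) / 100.0
    return energy, error

def one_plus_one_ea(n_ops=6, budget=300, error_bound=0.8, seed=0):
    """Minimal (1+1) evolutionary DSE loop: mutate one operator's precision and
    keep the child if it stays within the accuracy bound and saves energy."""
    rng = random.Random(seed)
    parent = [max(BITWIDTHS)] * n_ops     # start from the exact design
    parent_energy, _ = evaluate(parent)
    for _ in range(budget):
        child = parent[:]
        child[rng.randrange(n_ops)] = rng.choice(BITWIDTHS)
        child_energy, child_error = evaluate(child)
        if child_error <= error_bound and child_energy < parent_energy:
            parent, parent_energy = child, child_energy
    return parent, parent_energy

print(one_plus_one_ea())
```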
Figures: (1) a classification of the AxC techniques; (2) steps of categorizing the proposed DSE approaches of the reviewed studies; (3) categorization of the proposed DSE approaches by employed search algorithms and target hardware; (4–6) distribution of the proposed DSE approaches by search algorithm, target hardware, and use case domain.
19 pages, 5545 KiB  
Article
Edge Computing for AI-Based Brain MRI Applications: A Critical Evaluation of Real-Time Classification and Segmentation
by Khuhed Memon, Norashikin Yahya, Mohd Zuki Yusoff, Rabani Remli, Aida-Widure Mustapha Mohd Mustapha, Hilwati Hashim, Syed Saad Azhar Ali and Shahabuddin Siddiqui
Sensors 2024, 24(21), 7091; https://doi.org/10.3390/s24217091 - 4 Nov 2024
Viewed by 1136
Abstract
Medical imaging plays a pivotal role in diagnostic medicine, with technologies like Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and ultrasound scans being widely used to assist radiologists and medical experts in reaching a concrete diagnosis. Given the recent massive uplift in the storage and processing capabilities of computers, and the publicly available big data, Artificial Intelligence (AI) has also started contributing to improving diagnostic radiology. Edge computing devices and handheld gadgets can serve as useful tools to process medical data in remote areas with limited network and computational resources. In this research, the capabilities of multiple platforms are evaluated for the real-time deployment of diagnostic tools. MRI classification and segmentation applications developed in previous studies are used for testing the performance using different hardware and software configurations. Cost–benefit analysis is carried out using a workstation with an NVIDIA Graphics Processing Unit (GPU), Jetson Xavier NX, Raspberry Pi 4B, and an Android phone, using MATLAB, Python, and Android Studio. The mean computational times for the classification app on the PC, Jetson Xavier NX, and Raspberry Pi are 1.2074, 3.7627, and 3.4747 s, respectively. On the low-cost Android phone, this time is observed to be 0.1068 s using the Dynamic Range Quantized TFLite version of the baseline model, with slight degradation in accuracy. For the segmentation app, the times are 1.8241, 5.2641, 6.2162, and 3.2023 s, respectively, when using JPEG inputs. The Jetson Xavier NX and Android phone stand out as the best platforms due to their compact size, fast inference times, and affordability. Full article
(This article belongs to the Section Biomedical Sensors)
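The Android timings above come from the dynamic-range-quantized TFLite variant of the baseline model. The sketch below shows the standard TensorFlow Lite conversion and interpreter path for that kind of deployment; the stand-in Keras model, class count, and input shape are hypothetical, since the actual NISE/NIVE architectures are not reproduced here.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the trained classification network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Dynamic-range quantization: weights stored as int8, activations kept float.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Run one inference with the TFLite interpreter, as would happen on-device.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)
```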
Figures: (1) interfaces of the NeuroImaging Sequence Examiner (NISE) app, which displays the sequence, orientation, relative position, and inference time for an input brain MRI, and of the NeuroImaging Volumetric Extractor (NIVE) app, which shows the input MRI, the generated brain mask, and the skull-stripped output with a slice slider and a save option; (2) comparison between post-training quantization (PTQ) and quantization-aware training (QAT) schemes (QAT is not employed in this work); (3) system block diagram of the four-phase research flow for optimal platform selection (task selection, model selection and training, deployment after format conversion, and evaluation; full integer quantization is not used due to the sensitive nature of medical diagnosis); (4) NISE classification inference times across platforms (MATLAB and Python on Lenovo Legion, Python on Raspberry Pi 4B, Python on Xavier NX with and without GPU, and Android) for JPEG and DICOM 3-channel 224 × 224 inputs; (5) NIVE segmentation times across the same platforms for JPEG, DICOM, and NIfTI single-channel 256 × 256 inputs; (6) confusion matrix for the NISE baseline classification model on 1276 images, identical for the float16 TFLite variant, with only two T1 sagittal MRIs misclassified as FLAIR sagittal; (7) confusion matrix for the NISE DRQ-int8-TFLite model, with three T1 sagittal MRIs misclassified as FLAIR sagittal; (8) segmented brains with Dice scores for the baseline, float16 TFLite, and DRQ-int8-TFLite models on coronal, sagittal, and axial slices from three subjects of the AIH dataset.
Full article ">
23 pages, 4829 KiB  
Review
The Evolution of Artificial Intelligence in Medical Imaging: From Computer Science to Machine and Deep Learning
by Michele Avanzo, Joseph Stancanello, Giovanni Pirrone, Annalisa Drigo and Alessandra Retico
Cancers 2024, 16(21), 3702; https://doi.org/10.3390/cancers16213702 - 1 Nov 2024
Viewed by 3189
Abstract
Artificial intelligence (AI), the wide spectrum of technologies aiming to give machines or computers the ability to perform human-like cognitive functions, began in the 1940s with the first abstract models of intelligent machines. Soon after, in the 1950s and 1960s, machine learning algorithms such as neural networks and decision trees ignited significant enthusiasm. More recent advancements include the refinement of learning algorithms, the development of convolutional neural networks to efficiently analyze images, and methods to synthesize new images. This renewed enthusiasm was also due to the increase in computational power with graphical processing units and the availability of large digital databases to be mined by neural networks. AI soon began to be applied in medicine, first through expert systems designed to support the clinician’s decision and later with neural networks for the detection, classification, or segmentation of malignant lesions in medical images. A recent prospective clinical trial demonstrated the non-inferiority of AI alone compared with a double reading by two radiologists on screening mammography. Natural language processing, recurrent neural networks, transformers, and generative models have both improved the automated reading of medical images and moved AI to new domains, including the text analysis of electronic health records, image self-labeling, and self-reporting. The availability of open-source and free libraries, as well as powerful computing resources, has greatly facilitated the adoption of deep learning by researchers and clinicians. Key concerns surrounding AI in healthcare include the need for clinical trials to demonstrate efficacy, the perception of AI tools as ‘black boxes’ that require greater interpretability and explainability, and ethical issues related to ensuring fairness and trustworthiness in AI systems. Thanks to its versatility and impressive results, AI is one of the most promising resources for frontier research and applications in medicine, in particular for oncological applications. Full article
(This article belongs to the Section Cancer Informatics and Big Data)
Figures: (1) Alan Turing at age 16 (source: Archive Centre, King’s College, Cambridge, The Papers of Alan Turing, AMT/K/7/4) and the same image after applying a Sobel filter in the x and y directions; (2) timeline of AI and of AI in medicine; (3) scheme of the perceptron; (4) decision trees and support vector machines applied to classifying iris flower species from petal width and length, showing predictions, training data, and the resulting decision tree; (5) comparison between single-layer and multilayer ANNs.
17 pages, 8979 KiB  
Article
Action Recognition in Videos through a Transfer-Learning-Based Technique
by Elizabeth López-Lozada, Humberto Sossa, Elsa Rubio-Espino and Jesús Yaljá Montiel-Pérez
Mathematics 2024, 12(20), 3245; https://doi.org/10.3390/math12203245 - 17 Oct 2024
Viewed by 752
Abstract
In computer vision, human action recognition is a hot topic, popularized by the development of deep learning. Deep learning models typically accept video input without prior processing and are trained end-to-end to achieve recognition. However, conducting preliminary motion analysis can be beneficial in directing model training to prioritize the motion of individuals over the environment in which the action occurs. This paper puts forth a novel methodology for human action recognition based on motion information that employs transfer-learning techniques. The proposed method comprises four stages: (1) human detection and tracking, (2) motion estimation, (3) feature extraction, and (4) action recognition using a two-stream model. In order to develop this work, a customized dataset was utilized, comprising videos of diverse actions (e.g., walking, running, cycling, drinking, and falling) extracted from multiple public sources and websites, including Pexels and MixKit. This realistic and diverse dataset allowed for a comprehensive evaluation of the proposed method, demonstrating its effectiveness in different scenarios and conditions. Furthermore, the performance of seven pre-trained models for feature extraction was evaluated. The models analyzed were Inception-v3, MobileNet-v2, MobileNet-v3-L, VGG-16, VGG-19, Xception, and ConvNeXt-L. The results demonstrated that the ConvNeXt-L model yielded the best outcomes. Furthermore, using pre-trained models for feature extraction facilitated the training process on a personal computer with a single graphics processing unit, achieving an accuracy of 94.9%. The experimental findings and outcomes suggest that integrating motion information enhances action recognition performance. Full article
(This article belongs to the Special Issue Deep Neural Networks: Theory, Algorithms and Applications)
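The transfer-learning step amounts to running frames (or motion-feature images) through a frozen, ImageNet-pretrained backbone and keeping the pooled feature vectors for the recognition head. A minimal sketch is given below using MobileNetV2, one of the seven backbones compared in the paper; the input frames, batch size, and preprocessing are illustrative assumptions, and the two-stream recognition head is omitted.

```python
import numpy as np
import tensorflow as tf

# Frozen ImageNet backbone used purely as a feature extractor.
backbone = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(224, 224, 3)
)
backbone.trainable = False

# Hypothetical batch of motion-feature images (e.g., optical-flow maps rendered
# as 3-channel frames), shape (num_frames, 224, 224, 3) with values in [0, 255].
frames = np.random.randint(0, 256, size=(16, 224, 224, 3)).astype("float32")
features = backbone(tf.keras.applications.mobilenet_v2.preprocess_input(frames))
print(features.shape)   # (16, 1280) feature vectors fed to the classification head
```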
Figures: (1) proposed method for HAR based on the analysis of motion features; (2) operation of the FairMOT algorithm used to track people; (3) FairMOT tracking methodology with the bounding boxes used to extract the subject performing the action; (4) outcomes of the optical flow and pose estimation processes and of integrating the motion features; (5) proposed feature extraction process; (6) proposed HAR classification model; (7) distribution of the dataset videos by source; (8) sample videos from the Pexels, NTU RGB+D, and MixKit websites; (9) distribution of the videos by class and source; (10) training loss and confusion matrix for four classes of the dataset; (11) training loss and confusion matrix for five classes of the dataset.
19 pages, 4532 KiB  
Article
Modular Microgrid Technology with a Single Development Environment Per Life Cycle
by Teodora Mîndra, Oana Chenaru, Radu Dobrescu and Lucian Toma
Energies 2024, 17(19), 5016; https://doi.org/10.3390/en17195016 - 9 Oct 2024
Viewed by 843
Abstract
The life cycle of a microgrid covers all the stages from idea to implementation, through exploitation until the end of its life, with a lifespan of around 25 years. Covering these stages usually requires several software tools, which can make the integration of results from different stages difficult and make costs hard to estimate from the beginning of a project. This paper proposes a unified platform composed of four modules developed in MATLAB 2022b, designed to assist all the processes a microgrid passes through during its lifetime. A major advantage is that the entire platform can be used by a user with limited IT knowledge, because it is operated through fill-in-the-blank entries alone. The authors detail the architecture, functions and development of the platform, either by highlighting the novel integration of existing MATLAB tools or by developing new ones and designing new user interfaces linked with scripts based on its complex mathematical libraries. By consolidating processes into a single platform, the proposed solution enhances integration, reduces complexity and provides better cost predictability throughout the project’s duration. A proof-of-concept for this platform was presented by applying the life-cycle assessment process to a real case study: a microgrid consisting of a photovoltaic plant, an office building as the consumer, and energy storage units. This platform has also been developed by involving students within summer internships, a process that strengthens the cooperation between industry and academia. Being an open-source application, the platform will be used within the educational process, where students will have the possibility to add functionalities, improve the graphical representation, create new reports, etc. Full article
Figures: (1) combining the concepts of the life cycle of a microgrid; (2) software architecture of the microgrid development platform; (3) one-line diagram of the microgrid; (4) raw consumption data, consumer total active power; (5) automatic reporting page; (6) archiving page; (7) survival curve as a function of survival probability; (8) GUI for microgrid performance and equipment status; (9) GUI for equipment specifications; (10) GUI for intervention planning; (11) GUI for maintenance KPIs.
35 pages, 13085 KiB  
Article
Cubic q-Bézier Triangular Patch for Scattered Data Interpolation and Its Algorithm
by Owen Tamin and Samsul Ariffin Abdul Karim
Algorithms 2024, 17(9), 422; https://doi.org/10.3390/a17090422 - 23 Sep 2024
Viewed by 541
Abstract
This paper presents an approach to scattered data interpolation using q-Bézier triangular patches via an efficient algorithm. While existing studies have formed q-Bézier triangular patches through convex combination, their application to scattered data interpolation has not been previously explored. Therefore, this study aims to extend the use of q-Bézier triangular patches to scattered data interpolation by achieving C1 continuity throughout the data points. We test the proposed scheme using both established data points and real-life engineering problems. We compared the performance of the proposed interpolation scheme with a well-known existing scheme by varying the q parameter. The comparison was based on visualization and error analysis. Numerical and graphical results were generated using MATLAB. The findings indicate that the proposed scheme outperforms the existing scheme, demonstrating a higher coefficient of determination (R²), a smaller root mean square error (RMSE), and a shorter central processing unit (CPU) time. These results highlight the potential of the proposed q-Bézier triangular patches scheme for more accurate and reliable scattered data interpolation via the proposed algorithm. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
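The comparison above rests on three quantities: the coefficient of determination, the RMSE, and the CPU time. For reference, a small sketch of those metrics is given below; the "interpolated" values are synthetic placeholders, since the q-Bézier triangular patch construction itself is not reproduced here.

```python
import time
import numpy as np

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Hypothetical evaluation: y_true would be a test function sampled on a grid,
# y_pred the patch interpolant at the same points (here a noisy stand-in).
rng = np.random.default_rng(3)
y_true = rng.normal(size=10_000)
start = time.process_time()
y_pred = y_true + rng.normal(scale=0.05, size=y_true.shape)
cpu_seconds = time.process_time() - start
print(f"R2 = {r_squared(y_true, y_pred):.4f}, "
      f"RMSE = {rmse(y_true, y_pred):.4f}, CPU = {cpu_seconds:.4f} s")
```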
Figures: (1) bases for the q-Bézier triangular patch; (2) cubic q-Bézier triangular patches for different values of q; (3) control points for the cubic q-Bézier triangular patch; (4) directionals ε1, ε2, and ε3; (5) two adjacent q-Bézier triangular patches; (6) triangulation domains for 36, 65, and 100 data points; (7–30) surface interpolation and contour plots for test functions F1, F2, and F3 with 36, 65, and 100 data points, compared against the true surfaces at q = 1 and evaluated at q = 0.25, 0.5, 0.75, and 1; (31–33) true surface, 3D interpolation, and Delaunay triangulation of the Seamount data points, plus surface and 3D interpolation at the four q values; (34–36) electric potential of two point charges, Delaunay triangulation of 25 data points, and surface and contour plots for the true function and for Ali et al.; (37–39) Delaunay triangulations, surface interpolation, and contour plots for 25, 36, 65, and 100 data points at q = 1; (40–42) R², RMSE, and CPU time for 36, 65, and 100 data points across the three test functions at the four selected q values.
26 pages, 1895 KiB  
Article
Enhanced Ischemic Stroke Lesion Segmentation in MRI Using Attention U-Net with Generalized Dice Focal Loss
by Beatriz P. Garcia-Salgado, Jose A. Almaraz-Damian, Oscar Cervantes-Chavarria, Volodymyr Ponomaryov, Rogelio Reyes-Reyes, Clara Cruz-Ramos and Sergiy Sadovnychiy
Appl. Sci. 2024, 14(18), 8183; https://doi.org/10.3390/app14188183 - 11 Sep 2024
Viewed by 1074
Abstract
Ischemic stroke lesion segmentation in MRI images presents significant challenges, particularly due to class imbalance between foreground and background pixels. Several approaches have been developed to achieve higher F1-Scores in stroke lesion segmentation under this challenge. These strategies include convolutional neural networks (CNN) and models with a large number of parameters, which can only be trained on specialized computational architectures that are explicitly oriented to data processing. This paper proposes a lightweight model based on the U-Net architecture that incorporates an attention module and the Generalized Dice Focal loss function to enhance the segmentation accuracy in the class imbalance environment characteristic of stroke lesions in MRI images. This study also analyzes the segmentation performance according to the pixel size of stroke lesions, giving insights into the loss function behavior using the public ISLES 2015 and ISLES 2022 MRI datasets. The proposed model can effectively segment small stroke lesions with F1-Scores over 0.7, particularly in FLAIR, DWI, and T2 sequences. Furthermore, the model shows reasonable convergence with its 7.9 million parameters at 200 epochs, making it suitable for practical implementation on mid-range and high-end general-purpose graphics processing units. Full article
(This article belongs to the Special Issue Advances in Computer Vision and Semantic Segmentation, 2nd Edition)
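To make the loss design concrete: the paper combines a Generalized Dice term with a Focal term, weighted by λ_GDL and λ_FL (the figure captions below report configurations such as FL = 0.7, GDL = 0.3). The following is a minimal NumPy sketch of such a weighted combination, assuming a Sudre-style Generalized Dice loss and a standard binary focal loss; the authors' exact formulation may differ.

```python
import numpy as np

def generalized_dice_loss(y_true, y_prob, eps=1e-7):
    # Two-class (lesion/background) generalized Dice loss with
    # class weights w_c = 1 / (sum of reference voxels)^2.
    num, den = 0.0, 0.0
    for ref, prob in ((y_true, y_prob), (1.0 - y_true, 1.0 - y_prob)):
        w = 1.0 / (ref.sum() ** 2 + eps)
        num += w * (ref * prob).sum()
        den += w * (ref + prob).sum()
    return 1.0 - 2.0 * num / (den + eps)

def focal_loss(y_true, y_prob, gamma=2.0, eps=1e-7):
    # Binary focal loss: down-weights easy (well-classified) pixels,
    # so the rare lesion pixels dominate the gradient.
    p_t = np.where(y_true > 0.5, y_prob, 1.0 - y_prob)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps)))

def generalized_dice_focal(y_true, y_prob, lam_gdl=0.3, lam_fl=0.7):
    # Weighted mixture, illustrating the lambda_GDL / lambda_FL trade-off.
    return (lam_gdl * generalized_dice_loss(y_true, y_prob)
            + lam_fl * focal_loss(y_true, y_prob))

# Toy example: a 64x64 slice with a small lesion and an imperfect prediction.
rng = np.random.default_rng(0)
mask = np.zeros((64, 64))
mask[30:34, 30:34] = 1.0
pred = np.clip(mask * 0.8 + rng.uniform(0.0, 0.1, mask.shape), 1e-4, 1.0 - 1e-4)
print(f"combined loss: {generalized_dice_focal(mask, pred):.4f}")
```

In the imbalanced setting described in the abstract, the focal term suppresses the contribution of the abundant easy background pixels, while the generalized Dice term rebalances the two classes by their sizes.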
Figures:
Figure 1: Scheme of the proposed model.
Figure 2: Distribution of segmentation masks' sizes (in pixels) with annotation of the first quartile (Q1), median (Q2), and third quartile (Q3): (a) ISLES 2015, (b) ISLES 2022.
Figure 3: F1-Scores resulting from changing the key hyperparameters λ_FL (FL) and λ_GDL (GDL); the combination leading to the best results is highlighted in orange. (a) Experiments performed on FLAIR sequences of ISLES 2015. (b) Experiments performed on the DWI modality of ISLES 2022.
Figure 4: Learning curves comparison: (a) proposed model, (b) SGD, (c) W/O A.M., (d) CBAM, (e) Dice Loss, (f) Focal Loss.
Figure 5: Visual comparison of the model versions' results: ground truth masks in the first column (a,g,m,s); proposed model in the second column (b,h,n,t); Dice Loss model in the third column (c,i,o,u); Focal Loss model in the fourth column (d,j,p,v); W/O A.M. model in the fifth column (e,k,q,w); CBAM model in the sixth column (f,l,r,x).
Figure 6: Violin plot of the proposed model's results on FLAIR images in the axial view, where the dot localizes the median and the white line represents the mean: (a) IoU scores by mask size category, (b) F1-Scores by mask size category.
Figure 7: Performance of the proposed model in segmenting small lesions on different MRI modalities using the ISLES 2015 dataset (dot and white line represent the median and mean values): (a) IoU scores by MRI modality, (b) F1-Scores by MRI modality.
Figure 8: Overall performance of the proposed model on different MRI modalities using the ISLES 2015 dataset (dot and white line represent the median and mean values): (a) IoU scores in the coronal plane, (b) IoU scores in the sagittal plane.
Figure 9: Examples of FLAIR images in the coronal plane segmented by the proposed method (second row) and their corresponding ground truth masks (first row) for mask categories Small (a,e), Medium Down (b,f), Medium Up (c,g), and Large (d,h).
Figure 10: Examples of FLAIR images in the sagittal plane segmented by the proposed method (second row) and their corresponding ground truth masks (first row) for mask categories Small (a,e), Medium Down (b,f), Medium Up (c,g), and Large (d,h).
Figure 11: Violin plot of the proposed model's results on DWI and ADC images in the axial view, where the dot localizes the median and the white line represents the mean: (a) F1-Scores by mask size category using DWI and configuration A (FL = 0.7, GDL = 0.3), (b) F1-Scores by mask size category using DWI and configuration B (FL = 0.9, GDL = 0.1), (c) F1-Scores by mask size category using ADC and configuration A (FL = 0.7, GDL = 0.3), (d) F1-Scores by mask size category using ADC and configuration B (FL = 0.9, GDL = 0.1).
Figure 12: Violin plot of the non-segmented images' mask size in pixels; the mean value is marked as a white line.
Figure 13: Examples of ground truth masks of DWI images in the axial plane (first row) and segmentation results by the proposed method using λ_GDL = 0.3, λ_FL = 0.7 (second row) and λ_GDL = 0.1, λ_FL = 0.9 (third row) for mask categories Small (a,e,i), Medium Down (b,f,j), Medium Up (c,g,k), and Large (d,h,l).
18 pages, 3741 KiB  
Review
Sherwood (Sh) Number in Chemical Engineering Applications—A Brief Review
by Fabio Montagnaro
Energies 2024, 17(17), 4342; https://doi.org/10.3390/en17174342 - 30 Aug 2024
Cited by 1 | Viewed by 1240
Abstract
This paper reviews a series of cases for which the correct determination of the mass transfer coefficient is decisive for an appropriate design of the system and its operating conditions. The cases are of interest for applications in the energy sector, such as the thermoconversion of a fuel particle, processes in pipes, packed and fluidised beds, and corollary unit operations, including extraction, absorption, and adsorption. The analysis is carried out by examining the expressions for determining the Sherwood number (which contains the mass transfer coefficient), and, where possible, generalised relationships (also in graphical form) are provided to offer a useful tool to practitioners. Full article
(This article belongs to the Section B: Energy and Environment)
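Most of the cases surveyed in such reviews reduce to correlations of the general form Sh = f(Re, Sc), from which the mass transfer coefficient k_c is recovered as k_c = Sh·D_AB/L. The short sketch below illustrates this use; the Ranz–Marshall-type coefficients (Sh = 2 + 0.6·Re^(1/2)·Sc^(1/3) for a sphere) and the numerical values in the example are illustrative assumptions, not necessarily one of the cases tabulated in the paper.

```python
import math

def sherwood_sphere(Re: float, Sc: float) -> float:
    # Ranz-Marshall-type correlation for a sphere in a moving fluid:
    # Sh = 2 + 0.6 * Re^(1/2) * Sc^(1/3); coefficients are illustrative.
    return 2.0 + 0.6 * math.sqrt(Re) * Sc ** (1.0 / 3.0)

def mass_transfer_coefficient(Sh: float, D_AB: float, L: float) -> float:
    # Recover k_c [m/s] from the Sherwood number, the molecular
    # diffusivity D_AB [m^2/s], and the characteristic length L [m].
    return Sh * D_AB / L

# Example: 1 mm fuel particle, gas-phase diffusivity ~2e-5 m^2/s, Re = 50, Sc = 1.
Sh = sherwood_sphere(Re=50.0, Sc=1.0)
k_c = mass_transfer_coefficient(Sh, D_AB=2.0e-5, L=1.0e-3)
print(f"Sh = {Sh:.2f}, k_c = {k_c:.3f} m/s")
```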
Figures:
Figure 1: Plot of Equation (12) for the cases reported in Table 1 (mass transfer between a solid object and a fluid), where the fields of validity are listed.
Figure 2: Plot of Equation (20), the case of mass transfer towards a reactive particle in a laminar fluid flow; the values of the Sherwood number are those for a purely mass transfer-controlled system (Re ≥ 10; Pe = Re·Sc ≥ 10 ⟺ Sc ≥ 10/Re).
Figure 3: Concepts of molecular and Knudsen diffusivity.
Figure 4: The pores' tortuosity.
Figure 5: Plot of Equation (12) in terms of (a) Equation (28) for turbulent-flow mass transfer to pipe walls and (b) Equations (29)–(31) for mass transfer in packed beds. Refer to Table 2, where the fields of validity are listed.
Figure 6: Plot of Equation (39) for the cases reported in Table 3 (mass transfer in fluidised beds).
Figure 7: Plot of Equation (12) for the cases reported in Table 4 (gas–liquid absorption).
Figure 8: Plot of (a) Equation (45) for trapped gases in porous media (Re ≤ 1; 200 ≤ Sc ≤ 2000); (b) Equation (46) for the dissolution of a solid from the wall into a falling film.
Figure 9: Synoptical graph (plus illustrative legend) for cases describable by the general Equation (12) [7,12,13,14,15,16,20,22,25,27,43].
20 pages, 1471 KiB  
Article
Methodology for Quantification of Technological Processes in Passenger Railway Transport Using Alternatively Powered Vehicles
by Martin Kendra, Daniel Pribula and Tomáš Skrúcaný
Sustainability 2024, 16(16), 7239; https://doi.org/10.3390/su16167239 - 22 Aug 2024
Viewed by 701
Abstract
Due to the reduction in diesel propulsion on railway networks across the world, it is essential to consider the introduction of alternative propulsion where electrification would not be feasible. The introduction of alternative propulsions may influence the technological processes of train processing and disrupt the methodology used to quantify them, owing to their specific operational requirements. The problem of quantifying the technological processes of train processing is not sufficiently solved even for conventional propulsions; therefore, the aim of this paper is to propose a unique methodological procedure for quantifying selected train-processing processes for multiple units with conventional or alternative propulsion. The new process quantification methodology enables the duration of a specific process to be determined simply for multiple units of different lengths and propulsion types under local conditions. The duration determination is based on the final formula or its graphical representation. The formula is based on data obtained by analysing the evaluated workflow of a model multiple unit using the PERT network analysis method. The proposed methodological procedure is verified for different propulsion types through a case study using real values. The application of the methodology can prevent risks related to non-compliance with the required technological times and, at the same time, increase the operational stability and sustainability of railway passenger transport. Full article
(This article belongs to the Special Issue Sustainable Transport Research and Railway Network Performance)
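Since the methodology derives process durations from a PERT network analysis of the evaluated workflow, a small sketch of the underlying mechanics may be useful: each activity receives an expected duration t_e = (a + 4m + b)/6 from its optimistic, most likely, and pessimistic estimates, and a forward pass through the precedence network yields the expected process (e.g., turnaround) duration. The activity names and times below are hypothetical, not the paper's measured workflow.

```python
def pert_expected(a: float, m: float, b: float) -> float:
    # PERT expected duration of one activity: t_e = (a + 4m + b) / 6.
    return (a + 4.0 * m + b) / 6.0

# Hypothetical turnaround activities (minutes), listed in topological order:
# name: (optimistic, most likely, pessimistic, predecessors)
activities = {
    "arrival_inspection":     (2, 3, 5, []),
    "cleaning":               (6, 8, 12, ["arrival_inspection"]),
    "refuelling_or_charging": (8, 10, 15, ["arrival_inspection"]),
    "crew_change":            (3, 4, 6, ["arrival_inspection"]),
    "departure_check":        (2, 3, 4, ["cleaning", "refuelling_or_charging", "crew_change"]),
}

# Forward pass through the precedence network: the latest "earliest finish"
# over all activities is the expected process duration on the critical path.
earliest_finish = {}
for name, (a, m, b, preds) in activities.items():
    start = max((earliest_finish[p] for p in preds), default=0.0)
    earliest_finish[name] = start + pert_expected(a, m, b)

print(f"Expected turnaround duration: {max(earliest_finish.values()):.1f} min")
```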
Figures:
Figure 1: Example of a network graph and its vertices: (a) an example of a network graph; (b) vertices of the network graph. Source: Authors.
Figure 2: Example of a PERT network graph for an FCMU turnaround. Source: Authors.
Figure 3: The curve of the turnaround train duration. Source: Authors.
Figure 4: The curve of the starting train duration. Source: Authors.
Figure 5: The curve of the ending train duration. Source: Authors.
21 pages, 7707 KiB  
Article
Prototype Implementation of a Digitizer for Earthquake Monitoring System
by Emad B. Helal, Omar M. Saad, M. Sami Soliman, Gamal M. Dousoky, Ahmed Abdelazim, Lotfy Samy, Haruichi Kanaya and Ali G. Hafez
Sensors 2024, 24(16), 5287; https://doi.org/10.3390/s24165287 - 15 Aug 2024
Viewed by 865
Abstract
A digitizer is considered one of the fundamental components of an earthquake monitoring system. In this paper, we design and implement a high-accuracy seismic digitizer. The implemented digitizer consists of several blocks, i.e., the analog-to-digital converter (ADC), GPS receiver, and microprocessor. Three finite impulse response (FIR) filters are used to decimate the sampling rate of the input seismic data according to user needs. A graphical user interface (GUI) has been designed to enable the user to monitor the seismic waveform in real time and to process and adjust the parameters of the acquisition unit. The system casing is designed to resist harsh environmental conditions. The prototype can represent the three-component sensor data in the standard MiniSEED format. The digitizer streams seismic data from the remote station to the main center over a TCP/IP connection. This protocol ensures data transmission without any losses as long as the data still exist in the ring buffer. The prototype was calibrated by real field testing. The prototype digitizer is integrated with the Egyptian National Seismic Network (ENSN), where a commercial instrument is already installed. Case studies show that, for the same event, the prototype station improves the ENSN solution by giving accurate timing and seismic event parameters. Field test results show that the event arrival time and amplitude are approximately the same for the prototype digitizer and the calibrated digitizer. Furthermore, the frequency content of the two digitizers is similar. Therefore, the prototype digitizer captures the main seismic parameters accurately, irrespective of the presence of noise. Full article
(This article belongs to the Section Remote Sensors)
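The three-stage FIR decimation mentioned in the abstract is a standard multi-rate technique: cascading small decimation factors keeps each anti-aliasing filter short. The SciPy sketch below illustrates the idea; the input rate and the stage factors (5, 3, 2) are assumptions chosen purely for illustration, since the prototype's actual rates are not stated here.

```python
import numpy as np
from scipy.signal import decimate

def multistage_fir_decimate(x, factors):
    # Decimate a trace in several FIR stages; each stage applies its own
    # anti-aliasing FIR filter before downsampling by the given factor.
    for q in factors:
        x = decimate(x, q, ftype="fir", zero_phase=True)
    return x

# Illustrative example: a 3000 Hz stream reduced to 100 Hz in three FIR stages
# (the prototype's actual rates and factors are not stated in the abstract).
fs_in = 3000.0
t = np.arange(0.0, 10.0, 1.0 / fs_in)
trace = np.sin(2.0 * np.pi * 5.0 * t) + 0.1 * np.random.randn(t.size)  # 5 Hz signal + noise

out = multistage_fir_decimate(trace, factors=(5, 3, 2))  # 3000 -> 600 -> 200 -> 100 Hz
print(trace.size, "->", out.size, "samples; output rate =", fs_in / (5 * 3 * 2), "Hz")
```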
Figures:
Figure 1: System wiring diagram.
Figure 2: (a) System front panel; (b) system top view.
Figure 3: Ring server as a simple SeedLink server.
Figure 4: Multistage multi-rate filter design.
Figure 5: The FIR magnitude response for (a) the first stage, (b) the second stage, and (c) the third stage.
Figure 6: Real-time waveform using the implemented digitizer.
Figure 7: Health tab of the system.
Figure 8: GPS control tab.
Figure 9: Test station with the calibrated digitizer.
Figure 10: Acquired data from three sensor channels.
Figure 11: Data comparison between the test digitizer (a) and the Centaur digitizer (b).
Figure 12: Zooming in on the event interval.
Figure 13: Comparison between the two waveforms.
Figure 14: Amplitude spectrum for the prototype (a) and calibrated (b) digitizers.
Figure 15: Spectrogram for the prototype (a) and calibrated (b) digitizers.
Figure 16: Case study #1 event map.
Figure 17: Case study #2 event map.