Search Results (23,976)

Search Parameters:
Keywords = imaging system

13 pages, 485 KiB  
Article
Advancements and Strategies in Robotic Planning for Knee Arthroplasty in Patients with Minor Deformities
by Giacomo Capece, Luca Andriollo, Rudy Sangaletti, Roberta Righini, Francesco Benazzo and Stefano Marco Paolo Rossi
Life 2024, 14(12), 1528; https://doi.org/10.3390/life14121528 - 21 Nov 2024
Abstract
Knee arthroplasty, commonly performed to treat osteoarthritis, necessitates precise surgical techniques for optimal outcomes. The introduction of systems such as the Persona Knee System (Zimmer Biomet, Warsaw, IN, USA) has revolutionized knee arthroplasty, promising enhanced precision and better patient outcomes. This study investigates the application of robotic planning specifically in knee prosthetic surgeries, with a focus on Persona Knee System prostheses. We conducted a retrospective analysis of 300 patients who underwent knee arthroplasty using the Persona Knee System between January 2020 and November 2023, including demographic data, surgical parameters, and preoperative imaging. Robotic planning was employed to simulate surgical procedures. The planning process integrated preoperative imaging data from dedicated digital preoperative planning software, and statistical analyses were conducted to assess correlations between patient characteristics and surgical outcomes. Of the 300 patients, 85% presented with minor deformities, validating the feasibility of robotic planning. Robotic planning precisely predicted optimal arthroplasty sizes and alignment, closely matching preoperative imaging data. This study highlights the potential benefits of robotic planning in knee arthroplasty surgeries, particularly in cases with minor deformities. By leveraging preoperative imaging data and integrating advanced robotic technologies, surgeons can improve precision and efficacy in knee arthroplasty. Moreover, robotic technology allows for a reduced level of constraint in the intraoperative choice between Posterior-Stabilized and Constrained Posterior-Stabilized liners compared with an imageless navigated procedure. Full article
(This article belongs to the Special Issue Advancements in Total Joint Arthroplasty)
29 pages, 4357 KiB  
Article
Analysis of Concrete Air Voids: Comparing OpenAI-Generated Python Code with MATLAB Scripts and Enhancing 2D Image Processing Using 3D CT Scan Data
by Iman Asadi, Andrei Shpak and Stefan Jacobsen
Buildings 2024, 14(12), 3712; https://doi.org/10.3390/buildings14123712 - 21 Nov 2024
Abstract
The air void system in concrete significantly affects its mechanical, thermal, and frost durability properties. This study explored the use of ChatGPT, an AI tool, to generate Python code for analyzing air void parameters in hardened concrete, such as total air void content (A), specific surface (α), and air void spacing factor (L). Initially, Python scripts were created by requesting ChatGPT-3.5 to convert MATLAB scripts developed by Fonseca and Scherer in 2015. The results from Python closely matched those from MATLAB when applied to polished sections of seven different concrete mixes, demonstrating ChatGPT’s effectiveness in code conversion. However, generating accurate code without referencing the original MATLAB scripts required detailed prompts, highlighting the need for a strong understanding of the test method. Finally, a Python script was applied to extend the void reconstruction from 2D images to 3D by stereology, and a comparison with 3D CT scanner results showed good agreement. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)
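The air void parameters named in this abstract follow standard linear-traverse definitions: air content A is the traverse length through air divided by the total traverse length, and specific surface α is 4N/Ta, where N is the number of voids intercepted and Ta the traverse length through air. The sketch below illustrates those two relations on a segmented binary section image; it is not the Fonseca–Scherer script or the ChatGPT-generated code, and the traverse spacing, calibration, and toy data are assumptions.

```python
import numpy as np

def air_void_parameters(void_mask: np.ndarray, mm_per_px: float, row_step: int = 10):
    """Linear-traverse estimate of air content A and specific surface alpha from a
    binary section image (True = air void). Illustrative only; a full C457-style
    analysis also needs the paste content and the Powers spacing factor."""
    traverses = void_mask[::row_step, :]               # horizontal traverse lines
    Tt = traverses.size * mm_per_px                    # total traverse length (mm)
    Ta = traverses.sum() * mm_per_px                   # traverse length through air (mm)
    # a chord starts where a traverse steps from paste (0) to air (1), or at the left edge
    starts = (np.diff(traverses.astype(np.int8), axis=1) == 1).sum()
    N = starts + int(traverses[:, 0].sum())
    A = Ta / Tt                                        # air content (area fraction estimate)
    alpha = 4.0 * N / Ta if Ta > 0 else 0.0            # specific surface (1/mm)
    return A, alpha

# toy example: a 1000 x 1000 px section at 0.01 mm/px with random circular "voids"
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:1000, 0:1000]
mask = np.zeros((1000, 1000), dtype=bool)
for cx, cy, r in rng.uniform([0, 0, 5], [1000, 1000, 25], size=(60, 3)):
    mask |= (xx - cx) ** 2 + (yy - cy) ** 2 < r ** 2
print(air_void_parameters(mask, mm_per_px=0.01))
```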
20 pages, 19820 KiB  
Article
AQSFormer: Adaptive Query Selection Transformer for Real-Time Ship Detection from Visual Images
by Wei Yang, Yueqiu Jiang, Hongwei Gao, Xue Bai, Bo Liu and Caifeng Xia
Electronics 2024, 13(23), 4591; https://doi.org/10.3390/electronics13234591 - 21 Nov 2024
Abstract
The Internet of Things (IoT) has emerged as a popular topic in both industrial and academic research. IoT devices are often equipped with rapid response capabilities to ensure seamless communication and interoperability, showing significant potential for IoT-based maritime traffic monitoring and navigation safety tasks. However, this also presents major challenges for maritime surveillance systems. The diversity of IoT devices and variability in collected data are substantial. Visual image ship detection is crucial for maritime tasks, yet it must contend with environmental challenges such as haze and waves that can obscure ship details. To address these challenges, we propose an adaptive query selection transformer (AQSFormer) that utilizes two-dimensional rotational position encoding for absolute positioning and integrates relative positions into the self-attention mechanism to overcome position insensitivity. Additionally, the introduced deformable attention module focuses on ship edges, enhancing the feature space resolution. The adaptive query selection module ensures a high recall rate and high end-to-end processing efficiency. Our method improves the mean average precision to 0.779 and achieves a processing speed of 31.3 frames per second, significantly enhancing both the real-time capabilities and accuracy, proving its effectiveness in ship detection. Full article
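The abstract's "two-dimensional rotational position encoding" is not specified further; one common way to extend rotary position embedding to a 2D grid is to rotate one half of the feature channels by the column index and the other half by the row index. The hedged sketch below shows that generic construction, not necessarily AQSFormer's exact formulation, and the token and channel sizes are arbitrary.

```python
import torch

def rope_1d(x: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
    """Rotary encoding of features x (N, d) at integer positions pos (N,); d must be even."""
    half = x.shape[-1] // 2
    freqs = 10000 ** (-torch.arange(half, dtype=torch.float32) / half)
    ang = pos.float()[:, None] * freqs[None, :]               # (N, half) rotation angles
    cos, sin = torch.cos(ang), torch.sin(ang)
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

def rope_2d(x: torch.Tensor, xs: torch.Tensor, ys: torch.Tensor) -> torch.Tensor:
    """Rotate the first half of the channels by the x coordinate and the second
    half by the y coordinate, giving each token absolute 2D positional information."""
    half = x.shape[-1] // 2
    return torch.cat([rope_1d(x[..., :half], xs), rope_1d(x[..., half:], ys)], dim=-1)

# example: an 8x8 feature map flattened into 64 tokens of dimension 128
tokens = torch.randn(64, 128)
ys, xs = torch.meshgrid(torch.arange(8), torch.arange(8), indexing="ij")
encoded = rope_2d(tokens, xs.flatten(), ys.flatten())
```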
20 pages, 2518 KiB  
Review
The Frontiers of Smart Healthcare Systems
by Nan Lin, Rudy Paul, Santiago Guerra, Yan Liu, James Doulgeris, Min Shi, Maohua Lin, Erik D. Engeberg, Javad Hashemi and Frank D. Vrionis
Healthcare 2024, 12(23), 2330; https://doi.org/10.3390/healthcare12232330 - 21 Nov 2024
Abstract
Artificial Intelligence (AI) is poised to revolutionize numerous aspects of human life, with healthcare among the most critical fields set to benefit from this transformation. Medicine remains one of the most challenging, expensive, and impactful sectors, with challenges such as information retrieval, data organization, diagnostic accuracy, and cost reduction. AI is uniquely suited to address these challenges, ultimately improving the quality of life and reducing healthcare costs for patients worldwide. Despite its potential, the adoption of AI in healthcare has been slower compared to other industries, highlighting the need to understand the specific obstacles hindering its progress. This review identifies the current shortcomings of AI in healthcare and explores its possibilities, realities, and frontiers to provide a roadmap for future advancements. Full article
(This article belongs to the Section Artificial Intelligence in Medicine)
Figures:
Graphical abstract
Figure 1. Overview of AI Applications in Healthcare.
Figure 2. AI in Medical Imaging Workflow.
Figure 3. Challenges in AI-Driven Diagnostics.
Figure 4. AI-Enhanced Robotic Surgery.
Figure 5. Future Applications of AI in Smart Healthcare.
15 pages, 4303 KiB  
Article
Energy Efficiency in Measurement and Image Reconstruction Processes in Electrical Impedance Tomography
by Barbara Stefaniak, Tomasz Rymarczyk, Dariusz Wójcik, Marta Cholewa-Wiktor, Tomasz Cieplak, Zbigniew Orzeł, Janusz Gudowski, Ewa Golec, Michał Oleszek and Marcin Kowalski
Energies 2024, 17(23), 5828; https://doi.org/10.3390/en17235828 - 21 Nov 2024
Abstract
This paper presents an energy optimization approach to applying electrical impedance tomography (EIT) for medical diagnostics, particularly in detecting lung diseases. The designed Lung Electrical Tomography System (LETS) incorporates 102 electrodes and advanced image reconstruction algorithms. Energy efficiency is achieved through the use of modern electronic components and high-efficiency DC/DC converters that reduce the size and weight of the device without the need for additional cooling. Special attention is given to minimizing energy consumption during electromagnetic measurements and data processing, significantly improving the system’s overall performance. Research studies confirm the device’s high energy efficiency while maintaining the accuracy of the classification of lung disease using the LightGBM algorithm. This solution enables long-term patient monitoring and precise diagnosis with reduced energy consumption, marking a key step towards sustainable medical diagnostics based on EIT technology. Full article
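The abstract names LightGBM as the classifier for the lung-condition classes listed in the Figure 8 caption below. A minimal multi-class classification sketch with the lightgbm Python package might look like the following; the feature dimensions, training data, and hyperparameters are invented for illustration and are not the LETS pipeline.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

classes = ["healthy", "COPD", "ARDS", "PTX", "PHTN", "PNA", "bronchospasm"]

# placeholder data: one EIT-derived feature vector per measurement frame (shape assumed)
rng = np.random.default_rng(0)
X = rng.normal(size=(700, 32))
y = rng.integers(0, len(classes), size=700)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LGBMClassifier(n_estimators=200, num_leaves=31, learning_rate=0.05)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```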
Figures:
Figure 1. Developed vest with 102 textile electrodes, A—central unit of the device (source own).
Figure 2. Electrode schema [44].
Figure 3. The central unit of the device (signature letter A in Figure 1) (source own).
Figure 4. Block diagram of the EIT data frame simulation process (source own).
Figure 5. Healthy lungs model (source own).
Figure 6. Models of considered medical lesions (source own).
Figure 7. Boxplot for the first feature for all classes (source own).
Figure 8. Confusion matrix for LightGBM model. 0—healthy, 1—COPD, 2—ARDS, 3—PTX, 4—PHTN, 5—PNA, and 6—Bronchospasm (source own).
Figure 9. Beeswarm of SHAP values for COPD disease (source own).
13 pages, 5493 KiB  
Article
Research on Rapid Detection Methods of Tea Pigments Content During Rolling of Black Tea Based on Machine Vision Technology
by Hanting Zou, Tianmeng Lan, Yongwen Jiang, Xiao-Lan Yu and Haibo Yuan
Foods 2024, 13(23), 3718; https://doi.org/10.3390/foods13233718 - 21 Nov 2024
Abstract
As a crucial stage in the processing of black tea, rolling plays a significant role in both the color transformation and the quality development of the tea. In this process, the production of theaflavins, thearubigins, and theabrownins is a primary factor contributing to the alteration in color of rolled leaves. Herein, tea pigments are selected as the key quality indicators during rolling of black tea, aiming to establish rapid detection methods for them. A machine vision system is employed to extract nine color feature variables from the images of samples subjected to varying rolling times. Then, the tea pigment content in the corresponding samples is determined using a UV-visible spectrophotometer. In the meantime, the correlation between color variables and tea pigments is discussed. Additionally, Z-score and PCA are used to eliminate the magnitude difference and redundant information in original data. Finally, the quantitative prediction models of tea pigments based on the images’ color features are established by using PLSR, SVR, and ELM. The data show that the Z-score–PCA–ELM model has the best prediction effect for tea pigments. The Rp values for the model prediction sets are all over 0.96, and the RPD values are all greater than 3.50. In this study, rapid determination methods for tea pigments during rolling of black tea are established. These methods offer significant technical support for the digital production of black tea. Full article
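As a rough illustration of the Z-score, PCA, and ELM chain described above, the sketch below standardizes the nine colour features, reduces them with PCA, and fits a minimal extreme learning machine (random hidden layer plus least-squares output weights). The hidden-layer size, the retained-variance threshold, and the synthetic data are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

class ELMRegressor:
    """Minimal extreme learning machine: fixed random hidden layer, least-squares output."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ y            # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# synthetic stand-in for the nine colour features (R, G, B, H, S, V, L, a*, b*) and one pigment
rng = np.random.default_rng(1)
X, y = rng.normal(size=(120, 9)), rng.normal(size=120)

scaler, pca = StandardScaler(), PCA(n_components=0.95)   # keep 95% of the variance (assumed)
Xp = pca.fit_transform(scaler.fit_transform(X))
model = ELMRegressor(n_hidden=50).fit(Xp[:100], y[:100])
pred = model.predict(Xp[100:])
```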
Figures:
Figure 1. Flow chart of the experiment.
Figure 2. Image color feature variables: (a) R, (b) G, (c) B, (d) H, (e) S, (f) V, (g) L, (h) a*, (i) b*.
Figure 3. Correlation analysis diagram of tea pigments and image color feature variables.
Figure 4. Explanatory variance in principal component analysis.
Figures 5–7. (a) Regression prediction scatter plot based on Z-score–ELM; (b) regression prediction scatter plot based on Z-score–PCA–ELM; (c) line chart of prediction results based on Z-score–ELM; (d) relative error chart of prediction results based on Z-score–PCA–ELM.
16 pages, 4570 KiB  
Article
Study of the Possibility to Combine Deep Learning Neural Networks for Recognition of Unmanned Aerial Vehicles in Optoelectronic Surveillance Channels
by Vladislav Semenyuk, Ildar Kurmashev, Dmitriy Alyoshin, Liliya Kurmasheva, Vasiliy Serbin and Alessandro Cantelli-Forti
Modelling 2024, 5(4), 1773-1788; https://doi.org/10.3390/modelling5040092 - 21 Nov 2024
Abstract
This article explores the challenges of integrating two deep learning neural networks, YOLOv5 and RT-DETR, to enhance the recognition of unmanned aerial vehicles (UAVs) within the optical-electronic channels of Sensor Fusion systems. The authors conducted an experimental study to test YOLOv5 and Faster RT-DETR in order to identify the average accuracy of UAV recognition. A dataset in the form of images of two classes of objects, UAVs and birds, was prepared in advance. The total number of images, including augmentation, amounted to 6337. The authors implemented training, verification, and testing of the neural networks using the PyCharm 2024 IDE. Inference testing was conducted using six videos with UAV flights. On all test videos, RT-DETR-R50 was more accurate by an average of 18.7% in terms of average classification accuracy (Pc). In terms of operating speed, YOLOv5 was 3.4 ms more efficient. It has been established that using RT-DETR as the sole module for UAV classification in optical-electronic detection channels is not effective because of its heavy computational load, which stems from its relatively large number of parameters. Based on the obtained results, an algorithm for combining the two neural networks is proposed, which allows for increasing the accuracy of UAV and bird classification without significant losses in speed. Full article
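The combination algorithm itself (Figure 8 below) is not spelled out in the abstract; one plausible reading is a cascade in which the faster YOLOv5s screens every frame and the heavier RT-DETR-R50 is consulted only when the first pass is empty or low-confidence. The sketch below shows that idea with hypothetical detector callables and a made-up confidence threshold; it is not the authors' algorithm.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    label: str          # "UAV" or "bird"
    confidence: float
    box: tuple          # (x1, y1, x2, y2)

def detect_cascade(frame,
                   yolo: Callable[[object], List[Detection]],
                   rtdetr: Callable[[object], List[Detection]],
                   conf_hi: float = 0.6) -> List[Detection]:
    """Run the fast detector first; fall back to the slower, more accurate one
    only when the fast result is empty or any detection is low-confidence."""
    dets = yolo(frame)
    if dets and all(d.confidence >= conf_hi for d in dets):
        return dets                     # confident: keep the fast result
    return rtdetr(frame)                # uncertain or empty: re-check with RT-DETR
```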
Figures:
Figure 1. Data set preparation in Roboflow.com service: (a) Annotation of UAVs and birds; (b) Data set partitioning interface for training, validation, and testing of neural networks.
Figure 2. Metrics of the results of training the YOLOv5 neural network for 100 epochs (Ox-axis): (a) Precision; (b) Recall; (c) mAP50; (d) mAP50-95.
Figure 3. Metrics of the results of training the RT-DETR neural network for 100 epochs (Ox-axis): (a) Precision; (b) Recall; (c) mAP50; (d) mAP50-95.
Figure 4. Example of data obtained as a result of validation of the YOLOv5 experimental model.
Figure 5. Example of data obtained from the validation of the RT-DETR experimental model.
Figure 6. Frames from inference tests of trained neural network models: (a,c) RT-DETR-R50; (b,d) YOLOv5s.
Figure 7. Comparative diagram of the values of the average class probability in UAV recognition by trained neural network models.
Figure 8. Algorithm for combining trained neural network models YOLOv5s and RT-DETR-R50.
28 pages, 374 KiB  
Review
Image Processing Hardware Acceleration—A Review of Operations Involved and Current Hardware Approaches
by Costin-Emanuel Vasile, Andrei-Alexandru Ulmămei and Călin Bîră
J. Imaging 2024, 10(12), 298; https://doi.org/10.3390/jimaging10120298 - 21 Nov 2024
Abstract
This review provides an in-depth analysis of current hardware acceleration approaches for image processing and neural network inference, focusing on key operations involved in these applications and the hardware platforms used to deploy them. We examine various solutions, including traditional CPU–GPU systems, custom ASIC designs, and FPGA implementations, while also considering emerging low-power, resource-constrained devices. Full article
(This article belongs to the Section Image and Video Processing)
Figures:
Figure 1. Typical GPU architecture.
16 pages, 5582 KiB  
Article
Evaluating Brain Tumor Detection with Deep Learning Convolutional Neural Networks Across Multiple MRI Modalities
by Ioannis Stathopoulos, Luigi Serio, Efstratios Karavasilis, Maria Anthi Kouri, Georgios Velonakis, Nikolaos Kelekis and Efstathios Efstathopoulos
J. Imaging 2024, 10(12), 296; https://doi.org/10.3390/jimaging10120296 - 21 Nov 2024
Abstract
Central Nervous System (CNS) tumors represent a significant public health concern due to their high morbidity and mortality rates. Magnetic Resonance Imaging (MRI) has emerged as a critical non-invasive modality for the detection, diagnosis, and management of brain tumors, offering high-resolution visualization of anatomical structures. Recent advancements in deep learning, particularly convolutional neural networks (CNNs), have shown potential in augmenting MRI-based diagnostic accuracy for brain tumor detection. In this study, we evaluate the diagnostic performance of six fundamental MRI sequences in detecting tumor-involved brain slices using four distinct CNN architectures enhanced with transfer learning techniques. Our dataset comprises 1646 MRI slices from the examinations of 62 patients, encompassing both tumor-bearing and normal findings. With our approach, we achieved a classification accuracy of 98.6%, underscoring the high potential of CNN-based models in this context. Additionally, we assessed the performance of each MRI sequence across the different CNN models, identifying optimal combinations of MRI modalities and neural networks to meet radiologists’ screening requirements effectively. This study offers critical insights into the integration of deep learning with MRI for brain tumor detection, with implications for improving diagnostic workflows in clinical settings. Full article
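The heatmaps in Figure 4 below come from a VGG16 model, so a transfer-learning setup along the following lines is plausible; the freezing strategy, head size, input format, and hyperparameters here are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal transfer-learning sketch: reuse ImageNet VGG16 features and train a
# small binary head (tumor slice vs. normal slice).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                       # freeze the convolutional backbone
model.classifier[6] = nn.Linear(4096, 2)          # replace the 1000-class head with 2 classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)

x = torch.randn(4, 3, 224, 224)                   # dummy batch of MRI slices (3-channel input assumed)
y = torch.tensor([0, 1, 0, 1])                    # 0 = normal, 1 = tumor (hypothetical labels)
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```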
Figures:
Figure 1. Six different MRI sequences of a normal brain examination. From left to right and top to bottom: T1, T2, FLAIR, T1+C, Diffusion, apparent diffusion coefficient (ADC) map.
Figure 2. Six different MRI sequences of a verified Brain Tumor examination. From left to right and top to bottom: T1, T2, FLAIR, T1+C, Diffusion, and ADC.
Figure 3. Image representation of the preprocessing steps.
Figure 4. One normal and two tumor examinations are shown for all six MRI sequences. In all images, the original image is displayed on the left, and the overlap with the heatmap produced from the last convolutional layer of the VGG16 model is displayed on the right. In the titles, N represents the Normal class, and T represents the Tumor class, both followed by the prediction probability for the respective class. Misclassified cases are highlighted in red.
Schemes 1–6. ROCs for the FLAIR, T1+C, ADC, T1, Diffusion, and T2 sequences, respectively.
Scheme 7. (Left): The evaluation metrics results of the experiment are in the whole dataset. (Right): the corresponding ROC curve.
13 pages, 2583 KiB  
Article
Detection of Pest Feeding Traces on Industrial Wood Surfaces with 3D Imaging
by Andrzej Sioma, Keiko Nagashima, Bartosz Lenty, Arkadiusz Hebda, Yasutaka Nakata and Kiichi Harada
Appl. Sci. 2024, 14(23), 10775; https://doi.org/10.3390/app142310775 - 21 Nov 2024
Abstract
This paper presents a method for detecting holes and grooves made by wood-boring pests. As part of the production process automation, wood delivered from sawmills is checked for defects visible on its surface. One of the critical defects that disqualifies wood from further processing is the presence of feeding marks left by various types of pests on its surface. This paper proposes a method for detecting this type of damage based on analysis of three-dimensional images of the wood surface. Three-dimensional imaging methods and the image resolutions resulting from the adopted imaging system’s configurations are discussed. An analysis of the advantages and disadvantages of the methods investigated is presented, together with an assessment of their potential use in the implementation of the assigned control task, i.e., the detection of holes and grooves made by pests. Three-dimensional image parameters and interferences affecting the quality of the recorded image are described, along with the designed algorithm for identifying holes and grooves and the parametric description of the identified defect. The imaging effects for selected surfaces bearing signs of pest damage and the parameters describing the effectiveness of the present industrial solution are also presented. This paper demonstrates that it is possible to build a three-dimensional image that effectively identifies damage with a minimum diameter of 1 mm, which makes it possible to observe the damage caused by most wood-boring pests. Full article
(This article belongs to the Special Issue Applications of Vision Measurement System on Product Quality Control)
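The figure captions below note that the height profile is computed from the laser line's position in each image column. A bare-bones laser-triangulation profile extraction along those lines might look like the following sketch; the brightest-pixel search and the linear height calibration are simplifying assumptions, not the paper's LTM geometry.

```python
import numpy as np

def laser_profile(gray: np.ndarray, mm_per_px_x: float, mm_per_px_z: float):
    """For each column, take the row where the laser line is brightest and map it to a
    relative surface height. A real LTM system derives height from the calibrated
    camera-laser geometry; a constant linear factor is assumed here."""
    line_rows = gray.argmax(axis=0)                         # laser line position per column
    heights = (line_rows - line_rows.min()) * mm_per_px_z   # relative height profile (mm)
    xs = np.arange(gray.shape[1]) * mm_per_px_x             # lateral position (mm)
    return xs, heights

# dummy frame: a bright laser line near row 240 with a small dip in the middle (a "hole")
frame = np.zeros((480, 640), dtype=np.uint8)
rows = np.full(640, 240)
rows[300:320] += 5
frame[rows, np.arange(640)] = 255
xs, zs = laser_profile(frame, mm_per_px_x=0.125, mm_per_px_z=0.05)
```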
Figures:
Figure 1. Examples of pest-caused holes on the wood surface analysed using a metrological glass scale with a millimetre grid.
Figure 2. The 3D imaging method involved the following steps: (a) camera and laser alignment in relation to the surface imaged on the inspection station; (b) description of the geometry of the laser and camera setting for imaging in the LTM system; (c) view of the laser line projected on the wood surface; (d) three-dimensional image of the wood surface; and (e) distribution of measurement points (imaging with resolution 0.125 mm/pixel).
Figure 3. 3D images acquired with different resolutions, as follows: (a) image of the laser line on the wood surface used for profile generation—height profile is calculated based on laser line position in each column; (b) image acquired with a resolution of 0.125 mm/pixel (fully visible hole areas); (c) image acquired with a resolution of 0.2 mm/pixel; (d) image acquired with a resolution of 0.25 mm/pixel; and (e) image acquired with a resolution of 0.5 mm/pixel (some holes missing due to low resolution).
Figure 4. Three-dimensional image of the wood surface: (a) regions of the edges detected around each hole; (b) three-dimensional surface of the wood with visible wormholes after the filtering operation and hole position identification.
Figure 5. Three-dimensional image of the wood surface: (a) wood surface with visible wormhole; (b) exemplary description of the wormhole using measurement points.
Figure 6. Identified surface defects: (a) location and labelling of the defect regions; (b) description of the geometric parameters of each defect area.
15 pages, 2365 KiB  
Article
Session-by-Session Prediction of Anti-Endothelial Growth Factor Injection Needs in Neovascular Age-Related Macular Degeneration Using Optical-Coherence-Tomography-Derived Features and Machine Learning
by Flavio Ragni, Stefano Bovo, Andrea Zen, Diego Sona, Katia De Nadai, Ginevra Giovanna Adamo, Marco Pellegrini, Francesco Nasini, Chiara Vivarelli, Marco Tavolato, Marco Mura, Francesco Parmeggiani and Giuseppe Jurman
Diagnostics 2024, 14(23), 2609; https://doi.org/10.3390/diagnostics14232609 - 21 Nov 2024
Abstract
Background/Objectives: Neovascular age-related macular degeneration (nAMD) is a retinal disorder leading to irreversible central vision loss. The pro-re-nata (PRN) treatment for nAMD involves frequent intravitreal injections of anti-VEGF medications, placing a burden on patients and healthcare systems. Predicting injection needs at each monitoring session could optimize treatment outcomes and reduce unnecessary interventions. Methods: To achieve these aims, machine learning (ML) models were evaluated using different combinations of clinical variables, including retinal thickness and volume, best-corrected visual acuity, and features derived from macular optical coherence tomography (OCT). A “Leave Some Subjects Out” (LSSO) nested cross-validation approach ensured robust evaluation. Moreover, the SHapley Additive exPlanations (SHAP) analysis was employed to quantify the contribution of each feature to model predictions. Results: Results demonstrated that models incorporating both structural and functional features achieved high classification accuracy in predicting injection necessity (AUC = 0.747 ± 0.046, MCC = 0.541 ± 0.073). Moreover, the explainability analysis identified as key predictors both subretinal and intraretinal fluid, alongside central retinal thickness. Conclusions: These findings suggest that session-by-session prediction of injection needs in nAMD patients is feasible, even without processing the entire OCT image. The proposed ML framework has the potential to be integrated into routine clinical workflows, thereby optimizing nAMD therapeutic management. Full article
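A "Leave Some Subjects Out" split simply requires that all sessions from a given patient fall on the same side of the train/test divide. With scikit-learn this can be approximated using GroupShuffleSplit, as in the sketch below; the feature matrix, group sizes, and classifier settings are placeholders, not the study's data or its tuned models.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import matthews_corrcoef

# placeholder data: 500 sessions, 20 OCT-derived features, ~80 patients (all assumed)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = rng.integers(0, 2, size=500)            # 1 = injection needed at this session
groups = rng.integers(0, 80, size=500)      # patient ID for each session

splitter = GroupShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
scores = []
for train_idx, test_idx in splitter.split(X, y, groups):
    clf = ExtraTreesClassifier(n_estimators=300, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(matthews_corrcoef(y[test_idx], clf.predict(X[test_idx])))
print(f"MCC: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```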
Figures:
Figure 1. Thickness and volume OCT extraction for each ETDRS subfield. After extracting values from the Heidelberg Spectralis software, volume and thickness measurements were also combined in three concentric circles: the central circle (subfield 1; yellow), inner ring (subfields 2, 3, 4, and 5; orange), and outer ring (subfields 6, 7, 8, and 9; light blue) by averaging the corresponding values.
Figure 2. Preprocessing of numerical features using a standard scaler with z-score normalization. The left plot shows the original distribution of feature values for a sample numerical feature (i.e., central retinal thickness) across patient sessions. The right plot displays the same feature after normalization using a standard scaler: the data distribution is transformed to have a mean of 0 and a variance of 1. In both plots, the red solid line represents the mean, while the black dashed lines indicate one standard deviation above and below the mean. Standard scaling was applied to all numerical predictors to ensure consistency in model training.
Figure 3. Schematic depiction of the “Leave Some Subjects Out” (LSSO) cross-validation approach. For each input combination (i.e., C1, C2, C3, C4), the dataset was randomly divided into training (sessions pertaining to 80% of the patients) and test (sessions pertaining to 20% of the patients) sets. Each model was then optimized by means of a randomized grid search on the training set, and tested on the test set. This process was repeated 10 times, and the results were averaged to select the best-performing model for each combination of input parameters.
Figure 4. Boxplots displaying the distribution of MCC scores for each type of machine learning model (blue: SVC, orange: Random Forest, green: Extra Trees Classifier, red: Gradient Boost Classifier, purple: Extreme Gradient Boost Classifier) and input feature combinations (C1: volume and thickness of each ETDRS subfield, C2: C1 and BCVA, C3: C1 and clinical annotations, C4: C1, BCVA, and clinical annotations), across the ten iterations of the LSSO procedure. Black lines represent the median, black triangles the mean, whiskers 1.5× the interquartile range, and circles data points falling beyond 1.5× the interquartile range.
Figure 5. Boxplots displaying the distribution of MCC scores for each type of machine learning model (Extra Trees Classifier, SVC) and input feature combinations (C1: volume and thickness of each ETDRS subfield, C2: C1 and BCVA, C3: C1 and clinical annotations, C4: C1, BCVA, and clinical annotations), across the ten iterations of the LSSO procedure. Black lines represent the median, red triangles the mean, whiskers 1.5× the interquartile range, and circles data points falling beyond 1.5× the interquartile range.
Figure 6. ROC AUC curves for the best-performing model for each combination of input parameters (C1, C2, C3, and C4). The solid blue line represents the average ROC AUC across the 10 iterations of the “Leave-Some-Subjects-Out” (LSSO) cross-validation procedure, while the shaded area indicates the standard deviation. The dotted black line represents the ROC curve of a random classifier.
Figure 7. SHAP Beeswarm plot listing the top 9 features impacting model outputs. Each point represents a SHAP value for a feature and an individual observation. The blue color represents low values for a variable, while red indicates high values. A higher SHAP value indicates a positive influence on the model’s prediction of the necessity to administer anti-VEGF medications to the patient.
18 pages, 7279 KiB  
Article
Optimizing Waste Sorting for Sustainability: An AI-Powered Robotic Solution for Beverage Container Recycling
by Tianhao Cheng, Daiki Kojima, Hao Hu, Hiroshi Onoda and Andante Hadi Pandyaswargo
Sustainability 2024, 16(23), 10155; https://doi.org/10.3390/su162310155 - 21 Nov 2024
Abstract
With Japan facing workforce shortages and the need for enhanced recycling systems due to an aging population and increasing environmental challenges, automation in recycling facilities has become a key component for advancing sustainability goals. This study presents the development of an automated sorting robot to replace manual processes in beverage container recycling, aiming to address environmental, social, and economic sustainability by optimizing resource efficiency and reducing labor demands. Using artificial intelligence (AI) for image recognition and high-speed suction-based grippers, the robot effectively sorts various container types, including PET bottles and clear and colored glass bottles, demonstrating a pathway toward more sustainable waste management practices. The findings indicate that stabilizing items on the sorting line may enhance acquisition success, although clear container detection remains an AI challenge. This research supports the United Nations’ 2030 Agenda for Sustainable Development by advancing recycling technology to improve waste processing efficiency, thus contributing to reduced pollution, resource conservation, and a sustainable recycling infrastructure. Further development of gripper designs to handle deformed or liquid-containing containers is required to enhance the system’s overall sustainability impact in the recycling sector. Full article
(This article belongs to the Section Waste and Recycling)
Figures:
Figure 1. The composition of the beverage container sorting robot system.
Figure 2. Actual condition of the beverage container sorting system developed for this study.
Figure 3. Image of experimental camera field of view.
Figure 4. Systematic algorithm of software control system.
Figure 5. Confusion matrix of the trained model. This confusion matrix should be read vertically (with columns representing actual classes and rows representing predicted classes). Column “PET” sums to 0.99 because the other prediction values are rounded to 0.
Figure 6. Segmentation mask for Gaussian background motion model.
Figure 7. Segmentation mask for SAM method.
Figure 8. The algorithm for identifying the longest cross-section.
Figure 9. Thresholding by lightness value on our conveyor belt, illustrating the lighting condition.
Figure 10. Sample image of our color algorithm result.
Figure 11. Example image from the image recognition AI training data.
Figure 12. Examples where the image recognition AI does not send acquisition signals, shown from the AI view and the object view.
Figure 13. Examples of errors due to robot control problems.
Figure 14. Examples of errors due to object shape problems.
Figure 15. Example of an error due to conveyor belt transport.
18 pages, 6146 KiB  
Article
A Near-Infrared Imaging System for Robotic Venous Blood Collection
by Zhikang Yang, Mao Shi, Yassine Gharbi, Qian Qi, Huan Shen, Gaojian Tao, Wu Xu, Wenqi Lyu and Aihong Ji
Sensors 2024, 24(22), 7413; https://doi.org/10.3390/s24227413 - 20 Nov 2024
Abstract
Venous blood collection is a widely used medical diagnostic technique, and with rapid advancements in robotics, robotic venous blood collection has the potential to replace traditional manual methods. The success of this robotic approach is heavily dependent on the quality of vein imaging. In this paper, we develop a vein imaging device based on the simulation analysis of vein imaging parameters and propose a U-Net+ResNet18 neural network for vein image segmentation. The U-Net+ResNet18 neural network integrates the residual blocks from ResNet18 into the encoder of the U-Net to form a new neural network. ResNet18 is pre-trained using the Bootstrap Your Own Latent (BYOL) framework, and its encoder parameters are transferred to the U-Net+ResNet18 neural network, enhancing the segmentation performance of vein images with limited labelled data. Furthermore, we optimize the AD-Census stereo matching algorithm by developing a variable-weight version, which improves its adaptability to image variations across different regions. Results show that, compared to U-Net, the BYOL+U-Net+ResNet18 method achieves an 8.31% reduction in Binary Cross-Entropy (BCE), a 5.50% reduction in Hausdorff Distance (HD), a 15.95% increase in Intersection over Union (IoU), and a 9.20% increase in the Dice coefficient (Dice), indicating improved image segmentation quality. The average error of the optimized AD-Census stereo matching algorithm is reduced by 25.69%, giving a clear improvement in stereo matching performance. Future research will explore the application of the vein imaging system in robotic venous blood collection to facilitate real-time puncture guidance. Full article
(This article belongs to the Section Sensors and Robotics)
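The paper's U-Net+ResNet18 inserts ResNet18 residual blocks into the U-Net encoder and initializes them from BYOL pre-training. As a rough stand-in, the segmentation_models_pytorch library can assemble a comparable ResNet18-encoder U-Net, as sketched below; this is not the authors' network, and the BYOL weight transfer is only indicated by a comment.

```python
import torch
import segmentation_models_pytorch as smp

# Comparable (not identical) U-Net with a ResNet18 encoder; input size and channel
# choices are assumptions for illustration.
model = smp.Unet(
    encoder_name="resnet18",
    encoder_weights=None,     # the paper transfers BYOL-pretrained encoder weights instead
    in_channels=1,            # single-channel NIR vein image
    classes=1,                # binary vein mask
)

x = torch.randn(2, 1, 256, 256)          # dummy NIR images
logits = model(x)                         # (2, 1, 256, 256) vein logits
loss = torch.nn.functional.binary_cross_entropy_with_logits(
    logits, torch.zeros_like(logits))     # placeholder target mask
```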
Figures:
Figure 1. Schematic diagram of arm vein imaging.
Figure 2. Simulate NIR propagation through arm tissue. (a) Radial two-dimensional cross-section of the local arm model. The black rectangles represent the skin, subcutaneous tissue, and muscle layers, from top to bottom, while the circle represents the radial cross-sections of the vein. (b) The ratio of photon densities at x = 2.00 mm. (c) The ratio of photon densities at y = 3.80 mm. (d) The simulation of photon density variation at an incident light wavelength of 850 nm. (e) Rectangular light source and light-receiving plane model. (f) Circular light source and light-receiving plane model. (g) The ratio of illuminance to mean illuminance on the x-axis.
Figure 3. Vein imaging device.
Figure 4. Schematic diagram of the vein imaging system for robotic venipuncture.
Figure 5. (a) U-Net+ResNet18 neural network. (b) Neural network pre-training and model parameters migration.
Figure 6. Cross-based Cost Aggregation. (a) Cross-based regions and Support regions, the cross shadows represent the cross-based regions, and the other shadows represent the support regions. (b) Horizontal aggregation, the blue arrows represent the aggregation direction. (c) Vertical aggregation.
Figure 7. Vein image random transformation. (a) Original NIR vein image. (b,c) The vein image after random transformation.
Figure 8. The variation of the loss function with epoch.
Figure 9. NIR vein images segmentation results. (a) Original NIR vein images. (b) NIR vein images segmentation results using the Hessian matrix. (c) NIR vein images segmentation results using BYOL+U-Net+ResNet18 method. (d) Image binarization effect. (e) The labels corresponding to the original image.
Figure 10. Variation of each neural network model metric with epochs. (a) Variation of BCE with epochs. (b) Variation of IoU with epochs. (c) Variation of Dice with epochs. (d) Variation of HD with epochs.
Figure 11. Vein centerline extraction. (a) Pre-processed NIR greyscale map of veins. (b) Vein centerline extracted by the proposed algorithm in this paper. (c) The image after connecting and eliminating small connected regions using the contour connection algorithm (see the red circles).
Figure 12. Comparison of results of stereo matching algorithms. (a) Left image. (b) Right image. (c) Disparity map of AD-Census algorithm. (d) Disparity map of optimization AD-Census algorithm.
Figure 13. Vein image visualization process. (a) Original vein image collected by the camera. (b) Vein centerline extraction results. (c) Vein image segmentation results. (d) Disparity map.
21 pages, 2229 KiB  
Article
LH-YOLO: A Lightweight and High-Precision SAR Ship Detection Model Based on the Improved YOLOv8n
by Qi Cao, Hang Chen, Shang Wang, Yongqiang Wang, Haisheng Fu, Zhenjiao Chen and Feng Liang
Remote Sens. 2024, 16(22), 4340; https://doi.org/10.3390/rs16224340 - 20 Nov 2024
Abstract
Synthetic aperture radar (SAR) is widely applied to ship detection because it produces high-resolution images under diverse weather conditions and has strong penetration capabilities, making SAR images a valuable data source. However, detecting multi-scale ship targets in complex backgrounds leads to issues of false positives and missed detections, posing challenges for lightweight and high-precision algorithms. There is an urgent need to improve the accuracy of such algorithms and their deployability. This paper introduces LH-YOLO, a YOLOv8n-based, lightweight, and high-precision SAR ship detection model. We propose a lightweight backbone network, StarNet-nano, and employ element-wise multiplication to construct a lightweight feature extraction module, LFE-C2f, for the neck of LH-YOLO. Additionally, a reused and shared convolutional detection (RSCD) head is designed using a weight-sharing mechanism. These enhancements significantly reduce model size and computational demands while maintaining high precision. LH-YOLO features only 1.862 M parameters, representing a 38.1% reduction compared to YOLOv8n. It exhibits a 23.8% reduction in computational load while achieving a mAP50 of 96.6% on the HRSID dataset, which is 1.4% higher than YOLOv8n. Furthermore, it demonstrates strong generalization on the SAR-Ship-Dataset with a mAP50 of 93.8%, surpassing YOLOv8n by 0.7%. LH-YOLO is well-suited for environments with limited resources, such as embedded systems and edge computing platforms. Full article
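StarNet-style blocks fuse two branches by element-wise multiplication (the "star operation") rather than by addition or concatenation, which is the mechanism the abstract says LFE-C2f builds on. The toy module below illustrates the idea only; the layer widths, activation, and residual arrangement are arbitrary choices, not taken from StarNet-nano or LH-YOLO.

```python
import torch
import torch.nn as nn

class StarBlock(nn.Module):
    """Sketch of a star-operation block: a depth-wise conv, two point-wise branches
    fused by element-wise multiplication, a point-wise projection, and a residual."""
    def __init__(self, c: int, hidden: int | None = None):
        super().__init__()
        hidden = hidden or 2 * c
        self.dw = nn.Conv2d(c, c, 3, padding=1, groups=c)   # depth-wise spatial mixing
        self.f1 = nn.Conv2d(c, hidden, 1)
        self.f2 = nn.Conv2d(c, hidden, 1)
        self.g = nn.Conv2d(hidden, c, 1)
        self.act = nn.ReLU6()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.dw(x)
        y = self.act(self.f1(y)) * self.f2(y)   # the "star": element-wise multiplication
        return x + self.g(y)                    # residual connection

y = StarBlock(64)(torch.randn(1, 64, 32, 32))   # shape-preserving block
```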
Figures:
Figure 1. The overall structure of YOLOv8. YOLOv8 originates from the open-source code made available by Ultralytics. “×2” means there are two columns of ConvModule.
Figure 2. The structure of LH-YOLO.
Figure 3. (a) The fundamental module for the StarNet-nano network. (b) Detailed description of the star operation.
Figure 4. (a) The C2f module in the neck of YOLOv8n. (b) The proposed LFE-C2f module.
Figure 5. (a) The framework of the decoupled head of YOLOv8n. (b) The framework of the RSCD head.
Figure 6. Detection results comparison using the HRSID dataset. Green boxes represent ships that have been correctly detected, while blue boxes indicate incorrectly detected ships. Additionally, red boxes denote ships that were not detected at all.
Figure 7. Detection results comparison using the SAR-Ship-Dataset.
Figure 8. Visual comparison of detection results between LH-YOLO and four detection models using the HRSID and SAR-Ship-Dataset: (a) Ground Truth, (b) YOLOv3-Tiny, (c) YOLOv5, (d) YOLOv10n, (e) Proposed LH-YOLO.
13 pages, 46604 KiB  
Article
Human Activity Recognition Based on Point Clouds from Millimeter-Wave Radar
by Seungchan Lim, Chaewoon Park, Seongjoo Lee and Yunho Jung
Appl. Sci. 2024, 14(22), 10764; https://doi.org/10.3390/app142210764 - 20 Nov 2024
Abstract
Human activity recognition (HAR) technology is closely tied to human safety and convenience, so it must infer human activity accurately. It must also consume little power while continuously detecting activity and be inexpensive to operate. For this purpose, a low-power and lightweight design of the HAR system is essential. In this paper, we propose a low-power and lightweight HAR system using point-cloud data collected by radar. The proposed HAR system uses a pillar feature encoder that converts 3D point-cloud data into a 2D image and a classification network based on depth-wise separable convolution for lightweighting. The proposed classification network achieved an accuracy of 95.54%, with 25.77 M multiply–accumulate operations and 22.28 K network parameters implemented in a 32 bit floating-point format. This network achieved 94.79% accuracy with 4 bit quantization, which reduced memory usage to 12.5% compared to existing 32 bit format networks. In addition, we implemented a lightweight HAR system optimized for low-power design on a heterogeneous computing platform, a Zynq UltraScale+ ZCU104 device, through hardware–software implementation. Performing one frame of HAR on the device took 2.43 ms, and the system consumed 3.479 W of power when running. Full article
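Depth-wise separable convolution, the main lightweighting tool named above, replaces a full k×k convolution with a per-channel k×k convolution followed by a 1×1 point-wise convolution. The sketch below and the parameter count in its comment illustrate the saving; the channel sizes and the pillar-image shape are arbitrary, not the paper's network.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Per-channel 3x3 depth-wise conv followed by a 1x1 point-wise conv."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A standard 3x3 conv from 32 to 64 channels has 32*64*9 = 18,432 weights; the separable
# version uses 32*9 + 32*64 = 2,336, the kind of saving a lightweight HAR network relies on.
x = torch.randn(1, 32, 16, 16)                 # e.g., a pillar-encoded pseudo-image (shape assumed)
y = DepthwiseSeparableConv(32, 64)(x)
```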
Figures:
Figure 1. Data collection setup.
Figure 2. Configuration of dataset classes and their corresponding point clouds: (a) Stretching; (b) Standing; (c) Taking medicine; (d) Squatting; (e) Sitting chair; (f) Reading news; (g) Sitting floor; (h) Picking; (i) Crawl; (j) Lying wave hands; (k) Lying.
Figure 3. Overview of the proposed HAR system.
Figure 4. Proposed classification network.
Figure 5. Training and test loss curve and accuracy curve: (a) Training and test loss curve; (b) Training and test accuracy curve.
Figure 6. Confusion matrix.
Figure 7. Environment used for FPGA implementation and verification.