

Search Results (830)

Search Parameters:
Keywords = automatic acquisition

19 pages, 10185 KiB  
Article
Research on Shallow Water Depth Remote Sensing Based on the Improvement of the Newton–Raphson Optimizer
by Yanran Li, Bei Liu, Xia Chai, Fengcheng Guo, Yongze Li and Dongyang Fu
Water 2025, 17(4), 552; https://doi.org/10.3390/w17040552 - 14 Feb 2025
Viewed by 268
Abstract
The precise acquisition of water depth data in nearshore shallow waters bears considerable strategic significance for marine environmental monitoring, resource stewardship, navigational infrastructure development, and military security. Conventional bathymetric survey methodologies are constrained by their spatial and temporal limitations, thus failing to satisfy the requirements of large-scale, real-time surveillance. While satellite remote sensing technologies present a novel approach to water depth inversion in shallow waters, attaining high-precision inversion in nearshore areas characterized by elevated levels of suspended sediments and diminished transparency remains a formidable challenge. To tackle this issue, this study introduces an enhanced XGBoost model grounded in the Newton–Raphson optimizer (NRBO–XGBoost) and successfully applies it to water depth inversion investigations in the nearshore shallow waters of the Beibu Gulf. The research amalgamates Sentinel-2B multispectral imagery, nautical chart data, and in situ water depth measurements. By integrating the Newton–Raphson optimizer with the XGBoost framework, the study realizes the automatic configuration of model training parameters, markedly elevating inversion accuracy. The findings reveal that the NRBO–XGBoost model attains a coefficient of determination (R2) of 0.85 when compared to nautical chart water depth data, alongside a scatter index (SI) of 21%, substantially surpassing conventional models. Additional validation analyses indicate that the model achieves a coefficient of determination (R2) of 0.86 with field-measured data, a mean absolute error (MAE) of 1.60 m, a root mean square error (RMSE) of 2.13 m, and a scatter index (SI) of 13%. Moreover, the model exhibits exceptional performance in extended applications within the waters of Zhanjiang Port (R2 = 0.90), affirming its dependability and practicality in intricate nearshore water environments. This study not only provides a fresh solution for remotely sensing water depth in complex nearshore water settings but also imparts valuable technical insights into the associated underwater surveys and marine resource exploitation. Full article
Figures:

Figure 1: Technical roadmap for remote sensing bathymetry inversion.
Figure 2: Distribution map of the nearshore study area in the Beibu Gulf, charted stations (750), and measured stations (30).
Figure 3: Analysis of consistency between measured water depth and charted water depth.
Figure 4: Correlation coefficients between water depth and various band combinations.
Figure 5: Fitted graphs of water depth inversion values and sample values for different models: (a) NRBO–XGBoost model, (b) BP neural network model, (c) support vector regression model, (d) multi-band linear regression model, (e) Stumpf logarithmic ratio model.
Figure 6: Water depth inversion map of the nearshore study area in Beibu Gulf.
Figure 7: Fitted graph of water depth inversion values and measured values using the segmented NRBO–XGBoost model.
Figure 8: Map of the distribution of research areas and chart stations (600) and actual measurement stations (43) in Zhanjiang Port.
Figure 9: The correlation coefficient between the water depth of Zhanjiang Port and different wave band combinations.
Figure 10: Graph of the fitting results of the inverted data with sample data and measured data: (a) fitting graph of the inverted water depth values using the NRBO–XGBoost model with the sample values; (b) fitting graph of the inverted water depth values using the NRBO–XGBoost model with the measured values.
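The NRBO–XGBoost entry above couples XGBoost with a Newton–Raphson-based optimizer for automatic configuration of training parameters. The authors' implementation is not reproduced here; as a minimal sketch of the Newton–Raphson update rule at the core of such optimizers (the function name, tolerance, and iteration cap below are illustrative assumptions, not the paper's code):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Classic Newton-Raphson root finding: x_{n+1} = x_n - f(x_n)/df(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:  # converged: last update below tolerance
            break
    return x

# Example: solve x^2 - 2 = 0, i.e. find sqrt(2), starting from x0 = 1
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(round(root, 6))  # 1.414214
```

The optimizer in the paper applies this second-order update idea to hyperparameter search rather than to a scalar root, but the iteration structure is the same.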
9 pages, 965 KiB  
Communication
STATom@ic: R Package for Automated Statistical Analysis of Omic Datasets
by Rui S. Treves, Tyler C. Gripshover and Josiah E. Hardesty
Stats 2025, 8(1), 18; https://doi.org/10.3390/stats8010018 - 11 Feb 2025
Viewed by 337
Abstract
Background: The evolution of “omic” technologies, which measure all biological molecules of a specific type (e.g., genomics), has enabled rapid and cost-effective data acquisition, depending on the technique and sample size. This, however, generates new hurdles that need to be addressed, including selecting the appropriate statistical test based on study design in a high-throughput manner. Methods: An automated statistical analysis pipeline for omic datasets that we coined STATom@ic (pronounced stat-o-matic) was developed in the R programming language. Results: We developed an R package that enables statisticians, bioinformaticians, and scientists to perform assumption tests (e.g., normality and variance homogeneity) before selecting appropriate statistical tests. This analysis package can handle two-group and multiple-group comparisons. In addition, this R package can be used for many data formats, including normalized counts (RNA-Seq) and spectral abundance (proteomics and metabolomics). STATom@ic has high precision but lower recall compared to DESeq2. Conclusions: The STATom@ic R package is a user-friendly stand-alone or add-on to current bioinformatic workflows that automatically performs appropriate statistical analysis based on the characteristics of the data. Full article
(This article belongs to the Section Biostatistics)
Figures:

Figure 1: STATom@ic decision tree for two-group statistical comparisons. In this decision tree, data from two groups were processed in this pipeline for assumption testing prior to the final statistical tests, including the Wilcoxon Mann–Whitney test (non-normal data), Welch's test (normal data with non-equal variance), or unpaired Student's t-test (normal data with equal variance). GLM: generalized linear model.
Figure 2: STATom@ic multi-group statistical comparisons. In this decision tree, multiple-group data were subjected to assumption testing prior to the final statistical tests, including the Kruskal–Wallis test with a Dunn multiple comparison post hoc test (non-normal data), Welch's test with a post hoc Dunnett-T3 multiple comparison test (normal data with non-equal variance), or a one-/two-way ANOVA with a post hoc Tukey multiple comparison test (normal data with equal variance). GLM: generalized linear model.
Figure 3: Example of experimental designs for one- or two-way ANOVAs. Three or more groups of data that do not have overlapping variables are more than likely going to be used for a one-way ANOVA as long as the data are normal with equal variances. Four groups of data with overlapping variables (e.g., sex and treatment) or a 2 × 2 design will be used for a two-way ANOVA as long as the data are normal with equal variances.
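The two-group decision tree described for STATom@ic routes non-normal data to the Wilcoxon Mann–Whitney test, normal data with unequal variance to Welch's test, and normal data with equal variance to an unpaired Student's t-test. A minimal sketch of that selection logic (the function name, the alpha threshold, and the convention of passing in pre-computed assumption-test p-values are assumptions; the actual R package runs the assumption tests itself):

```python
def select_two_group_test(normality_p, variance_p, alpha=0.05):
    """Mirror the two-group decision tree: check the normality
    assumption first, then homogeneity of variance, and return the
    name of the appropriate final test."""
    if normality_p < alpha:      # normality rejected -> non-parametric
        return "Wilcoxon Mann-Whitney"
    if variance_p < alpha:       # equal-variance assumption rejected
        return "Welch"
    return "Student t (unpaired)"

# Normal data (normality p = 0.30) but unequal variances (p = 0.01):
print(select_two_group_test(0.30, 0.01))  # Welch
```

In practice the assumption p-values would come from tests such as Shapiro–Wilk and Levene, applied per feature across the omic dataset.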
14 pages, 2171 KiB  
Article
Individual Cow Recognition Based on Ultra-Wideband and Computer Vision
by Aruna Zhao, Huijuan Wu, Daoerji Fan and Kuo Li
Animals 2025, 15(3), 456; https://doi.org/10.3390/ani15030456 - 6 Feb 2025
Viewed by 353
Abstract
This study’s primary goal is to use computer vision and ultra-wideband (UWB) localisation techniques to automatically mark numerals in cow photos. In order to accomplish this, we created a UWB-based cow localisation system that involves installing tags on cow heads and placing several base stations throughout the farm. The system can determine the distance between each base station and the cow using wireless communication technology, which allows it to determine the cow’s current location coordinates. The study employed a neural network to train and optimise the ranging data gathered in the 1–20 m range in order to solve the issue of significant ranging errors in conventional UWB positioning systems. The experimental data indicates that the UWB positioning system’s unoptimized range error has an absolute mean of 0.18 m and a standard deviation of 0.047. However, when using a neural network-trained model, the ranging error is much decreased, with an absolute mean of 0.038 m and a standard deviation of 0.0079. The average root mean square error (RMSE) of the positioning coordinates is decreased to 0.043 m following the positioning computation utilising the optimised range data, greatly increasing the positioning accuracy. This study used the conventional camera shooting method for image acquisition. Following image acquisition, the system extracts the cow’s coordinate information from the image using a perspective transformation method. This allows for accurate cow identification and number labelling when compared to the location coordinates. According to the trial findings, this plan, which integrates computer vision and UWB positioning technologies, achieves high-precision cow labelling and placement in the optimised system and greatly raises the degree of automation and precise management in the farming process. This technology has many potential applications, particularly in the administration and surveillance of big dairy farms, and it offers a strong technical basis for precision farming. Full article
(This article belongs to the Section Animal System and Management)
Figures:

Figure 1: Deployment of experiments.
Figure 2: Block diagram of UWB positioning system hardware.
Figure 3: Positioning algorithms.
Figure 4: BP neural network structure: x is input and y is output.
Figure 5: Examples of perspective transformations: (a) original image; (b) transformed image.
Figure 6: Results for the training and test sets: (a) the RMSE of the training set; (b) the RMSE of the test set.
Figure 7: Process of individual identification: (a) schematic of selected areas; (b) results of YOLO detection; (c) coordinate conversion results; (d) target identification results.
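The UWB system above computes the cow's coordinates from the measured distances between the tag and the base stations. A minimal 2D sketch of that step, using the standard linearized trilateration solution with three anchors (the function name and the square anchor layout are illustrative assumptions, not the paper's implementation, which also applies neural-network range correction first):

```python
def trilaterate_2d(anchors, dists):
    """Locate a tag from ranges to three fixed base stations.
    Subtracting anchor 0's circle equation from the other two turns
    the problem into two linear equations a*x + b*y = c, solved here
    by Cramer's rule."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = dists
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = d0**2 - d1**2 + x1**2 + y1**2 - x0**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = d0**2 - d2**2 + x2**2 + y2**2 - x0**2 - y0**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Tag at (3, 4) with stations at three corners of a 10 m square:
x, y = trilaterate_2d([(0, 0), (10, 0), (0, 10)],
                      [5.0, 65 ** 0.5, 45 ** 0.5])
print(round(x, 3), round(y, 3))  # 3.0 4.0
```

With more than three base stations the same linear system becomes overdetermined and is solved by least squares, which averages out ranging noise.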
17 pages, 4256 KiB  
Article
Diagnosis of Wind Turbine Yaw System Based on Self-Attention–Long Short-Term Memory (LSTM)
by Canglin Song, Niaona Zhang, Jingting Shao, Yanbo Wang, Xinyu Liu and Changhong Jiang
Electronics 2025, 14(3), 617; https://doi.org/10.3390/electronics14030617 - 5 Feb 2025
Viewed by 358
Abstract
Addressing the challenges and significant risks associated with diagnosing faults in wind turbine yaw systems, along with the typically low diagnostic accuracy, this study introduces a Long Short-Term Memory (LSTM) neural network augmented by a self-attention mechanism (SAM) as a novel fault diagnosis technique for wind turbine yaw systems. The method integrates the automatic weighting capability of the self-attention mechanism on input features with the advantage of LSTM in processing time series data, thereby effectively capturing key information and long-term dependencies in the operating data of the yawing system. This combination enhances the accuracy of fault feature extraction to more accurately identify various types of fault modes within the yawing system. Six types of feature parameters are extracted from the raw data collected by the SCADA (Supervisory Control And Data Acquisition) system of the wind turbine and are utilized as inputs for the diagnostic model. These parameters are then fed into the self-attention–LSTM neural network model to diagnose the health status of the yaw system, including yaw bearing damage, yaw gearbox failure, yaw motor failure, and sensor failure. The experimental results demonstrate that the accuracy of LSTM fault diagnosis, when enhanced with the self-attention mechanism, can reach 98.67% with an appropriate amount of training samples, verifying its significant advantages in terms of accuracy and stability of fault diagnosis. The proposed fault diagnosis method exhibits a better model fitting effect, strong generalization ability, and high accuracy compared to other methods, providing robust support for the reliable operation and maintenance of wind turbines. Full article
Figures:

Figure 1: Yaw system structure diagram.
Figure 2: Yaw system failure classification schematic.
Figure 3: Distribution of data visualized by Z-score method.
Figure 4: Visualization of the latent space using t-SNE and PCA: (a) t-SNE data visualization; (b) PCA dimensionality reduction visualization.
Figure 5: LSTM neural network structure diagram.
Figure 6: Self-attention–LSTM network structure diagram.
Figure 7: Troubleshooting flowchart.
Figure 8: Training accuracy curve and loss function curve: (a) Option 1; (b) Option 2; (c) Option 3; (d) Option 4.
Figure 9: Model confusion matrix.
Figure 10: Simulation results of the classification of different neural networks: (a) self-attention–LSTM; (b) LSTM; (c) GRU; (d) CNN.
Figure 11: Performance comparison chart.
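The diagnosis model above uses a self-attention mechanism to re-weight the SCADA-derived input features before the LSTM. A minimal sketch of scaled dot-product self-attention without learned projections (Q = K = V; a full SAM layer adds trainable projection matrices, so this is a simplification, not the paper's network):

```python
import math

def self_attention(seq):
    """Scaled dot-product self-attention over a sequence of feature
    vectors: each output step is a softmax-weighted mixture of all
    time steps, so informative steps receive larger weights."""
    d = len(seq[0])
    out = []
    for q in seq:
        # attention scores of this query against every key
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in seq]
        m = max(scores)                      # subtract max for stability
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [wi / z for wi in w]             # softmax weights
        out.append([sum(wi * v[j] for wi, v in zip(w, seq))
                    for j in range(d)])
    return out

# Three time steps of two toy SCADA-derived features:
mixed = self_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(len(mixed), len(mixed[0]))  # 3 2
```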
22 pages, 4837 KiB  
Article
Development of Deep Intelligence for Automatic River Detection (RivDet)
by Sejeong Lee, Yejin Kong and Taesam Lee
Remote Sens. 2025, 17(2), 346; https://doi.org/10.3390/rs17020346 - 20 Jan 2025
Viewed by 617
Abstract
Recently, the impact of climate change has led to an increase in the scale and frequency of extreme rainfall and flash floods. Due to this, the occurrence of floods and various river disasters has increased, necessitating the acquisition of technologies to prevent river disasters. Owing to the nature of rivers, areas with poor accessibility exist, and obtaining information over a wide area can be time-consuming. Artificial intelligence technology, which has the potential to overcome these limits, has not been broadly adopted for river detection. Therefore, the current study conducted a performance analysis of artificial intelligence for automatic river path setting via the YOLOv8 model, which is widely applied in various fields. Through the augmentation feature in the Roboflow platform, many river images were employed to train and analyze the river spatial information of each applied image. The overall results revealed that the models with augmentation performed better than the basic models without augmentation. In particular, the flip and crop and shear model showed the highest performance with a score of 0.058. When applied to rivers, the Wosucheon stream showed the highest average confidence across all models, with a value of 0.842. Additionally, the max confidence for each river was extracted, and it was found that models including crop exhibited higher reliability. The results show that the augmentation models better generalize new data and can improve performance in real-world environments. Additionally, the RivDet artificial intelligence model for automatic river path configuration developed in the current study is expected to solve various problems, such as automatic flow rate estimation for river disaster prevention, setting early flood warnings, and calculating the range of flood inundation damage. Full article
Figures:

Figure 1: Map of the study area with each river name and its geological information, including latitude and longitude.
Figure 2: Process of model development via the YOLOv8 model on the Roboflow platform for river detection.
Figure 3: The operation process of the YOLOv8 model. Note that the process has been divided into three sections: backbone, neck, and head.
Figure 4: Example of inserting the 'River' class in Roboflow.
Figure 5: Example of augmentation for a single case. Note that the left plot represents the basic model, whereas the right plot represents the model with the corresponding augmentation applied: (a) flip, (b) 90° rotate, (c) crop, and (d) shear.
Figure 6: mAP calculation during the training procedure through each epoch for the basic model.
Figure 7: Example of the augmentation for the cases with two augmentations: for each panel, the left plot represents the original photo, whereas the right plot represents the model with the corresponding augmentation applied: (a) flip and 90° rotate, (b) flip and crop, (c) flip and shear, (d) 90° rotate and crop, (e) 90° rotate and shear, and (f) crop and shear.
Figure 8: Example of the augmentation for the cases with three (a–d) and four (e) augmentations. Note that in each panel, the left plot represents the original photo, whereas the right plot represents the model with the corresponding augmentation.
Figure 9: Heatmap of each augmentation effect on the model performance metrics: the corresponding value between each augmentation model displays the mAP, precision, and recall values for the models with three times augmentations of flip, 90° rotate, crop, or shear.
Figure 10: Heatmap of each augmentation effect on the model performance metrics. The corresponding value between each augmentation model displays the mAP, precision, and recall values for the models with five times the amount in data and augmentations of flip, 90° rotate, crop, and shear.
Figure 11: Confidence score for 11 rivers according to their augmentation models. The x-axis represents the river and the y-axis represents the model. The types of augmentation of the model are shown in Table 2 and Table 4.
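The RivDet study trains YOLOv8 models with flip, 90° rotate, crop, and shear augmentations applied on the Roboflow platform. As a toy illustration of what two of these geometric operations do to a raster (pure Python on a nested list; Roboflow applies them to full images and adjusts annotations accordingly):

```python
def flip_horizontal(img):
    """Mirror each row of a 2D raster (left-right flip)."""
    return [row[::-1] for row in img]

def rotate_90(img):
    """Rotate a 2D raster 90 degrees clockwise: reverse the row
    order, then transpose."""
    return [list(row) for row in zip(*img[::-1])]

grid = [[1, 2],
        [3, 4]]
print(flip_horizontal(grid))  # [[2, 1], [4, 3]]
print(rotate_90(grid))        # [[3, 1], [4, 2]]
```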
18 pages, 5011 KiB  
Article
Improving Industrial Quality Control: A Transfer Learning Approach to Surface Defect Detection
by Ângela Semitela, Miguel Pereira, António Completo, Nuno Lau and José P. Santos
Sensors 2025, 25(2), 527; https://doi.org/10.3390/s25020527 - 17 Jan 2025
Viewed by 672
Abstract
To automate the quality control of painted surfaces of heating devices, an automatic defect detection and classification system was developed by combining deflectometry and bright light-based illumination on the image acquisition, deep learning models for the classification of non-defective (OK) and defective (NOK) surfaces that fused dual-modal information at the decision level, and an online network for information dispatching and visualization. Three decision-making algorithms were tested for implementation: a new model built and trained from scratch and transfer learning of pre-trained networks (ResNet-50 and Inception V3). The results revealed that the two illumination modes employed widened the type of defects that could be identified with this system, while maintaining its lower computational complexity by performing multi-modal fusion at the decision level. Furthermore, the pre-trained networks achieved higher accuracies on defect classification compared to the self-built network, with ResNet-50 displaying higher accuracy. The inspection system consistently obtained fast and accurate surface classifications because it imposed OK classification on models trained with images from both illumination modes. The obtained surface information was then successfully sent to a server to be forwarded to a graphical user interface for visualization. The developed system showed considerable robustness, demonstrating its potential as an efficient tool for industrial quality control. Full article
(This article belongs to the Section Industrial Sensors)
Figures:

Figure 1: Experimental setup (a) and the respective schematic representation highlighting the two illumination modes used and the acquisition parameters (distance and angles) (b), and the camera employed (c). Distances are depicted in centimeters.
Figure 2: Data augmentation (a) and image processing (b) threads performed on the captured images.
Figure 3: CNN architectures employed: self-built (a), ResNet-50 (b), and Inception V3 (c).
Figure 4: Flowchart detailing the steps of the automatic defect detection algorithm.
Figure 5: Telegram to be sent to MES.
Figure 6: Image from the painted surfaces before (a) and after processing for scratches (b) and dents (c). Image from the painted surfaces with spots with lack of paint (d). The defects are highlighted by the red circles.
Figure 7: Accuracy (upper) and losses (lower) of the self-built CNN (a), ResNet-50 (b), and Inception V3 (c) during training and testing for the scratch defect category, employing sinusoidal patterns with 20 stripes.
Figure 8: Accuracy (upper) and losses (lower) of ResNet-50 during training and testing using sinusoidal patterns with 20 (a) and 40 (b) stripes for the dent defect category.
Figure 9: Accuracy and loss of the pre-trained ResNet-50 model during training and testing for the lack-of-paint defect category.
Figure 10: Telegram reception by the implemented MES (a); GUI for a surface OK and NOK due to lack of paint (b).
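The abstract states that the system fuses the two illumination modes at the decision level and accepts a surface as OK only when the models trained on both modes agree. A minimal sketch of such a rule (the exact fusion logic is an assumption consistent with the abstract, not the authors' published code):

```python
def fuse_decisions(deflectometry_label, bright_light_label):
    """Decision-level fusion of the two illumination modes: a surface
    is classified OK only when BOTH per-mode classifiers say OK; any
    NOK vote flags the part as defective."""
    if deflectometry_label == "OK" and bright_light_label == "OK":
        return "OK"
    return "NOK"

# A dent visible only under deflectometry still rejects the part:
print(fuse_decisions("OK", "NOK"))  # NOK
```

Fusing at the decision level, rather than concatenating image features, is what keeps the pipeline's computational cost low: each classifier stays single-modal.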
27 pages, 4902 KiB  
Article
Digitalization of the Workflow for Drone-Assisted Inspection and Automated Assessment of Industrial Buildings for Effective Maintenance Management
by Jorge Torres-Barriuso, Natalia Lasarte, Ignacio Piñero, Eduardo Roji and Peru Elguezabal
Buildings 2025, 15(2), 242; https://doi.org/10.3390/buildings15020242 - 15 Jan 2025
Viewed by 666
Abstract
Industrial buildings are a key element in the industrial fabric, and their maintenance is essential to ensure their proper functioning and avoid disruptions and costly economic losses. Continuous maintenance based on an accurate diagnosis makes it possible to meet the challenges of aging infrastructures, which demands a reliable data-based assessment for maintenance management implementing corrective and preventive actions, according to the damage criticality. This paper investigates an innovative digitalized process for the inspection and diagnosis of industrial buildings, which leads to categorizing and prioritizing maintenance actions in an objective and cost-effective way from the inspection data. The process integrates some technical developments carried out in this work, aimed to automate the workflow: the drone-based inspection, the building condition assessment from the definition of a standardized construction pathology library, and a visual analysis of pathology evolution based on photogrammetry. The use of drones for digitalized inspection involves some challenges related to the positioning of the drone for damage localization, which has been herein overcome by developing a geo-annotation system for image acquisition. This system has also enabled the capture of geo-located images intended to generate 3D photogrammetric models for quantifying the pathological process evolution. Moreover, the assessment procedure outlined through the multi-criteria decision-making methodology MIVES establishes a single criterion to automatically weight the relative importance of the damage defined in the library. As a result, this procedure yields the so-called Intervention Urgency Index (IUI), which allows prioritizing the maintenance actions associated with the damage while also considering economic criteria. In such a way, the overall process aims to increase reliability and consistency in the results of inspection and diagnosis needed for the effective maintenance management of industrial buildings. Full article
(This article belongs to the Special Issue Selected Papers from the REHABEND 2024 Congress)
Figures:

Figure 1: Inspection and pathology assessment methodology.
Figure 2: Construction pathology library data structure.
Figure 3: Examples of value functions: (a) linear function representing proportional growth; (b) concave function representing decreasing growth; (c) convex function representing increasing growth; (d) smooth S-shaped function representing gradual transitions; (e) strong S-shaped function reflecting abrupt transitions.
Figure 4: (a) Position of the different photos taken from a drone for photogrammetry application; and (b) photogrammetric 3D model.
Figure 5: Images of the same pathology evolved over time.
Figure 6: Representation of the geometrical variation between both 3D models, thus being able to corroborate and quantify the pathology evolution.
Figure 7: Value functions resulting from the weighting of each indicator's alternatives: (a) Severity; (b) Evolution; (c) Impact on others; (d) Extension.
Figure 8: IUI classification.
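MIVES aggregates per-indicator scores through value functions and weights into a single index; here that index is the Intervention Urgency Index used to prioritize maintenance actions. A minimal sketch of the aggregation (the linear value function, the 0–10 indicator scale, and the weights are illustrative assumptions; the paper derives its weights from the pathology library and also uses the S-shaped functions of Figure 3):

```python
def value_linear(x, xmin=0.0, xmax=10.0):
    """Linear value function mapping a raw indicator score to [0, 1]."""
    return max(0.0, min(1.0, (x - xmin) / (xmax - xmin)))

def intervention_urgency_index(indicators, weights):
    """MIVES-style aggregation: IUI = sum_i w_i * v(x_i), with the
    weights summing to 1 so the index stays in [0, 1]."""
    return sum(w * value_linear(x) for x, w in zip(indicators, weights))

# Severity, Evolution, Impact on others, Extension (toy 0-10 scores
# and hypothetical weights):
iui = intervention_urgency_index([8, 5, 3, 6], [0.4, 0.25, 0.15, 0.2])
print(round(iui, 3))  # 0.61
```

A damage item whose IUI crosses a chosen threshold would then be routed to corrective rather than preventive maintenance, which is the prioritization role the index plays in the workflow.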
15 pages, 2807 KiB  
Article
Automatic Characterization of Prostate Suspect Lesions on T2-Weighted Image Acquisitions Using Texture Features and Machine-Learning Methods: A Pilot Study
by Teodora Telecan, Cosmin Caraiani, Bianca Boca, Roxana Sipos-Lascu, Laura Diosan, Zoltan Balint, Raluca Maria Hendea, Iulia Andras, Nicolae Crisan and Monica Lupsor-Platon
Diagnostics 2025, 15(1), 106; https://doi.org/10.3390/diagnostics15010106 - 4 Jan 2025
Viewed by 787
Abstract
Background: Prostate cancer (PCa) is the most frequent neoplasia in the male population. According to the International Society of Urological Pathology (ISUP), PCa can be divided into two major groups, based on their prognosis and treatment options. Multiparametric magnetic resonance imaging (mpMRI) holds a central role in PCa assessment; however, it does not have a one-to-one correspondence with the histopathological grading of tumors. Recently, artificial intelligence (AI)-based algorithms and textural analysis, a subdivision of radiomics, have shown potential in bridging this gap. Objectives: We aimed to develop a machine-learning algorithm that predicts the ISUP grade of manually contoured prostate nodules on T2-weighted images and classifies them into clinically significant and indolent ones. Materials and Methods: We included 55 patients with 76 lesions. All patients were examined on the same 1.5 Tesla mpMRI scanner. Each nodule was manually segmented using the open-source 3D Slicer platform, and textural features were extracted using the PyRadiomics (version 3.0.1) library. The software was based on machine-learning classifiers. The accuracy was calculated based on precision, recall, and F1 scores. Results: The median age of the study group was 64 years (IQR 61–68), and the mean PSA value was 11.14 ng/mL. A total of 85.52% of the nodules were graded PI-RADS 4 or higher. Overall, the algorithm classified indolent and clinically significant PCas with an accuracy of 87.2%. Further, when trained to differentiate each ISUP group, the accuracy was 80.3%. Conclusions: We developed an AI-based decision-support system that accurately differentiates between the two PCa prognostic groups using only T2 MRI acquisitions by employing radiomics with a robust machine-learning architecture. Full article
Figures:

Figure 1: Dataset sample images, representing manually segmented T2WI images.
Figure 2: Graphical description of the study protocol.
Figure 3: Graphical representation of the classification algorithm.
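The pipeline above extracts textural features from the segmented nodules with PyRadiomics. As a hedged illustration of one classic feature in that family, a grey-level co-occurrence matrix (GLCM) "contrast" computation on a toy patch (this is not PyRadiomics' implementation, which handles multiple distances, angles, discretization, and normalization far more generally):

```python
def glcm_contrast(img, levels):
    """Build the grey-level co-occurrence matrix for a horizontal
    (0-degree, distance-1) offset, then compute the GLCM 'contrast'
    feature: sum over (i, j) of P(i, j) * (i - j)^2."""
    glcm = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):   # horizontal neighbour pairs
            glcm[a][b] += 1
    total = sum(sum(r) for r in glcm)
    return sum(glcm[i][j] / total * (i - j) ** 2
               for i in range(levels) for j in range(levels))

# 4-grey-level toy patch standing in for a lesion ROI:
patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [2, 2, 3, 3],
         [2, 2, 3, 3]]
print(round(glcm_contrast(patch, 4), 3))  # 0.333
```

High contrast indicates frequent large grey-level jumps between neighbouring voxels, one of the texture cues such classifiers use to separate ISUP grades.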
17 pages, 2944 KiB  
Article
Enhanced CATBraTS for Brain Tumour Semantic Segmentation
by Rim El Badaoui, Ester Bonmati Coll, Alexandra Psarrou, Hykoush A. Asaturyan and Barbara Villarini
J. Imaging 2025, 11(1), 8; https://doi.org/10.3390/jimaging11010008 - 3 Jan 2025
Viewed by 551
Abstract
The early and precise identification of a brain tumour is imperative for enhancing a patient’s life expectancy; this can be facilitated by quick and efficient tumour segmentation in medical imaging. Automatic brain tumour segmentation tools in computer vision have integrated powerful deep learning architectures to enable accurate tumour boundary delineation. Our study aims to demonstrate improved segmentation accuracy and higher statistical stability, using datasets obtained from diverse imaging acquisition parameters. This paper introduces a novel, fully automated model called Enhanced Channel Attention Transformer (E-CATBraTS) for Brain Tumour Semantic Segmentation; this model builds upon 3D CATBraTS, a vision transformer employed in magnetic resonance imaging (MRI) brain tumour segmentation tasks. E-CATBraTS integrates convolutional neural networks and Swin Transformer, incorporating channel shuffling and attention mechanisms to effectively segment brain tumours in multi-modal MRI. The model was evaluated on four datasets containing 3137 brain MRI scans. Through the adoption of E-CATBraTS, the accuracy of the results improved significantly on two datasets, outperforming the current state-of-the-art models by a mean DSC of 2.6% while maintaining a high accuracy that is comparable to the top-performing models on the other datasets. The results demonstrate that E-CATBraTS achieves both high segmentation accuracy and elevated generalisation abilities, ensuring the model is robust to dataset variation. Full article
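The DSC figures quoted above are the standard Dice overlap between predicted and ground-truth masks; a minimal sketch on toy flattened binary masks (not real BraTS segmentations):

```python
# Hedged sketch: Dice similarity coefficient (DSC) between two binary
# segmentation masks, the metric used to compare E-CATBraTS with prior
# models. The masks are toy flattened arrays, not MRI data.

def dice(mask_a, mask_b):
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0  # both empty: perfect match

pred  = [1, 1, 0, 1, 0, 0]   # hypothetical predicted mask
truth = [1, 0, 0, 1, 1, 0]   # hypothetical ground-truth mask
score = dice(pred, truth)
```

In practice the same formula is applied voxel-wise to each tumour subregion (TC, WT, ET) and averaged over cases.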
(This article belongs to the Special Issue Advances in Medical Imaging and Machine Learning)
Figure 1: E-CATBraTS with channel shuffle module for shuffling embedded feature maps prior to reducing their size using a novel CAT encoding block. The blue background represents the encoder, while the yellow represents the decoder.
Figure 2: Swin Transformer with four stages: it takes, as an input, non-overlapping patches of magnetic resonance imaging (MRI) volumes.
Figure 3: Channel shuffle. Channels are divided into four subgroups. Yellow, red, green, and blue represent the channels of the four MRI acquisitions: T1, T1-weighted, T2, and T2-FLAIR.
Figure 4: A single CAT encoding block. The block takes X as an input and applies a 3D convolution. Next, normalisation is performed using a 3D batch normalisation function before progressing through a channel attention module and a LeakyReLU activation layer.
Figure 5: Brain tumour subregion segmentation in three randomly selected MRI cases from the test UCSF-PDGM dataset. The tumour subcategories of tumour core (TC), whole tumour (WT), and enhancing tumour are highlighted in yellow, blue, and red, respectively.
Figure 6: Segmented brain tumours in three randomly selected cases in the test UPENN-GBM dataset. Tumour core (TC) is marked in yellow, whole tumour (WT) in blue, and enhancing tumour (ET) in red.
Figure 7: Four cases randomly taken from various datasets with different image quality. For each case, the top row shows the original brain MRI slice; the second row shows the ground truth contoured in red; and the last row shows the prediction of the E-CATBraTS model coloured in green with the Dice similarity coefficient (DSC).
37 pages, 3785 KiB  
Review
Key Intelligent Pesticide Prescription Spraying Technologies for the Control of Pests, Diseases, and Weeds: A Review
by Kaiqiang Ye, Gang Hu, Zijie Tong, Youlin Xu and Jiaqiang Zheng
Agriculture 2025, 15(1), 81; https://doi.org/10.3390/agriculture15010081 - 1 Jan 2025
Viewed by 1607
Abstract
In modern agriculture, plant protection is the key to ensuring crop health and improving yields. Intelligent pesticide prescription spraying (IPPS) technologies monitor, diagnose, and make scientific decisions about pests, diseases, and weeds; formulate personalized and precision control plans; and prevent and control pests through the use of intelligent equipment. This study discusses key IPPS technologies from four perspectives: target information acquisition, information processing, pesticide prescription spraying, and implementation and control. In the target information acquisition section, target identification technologies based on images, remote sensing, acoustic waves, and electronic noses are introduced. In the information processing section, information processing methods such as information pre-processing, feature extraction, pest and disease identification, bioinformatics analysis, and time-series data are addressed. In the pesticide prescription spraying section, the impacts of pesticide selection, dose calculation, spraying time, and spraying method on the control effect, together with the formulation of prescription spraying for a given area, are explored. In the implementation and control section, vehicle automatic control technology, precision spraying technology, and droplet characteristic control technology and their applications are studied. In addition, this study discusses the future development prospects of IPPS technologies, including multifunctional target information acquisition systems, decision-support systems based on generative AI, and the development of precision intelligent sprayers. The advancement of these technologies will enhance agricultural productivity in a more efficient, environmentally sustainable manner. Full article
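As a toy illustration of the prescription idea described above (the severity grid, base rate, and treatment threshold are invented for the example, not taken from the review), a per-cell spray rate can be derived from a pest-severity map:

```python
# Hedged sketch of variable-rate prescription: map a pest-severity grid
# (values 0-1 per field cell) to a spray rate per cell. The severity
# values, base rate, and minimum-treatment threshold are hypothetical.

BASE_RATE_L_PER_HA = 2.0   # hypothetical full-dose application rate
THRESHOLD = 0.2            # hypothetical severity below which no spraying occurs

def prescription(severity_grid):
    """Return a per-cell spray-rate grid (L/ha) scaled by severity."""
    return [[0.0 if s < THRESHOLD else round(BASE_RATE_L_PER_HA * s, 2)
             for s in row]
            for row in severity_grid]

severity = [[0.1, 0.5],
            [0.9, 0.3]]
plan = prescription(severity)
```

A real IPPS system would derive the severity map from the identification stage (imaging, remote sensing) and feed the plan to the sprayer controller.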
(This article belongs to the Section Agricultural Technology)
Figure 1: Timeline of IPPS development.
Figure 2: Logical sequence diagram for IPPS technologies.
Figure 3: Key technologies in IPPS.
Figure 4: Schematic of remote sensing.
Figure 5: Multispectral remote sensing images.
Figure 6: Classification of LiDAR.
Figure 7: Relationship between plant defense mechanisms and IPPS.
Figure 8: Development timeline of intelligent spraying technologies.
Figure 9: Ultrasonic atomization nozzle.
19 pages, 2621 KiB  
Article
The Importance of Automatic Counters for Sustainable Management in Rural Areas: The Case of Hiking Trails in Historic Villages of Portugal
by Ana Luque and Luiz Alves
Land 2025, 14(1), 61; https://doi.org/10.3390/land14010061 - 31 Dec 2024
Viewed by 694
Abstract
The dynamics of territorial planning, the management of its tourism products, and the monitoring of demand flows and their impact on the territorial structure (social, economic and environmental) require tools that support the acquisition of reliable quantitative data, as far as possible in real time, that are easy to manage and allow immediate analysis. In the case of structures and equipment anchored in the nature tourism segment, in particular hiking trails, in addition to determining the demand indices in a network of hiking trails and understanding their territorial and temporal dynamics, the data collected through automatic counters is a crucial tool to support territorial management and evaluate the patterns and flows of tourist demand. Based on these assumptions, this research seeks to analyse demand data observed on eleven hiking trails in the Historic Villages of Portugal, collected through automatic monitoring systems (counters). In four years, between 2020 and 2023, the trails analysed generated a demand of almost 190,000 passages, which translates into an annual average of 47,500 passages in the tourism product “Historic Villages of Portugal” (more than 4800 passages for each trail), mostly in the spring and autumn months, mainly on weekends. Full article
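The monthly and weekday breakdowns reported above amount to grouping timestamped counter passages; a minimal sketch with invented timestamps (not the HVP counter data):

```python
# Hedged sketch: aggregating automatic-counter passages by month and by
# weekday, the kind of breakdown reported for the HVP trails.
# The timestamps are invented examples.
from collections import Counter
from datetime import datetime

passages = [
    "2021-04-03 10:15", "2021-04-04 11:40",   # a spring weekend
    "2021-10-10 09:05", "2021-11-02 16:30",
]
stamps = [datetime.strptime(p, "%Y-%m-%d %H:%M") for p in passages]
by_month = Counter(s.strftime("%B") for s in stamps)
by_weekday = Counter(s.strftime("%A") for s in stamps)
```

The same grouping by hour (`%H`) yields the intraday profile shown in the figures.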
Figure 1: Geographical framework of the Historic Villages of Portugal.
Figure 2: Hiking trail passages recorded by automatic counters, by HVP, between 2020 and 2023.
Figure 3: Hiking trail passages recorded by automatic counters, by HVP, per year.
Figure 4: Total passages on HVP hiking trails, recorded by automatic counters, per year.
Figure 5: Passages recorded on the set of hiking trails in the Historic Villages of Portugal, per month (accumulated between 2020 and 2023).
Figure 6: Passages recorded on Historic Villages of Portugal hiking trails, per month (accumulated between 2020 and 2023).
Figure 7: Passages recorded on the set of hiking trails in the Historic Villages, per weekday (accumulated between 2020 and 2023).
Figure 8: Passages recorded on Historic Villages of Portugal hiking trails, per weekday (accumulated between 2020 and 2023).
Figure 9: Passages recorded on the set of hiking trails in the Historic Villages, per hour (accumulated between 2020 and 2023).
17 pages, 3110 KiB  
Article
Hybrid Edge–Cloud Models for Bearing Failure Detection in a Fleet of Machines
by Sam Leroux and Pieter Simoens
Electronics 2024, 13(24), 5034; https://doi.org/10.3390/electronics13245034 - 21 Dec 2024
Viewed by 457
Abstract
Real-time condition monitoring of machinery is increasingly being adopted to minimize costs and enhance operational efficiency. By leveraging large-scale data acquisition and intelligent algorithms, failures can be detected and predicted, thereby reducing machine downtime. In this paper, we present a novel hybrid edge–cloud system for detecting rotational bearing failures using accelerometer data. We evaluate both supervised and unsupervised neural network approaches, highlighting their respective strengths and limitations. Supervised models demonstrate high accuracy but require labeled datasets representative of the failures of interest, which are challenging to acquire due to the rarity of anomalies. Conversely, unsupervised models rely on data from normal operational conditions, which is more readily available. However, these models classify all deviations from normalcy as anomalies, including those unrelated to failure, leading to costly false positives. To address these challenges, we propose a distributed system that integrates supervised and unsupervised learning. A compact unsupervised model is deployed on edge devices near the machines to compress sensor data, which are then transmitted to a centralized cloud-based system. Over time, these data are automatically labeled and used to train a supervised model, improving the accuracy of failure predictions. Our approach enables efficient, scalable failure detection across a fleet of machines while balancing the trade-offs between supervised and unsupervised learning. Full article
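Classical condition indicators such as peak acceleration, RMS, and kurtosis, used as baselines against the neural models in this work, can be computed per accelerometer window; a minimal sketch on a synthetic signal (not the paper's test-rig data):

```python
# Hedged sketch: simple condition-monitoring features over one
# accelerometer window (peak, RMS, kurtosis), of the kind used as
# baselines for bearing-failure detection. The signal is synthetic:
# a bearing defect typically adds impulsive spikes, raising all three.
import math

def window_features(x):
    n = len(x)
    mean = sum(x) / n
    peak = max(abs(v) for v in x)
    rms = math.sqrt(sum(v * v for v in x) / n)
    var = sum((v - mean) ** 2 for v in x) / n
    kurt = (sum((v - mean) ** 4 for v in x) / n) / (var ** 2) if var else 0.0
    return {"peak": peak, "rms": rms, "kurtosis": kurt}

healthy = [0.0, 0.1, -0.1, 0.05, -0.05, 0.0]
faulty  = [0.0, 0.1, -0.1, 2.0, -0.05, 0.0]   # impulsive spike from a defect
f_h = window_features(healthy)
f_f = window_features(faulty)
```

The spike in the faulty window raises the peak, RMS, and kurtosis relative to the healthy window, which is why these statistics work as coarse anomaly scores.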
Figure 1: The hybrid edge–cloud system. A small neural network trained for anomaly detection in an unsupervised way is deployed on an edge device close to each machine in the fleet. This model can detect anomalies on its own but is also used to compress the sensor data before they are sent to the cloud. The cloud model aggregates information from multiple machines and uses it to train one large supervised model. The cloud model is trained continuously as new data become available.
Figure 2: Schematic of the test setup (left) and the actual setup (right).
Figure 3: Histogram of experiment duration. Most experiments lasted between 1.5 and 4 h.
Figure 4: The indentation of the bearing at the start of the experiment (left) and the fault at the end (right).
Figure 5: The raw accelerometer signal for approximately 1 s (50,000 samples) at the beginning of the experiment (top) and at the end of the experiment (bottom). The y-axis indicates the acceleration (in g). Note the difference in scale for the y-axis.
Figure 6: Four baseline features, from top to bottom: peak acceleration, RMS acceleration, kurtosis, and BPFI. All of these methods behave in a similar way.
Figure 7: The FFT of the accelerometer signal for a window of approximately 1 s (50,000 samples) at the beginning of the experiment (top) and at the end of the experiment (bottom). Note the difference in scale for the y-axis.
Figure 8: The supervised neural network architecture.
Figure 9: The unsupervised neural network architecture.
Figure 10: The ROC curves for the four baseline methods.
Figure 11: The ROC curve and the corresponding AUC score for the unsupervised neural network.
Figure 12: The speed of the shaft (c) should be constant at 2000 rpm. Around the 0.75 mark, a disturbance occurs. The prediction of the supervised model (a) remains near zero (no anomaly), while the unsupervised model (b) predicts a higher anomaly score for these data points, even though it is not indicative of a failing bearing.
Figure 13: The anomaly score as predicted by the local model (blue) and the cloud model (red) for a single example run.
25 pages, 35789 KiB  
Review
Three-Dimensional Ultrasound for Physical and Virtual Fetal Heart Models: Current Status and Future Perspectives
by Nathalie Jeanne Bravo-Valenzuela, Marcela Castro Giffoni, Caroline de Oliveira Nieblas, Heron Werner, Gabriele Tonni, Roberta Granese, Luis Flávio Gonçalves and Edward Araujo Júnior
J. Clin. Med. 2024, 13(24), 7605; https://doi.org/10.3390/jcm13247605 - 13 Dec 2024
Viewed by 1094
Abstract
Congenital heart defects (CHDs) are the most common congenital defect, occurring in approximately 1 in 100 live births and being a leading cause of perinatal morbidity and mortality. Of note, approximately 25% of these defects are classified as critical, requiring immediate postnatal care by pediatric cardiology and neonatal cardiac surgery teams. Consequently, early and accurate diagnosis of CHD is key to proper prenatal and postnatal monitoring in a tertiary care setting. In this scenario, fetal echocardiography is considered the gold standard imaging ultrasound method for the diagnosis of CHD. However, the availability of this examination in clinical practice remains limited due to the need for a qualified specialist in pediatric cardiology. Moreover, in light of the relatively low prevalence of CHD among at-risk populations (approximately 10%), ultrasound cardiac screening for potential cardiac anomalies during routine second-trimester obstetric ultrasound scans represents a pivotal aspect of diagnosing CHD. In order to maximize the accuracy of CHD diagnoses, the views of the ventricular outflow tract and the superior mediastinum were added to the four-chamber view of the fetal heart for routine ultrasound screening according to international guidelines. In this context, four-dimensional spatio-temporal image correlation software (STIC) was developed in the early 2000s. Some of the advantages of STIC in fetal cardiac evaluation include the enrichment of anatomical details of fetal cardiac images in the absence of the pregnant woman and the ability to send volumes for analysis by an expert in fetal cardiology by an internet link. Sequentially, new technologies have been developed, such as fetal intelligent navigation echocardiography (FINE), also known as “5D heart”, in which the nine fetal cardiac views recommended during a fetal echocardiogram are automatically generated from the acquisition of a cardiac volume. 
Furthermore, artificial intelligence (AI) has recently emerged as a promising technological innovation, offering the potential to warn of possible cardiac anomalies and thus increase the ability of non-cardiology specialists to diagnose CHD. In the early 2010s, the advent of 3D reconstruction software combined with high-definition printers enabled the virtual and 3D physical reconstruction of the fetal heart. The 3D physical models may improve parental counseling of fetal CHD, maternal–fetal interaction in cases of blind pregnant women, and interactive discussions among multidisciplinary health teams. In addition, the 3D physical and virtual models can be a useful tool for teaching cardiovascular anatomy and for optimizing surgical planning, enabling simulation rooms for surgical procedures. Therefore, in this review, the authors discuss advanced image technologies that may optimize prenatal diagnoses of CHDs. Full article
(This article belongs to the Section Obstetrics & Gynecology)
Figure 1: Measurements of interventricular septum (IVS) volume using 3D ultrasound with STIC and virtual organ computer-aided analysis (VOCAL) in a fetus from a diabetic mother at 25 weeks of gestation. IVS volume = 0.144 cm³.
Figure 2: Left ventricle diastolic volume using STIC with virtual organ computer-aided analysis (VOCAL) in a fetus at 30 weeks of gestation. LV volume = 1.3 cm³.
Figure 3: Evaluation of the tricuspid annular movement using fetal STIC-M (5.4 mm). TAPSE: tricuspid annular plane systolic excursion; RV: right ventricle.
Figure 4: Three-dimensional ultrasound with STIC: (A) HDlive mode, providing a reconstruction of the left ventricular outflow tract in a case of transposition of the great arteries, and (B) with color Doppler in a first-trimester fetus with tetralogy of Fallot. Observe the pulmonary artery (P) arising from the left ventricle (LV) in image (A) and the overriding of the aorta (A) in image (B). RV: right ventricle; LV: left ventricle; VSD: ventricular septal defect; IVS: ventricular septum.
Figure 5: Tomographic ultrasound imaging (TUI) in the rendering mode enables the visualization of sequential axial planes in a case of inlet ventricular septal defect (VSD) (yellow arrows).
Figure 6: STIC with HDlive Silhouette mode in a case of coarctation of the aorta. Note the discrepancy of the great arteries due to the small aorta. AO: aorta; PA: pulmonary artery; VC: superior vena cava.
Figure 7: (A) Three-dimensional ultrasound with Surface Realistic Vue (SRV) imaging in a case of partial anomalous pulmonary vein return with a ventricular septal defect (VSD). Note that 2 of the pulmonary veins return to the right atrium (red arrows). Virtual light source position, 10 o'clock. (B) STIC with color Doppler of a case of total anomalous pulmonary vein return (infradiaphragmatic type). The right (RPV) and left pulmonary veins (LPVs) drain (white arrows) into a collecting vein (COL) and subsequently into a vertical vein (VV), which reaches the right atrium (RA) via the inferior vena cava (IVC). LV: left ventricle; LA: left atrium; RA: right atrium; RV: right ventricle; ** VSD: ventricular septal defect; PV: pulmonary vein; T: tricuspid valve; M: mitral valve.
Figure 8: Three-dimensional ultrasound with STIC and HDlive mode in a case of left heterotaxy. Observe that the venous vessel (hemiazygos) is located posterior (near to the fetal spine) to the arterial vessel (aorta) at the upper abdomen view. Ao: aorta; Hz: hemiazygos vein; L: fetal left side; R: fetal right side.
Figure 9: Extra-hepatic form of agenesis of the ductus venosus using three-dimensional ultrasound with STIC. Note the high-resolution color Doppler showing the absence of flow through the DV (red arrow). In this case, the umbilical vein drains into the RA via the inferior vena cava. IVC: inferior vena cava; RA: right atrium.
Figure 10: Three-dimensional ultrasound with STIC enabling the reconstruction of the ventricular outflow tracts in a case of double-outlet right ventricle ("Taussig–Bing" anomaly). Note the great arteries arising from the right ventricle (RV) in a parallel relationship. Ao: aorta; PA: pulmonary artery.
Figure 11: Tomographic ultrasound imaging (TUI) in the rendering mode in (A) a case of tetralogy of Fallot and (B) a case of double-outlet right ventricle (DORV). The right ventricle hypertrophy (yellow arrows) can be observed using this technology (A). Note the great arteries in a parallel relationship (red arrows) in a fetus with Taussig–Bing DORV using color Doppler (B). DORV: double outlet of the right ventricle; Ao: aorta; P: pulmonary artery.
Figure 12: The reconstruction of the ventricular outflow tracts in a case of transposition of the great arteries (TGA) using STIC with color Doppler (A) and HDlive Silhouette (B). In image (A), it is evident that the aorta (Ao) arises from the right ventricle (RV). In image (B), the pulmonary artery (PA) is unequivocally identified as originating from the left ventricle (LV). The two arteries are observed to be in a parallel relationship (red arrows), with the aorta located anteriorly to the PA.
Figure 13: Three-dimensional ultrasound with STIC in the rendering mode: the measurement of the area of the foramen ovale (FO) was obtained from the four-chamber view of the fetal heart, in which the ROI (green line) is the flap of the FO. ROI: region of interest.
Figure 14: Three-dimensional ultrasound with STIC in the rendering mode (A) and HDlive mode (B) of a fetus with Ebstein's anomaly at 30 weeks of gestation. RA: right atrium; T: tricuspid valve; RV: right ventricle; LA: left atrium; M: mitral valve; LV: left ventricle.
Figure 15: (A) Reconstruction of the aortic arch using STIC with the inversion mode in a case of coarctation of the aorta. Observe the narrowing of the aortic isthmus (yellow arrow). (B) Sagittal view of a fetus with a normal heart showing the aortic and ductal arches using LumiFlow. (C) First-trimester imaging using HDFlow in a fetus with a right aortic arch (red arrow) and vascular ring (observe the vessels around the trachea). Ao: aorta; BCT: brachiocephalic trunk; LCC: left common carotid; LSCA: left subclavian artery; P: pulmonary artery; DA: ductus arteriosus; Tr: trachea; R: right side; L: left side.
Figure 16: Large mass (**) in the ventricular septum and both ventricles, mainly in the left ventricle, in a case of rhabdomyomas, with a reduction in the size of the masses after prenatal therapy with sirolimus. LV: left ventricle; LA: left atrium; RA: right atrium; RV: right ventricle; T: tricuspid valve.
Figure 17: STIC-M enabling the measurement of mitral annular plane systolic excursion (MAPSE) (5.4 mm). LV: left ventricle.
Figure 18: Three-dimensional reconstruction of the left ventricle (LV) using STIC with virtual organ computer-aided analysis (VOCAL) in a fetus at 22 weeks of gestation.
Figure 19: FINE navigation (known as "5D heart") in (A) a case of a malalignment type of ventricular septal defect (***, yellow arrows) and in (B) a case of complete atrioventricular septal defect (AVSD). In case (A), observe the overriding of the aorta (Ao). In case (B), observe that the four-chamber, five-chamber, and LV outflow tract (LVOT) views (yellow arrows) draw attention to this diagnosis. ***: common AV valve; VSD: ventricular septal defect; ASD: primum atrial septal defect; GN: LVOT with a "goose neck" shape.
Figure 20: Automatic measurement of the fetal cardiac axis (40.3°) using artificial intelligence ("learning machine") in a normal heart using fetal intelligent navigation echocardiography (FINE), also known as "5D Heart". LV: left ventricle; LA: left atrium; RA: right atrium; RV: right ventricle; A or Ao: aorta; P or PA: pulmonary artery; S: superior vena cava; IVC: inferior vena cava; Desc: descending; Trans: transverse.
Figure 21: First-trimester measurement of the cardiac axis (45°) of a normal fetus (yellow arrow). L: left side; R: right side; Ao: aorta; S: spine.
Figure 22: Three-dimensional physical model of a fetus with transposition of the great arteries (TGA). RV: right ventricle; Ao: aorta; LV: left ventricle; P: pulmonary artery.
Figure 23: Three-dimensional virtual model of the fetal heart in (A) a fetus with transposition of the great arteries (TGA) and (B) a fetus with Ebstein's anomaly. RA: right atrium; RV: right ventricle; LA: left atrium; T: tricuspid valve; M: mitral valve; LV: left ventricle; Ao: aorta; P: pulmonary artery.
Figure 24: Following the acquisition of images of the fetal heart with tetralogy of Fallot from 3D ultrasound (heart volumes) using tools from Slicer 3D software (Birmingham, UK), the cardiac structures were segmented, with each cavity identified by a different color (right and left atrium, right and left ventricles, aorta, pulmonary artery, vena cava, and pulmonary veins). Thereafter, a raw file format was generated. Based on the 3D data, physical 3D models of the fetal heart were printed using a 3D printer. Ao: aorta; LA: left atrium; P: pulmonary artery; RA: right atrium; LV: left ventricle; RV: right ventricle; VSD: ventricular septal defect.
Figure 25: Fetal cardiac MRI (fCMR) performed at 32 weeks and 5 days. Images were obtained at 1.5 T using a balanced turbo field echo (BTFE) sequence, gated with an MRI-compatible Doppler ultrasound (DUS) device (North Medical, Hamburg, Germany). Four-chamber view in systole (A) and diastole (B). LV: left ventricle; RA: right atrium; LA: left atrium.
Figure 26: Multiplanar display images of a case of hypoplastic left heart syndrome examined at 32 weeks and 3 days. The images were acquired using a balanced turbo field echo (BTFE) sequence at 1.5 T. kt-SENSE acceleration was used during acquisition. The images were postprocessed using a super-resolution pipeline, resulting in an isovoxel 3D volume dataset. (A) Sagittal two-chamber view. (B) Four-chamber view. (C) Coronal short-axis view through the ventricles. LA: left atrium; LV: left ventricle; RA: right atrium; RV: right ventricle.
17 pages, 11854 KiB  
Article
Digitalization of an Industrial Process for Bearing Production
by Jose-Manuel Rodriguez-Fortun, Jorge Alvarez, Luis Monzon, Ricardo Salillas, Sergio Noriega, David Escuin, David Abadia, Aitor Barrutia, Victor Gaspar, Jose Antonio Romeo, Fernando Cebrian and Rafael del-Hoyo-Alonso
Sensors 2024, 24(23), 7783; https://doi.org/10.3390/s24237783 - 5 Dec 2024
Viewed by 882
Abstract
The developments in sensing, actuation, and algorithms, both in terms of Artificial Intelligence (AI) and data treatment, have opened up a wide range of possibilities for improving the quality of production systems in diverse industrial fields. The present paper describes the automation process performed in a production line for high-quality bearings. The work introduced new sensing elements at the machine level and new treatment of the information, fusing the different sources in order to detect quality defects in the grinding process (waviness, burns) and to monitor the state of the tool. At the supervision level, an AI model has been developed for monitoring the complete line and compensating for deviations in the dimension of the final assembly. The project also covered the hardware architecture for improving data acquisition and communication among the machines, databases, data treatment units, and human interfaces. The resulting system gives feedback to the operator when deviations or potential errors are detected, so that quality issues are recognized and can be amended in advance, thereby reducing quality costs. Full article
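As a toy version of the thermal-damage virtual sensor described above (the power values and the burn limit are invented, not Fersa's data), workpieces can be flagged when the measured grinding power exceeds a limiting value:

```python
# Hedged sketch of a grinding-burn virtual sensor: flag workpieces whose
# measured grinding power exceeds a limiting value, as in the thermal-
# damage prediction described above. The power readings and the fixed
# limit are hypothetical; a real system would compute the limit from
# process parameters rather than use a constant.

POWER_LIMIT_W = 1500.0  # hypothetical burn threshold

def burn_suspects(measurements):
    """measurements: list of (workpiece_id, peak_grinding_power_W)."""
    return [wid for wid, power in measurements if power > POWER_LIMIT_W]

batch = [("A1", 1200.0), ("A2", 1550.0), ("A3", 1499.0), ("A4", 1700.0)]
flagged = burn_suspects(batch)
```

Flagged workpieces would then be routed to the offline quality control for burn inspection, as in the validation of Figure 14.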
Figure 1: Simplified schema with the main elements of the FERSA production line.
Figure 2: Main elements in the architecture.
Figure 3: Sensors (framed with a red circle in the figure): (upper left) accelerometer (PCB Piezotronics, frequency range 0.5 to 8000 Hz); (upper right) equipment for grinding power acquisition; (lower left) thermal camera (FLIR Lepton 3.5); (lower right) AE sensor (Steminc 20 × 1 mm 2 MHz R).
Figure 4: Description of the traceability blockchain architecture.
Figure 5: Speed algorithm global architecture.
Figure 6: Post-manufacturing quality process result for waviness analysis. The purple line shows the threshold harmonic content for waviness appearance. The light blue line shows the 80% probability threshold for waviness appearance.
Figure 7: Thermal image obtained during the grinding process.
Figure 8: Schematic example of the virtual sensor solution for thermal damage prediction based on the grinding power measurement and the calculation of the limiting value.
Figure 9: Tool profile estimation with maximum, minimum, mean, and RMS values.
Figure 10: Evolution of the RMS value over consecutive dressing operations.
Figure 11: Line dimensional control.
Figure 12: Comparison between the predicted harmonic content and the result from the offline quality control.
Figure 13: Comparison of the predicted harmonic content and the limits of the waviness protocol.
Figure 14: Results of thermal damage for different tests and validation of the prediction tool. The blue line represents the thermal limit and the dots represent the measured power. The circled dots show the workpieces that presented burns during the quality control.
Figure 15: Values predicted by the dimensional line model versus actual values measured on the control machine. Scatter plot (left); 2D histogram (right).
Figure 16: Recommended production pattern based on operating point (each number/color represents a production pattern). Units are not displayed due to Fersa's privacy policy.
Figure 17: Example of the interface for the assembly quality control.
Figure 18: Example of the interface for the grinding quality control.
16 pages, 3161 KiB  
Article
Design of a Non-Destructive Seed Counting Instrument for Rapeseed Pods Based on Transmission Imaging
by Shengyong Xu, Rongsheng Xu, Pan Ma, Zhenhao Huang, Shaodong Wang, Zhe Yang and Qingxi Liao
Agriculture 2024, 14(12), 2215; https://doi.org/10.3390/agriculture14122215 - 4 Dec 2024
Viewed by 610
Abstract
Pod counting of rapeseed is a critical step in breeding, cultivation, and agricultural machinery research. Currently, this process relies entirely on manual labor, which is both labor-intensive and inefficient. This study aims to develop a semi-automatic counting instrument based on transmission image processing and proposes a new algorithm for processing transmission images of pods to achieve non-destructive, accurate, and rapid determination of the seed count per pod. Initially, the U-NET network was used to segment and remove the stem and beak from the pod image; subsequently, adaptive contrast enhancement was applied to adjust the contrast of the G-channel image of the pod to an appropriate range, effectively eliminating the influence of different varieties and maturity levels on the translucency of the pod skin. After enhancing the contrast, the Sauvola algorithm was employed for threshold segmentation to remove the pod skin, followed by thinning and dilation of the binary image to extract and remove the central ridge lines, detecting the number and area of connected domains. Finally, the seed count was determined based on the ratio of each connected domain’s area to the mean area of all connected domains. A transmission imaging device that mimics the human eye’s method of counting seeds was designed, incorporating an LED transmission light source, photoelectric switch-triggered imaging slot, an industrial camera, and an integrated packaging frame. Human–machine interaction software based on PyQt5 was developed, integrating functions such as communication between upper and lower machines, image acquisition, storage, and processing. Operators simply need to place the pod in an upright position into the imaging device, where its transmission image will be automatically captured and processed. The results are displayed on a touchscreen and stored in Excel spreadsheets. 
The experimental results show that the instrument is accurate, user-friendly, and significantly reduces labor intensity. For various varieties of rapeseed pods, the seed counting accuracy reached 97.2% with a throughput of 372 pods/h, both of which are significantly better than manual counting and have considerable potential for practical applications. Full article
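The final counting step described in the abstract, sizing each connected domain against the mean domain area, might look like the sketch below. It assumes the earlier stages (U-NET stem/beak removal, adaptive contrast enhancement, Sauvola thresholding, and ridge-line removal) have already produced a clean binary seed mask; `count_seeds` is a hypothetical helper, not the authors' code.

```python
import numpy as np
from scipy import ndimage

def count_seeds(seed_mask):
    """Count seeds in a binary mask: label the connected domains,
    then size each domain against the mean domain area so that a
    blob of roughly twice the mean is counted as two adhered seeds."""
    labels, n = ndimage.label(seed_mask)
    if n == 0:
        return 0
    areas = ndimage.sum(seed_mask, labels, index=range(1, n + 1))
    mean_area = areas.mean()
    # Every domain contributes at least one seed; larger domains are
    # split in proportion to the mean single-seed area.
    return int(sum(max(1, round(a / mean_area)) for a in areas))

# Synthetic mask: two single seeds (area 4 each) and one adhered
# pair (area 8), so the expected count is 4.
mask = np.zeros((10, 12), dtype=np.uint8)
mask[1:3, 1:3] = 1
mask[1:3, 5:7] = 1
mask[6:8, 2:6] = 1
print(count_seeds(mask))  # -> 4
```

The area-ratio rule is what lets the method resolve touching seeds that a plain connected-component count would merge into one.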
(This article belongs to the Section Agricultural Technology)
Show Figures

Figure 1. 3D model drawings and physical views of the rapeseed pod transmission imaging device (1. side panel, 2. camera, 3. LED light source, 4. transparent plate, 5. lithium battery, 6. microcomputer host, 7. touch screen, 8. light source adjustment knob, 9. photoelectric switch).
Figure 2. Seed counter for rapeseed siliques and its software.
Figure 3. Rapeseed identification and testing flowchart.
Figure 4. Contrast enhancement effect on images with different contrast levels.
Figure 5. Comparison of seed detection by different methods at low contrast: (a) single missed detection, (b) missed detection due to adhesion, (c) duplicate detection of adhesion.
Figure 6. Comparison of seed detection by different methods at medium contrast: (a) single missed detection, (b) missed detection due to adhesion, (c) duplicate detection of adhesion.
Figure 7. Comparison of seed detection by different methods at high contrast: (a) single missed detection, (b) missed detection due to adhesion, (c) duplicate detection of adhesion.