Search Results (1,157)

Search Parameters:
Keywords = automatic identification system

28 pages, 4116 KiB  
Article
Estimating Speed Error of Commercial Radar Tracking to Inform Whale–Ship Strike Mitigation Efforts
by Samantha Cope King, Brendan Tougher and Virgil Zetterlind
Sensors 2025, 25(6), 1676; https://doi.org/10.3390/s25061676 - 8 Mar 2025

Abstract
Vessel speed reduction measures are a management tool used to reduce the risk of whale–ship strikes and mitigate their impacts. Large ships and other commercial vessels are required to publicly share tracking information, including their speed, via the Automatic Identification System (AIS), which is commonly used to evaluate compliance with these measures. However, smaller vessels are not required to carry AIS and therefore are not as easily monitored. Commercial off-the-shelf marine radar is a practical solution for independently tracking these vessels, although commercial target tracking is typically a black-box process, and the accuracy of reported speed is not available in manufacturer specifications. We conducted a large-scale measurement campaign to estimate radar-reported speed error by comparing concurrent radar- and AIS-reported values. Across 3097 unique vessel tracks from ten locations, there was a strong correlation between radar and AIS speed, and radar values were within 1.8 knots of AIS values 95% of the time. Smaller vessels made up a large share of the analyzed tracks, and there was no significant difference in error compared to larger vessels. The results provide error bounds around radar-reported speeds that can be applied to vessels of all sizes, which can inform vessel-speed-monitoring efforts using radar.
(This article belongs to the Section Radar Sensors)
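
As a rough illustration of the error-bound estimate described above, the sketch below computes the correlation and the 95th-percentile absolute speed difference from paired radar/AIS observations. The sample values are invented, and the track-association and time-alignment steps are omitted.

```python
import numpy as np

# Invented paired observations: radar- and AIS-reported speeds (knots)
# for the same vessels at matched timestamps; real data would come from
# a track-association step that this sketch omits.
radar_kn = np.array([8.1, 12.4, 6.9, 15.2, 9.8, 4.3, 11.0])
ais_kn   = np.array([8.0, 12.9, 7.3, 14.8, 9.5, 4.9, 10.6])

err = radar_kn - ais_kn                    # radar speed error relative to AIS
r = np.corrcoef(radar_kn, ais_kn)[0, 1]    # correlation between the sources
bound95 = np.percentile(np.abs(err), 95)   # symmetric 95% absolute-error bound

print(f"r = {r:.3f}, 95% |error| bound = {bound95:.2f} kn")
```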
17 pages, 9894 KiB  
Article
Real-Time Automatic Identification of Plastic Waste Streams for Advanced Waste Sorting Systems
by Robert Giel, Mateusz Fiedeń and Alicja Dąbrowska
Sustainability 2025, 17(5), 2157; https://doi.org/10.3390/su17052157 - 2 Mar 2025
Abstract
Despite the significant recycling potential, a massive generation of plastic waste is observed year after year. One of the causes of this phenomenon is the issue of ineffective waste stream sorting, primarily arising from the uncertainty in the composition of the waste stream. The recycling process cannot be carried out without the proper separation of different types of plastics from the waste stream. Current solutions in the field of automated waste stream identification rely on small-scale datasets that insufficiently reflect real-world conditions. For this reason, the article proposes a real-time identification model based on a CNN (convolutional neural network) and a newly constructed, self-built dataset. The model was evaluated in two stages. The first stage was based on the separated validation dataset, and the second was based on the developed test bench, a replica of the real system. The model was evaluated under laboratory conditions, with a strong emphasis on maximally reflecting real-world conditions. Once included in the sensor fusion, the proposed approach will provide full information on the characteristics of the waste stream, which will ultimately enable the efficient separation of plastic from the mixed stream. Improving this process will significantly support the United Nations' 2030 Agenda for Sustainable Development.
Figures:
Figure 1. Scope of research in the context of the Digital Twin framework for waste-sorting systems.
Figure 2. Architectural diagram of YOLOv5 [24,25,26].
Figure 3. Architecture of the proposed model for automatic waste stream identification based on YOLOv5.
Figure 4. Main stages of dataset creation for automatic waste identification.
Figure 5. Example of collected waste: (a) including only target objects; (b) including contaminants.
Figure 6. Distribution of label locations for (a) basic dataset (for model 1a and 1b) and (b) extended dataset (for model 2).
Figure 7. Scheme of the developed test bed.
Figure 8. The developed test bed.
Figure 9. The morphology of the Polish plastic waste stream compared to the morphology of the prepared sample.
Figure 10. Progression of precision, recall, and mAP throughout the YOLOv5 training process (actual results with solid line and smoothed trend with dotted line) for (a) model 1a, (b) model 1b, and (c) model 2.
Figure 11. Confusion matrices generated by the trained YOLO models: (a) model 1a, (b) model 1b, and (c) model 2.
Figure 12. The visualization of the model's performance under conditions replicating a real-world sorting facility (the red box shows detected HDPE, the pink one tetrapak).
15 pages, 1521 KiB  
Article
Application of Three-Dimensional Hierarchical Density-Based Spatial Clustering of Applications with Noise in Ship Automatic Identification System Trajectory-Cluster Analysis
by Shih-Ming Wang, Wen-Rong Yang, Qian-Yi Zhuang, Wei-Hong Lin, Mau-Yi Tian, Te-Jen Su and Jui-Chuan Cheng
Appl. Sci. 2025, 15(5), 2621; https://doi.org/10.3390/app15052621 - 28 Feb 2025
Abstract
Clustering algorithms are widely used in statistical data analysis as a form of unsupervised machine learning, playing a crucial role in big data mining research for Maritime Intelligent Transportation Systems. While numerous studies have explored methods for optimizing ship trajectory clustering, such as narrowing dynamic time windows to prevent errors in time warp calculations or employing the Mahalanobis distance, these methods enhance DBSCAN (Density-Based Spatial Clustering of Applications with Noise) by leveraging trajectory similarity features for clustering. In recent years, machine learning research has rapidly accumulated, and multiple studies have shown that HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise) outperforms DBSCAN in achieving accurate and efficient clustering results due to its hierarchical density-based clustering processing technique, particularly in big data mining. This study focuses on the area near Taichung Port in central Taiwan, a crucial maritime shipping route where ship trajectories naturally exhibit a complex and intertwined distribution. Using ship coordinates and heading, the experiment normalized and transformed them into three-dimensional spatial features, employing the HDBSCAN algorithm to obtain optimal clustering results. These results provided a more nuanced analysis compared to human visual observation. This study also utilized O notation and execution time to represent the performance of various methods, with the literature review indicating that HDBSCAN has the same time complexity as DBSCAN but outperforms K-means and other methods. This research involved approximately 293,000 real historical data points and further employed the Silhouette Coefficient and Davies–Bouldin Index to objectively analyze the clustering results. The experiment generated eight clusters with a noise ratio of 12.7%, and the evaluation results consistently demonstrate that HDBSCAN outperforms other methods for big data analysis of ship trajectory clustering.
(This article belongs to the Section Marine Science and Engineering)
Figures:
Figure 1. This map shows the coast of central Taiwan. There are many red lines on the map, representing the tracks of ships.
Figure 2. Research process framework.
Figure 3. The low-speed data points distributed vertically at specific coordinates.
Figure 4. Different colors clearly represent distinct HDBSCAN clustering results.
Figure 5. Each cluster distribution.
32 pages, 6751 KiB  
Article
SVIADF: Small Vessel Identification and Anomaly Detection Based on Wide-Area Remote Sensing Imagery and AIS Data Fusion
by Lihang Chen, Zhuhua Hu, Junfei Chen and Yifeng Sun
Remote Sens. 2025, 17(5), 868; https://doi.org/10.3390/rs17050868 - 28 Feb 2025
Abstract
Small target ship detection and anomaly analysis play a pivotal role in ocean remote sensing technologies, offering critical capabilities for maritime surveillance, enhancing maritime safety, and improving traffic management. However, existing methodologies in the field of detection are predominantly based on deep learning models with complex network architectures, which may fail to accurately detect smaller targets. In the classification domain, most studies focus on synthetic aperture radar (SAR) images combined with Automatic Identification System (AIS) data, but these approaches have significant limitations: first, they often overlook further analysis of anomalies arising from mismatched data; second, there is a lack of research on small target ship classification using wide-area optical remote sensing imagery. In this paper, we develop SVIADF, a multi-source information fusion framework for small vessel identification and anomaly detection. The framework consists of two main steps: detection and classification. To address challenges in the detection domain, we introduce the YOLOv8x-CA-CFAR framework. In this approach, YOLOv8x is first utilized to detect suspicious objects and generate image patches, which are then subjected to secondary analysis using CA-CFAR. Experimental results demonstrate that this method achieves improvements in Recall and F1-score by 2.9% and 1.13%, respectively, compared to using YOLOv8x alone. By integrating structural and pixel-based approaches, this method effectively mitigates the limitations of traditional deep learning techniques in small target detection, providing more practical and reliable support for real-time maritime monitoring and situational assessment. In the classification domain, this study addresses two critical challenges. First, it investigates and resolves anomalies arising from mismatched data. Second, it introduces an unsupervised domain adaptation model, Multi-CDT, for heterogeneous multi-source data. This model effectively transfers knowledge from SAR–AIS data to optical remote sensing imagery, thereby enabling the development of a small target ship classification model tailored for optical imagery. Experimental results reveal that, compared to the CDTrans method, Multi-CDT not only retains a broader range of classification categories but also improves target domain accuracy by 0.32%. The model extracts more discriminative and robust features, making it well suited for complex and dynamic real-world scenarios. This study offers a novel perspective for future research on domain adaptation and its application in maritime scenarios.
(This article belongs to the Section AI Remote Sensing)
Figures:
Graphical abstract.
Figure 1. Framework of the proposed SVIADF method.
Figure 2. The original trajectory and resampled trajectory of the ship.
Figure 3. Comparison of histograms of various data.
Figure 4. Comparison of unsupervised domain adaptation scenarios.
Figure 5. Multi-CDT network architecture.
Figure 6. Example 1 of the comparison of the detection results of the three models.
Figure 7. Example 2 of the comparison of the detection results of the three models.
Figure 8. Comparison of the detection results of the three models.
Figure 9. Source domain pretraining trend.
Figure 10. Target domain training trend.
Figure 11. Attention map based on source domain image (SAR remote sensing image).
Figure 12. Attention map based on target domain image (optical remote sensing).
Figure 13. t-SNE plot based on target domain image (optical remote sensing image).
Figure 14. Demonstration example based on Hainan Satellite 1 data.
Figure 15. Demonstration example based on Sentinel-1 data.
29 pages, 6142 KiB  
Article
Collision Avoidance Behavior Mining Model Considering Encounter Scenarios
by Shuzhe Chen, Chong Zhang, Lei Wu, Ziwei Wang, Wentao Wu, Shimeng Li and Haotian Gao
Appl. Sci. 2025, 15(5), 2616; https://doi.org/10.3390/app15052616 - 28 Feb 2025
Abstract
With the development of intelligent waterborne transportation, mining collision avoidance patterns from the spatiotemporal and motion data of ships is crucial for the autonomous navigation of intelligent ships, which requires accurate collision avoidance information under various encounter scenarios. Addressing the existing issues of low precision and false detection in data mining algorithms, this paper proposes a collision avoidance behavior mining model that considers encounter scenarios. The model is based on the Automatic Identification System (AIS) and the International Regulations for Preventing Collisions at Sea (COLREGs); it first identifies ship collision avoidance turning points by analyzing trajectory curvature with turning and recovering factors. Then, by combining AIS data and the specific navigational environment, it matches the ship encounter pairs and determines the encounter scenarios. Comparative experiments show that the model demonstrates superior accuracy in various scenarios compared to traditional algorithms. Finally, the model was applied to AIS data east of the Yangtze River Estuary, recognizing a total of 827 instances of ship collision avoidance behavior under different encounter scenarios. The case study shows that the model can precisely mine collision avoidance information, laying a solid foundation for future research on autonomous collision avoidance decision making for intelligent ships.
(This article belongs to the Special Issue Advances in Intelligent Maritime Navigation and Ship Safety)
Figures:
Figure 1. Traditional encounter scenario division model. (a) The division model based on the relative bearing angle. (b) Limitations of the division model based on the relative bearing angle.
Figure 2. The framework of collision avoidance behavior mining.
Figure 3. Characteristics of ship AIS trajectories under different conditions. (a) AIS trajectory of ships with data disturbances. (b) AIS trajectory of ships taking collision avoidance maneuvers. (c) AIS trajectory of ships taking turning maneuvers.
Figure 4. Schematic diagram of trajectory curvature calculation.
Figure 5. Schematic diagram of the sliding window algorithm.
Figure 6. Window sliding to recognize the ship's turning point.
Figure 7. Variable sliding window to determine the ship's recovery behavior.
Figure 8. Diagram of the rules for dividing the encounter scenarios.
Figure 9. Illustration of trajectory 1. (a) The effect of trajectory 1 repair based on cubic spline interpolation. (b) The course change of trajectory 1.
Figure 10. Illustration of trajectory 2. (a) The effect of trajectory 2 repair based on cubic spline interpolation. (b) The course change of trajectory 2.
Figure 11. Illustration of turning point identification for trajectory 1 using the DP algorithm.
Figure 12. Illustration of turning point identification for trajectory 2 using the DP algorithm.
Figure 13. Illustration of turning point identification for trajectory 1 using the proposed model.
Figure 14. Illustration of turning point identification for trajectory 2 using the proposed model.
Figure 15. The illustration of the ship encounter pairs identified by the model.
Figure 16. The data distribution map of the case water area.
Figure 17. Statistical results of case waterway identification.
Figure 18. Collision avoidance behavior identification results under different encounter scenarios in the case waterway. (a) Head-on collision avoidance data. (b) Crossing collision avoidance data. (c) Overtaking collision avoidance data.
26 pages, 7963 KiB  
Article
Pig Face Open Set Recognition and Registration Using a Decoupled Detection System and Dual-Loss Vision Transformer
by Ruihan Ma, Hassan Ali, Malik Muhammad Waqar, Sang Cheol Kim and Hyongsuk Kim
Animals 2025, 15(5), 691; https://doi.org/10.3390/ani15050691 - 27 Feb 2025
Abstract
Effective pig farming relies on precise and adaptable animal identification methods, particularly in dynamic environments where new pigs are regularly added to the herd. However, pig face recognition is challenging due to high individual similarity, lighting variations, and occlusions. These factors hinder accurate identification and monitoring. To address these issues under Open-Set conditions, we propose a three-phase Pig Face Open-Set Recognition (PFOSR) system. In the Training Phase, we adopt a decoupled design, first training a YOLOv8-based pig face detection model on a small labeled dataset to automatically locate pig faces in raw images. We then refine a Vision Transformer (ViT) recognition model via a dual-loss strategy—combining Sub-center ArcFace and Center Loss—to enhance both inter-class separation and intra-class compactness. Next, in the Known Pig Registration Phase, we utilize the trained detection and recognition modules to extract representative embeddings from 56 identified pigs, storing these feature vectors in a Pig Face Feature Gallery. Finally, in the Unknown and Known Pig Recognition and Registration Phase, newly acquired pig images are processed through the same detection–recognition pipeline, and the resulting embeddings are compared against the gallery via cosine similarity. If the system classifies a pig as unknown, it dynamically assigns a new ID and updates the gallery without disrupting existing entries. Our system demonstrates strong Open-Set recognition, achieving an AUROC of 0.922, OSCR of 0.90, and F1-Open of 0.94. In the closed set, it attains a precision@1 of 0.97, NMI of 0.92, and mean average precision@R of 0.96. These results validate our approach as a scalable, efficient solution for managing dynamic farm environments with high recognition accuracy, even under challenging conditions.
Figures:
Figure 1. Overview of the proposed three-phase PFOSR pipeline. In the Training Phase, labeled pig images (image + label: pig + ID) are used to develop a robust detection model and a recognition model featuring a dual-loss design (SubCenterArcFace + Center Loss). In the Known Pig Registration Phase, images accompanied by known pig IDs pass through the pig face detection and recognition modules, and the resulting feature embeddings are registered in a Face Gallery. In the Unknown and Known Pig Recognition and Registration Phase, unlabeled images are again processed by the same detection and recognition models; if the similarity score of a new embedding against all existing gallery entries falls below a specified threshold, a new pig ID is assigned, and the Face Gallery is updated accordingly. This iterative process enables Open-Set pig face recognition by dynamically integrating newly encountered pigs.
Figure 2. Application of the PFOSR system at inference time. During the real-time detection phase, all unknown and known pigs have already been registered. Incoming test images are first processed by the pig face detection and recognition model to extract face features, which are then matched in a 1:N manner against all registered pigs in the feature gallery. The system retrieves the top five matches based on similarity scores and assigns the label of the highest-scoring match to the test image, completing the real-time identification process.
Figure 3. Sample images of the Small-Scale Pig Face Detection Dataset.
Figure 4. Sample images of the Unknown Pig Face Test Dataset.
Figure 5. Pig Face Recognition Module in the proposed PFOSR system. The input image is resized to 224 × 224 and processed through a ViT-based backbone for feature extraction. The extracted features are refined through an embedding layer consisting of linear layers, batch normalization (BN), ReLU activation, and dropout, producing the final embedding vector. BS (batch size) is the number of images processed at once; ES (embedding size) is the dimensionality of the output feature vector.
Figure 6. YOLOv8 performance on labeled and unlabeled datasets. The first row shows images from the Small-Scale Pig Face Detection Dataset, with red boxes indicating predictions and blue boxes indicating ground truth labels. The second row shows images from the Known Pig Face Recognition Dataset, where red boxes denote YOLOv8's predictions without ground truth labels.
Figure 7. F1-Open, CCR, and FAR curves of the ViT-DL-IN21K model at different thresholds in PFOSR.
Figure 8. Visualization of ViT-DL-IN21K model performance on the 65-known pig face testing dataset. Each panel displays a test image with its true ID, followed by the five most similar gallery images with their cosine similarity scores and true IDs. High similarity scores indicate correct matches for known classes, including newly registered ones.
Figure 9. UMAP visualization of test features for three models (ResNet18-DL-IN21K, ResNet50-DL-IN21K, and ViT-DL-IN21K) on two datasets. (A–C) correspond to Dataset1 (Known Pig Face Recognition Dataset test set, 56 known classes); (D–F) correspond to Dataset2 (65-Known Pig Face Testing Dataset, including 56 known classes and 9 newly registered unknown classes). Each color represents a different pig identity; more compact and well-separated clustering indicates better feature representation. ViT-DL-IN21K (C, F) shows improved feature clustering, especially in Dataset2, where new classes have been introduced.
Figure 10. Confusion matrices for the three models on testing Dataset1 and testing Dataset2 in the PFOSR system. (A–C) correspond to Dataset1; (D–F) correspond to Dataset2. Diagonal elements represent correct classifications; off-diagonal elements indicate misclassifications. ViT-DL-IN21K (C, F) achieves the lowest misclassification rates, demonstrating superior performance in both Closed-Set recognition and recognizing newly registered pigs.
21 pages, 2896 KiB  
Article
Identifying Behaviours Indicative of Illegal Fishing Activities in Automatic Identification System Data
by Yifan Zhou, Richard Davies, James Wright, Stephen Ablett and Simon Maskell
J. Mar. Sci. Eng. 2025, 13(3), 457; https://doi.org/10.3390/jmse13030457 - 27 Feb 2025
Abstract
Identifying illegal fishing activities from Automatic Identification System (AIS) data is difficult since AIS messages are broadcast cooperatively, the ship's master controls the timing and content of the transmission, and the activities of interest usually occur far away from the shore. This paper presents our work to predict ship types using AIS data from satellites: in such data, there is a pronounced imbalance between the data for different types of ships, the refresh rate is relatively low, and there is misreporting of information. To mitigate these issues, our prediction algorithm only uses the sequence of ports the ships visited, as inferred from the positions reported in AIS messages. Experiments involving multiple machine learning algorithms showed that such port visits are informative features when inferring ship type. In particular, this was shown to be the case for fishing vessels, which are the focus of this paper. We then applied a KD-tree to efficiently identify pairs of ships that are close to one another. As this activity is usually dangerous, multiple occurrences of such encounters that are linked to one ship sensibly motivate extra attention. As a result of applying the analysis approach to a month of AIS data related to a large area in Southeast Asia, we identified 17 cases of potentially illegal behaviours.
(This article belongs to the Section Ocean Engineering)
Figures:
Figure 1. (Top) The area of interest of the data used in the experiments. (Bottom) The positions of the ships self-reporting to be fishing vessels.
Figure 2. The ROC curves (better performance if the curve is closer to the top-left) of classification performance on different ship types using the NB classifier. Each colour denotes one fold from 10-fold cross-validation. (a) Cargo. (b) Tanker. (c) Tug. (d) Passenger. (e) Fishing. (f) Special Purpose. (g) Pleasure Craft. (h) Other.
Figure 3. The ROC curves of detecting the fishing vessel using the NB classifier on the data without our proposed preprocessing.
Figure 4. An illustration of the land (green) and the territorial waters (grey) in the area of interest. The white area is where coopering detection is applied.
Figure 5. Visualisations of the Type I coopering behaviour described in Section 4.3.1. The red ship (0) was classified as a fishing vessel and visited port(s). Annotations indicate pertinent events. To maintain the anonymity of the specific ships, the numeric values of the latitudes and longitudes are deliberately omitted. (a) The latitudes and longitudes of the ships identified as exhibiting coopering behaviours over one month. (b) The trajectories of those ships, shown in plan view.
Figure 6. Visualisations of the Type II coopering behaviour described in Section 4.3.2; panels and anonymization as in Figure 5.
Figure 7. Visualisations of the Type III coopering behaviour described in Section 4.3.3; panels and anonymization as in Figure 5.
18 pages, 4433 KiB  
Article
Trajectory Compression Algorithm via Geospatial Background Knowledge
by Yanqi Fang, Xinxin Sun, Yuanqiang Zhang, Jumei Zhou and Hongxiang Feng
J. Mar. Sci. Eng. 2025, 13(3), 406; https://doi.org/10.3390/jmse13030406 - 21 Feb 2025
Abstract
The maritime traffic status is monitored through the Automatic Identification System (AIS) installed on vessels. AIS data record the trajectory of each ship. However, due to the short sampling interval of AIS data, there is a significant amount of redundant data, which increases storage demands and reduces data processing efficiency. To reduce the redundancy within AIS data, a compression algorithm is necessary to eliminate superfluous points. This paper presents an offline trajectory compression algorithm that leverages geospatial background knowledge. The algorithm employs an adaptive function to preserve points characterized by the highest positional errors and rates of water depth change. It segments trajectories according to their distance from the shoreline, applies varying water depth change rate thresholds depending on geographical location, and determines an optimal distance threshold using the average compression ratio score. To verify the effectiveness of the algorithm, this paper compares it with other algorithms. At the same compression ratio, the proposed algorithm reduces the average water depth error by approximately 99.1% compared to the Douglas–Peucker (DP) algorithm, while also addressing the common problem of compressed trajectories potentially intersecting with obstacles in traditional trajectory compression methods.
(This article belongs to the Section Ocean Engineering)
Figures:
Figure 1. Flow chart of the trajectory compression algorithm via geospatial background knowledge.
Figure 2. The number of clusters vs. WCSS diagram.
Figure 3. An illustration of the calculation of SED.
Figure 4. Trajectory compression algorithm via geospatial background knowledge.
Figure 5. The average water depth change rate with different threshold coefficients.
Figure 6. The compression rate with different threshold coefficients.
Figure 7. The ACS for different distance thresholds.
Figure 8. Comparison of different algorithms at different compression rates: (a) average SED error; (b) maximum SED error; (c) average water depth error; (d) maximum water depth error; (e) average water depth change; (f) maximum water depth change.
Figure 9. Running times of the different algorithms.
Figure 10. The trajectory comparison before and after compression: (a) the trajectories before compression; (b) the trajectories after compression. The blue lines represent the shoreline, and the green lines represent the trajectories.
Figure 11. The compression results of different algorithms. (a) Original trajectory; (b) TD-TR algorithm compression result; (c) the proposed algorithm compression result.
21 pages, 6937 KiB  
Article
A Quantitative Analysis Study on the Effects of Moisture and Light Source on FTIR Fingerprint Image Quality
by Manjae Shin, Seungbong Lee, Seungbin Baek, Sunghoon Lee and Sungmin Kim
Sensors 2025, 25(4), 1276; https://doi.org/10.3390/s25041276 - 19 Feb 2025
Abstract
The frustrated total internal reflection (FTIR) optical fingerprint scanning method is widely used due to its cost-effectiveness. However, fingerprint image quality is highly dependent on fingertip surface conditions, with moisture generally considered a degrading factor. Interestingly, a prior study reported that excessive moisture may improve image quality, though its findings were based on qualitative observations, necessitating further quantitative analysis. Additionally, since the FTIR method relies on optical principles, image quality is also influenced by the wavelength of the light source. In this study, we conducted a preliminary clinical experiment to quantitatively analyze the impact of moisture levels on fingertips (wet, dry, and control) and light wavelengths (red, green, and blue) on FTIR fingerprint image quality. A total of 20 male and female participants with no physical impairments were involved. The results suggest that FTIR fingerprint image quality may improve under wet conditions and when illuminated with green and blue light sources compared to dry conditions and red light. Statistical evidence supports this consistent trend. However, given the limited sample size, the statistical validity and generalizability of these findings should be interpreted with caution. These insights provide a basis for optimizing fingerprint imaging conditions, potentially enhancing the reliability and accuracy of automatic fingerprint identification systems (AFIS) by reducing variations in individual fingerprint quality.
Figures:
Figure 1. In-house FTIR fingerprint imaging device. (a) 3D-print structure blueprint. (b) 3D-printed in-house FTIR fingerprint imaging device.
Figure 2. Extracted valid channel data from the image corresponding to its lighting condition. (a) R channel data. (b) G channel data. (c) B channel data.
Figure 3. Local brightness error in the FTIR fingerprint images (highlighted areas marked with red circles).
Figure 4. FTIR fingerprint image with local brightness error corrected.
Figure 5. Normalized FTIR fingerprint image.
Figure 6. Gradient images of an FTIR fingerprint image: x-axis direction gradient (left) and y-axis direction gradient (right).
Figure 7. Visualized ridge orientation map.
Figure 8. Projection of the segmented FTIR fingerprint image.
Figure 9. Enhanced fingerprint ridge pattern by Gabor kernel.
Figure 10. Generic types of fingerprint minutiae. (a) Ridge ending. (b) Ridge bifurcation. (c) Ridge island. (d) Ridge spur.
Figure 11. Moisture condition analysis on index finger, RM one-way ANOVA test result (number of TP minutiae points of quality index 0.9 or higher). Markers (triangle, circle, square) represent outlier data. Statistical significance is denoted as follows: ns (not significant) p > 0.05; * p ≤ 0.05; ** p ≤ 0.01; *** p ≤ 0.001; **** p ≤ 0.0001.
Figure 12. Moisture condition analysis on thumb, RM one-way ANOVA test result (number of TP minutiae points of quality index 0.9 or higher); markers and significance notation as in Figure 11.
Figure 13. Moisture condition analysis on index finger, RM one-way ANOVA test result (survival rate of TP minutiae point quality index from 0.7 to 0.9 or higher); notation as in Figure 11.
Figure 14. Moisture condition analysis on thumb, RM one-way ANOVA test result (survival rate of TP minutiae point quality index from 0.7 to 0.9 or higher); notation as in Figure 11.
Figure 15. Light source condition analysis on index finger, RM one-way ANOVA test result (number of TP minutiae points of quality index 0.9 or higher); notation as in Figure 11.
Figure 16. Light source condition analysis on thumb, RM one-way ANOVA test result (number of TP minutiae points of quality index 0.9 or higher); notation as in Figure 11.
Figure 17. Light source condition analysis on index finger, RM one-way ANOVA test result (survival rate of TP minutiae point quality index from 0.7 to 0.9 or higher); notation as in Figure 11.
Figure 18. Light source condition analysis on thumb, RM one-way ANOVA test result (survival rate of TP minutiae point quality index from 0.7 to 0.9 or higher); notation as in Figure 11.
19 pages, 552 KiB  
Article
Securing Automatic Identification System Communications Using Physical-Layer Key Generation Protocol
by Jingyu Sun, Zhimin Yi, Ziyi Zhuang and Shengming Jiang
J. Mar. Sci. Eng. 2025, 13(2), 386; https://doi.org/10.3390/jmse13020386 - 19 Feb 2025
Abstract
The automatic identification system (AIS) is an essential tool for modern ships, enabling the broadcast of identification and location information. However, the current AIS standard lacks security features, meaning that messages exchanged via AISs are transmitted in plaintext, which leads to security issues such as privacy leakage. Most existing solutions rely on public key cryptography. This paper proposes a physical-layer key generation protocol based on the current AIS standard (ITU-R M.1371-5). In the case of unicast AIS communication, the protocol utilizes channel randomness to generate symmetric keys for securing communications. Compared to public key cryptography, the proposed protocol offers advantages such as low overhead, elimination of third parties, and ease of implementation. Finally, this paper discusses the security of the protocol against various threats as well as evaluates its performance and overhead. Under common speed and signal-to-noise ratio (SNR) conditions, the protocol generates Advanced Encryption Standard (AES) keys of different lengths in under 4000 ms, and these keys successfully pass the National Institute of Standards and Technology (NIST) randomness test.
(This article belongs to the Section Ocean Engineering)
Figures:
Figure 1. Communication scenarios for key generation.
Figure 2. Key generation process.
Figure 3. (a) Full-duplex probing and (b) half-duplex probing.
Figure 4. Correlation time at different relative speeds.
Figure 5. Channel probing.
Figure 6. Simulation of received signal.
Figure 7. Curves showing changes in key generation rate with SNR at different relative speeds.
Figure 8. The time cost of generating keys of different bit numbers at different relative speeds.
26 pages, 6862 KiB  
Article
Application of Anti-Collision Algorithm in Dual-Coupling Tag System
by Junpeng Cui, Muhammad Mudassar Raza, Renhai Feng and Jianjun Zhang
Electronics 2025, 14(4), 787; https://doi.org/10.3390/electronics14040787 - 17 Feb 2025
Abstract
Radio Frequency Identification (RFID) is a key component in automatic systems that address challenges in environment monitoring. However, tag collision continues to be an essential challenge in such applications due to high-density RFID deployments. This paper addresses the issue of RFID tag collision among large-scale, densely deployed tags, particularly in industrial membrane contamination monitoring systems, and improves system performance by minimizing collision rates through an innovative collision-avoiding algorithm. This research improved the Predictive Framed Slotted ALOHA–Collision Tracking Tree (PRFSCT) algorithm by combining probabilistic and deterministic methods through dynamic frame length adjustment and multi-branch tree processes. After simulation and validation in MATLAB R2023a, we performed a hardware test with the RFM3200 and UHFReader18 passive tags. The method's efficiency is evaluated through collision slot reduction, delay minimization, and enhanced throughput. For the same number of tags to identify, PRFSCT needs fewer time slots than Framed Slotted ALOHA (FSA) and Collision Tracking Tree (CTT). When identifying more than 200 tags the advantage grows: at 500 tags, PRFSCT produces 225 collision slots, compared to approximately 715 for FSA and 883 for CTT. It demonstrates exceptional stability and adaptability under increased tag density while improving tag reading at distance.
(This article belongs to the Section Computer Science & Engineering)
Figures:
Figure 1. Integration of RFID-based membrane water purification system: (a) large-scale industrial membrane filtration setup; (b) RFID tags affixed to membrane housings.
Figure 2. Block diagram of the wireless sensing system.
Figure 3. Collision model: (a) reader and tag collision model; (b) multi-reader collision model; (c) multi-label collision model.
Figure 4. Schematic diagram of pure ALOHA algorithm collision.
Figure 5. Schematic diagram of Slotted ALOHA algorithm collision.
Figure 6. Schematic diagram of dynamic framed slotted ALOHA algorithm collision.
Figure 7. QT algorithm collision diagram.
Figure 8. Comparison of ALOHA algorithm throughput and collision rate: (a) throughput rate curve; (b) collision rate curve.
Figure 9. Tree algorithm throughput comparison.
Figure 10. Schematic diagram of Framed Slotted ALOHA algorithm collision.
Figure 11. CTT algorithm collision diagram.
Figure 12. PRFSCT algorithm flow chart.
Figure 13. Simulation comparison of DFSA and Vogt algorithms: (a) number of collision time slots; (b) total number of time slots.
Figure 14. Algorithm time slot number simulation: (a) total number of time slots; (b) number of collision time slots.
Figure 15. Algorithms' delay simulation.
Figure 16. Matching capacitance change statistics.
Figure 17. Placement of tags and readers: (a) 2 tags; (b) 4 tags; (c) 6 tags; (d) 8 tags; (e) 10 tags.
Figure 18. Statistics of average tag reading times.
29 pages, 4045 KiB  
Article
Advanced Digital Solutions for Food Traceability: Enhancing Origin, Quality, and Safety Through NIRS, RFID, Blockchain, and IoT
by Matyas Lukacs, Fruzsina Toth, Roland Horvath, Gyula Solymos, Boglárka Alpár, Peter Varga, Istvan Kertesz, Zoltan Gillay, Laszlo Baranyai, Jozsef Felfoldi, Quang D. Nguyen, Zoltan Kovacs and Laszlo Friedrich
J. Sens. Actuator Netw. 2025, 14(1), 21; https://doi.org/10.3390/jsan14010021 - 17 Feb 2025
Abstract
The rapid growth of the human population, the increase in consumer needs regarding food authenticity, and the sub-par synchronization between agricultural and food industry production necessitate the development of reliable track and tracing solutions for food commodities. The present research proposes a simple and affordable digital system that could be implemented in most production processes to improve transparency and productivity. The system combines non-destructive, rapid quality assessment methods, such as near infrared spectroscopy (NIRS) and computer/machine vision (CV/MV), with track and tracing functionalities revolving around the Internet of Things (IoT) and radio frequency identification (RFID). Meanwhile, authenticity is provided by a self-developed blockchain-based solution that validates all data and documentation “from farm to fork”. The system is introduced by taking certified Hungarian sweet potato production as a model scenario. Each element of the proposed system is discussed in detail individually and as a part of an integrated system, capable of automatizing most production flows while maintaining complete transparency and compliance with authority requirements. The results include the data and trust model of the system with sequence diagrams simulating the interactions between participants. The study lays the groundwork for future research and industrial applications combining digital tools to improve the productivity and authenticity of the agri-food industry, potentially increasing the level of trust between participants, most importantly for the consumers.
(This article belongs to the Topic Trends and Prospects in Security, Encryption and Encoding)
Figures:
Figure 1. Simplified sweet potato supply chain with core material flow and management steps, including the proposed digital technologies. Red arrows indicate measured data, blue arrows indicate manually provided data.
Figure 2. Physical components of an RFID reader.
Figure 3. The connection of the IoT modules to the internet.
Figure 4. Summary of the integrated blockchain-based authentication. Note: the RFID IoT module may be replaced with other IoT modules in the system.
Figure 5. Summary of the developed track and tracing solution.
Figure 6. The system's data model.
Figure 7. The trust model of the track and tracing solution with actor responsibilities.
Figure 8. Simplified sequence diagram showing actor interactions with the system. Dashed lines indicate read-only permissions.
Figure 9. The tracing system front-end. (A) Latest measurement value; (B) graphical representation of the logged data; (C) the MySQL database of the measured results; (D) consumer front-end.
15 pages, 7826 KiB  
Article
Tongue Image Segmentation and Constitution Identification with Deep Learning
by Chien-Ho Lin, Sien-Hung Yang and Jiann-Der Lee
Electronics 2025, 14(4), 733; https://doi.org/10.3390/electronics14040733 - 13 Feb 2025
Abstract
Traditional Chinese medicine (TCM) gathers patient information through inspection, olfaction, inquiry, and palpation, analyzing and interpreting the data to make a diagnosis and offer appropriate treatment. Traditionally, the interpretation of this information relies heavily on the physician's personal knowledge and experience. However, diagnostic outcomes can vary depending on the physician's clinical experience and subjective judgment. This study employs AI methods to focus on localized tongue assessment, developing an automatic tongue body segmentation method using the deep learning network “U-Net” through a series of optimization processes applied to tongue surface images. Furthermore, “ResNet34” is utilized for the identification of “cold”, “neutral”, and “hot” constitutions, creating a system that enhances the consistency and reliability of diagnostic results related to the tongue. The final results demonstrate that the AI interpretation accuracy of this system reaches the diagnostic level of junior TCM practitioners (those who have passed the TCM practitioner assessment with ≤5 years of experience). The framework and findings of this study can serve as (1) a foundational step for the future integration of pulse information and electronic medical records, (2) a tool for personalized preventive medicine, and (3) a training resource for TCM students learning to diagnose tongue constitutions such as “cold”, “neutral”, and “hot”.
(This article belongs to the Special Issue Deep Learning for Computer Vision, 2nd Edition)
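As a rough illustration of such a two-stage pipeline, the sketch below segments the tongue with a U-Net and classifies the masked image with ResNet34. It assumes the third-party segmentation_models_pytorch package for the U-Net; all weights, input sizes, and thresholds are placeholders rather than the paper's settings.

```python
# Hedged sketch of a segment-then-classify pipeline: U-Net isolates the
# tongue, ResNet34 assigns cold / neutral / hot. Untrained models and a
# fixed 256x256 input are illustrative assumptions, not the paper's setup.
import torch
import torchvision.transforms as T
from torchvision.models import resnet34
import segmentation_models_pytorch as smp  # assumed U-Net provider

seg_net = smp.Unet(encoder_name="resnet34", classes=1)  # tongue vs. background
cls_net = resnet34(num_classes=3)                       # cold / neutral / hot
seg_net.eval(); cls_net.eval()

to_tensor = T.Compose([T.ToTensor(), T.Resize((256, 256))])

def classify_constitution(image):
    """image: PIL.Image of the tongue photograph; returns a class index."""
    x = to_tensor(image).unsqueeze(0)
    with torch.no_grad():
        mask = torch.sigmoid(seg_net(x)) > 0.5  # binary tongue mask
        tongue_only = x * mask                  # suppress the background
        logits = cls_net(tongue_only)
    return int(logits.argmax(dim=1))            # 0=cold, 1=neutral, 2=hot
```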
Figure 1. The system flowchart of the proposed scheme.
Figure 2. Experimental imaging of the influence of ambient light.
Figure 3. Labeling of constitution judgments for the same person under different ambient lighting conditions.
Figure 4. Analysis method for evaluating the accuracy of TCM physicians’ constitution diagnoses.
Figure 5. Tongue image segmentation module output results (cyan: cold, gray: neutral, red: hot).
Figure 6. Generation of the tongue-image input dataset for constitution identification.
Figure 7. Output results of the constitution identification model (red: cases with notable features, discussed in Section 4).
Figure 8. Basic composition of the Type-I to Type-IV datasets (red numbers show total training samples; blue numbers show total training images in each type).
Figure 9. Experimental results for the Type-I to Type-IV datasets (red numbers mark the best results).
Figure 10. Results of the 5-fold cross-validation experiment.
Figure 11. Tongue images captured under different ambient lighting conditions.
Figure 12. Correlation coefficient analysis between coverage rate and IoU (crossed red box indicates the range of experimental results).
Figure 13. Mean coverage vs. IoU correlation coefficient analysis results.
Figure 14. The tongue segmentation module captures finer details (red numbers show the test data IDs).
22 pages, 3970 KiB  
Article
A Monocular Vision-Based Safety Monitoring Framework for Offshore Infrastructures Utilizing Grounded SAM
by Sijie Xia, Rufu Qin, Yang Lu, Lianjiang Ma and Zhenghu Liu
J. Mar. Sci. Eng. 2025, 13(2), 340; https://doi.org/10.3390/jmse13020340 - 13 Feb 2025
Viewed by 461
Abstract
As maritime transportation and human activities at sea continue to grow, ensuring the safety of offshore infrastructure has become an increasingly pressing research focus. However, traditional high-precision sensor systems often involve prohibitive costs, and the Automatic Identification System (AIS) is subject to signal loss and data manipulation, highlighting the need for an affordable and reliable supplemental solution. This study introduces a monocular vision-based safety monitoring framework for offshore infrastructures. By combining advanced computer vision techniques such as Grounded SAM and horizon-based self-calibration, the proposed framework achieves accurate vessel detection, instance segmentation, and distance estimation. The model integrates open-vocabulary object detection and zero-shot segmentation, achieving high performance without additional training. To demonstrate the feasibility of the framework in practical applications, we conduct several experiments on public datasets and couple the proposed algorithms with the Leaflet.js and WebRTC libraries to develop a web-based prototype for real-time safety monitoring, providing visualized information and alerts for offshore infrastructure operators in our case study. The experimental results and case study suggest that the framework has notable advantages, including low cost, convenient deployment with minimal maintenance, high detection accuracy, and strong adaptability to diverse application conditions, offering a supplemental solution for research on offshore infrastructure safety. Full article
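The distance estimation step can be illustrated with the standard flat-sea pinhole relation: a waterline point dv pixels below the horizon, seen from a camera at height h with focal length f (in pixels), lies at range d ≈ f·h/dv. The sketch below applies this relation under that assumption; the parameter values are illustrative, not from the paper, and the blow-up as dv → 0 mirrors the reported growth of relative error near the horizon.

```python
# Hedged sketch of horizon-based monocular range estimation under a
# flat-sea, pinhole-camera approximation: d ~= f * h / dv, where dv is
# the pixel offset below the horizon. Values below are illustrative.
import math

def distance_from_horizon(v_target, v_horizon, focal_px, cam_height_m):
    """Estimate range (m) to a waterline point at image row v_target,
    given the horizon row v_horizon, the focal length in pixels, and
    the camera height above sea level in metres."""
    dv = v_target - v_horizon      # pixels below the horizon
    if dv <= 0:
        return math.inf            # at or above the horizon: out of range
    return focal_px * cam_height_m / dv

# Example: 20 m mount, 1500 px focal length, target 30 px below the horizon
print(distance_from_horizon(v_target=530, v_horizon=500,
                            focal_px=1500, cam_height_m=20))  # ~1000 m
```

A one-pixel error in the horizon row shifts dv by one, so the estimate is far more sensitive for distant targets (small dv) than for nearby ones, which is consistent with the error behavior reported in the figures below.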
Figure 1. Structural components of the monocular vision-based safety monitoring framework for offshore infrastructures.
Figure 2. Pinhole camera model.
Figure 3. Schematic diagram of the camera setup and coordinate system definitions. The dashed lines represent the light path through different points in the frame.
Figure 4. Illustration of a video frame and its pixel discreteness.
Figure 5. Examples of the sea horizon line ROI extraction steps: (a,b) original images; (c,d) ocean surface instance boundaries, where the green line represents the completed boundary of the ocean surface instance after correction and the blue line the boundary before completion; (e,f) the ROI of the sea horizon line, shown as a heatmap overlaid on the image.
Figure 6. Illustration of angular and positional features of the sea horizon line.
Figure 7. Digital maritime map for safety analysis.
Figure 8. Segmentation results from Grounded SAM: (a) ground truth masks and (b) predicted masks.
Figure 9. Relative error of distance in the monitoring area. (a) RE map (RE < 0.20): the color gradient represents relative error levels, with blue indicating smaller errors and red indicating larger errors. (b) Relative error vs. pixel index v (u = 1000): the relative error increases as the target moves farther from the camera and closer to the horizon line. These results demonstrate that the relative error of monocular distance estimation grows with increasing distance and proximity to the horizon.
Figure 10. User interface of the web client.
17 pages, 4733 KiB  
Article
Data Cleaning Model of Mine Wind Speed Sensor Based on LOF-GMM and SGAIN
by Jingfeng Ni, Shengya Yang and Yujiao Liu
Appl. Sci. 2025, 15(4), 1801; https://doi.org/10.3390/app15041801 - 10 Feb 2025
Viewed by 409
Abstract
To improve the quality of mine ventilation wind speed sensor data, a data cleaning model based on LOF-GMM and SGAIN is proposed. First, the LOF-GMM algorithm was used to cluster the wind speed sensor data and determine the local outlier factor threshold, enabling automatic identification of abnormal data and recognition of ventilation fault state information. Abnormal data were then removed, leaving blank missing points. Finally, wind speed data from the normal operating state of the ventilation system were used to train the SGAIN model and obtain its optimal parameters, and the trained model was used to fill in the blank points. The results show that the proposed method can effectively detect abnormal wind speed sensor data and identify ventilation system fault information. In terms of imputation performance, the model outperformed other imputation models such as GAIN, RF, and DAE. Although its imputation speed was slightly lower than that of the RF and DAE models, given the high accuracy requirements for mine wind speed data, SGAIN is better suited to the mine ventilation field. Full article
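A minimal sketch of the LOF-GMM detection stage, assuming scikit-learn: each sample is scored with the local outlier factor, then a two-component Gaussian mixture is fitted to the scores so that the normal/anomalous threshold emerges from the clustering rather than being hand-tuned. The hyperparameters below are illustrative, not the paper's settings.

```python
# Hedged sketch of LOF scoring followed by GMM clustering of the scores.
# n_neighbors and the two-component choice are illustrative assumptions.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.mixture import GaussianMixture

def lof_gmm_flags(wind_speed, n_neighbors=20):
    """wind_speed: (n_samples, n_features) array; returns a boolean mask
    that is True where a sample is flagged as anomalous."""
    lof = LocalOutlierFactor(n_neighbors=n_neighbors)
    lof.fit(wind_speed)
    scores = -lof.negative_outlier_factor_   # ~1 normal, >>1 anomalous

    # Two-component GMM on the scores: one mode captures normal data,
    # the other captures outliers; the higher-mean component is flagged.
    gmm = GaussianMixture(n_components=2, random_state=0)
    labels = gmm.fit_predict(scores.reshape(-1, 1))
    outlier_comp = int(np.argmax(gmm.means_.ravel()))
    return labels == outlier_comp
```

Flagged points would then be blanked and imputed by the trained SGAIN model in the second stage.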
Figure 1. SGAIN overall framework structure diagram (adapted from ref. [23]).
Figure 2. Outlier discrimination by the LOF algorithm under different thresholds: (a) threshold of 2, (b) 3, (c) 4, (d) 5–14, (e) 15–29, and (f) 30.
Figure 3. Clustering distribution of sorted local outlier factors for anomalous samples: (a) initial distribution of the local outlier factor; (b) sorted local outlier factors; (c) GMM clustering of the local outlier factors.
Figure 4. Outlier discrimination results of the LOF-GMM algorithm.
Figure 5. Fault sample dataset.
Figure 6. Clustering distribution of sorted local outlier factors for fault samples: (a) initial distribution of the local outlier factor; (b) GMM clustering of the local outlier factors for faulty samples.
Figure 7. Outlier discrimination results of the LOF-GMM algorithm on fault samples. Black points represent normal samples; gradient-colored points represent fault data, with larger anomaly scores indicating greater deviation from normal values.
Figure 8. RMSE and MAE versus the number of iterations at a learning rate of 5 × 10⁻².
Figure 9. RMSE and MAE versus the number of iterations at a learning rate of 1 × 10⁻².
Figure 10. RMSE and MAE versus the learning rate.
Figure 11. Imputation results of each algorithm on noise-contaminated data samples: (a) SGAIN, (b) GAIN, (c) RF, and (d) DAE.
Figure 12. Performance indicators of each algorithm.