Search Results (7,486)

Search Parameters:
Keywords = automatic processing

17 pages, 25137 KiB  
Article
Intelligent Parking Service System Design Based on Digital Twin for Old Residential Areas
by Wanjing Chen, Xiaoxu Wang and Maoqiang Wu
Electronics 2024, 13(23), 4597; https://doi.org/10.3390/electronics13234597 - 21 Nov 2024
Abstract
Due to the increasing number of vehicles and the limited land supply, old residential areas generally face parking difficulties. An intelligent parking service is a critical study direction to address parking difficulty since it can achieve the automatic management of parking processes and planning of parking spaces. However, the existing intelligent parking service systems have shortcomings such as low information quality, low management efficiency, and single service mode. To address the shortcomings, in this paper, we conduct a systematic study on utilizing digital twin (DT) technology to improve the intelligent parking service system. The main contributions are threefold: (1) We analyze the function requirements of the intelligent parking service for old residential areas, such as visual monitoring, refined management, and simulation optimization. (2) We design a DT-based intelligent parking service system by collecting data on physical parking space, constructing the corresponding virtual parking space, and building the user interaction platform. An old residential area in Guangzhou, China is used as a use case to show that the designed parking service system can meet the function requirements. (3) Through mathematical modeling and simulation evaluation, we utilize two typical intelligent parking services including dynamic parking planning and driving safety assessment to demonstrate the effectiveness of the proposed system. This study provides innovative solutions for parking management in old residential areas, utilizing DT technology to not only improve information quality and management efficiency, but also provide a theoretical basis and practical reference for the intelligent transformation of urban parking services. Full article
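The dynamic parking planning service mentioned in the abstract ultimately comes down to matching arriving vehicles to free spaces under some cost such as driving distance. The toy sketch below illustrates one way to do that with the Hungarian algorithm (scipy's linear_sum_assignment); the coordinates, cost function, and space layout are hypothetical and are not the paper's planning or safety-assessment models.

```python
# Toy dynamic parking planning: assign arriving vehicles to free spaces
# by minimizing total travel distance (hypothetical data, not the paper's model).
import numpy as np
from scipy.optimize import linear_sum_assignment

vehicles = np.array([[0.0, 0.0], [5.0, 2.0], [1.0, 9.0]])                 # entry positions (m)
free_spaces = np.array([[2.0, 1.0], [6.0, 3.0], [0.0, 8.0], [4.0, 7.0]])  # space positions (m)

# Cost matrix: Euclidean distance from each vehicle to each free space.
cost = np.linalg.norm(vehicles[:, None, :] - free_spaces[None, :, :], axis=-1)

rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
for v, s in zip(rows, cols):
    print(f"vehicle {v} -> space {s} (distance {cost[v, s]:.1f} m)")
```

In a digital-twin setting, the cost matrix would be refreshed from the virtual parking space as vehicles arrive and spaces free up, but that loop is omitted here.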
22 pages, 10421 KiB  
Article
Distributed High-Speed Videogrammetry for Real-Time 3D Displacement Monitoring of Large Structure on Shaking Table
by Haibo Shi, Peng Chen, Xianglei Liu, Zhonghua Hong, Zhen Ye, Yi Gao, Ziqi Liu and Xiaohua Tong
Remote Sens. 2024, 16(23), 4345; https://doi.org/10.3390/rs16234345 - 21 Nov 2024
Abstract
The accurate and timely acquisition of high-frequency three-dimensional (3D) displacement responses of large structures is crucial for evaluating their condition during seismic excitation on shaking tables. This paper presents a distributed high-speed videogrammetric method designed to rapidly measure the 3D displacement of large shaking table structures at high sampling frequencies. The method uses non-coded circular targets affixed to key points on the structure and an automatic correspondence approach to efficiently estimate the extrinsic parameters of multiple cameras with large fields of view. This process eliminates the need for large calibration boards or manual visual adjustments. A distributed computation and reconstruction strategy, employing the alternating direction method of multipliers, enables the global reconstruction of time-sequenced 3D coordinates for all points of interest across multiple devices simultaneously. The accuracy and efficiency of this method were validated through comparisons with total stations, contact sensors, and conventional approaches in shaking table tests involving large structures with RCBs. Additionally, the proposed method demonstrated a speed increase of at least six times compared to the advanced commercial photogrammetric software. It could acquire 3D displacement responses of large structures at high sampling frequencies in real time without requiring a high-performance computing cluster. Full article
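The distributed reconstruction strategy mentioned above uses the alternating direction method of multipliers (ADMM) to let several devices agree on a global solution. The sketch below shows consensus ADMM on a toy least-squares problem split across "devices"; it only illustrates the update pattern (local solve, averaging, dual update), not the paper's videogrammetric formulation.

```python
# Consensus ADMM sketch: several workers jointly fit x in a least-squares sense,
# mimicking the "local computation + global agreement" pattern (toy data only).
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.normal(size=5)
# Each "device" holds its own observations A_i x = b_i (plus noise).
devices = []
for _ in range(3):
    A = rng.normal(size=(20, 5))
    b = A @ x_true + 0.01 * rng.normal(size=20)
    devices.append((A, b))

rho = 1.0
z = np.zeros(5)                       # global consensus variable
u = [np.zeros(5) for _ in devices]    # scaled dual variables
for _ in range(50):
    # Local step: each device solves its regularized least-squares subproblem.
    xs = [np.linalg.solve(A.T @ A + rho * np.eye(5), A.T @ b + rho * (z - ui))
          for (A, b), ui in zip(devices, u)]
    # Global step: average the local estimates (consensus).
    z = np.mean([x + ui for x, ui in zip(xs, u)], axis=0)
    # Dual step: penalize disagreement with the consensus.
    u = [ui + x - z for x, ui in zip(xs, u)]

print("error:", np.linalg.norm(z - x_true))
```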
Figure 1. Framework of the proposed videogrammetric method.
Figure 2. General distributed videogrammetric network.
Figure 3. Stereo-matching method of circular targets in large FOV (red dots indicate SIFT feature points of stereo images).
Figure 4. Distributed computation and reconstruction strategy.
Figure 5. (a) Real structure model. (b) Camera layout and spatial coordinate system. (c) Measurement point distribution.
Figure 6. Measurement errors between the videogrammetry and the total station at each checkpoint in the X, Y, and Z directions.
Figure 7. Three-dimensional positioning errors of the checkpoint calculated using different methods after each seismic wave load.
Figure 8. Comparison of displacement and acceleration response histories obtained by the proposed videogrammetry and contact sensors at points R3 and R18 subjected to different seismic excitations: (a) Experiment No. 1; (b) Experiment No. 3; (c) Experiment No. 5.
Figure 9. Time consumption and mean reprojection error of different methods for reconstructing the shaking table dataset.
Figure 10. Time consumption of different methods for reconstructing the shaking table dataset.
Figure 11. Three-dimensional displacement response histories of measurement points distributed across the coupling beams during (a) Experiment No. 1, (b) Experiment No. 3, and (c) Experiment No. 5.
13 pages, 2625 KiB  
Article
DeepAT: A Deep Learning Wheat Phenotype Prediction Model Based on Genotype Data
by Jiale Li, Zikang He, Guomin Zhou, Shen Yan and Jianhua Zhang
Agronomy 2024, 14(12), 2756; https://doi.org/10.3390/agronomy14122756 - 21 Nov 2024
Abstract
Genomic selection serves as an effective way for crop genetic breeding, capable of significantly shortening the breeding cycle and improving the accuracy of breeding. Phenotype prediction can help identify genetic variants associated with specific phenotypes. This provides a data-driven selection criterion for genomic selection, making the selection process more efficient and targeted. Deep learning has become an important tool for phenotype prediction due to its abilities in automatic feature learning, nonlinear modeling, and high-dimensional data processing. Current deep learning models have improvements in various aspects, such as predictive performance and computation time, but they still have limitations in capturing the complex relationships between genotype and phenotype, indicating that there is still room for improvement in the accuracy of phenotype prediction. This study innovatively proposes a new method called DeepAT, which mainly includes an input layer, a data feature extraction layer, a feature relationship capture layer, and an output layer. This method can predict wheat yield based on genotype data and has innovations in the following four aspects: (1) The data feature extraction layer of DeepAT can extract representative feature vectors from high-dimensional SNP data. By introducing the ReLU activation function, it enhances the model’s ability to express nonlinear features and accelerates the model’s convergence speed; (2) DeepAT can handle high-dimensional and complex genotype data while retaining as much useful information as possible; (3) The feature relationship capture layer of DeepAT effectively captures the complex relationships between features from low-dimensional features through a self-attention mechanism; (4) Compared to traditional RNN structures, the model training process is more efficient and stable. Using a public wheat dataset from AGT, comparative experiments with three machine learning and six deep learning methods found that DeepAT exhibited better predictive performance than other methods, achieving a prediction accuracy of 99.98%, a mean squared error (MSE) of only 28.93 tones, and a Pearson correlation coefficient close to 1, with yield predicted values closely matching observed values. This method provides a new perspective for deep learning-assisted phenotype prediction and has great potential in smart breeding. Full article
(This article belongs to the Section Precision and Digital Agriculture)
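A minimal PyTorch sketch of the layer structure the abstract describes is given below: a ReLU feature-extraction layer that compresses the high-dimensional SNP input, a self-attention block to capture feature relationships, and a regression output for yield. The dimensions, the tokenisation of the extracted features, and all hyperparameters are assumptions for illustration; this is not the authors' DeepAT implementation.

```python
# Sketch of a DeepAT-style genotype-to-phenotype regressor (illustrative only):
# dense ReLU feature extraction -> self-attention over feature tokens -> yield output.
import torch
import torch.nn as nn

class DeepATSketch(nn.Module):
    def __init__(self, n_snps: int, n_tokens: int = 16, d_model: int = 32):
        super().__init__()
        self.n_tokens, self.d_model = n_tokens, d_model
        # Data feature extraction layer: compress high-dimensional SNPs with ReLU.
        self.extract = nn.Sequential(
            nn.Linear(n_snps, n_tokens * d_model),
            nn.ReLU(),
        )
        # Feature relationship capture layer: self-attention over feature tokens.
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        # Output layer: pooled features -> predicted yield.
        self.head = nn.Linear(d_model, 1)

    def forward(self, snps: torch.Tensor) -> torch.Tensor:
        tokens = self.extract(snps).view(-1, self.n_tokens, self.d_model)
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.head(attended.mean(dim=1)).squeeze(-1)

model = DeepATSketch(n_snps=2000)
genotypes = torch.randint(0, 3, (8, 2000)).float()   # dummy 0/1/2 allele encoding
print(model(genotypes).shape)                         # torch.Size([8])
```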
Figure 1. The proposed DeepAT framework. (a) Dataset sources, (b) genotype data processing, (c) allele encoding, (d) experimental procedure, (e) data feature extraction layer, (f) feature relationship capture layer, (g) DeepAT model architecture.
Figure 2. Training loss variation comparison of DeepAT with the other genotype prediction methods.
Figure 3. Prediction accuracy comparison of DeepAT with the other genotype prediction methods with different evaluation metrics.
Figure 4. Correlation between yield predicted and observed values for DeepAT compared with the other genotype prediction methods.
20 pages, 4529 KiB  
Article
Robust Segmentation of Partial and Imperfect Dental Arches
by Ammar Alsheghri, Ying Zhang, Golriz Hosseinimanesh, Julia Keren, Farida Cheriet and François Guibault
Appl. Sci. 2024, 14(23), 10784; https://doi.org/10.3390/app142310784 - 21 Nov 2024
Viewed by 40
Abstract
Automatic and accurate dental arch segmentation is a fundamental task in computer-aided dentistry. Recent trends in digital dentistry are tackling the design of 3D crowns using artificial intelligence, which initially requires a proper semantic segmentation of teeth from intraoral scans (IOS). In practice, most IOS are partial with as few as three teeth on the scanned arch, and some of them might have preparations, missing, or incomplete teeth. Existing deep learning-based methods (e.g., MeshSegNet, DArch) were proposed for dental arch segmentation, but they are not as efficient for partial arches that include imperfections such as missing teeth and preparations. In this work, we present the ArchSeg framework that can leverage various deep learning models for semantic segmentation of perfect and imperfect dental arches. The Point Transformer V2 deep learning model is used as the backbone for the ArchSeg framework. We present experiments to demonstrate the efficiency of the proposed framework to segment arches with various types of imperfections. Using a raw dental arch scan with two labels indicating the range of present teeth in the arch (i.e., the first and the last teeth), our ArchSeg can segment a standalone dental arch or a pair of aligned master/antagonist arches with more available information (i.e., die mesh). Two generic models are trained for lower and upper arches; they achieve dice similarity coefficient scores of 0.936±0.008 and 0.948±0.007, respectively, on test sets composed of challenging imperfect arches. Our work also highlights the impact of appropriate data pre-processing and post-processing on the final segmentation performance. Our ablation study shows that the segmentation performance of the Point Transformer V2 model integrated in our framework is improved compared with the original standalone model. Full article
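The dice similarity coefficient (DSC) reported above is the standard overlap measure 2|A∩B|/(|A|+|B|) applied per tooth class. A small per-class DSC routine over predicted and ground-truth labels, one label per mesh cell, might look like the following; the label layout is assumed, and this is not the ArchSeg evaluation code.

```python
# Per-class dice similarity coefficient (DSC) for a labeled dental arch,
# given one predicted and one ground-truth label per mesh cell (toy labels).
import numpy as np

def dice_per_class(pred: np.ndarray, truth: np.ndarray, n_classes: int) -> np.ndarray:
    scores = np.full(n_classes, np.nan)
    for c in range(n_classes):
        p, t = pred == c, truth == c
        denom = p.sum() + t.sum()
        if denom > 0:                       # skip classes absent from both
            scores[c] = 2.0 * np.logical_and(p, t).sum() / denom
    return scores

pred  = np.array([0, 0, 1, 1, 2, 2, 2, 3])
truth = np.array([0, 1, 1, 1, 2, 2, 3, 3])
scores = dice_per_class(pred, truth, n_classes=4)
print(scores, "mean DSC:", np.nanmean(scores))
```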
Figure 1. (left) Mapping between FDI and 17-class notation; (right) a labeled full upper arch and a labeled full lower arch visualized with Meshlabler.
Figure 2. Determining teeth centroids of segmented arches. (a) Teeth centroids of upper arches, (b) average upper teeth centroids, (c) teeth centroids of lower arches, (d) average lower teeth centroids. Different colors correspond to different teeth types.
Figure 3. Illustration of the OBB-based registration method for a lower partial arch. (a) Step 1: center mesh, (b) Step 2: compute OBB, (c) Step 3: align OBB to the x-y plane, (d) Step 4: map mesh to GCS.
Figure 4. Illustration of a set of upper arches before (left) and after (right) being registered. (a) Original arches, (b) registered arches.
Figure 5. Illustration of registration failure caused by lingual/buccal ambiguity and its effect on segmentation performance. (a) Failed lingual/buccal detection, (b) prediction with failed registration, (c) lingual/buccal flipping, (d) prediction with corrected registration.
Figure 6. Procedure of missing tooth augmentation. (a) Original ground-truth-labeled arch, (b) arch with tooth extraction, (c) arch after label mapping, (d) final arch with missing teeth.
Figure 7. An example of slicing jaw augmentation with a window size of 6.
Figure 8. PTV2 model in the ArchSeg framework. MLP: multilayer perceptron; GVA: grouped vector attention.
Figure 9. ArchSeg: the proposed segmentation framework.
Figure 10. Distribution of imperfectness of dental arches.
Figure 11. Different partial arches; the first/second rows are upper/lower partial arches of different imperfectness. From left to right: partial only, partial with preparation, partial with missing tooth, and partial with both a preparation and a missing tooth.
Figure 12. Class-wise performance with default framework settings in terms of three evaluation metrics, (a) DSC, (b) SEN, and (c) PPV, on a test set with 30 upper and 30 lower partial arches.
Figure 13. Examples of segmentation results with default settings for PTV2-trained models on 4 lower (top row) and 4 upper (bottom row) partial arches. Each column corresponds to a different type of imperfectness.
Figure 14. Examples of segmentation results with default settings for MeshSegNet-trained models on 4 lower (top row) and 4 upper (bottom row) partial arches. Each column corresponds to a different type of imperfectness.
Figure 15. Issues of DSC evaluation in practice.
Figure 16. Impact of registration for full, semi-partial, and partial arches with fewer than 8 teeth, compared with the published pre-trained MeshSegNet model and with the MeshSegNet and PTV2 models trained on our data and integrated in ArchSeg.
22 pages, 4119 KiB  
Article
Fast Detection of Idler Supports Using Density Histograms in Belt Conveyor Inspection with a Mobile Robot
by Janusz Jakubiak and Jakub Delicat
Appl. Sci. 2024, 14(23), 10774; https://doi.org/10.3390/app142310774 - 21 Nov 2024
Viewed by 140
Abstract
The automatic inspection of belt conveyors gathers increasing attention in the mining industry. The utilization of mobile robots to perform the inspection allows increasing the frequency and precision of inspection data collection. One of the issues that needs to be solved is the location of inspected objects, such as, for example, conveyor idlers in the vicinity of the robot. This paper presents a novel approach to analyze the 3D LIDAR data to detect idler frames in real time with high accuracy. Our method processes a point cloud image to determine positions of the frames relative to the robot. The detection algorithm utilizes density histograms, Euclidean clustering, and a dimension-based classifier. The proposed data flow focuses on separate processing of single scans independently, to minimize the computational load, necessary for real-time performance. The algorithm is verified with data recorded in a raw material processing plant by comparing the results with human-labeled objects. The proposed process is capable of detecting idler frames in a single 3D scan with accuracy above 83%. The average processing time of a single scan is under 22 ms, with a maximum of 75 ms, ensuring that idler frames are detected within the scan acquisition period, allowing continuous operation without delays. These results demonstrate that the algorithm enables the fast and accurate detection and localization of idler frames in real-world scenarios. Full article
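The detection pipeline summarised above (density histograms, Euclidean clustering, dimension-based classification) can be illustrated with a compact point-cloud sketch. Below, a 2D density histogram keeps only points falling in dense bins, DBSCAN stands in for Euclidean clustering, and clusters are accepted or rejected by bounding-box size; the synthetic points and all thresholds are made up for illustration and are not the paper's parameters.

```python
# Sketch of density-histogram filtering + clustering + dimension-based classification
# of idler-support candidates in a LIDAR point cloud (synthetic data, assumed thresholds).
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
support = rng.normal([2.0, 1.0, 0.5], [0.05, 0.3, 0.3], size=(300, 3))  # dense, frame-like blob
clutter = rng.uniform([0, -3, 0], [6, 3, 2], size=(300, 3))             # sparse background
points = np.vstack([support, clutter])

# 1) Density histogram on the horizontal (X, Y) plane; keep points in dense bins.
hist, xe, ye = np.histogram2d(points[:, 0], points[:, 1], bins=30)
ix = np.clip(np.digitize(points[:, 0], xe) - 1, 0, hist.shape[0] - 1)
iy = np.clip(np.digitize(points[:, 1], ye) - 1, 0, hist.shape[1] - 1)
dense = points[hist[ix, iy] >= 10]

# 2) Euclidean-style clustering of the remaining points (DBSCAN as a stand-in).
labels = DBSCAN(eps=0.2, min_samples=10).fit_predict(dense)

# 3) Dimension-based classification: accept clusters whose bounding box looks like a frame.
for lab in set(labels) - {-1}:
    cluster = dense[labels == lab]
    size = cluster.max(axis=0) - cluster.min(axis=0)
    is_frame = size[0] < 0.5 and 0.5 < size[1] < 2.5 and size[2] > 0.5
    print(f"cluster {lab}: extent {np.round(size, 2)} -> frame candidate: {is_frame}")
```

Processing each scan independently in this way keeps the per-scan cost bounded, which is the property the paper exploits for real-time operation.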
Figure 1. Idler supports of typical belt conveyors: (a) [4], (b) [5].
Figure 2. Activity diagram of point cloud processing.
Figure 3. A scheme of the experiment location with marked robot path segments A and B.
Figure 4. Images of the experiment location. (a) Path A. (b) Path B. Green rectangles mark the idlers' supports to be detected.
Figure 5. Mobile platform with the sensor module at the experiment site [4].
Figure 6. Transformation of a point cloud in the preprocessing stage. (a) Original image from the LIDAR sensor; the red rectangle indicates the area with conveyors. (b) Point cloud with distant points clipped; the boxes show the location of idler supports. (c) The results of the RANSAC algorithm, with the ground points marked in red. (d) Aligned point cloud with ground removal.
Figure 7. Two-dimensional histograms for a single scan. (a) Projection to the horizontal plane H_XY with manually marked support locations. (b) Projection to the front plane H_YZ, with elongated objects marked.
Figure 8. The results of the density-based segmentation (the points of interest marked in blue-green). (a) Points from the XY segmentation. (b) Points from the YZ segmentation. (c) The set difference of points from the XY and YZ segmentations.
Figure 9. Clusters representing idler frame candidates.
Figure 10. Examples of detection.
Figure 11. Spatial distribution of detection results in robot local coordinates and unrestricted range. (a) Along Path A. (b) Along Path B.
Figure 12. Detection results in areas with various theoretical numbers of active LIDAR channels. (a) Along Path A. (b) Along Path B.
Figure 13. Spatial distribution of detection results in robot local coordinates in the region limited to 6 or more LIDAR planes. (a) Along Path A. (b) Along Path B.
Figure 14. Detection of the supports over time (X coordinate of the objects). (a) Path A, first row to the left of the robot. (b) Path A, first row to the right of the robot. (c) Path B, first row to the left of the robot.
Figure 15. Duration of processing stages along Path A. (a) For each scan along the trajectory. (b) Box plots of the duration of stages.
Figure 16. Duration of processing stages along Path B. (a) For each scan along the trajectory. (b) Box plots of the duration of stages.
15 pages, 5384 KiB  
Article
Gradual Failure of a Rainfall-Induced Creep-Type Landslide and an Application of Improved Integrated Monitoring System: A Case Study
by Jun Guo, Fanxing Meng and Jingwei Guo
Sensors 2024, 24(22), 7409; https://doi.org/10.3390/s24227409 - 20 Nov 2024
Viewed by 206
Abstract
Landslides cause severe damage to life and property with a wide-ranging impact. Infiltration of rainfall is one of the significant factors leading to landslides. This paper reports on a phase creep landslide caused by long-term rainfall infiltration. A detailed geological survey of the landslide was conducted, and the deformation development pattern and mechanism of the landslide were analyzed in conjunction with climatic characteristics. Furthermore, reinforcement measures specific to the landslide area were proposed. To monitor the stability of the reinforced slope, a Beidou intelligent monitoring and warning system suitable for remote mountainous areas was developed. The system utilizes LoRa Internet of Things (IoT) technology to connect various monitoring components, integrating surface displacement, deep deformation, structural internal forces, and rainfall monitoring devices into a local IoT network. A data processing unit was established on site to achieve preliminary processing and automatic handling of monitoring data. The monitoring results indicate that the reinforced slope has generally stabilized, and the improved intelligent monitoring system has been able to continuously and accurately reflect the real-time working conditions of the slope. Over the two-year monitoring period, 13 early warnings were issued, with more than 90% of the warnings accurately corresponding to actual conditions, significantly improving the accuracy of early warnings. The research findings provide valuable experience and reference for the monitoring and warning of high slopes in mountainous areas. Full article
(This article belongs to the Section Internet of Things)
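The on-site data processing unit mentioned above performs preliminary processing of the monitoring streams before warnings are issued. As a purely illustrative sketch (the paper does not give its warning criteria in the abstract), the snippet below smooths a synthetic surface-displacement series and flags days whose smoothed displacement rate exceeds an arbitrary threshold.

```python
# Illustrative preliminary processing of a surface-displacement series:
# moving-average smoothing plus a simple rate-of-change warning rule
# (synthetic data and an assumed threshold, not the system's actual criteria).
import numpy as np

rng = np.random.default_rng(2)
days = np.arange(120)
displacement = 0.05 * days + 2.0 / (1 + np.exp(-(days - 90)))  # slow creep + late acceleration
displacement += rng.normal(0, 0.05, size=days.size)            # sensor noise (mm)

window = 7
smoothed = np.convolve(displacement, np.ones(window) / window, mode="valid")
rate = np.diff(smoothed)                                        # mm/day after smoothing

threshold = 0.15                                                # hypothetical warning level
warning_days = days[window:][rate > threshold]
print("warning issued on days:", warning_days)
```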
Figure 1. Distribution of the landslide and the threatened area: (a) remote sensing image, (b) threatened building, (c) topography of the landslide area.
Figure 2. (a) Exploratory pit, (b) Quaternary residual layer, (c) the Upper Silurian Gauze Hat Group, (d) the Middle Silurian Luojiaping Group.
Figure 3. Cracks induced by the landslide: (a) cracks in the rear edge of the landslide, (b) cracks in the front edge of the landslide.
Figure 4. Cracks in the side edge of the landslide.
Figure 5. Cracks on wall and ground.
Figure 6. Phenomena in front of the landslide: (a) building inclination, (b) wall swelling.
Figure 7. Site treatment of the landslide.
Figure 8. Algorithm for gyroscope fusion.
Figure 9. Deep displacement.
Figure 10. Surface displacement.
Figure 11. Stress of steel in pile.
Figure 12. Monitored precipitation.
29 pages, 8399 KiB  
Article
Automatic Modulation Recognition Based on Multimodal Information Processing: A New Approach and Application
by Wenna Zhang, Kailiang Xue, Aiqin Yao and Yunqiang Sun
Electronics 2024, 13(22), 4568; https://doi.org/10.3390/electronics13224568 - 20 Nov 2024
Viewed by 274
Abstract
Automatic modulation recognition (AMR) has wide applications in the fields of wireless communications, radar systems, and intelligent sensor networks. The existing deep learning-based modulation recognition models often focus on temporal features while overlooking the interrelations and spatio-temporal relationships among different types of signals. To overcome these limitations, a hybrid neural network based on a multimodal parallel structure, called the multimodal parallel hybrid neural network (MPHNN), is proposed to improve the recognition accuracy. The algorithm first preprocesses the data by parallelly processing the multimodal forms of the modulated signals before inputting them into the network. Subsequently, by combining Convolutional Neural Networks (CNN) and Bidirectional Gated Recurrent Unit (Bi-GRU) models, the CNN is used to extract spatial features of the received signals, while the Bi-GRU transmits previous state information of the time series to the current state to capture temporal features. Finally, the Convolutional Block Attention Module (CBAM) and Multi-Head Self-Attention (MHSA) are introduced as two attention mechanisms to handle the temporal and spatial correlations of the signals through an attention fusion mechanism, achieving the calibration of the signal feature maps. The effectiveness of this method is validated using various datasets, with the experimental results demonstrating that the proposed approach can fully utilize the information of multimodal signals. The experimental results show that the recognition accuracy of MPHNN on multiple datasets reaches 93.1%, and it has lower computational complexity and fewer parameters than other models. Full article
(This article belongs to the Section Artificial Intelligence)
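The parallel multimodal structure described above can be sketched as two branches: a 1D CNN over the I/Q samples for spatial features and a bidirectional GRU over the same sequence for temporal features, whose pooled outputs are fused and classified. The CBAM and multi-head self-attention fusion stages are omitted for brevity, and the 2×128 input shape and 11 classes follow the common RadioML convention; this is an illustration, not the authors' MPHNN.

```python
# Simplified two-branch sketch in the spirit of MPHNN: CNN for spatial features,
# Bi-GRU for temporal features, feature concatenation, linear classifier.
# (CBAM/MHSA attention modules omitted; assumes 2x128 I/Q frames, 11 classes.)
import torch
import torch.nn as nn

class TwoBranchAMR(nn.Module):
    def __init__(self, n_classes: int = 11):
        super().__init__()
        self.cnn = nn.Sequential(                         # spatial branch on (B, 2, 128)
            nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.gru = nn.GRU(input_size=2, hidden_size=64,   # temporal branch on (B, 128, 2)
                          batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(64 + 128, n_classes)

    def forward(self, iq: torch.Tensor) -> torch.Tensor:
        spatial = self.cnn(iq).squeeze(-1)                # (B, 64)
        temporal, _ = self.gru(iq.transpose(1, 2))        # (B, 128, 128)
        fused = torch.cat([spatial, temporal[:, -1, :]], dim=1)
        return self.classifier(fused)

model = TwoBranchAMR()
frames = torch.randn(4, 2, 128)                           # batch of I/Q frames
print(model(frames).shape)                                # torch.Size([4, 11])
```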
Figure 1. Visualization of instantaneous amplitude, instantaneous phase, instantaneous frequency, and IQ time-domain plots for 11 modulation modes.
Figure 2. Overall architecture of the MPHNN.
Figure 3. Structure of CBAM.
Figure 4. Working mechanism of the CBAM.
Figure 5. Structure of the Multi-Head Self-Attention (MHSA) module.
Figure 6. Scaled dot-product attention.
Figure 7. Structure of the attention fusion mechanism.
Figure 8. Bi-GRU information flow transfer diagram.
Figure 9. Changes during training: (a) accuracy and (b) loss values.
Figure 10. Recognition accuracy of the dataset RadioML2016.10A on several models.
Figure 11. Confusion matrix at an SNR of 18 dB for (a) 1D-CNN, (b) 2D-CNN, (c) CLDNN, (d) DenseNet, (e) LSTM, (f) ResNet, and (g) the proposed model.
Figure 12. Confusion matrix at full SNR for (a) 1D-CNN, (b) 2D-CNN, (c) CLDNN, (d) DenseNet, (e) LSTM, (f) ResNet, and (g) the proposed model.
Figure 13. Recognition accuracy for each modulated signal in the range of −20 to 18 dB for all seven methods: (a) 1D-CNN, (b) 2D-CNN, (c) CLDNN, (d) DenseNet, (e) LSTM, (f) ResNet, and (g) the proposed model.
Figure 14. Validation on other datasets: (a) RadioML2016.10B and (b) RadioML2018.01A-sample.
13 pages, 275 KiB  
Article
Text-Mining-Based Non-Face-to-Face Counseling Data Classification and Management System
by Woncheol Park, Seungmin Oh and Seonghyun Park
Appl. Sci. 2024, 14(22), 10747; https://doi.org/10.3390/app142210747 - 20 Nov 2024
Viewed by 268
Abstract
This study proposes a system for analyzing non-face-to-face counseling data using text-mining techniques to assess psychological states and automatically classify them into predefined categories. The system addresses the challenge of understanding internal issues that may be difficult to express in traditional face-to-face counseling. To solve this problem, a counseling management system based on text mining was developed. In the experiment, we combined TF-IDF and Word Embedding techniques to process and classify client counseling data into five major categories: school, friends, personality, appearance, and family. The classification performance achieved high accuracy and F1-Score, demonstrating the system’s effectiveness in understanding and categorizing clients’ emotions and psychological states. This system offers a structured approach to analyzing counseling data, providing counselors with a foundation for recommending personalized counseling treatments. The findings of this study suggest that in-depth analysis and classification of counseling data can enhance the quality of counseling, even in non-face-to-face environments, offering more efficient and tailored solutions. Full article
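The classification step described above combines TF-IDF with word-embedding features before assigning one of the five categories. A minimal TF-IDF-only sketch with scikit-learn is shown below; the embedding branch is omitted, and the example sentences are invented stand-ins for the real counseling transcripts.

```python
# Minimal TF-IDF text classification into the five counseling categories
# (toy English sentences replace the actual counseling corpus; the word-embedding
#  features used in the paper are not included in this sketch).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I am worried about my grades and my teacher",
    "My best friend stopped talking to me",
    "I feel too shy to speak in groups",
    "I do not like how I look in photos",
    "My parents argue at home every night",
]
labels = ["school", "friends", "personality", "appearance", "family"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["my teacher gave me too much homework"]))
```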
Figure 1. System configuration diagram.
Figure 2. Process of the proposed system.
24 pages, 21738 KiB  
Article
New Method to Correct Vegetation Bias in a Copernicus Digital Elevation Model to Improve Flow Path Delineation
by Gabriel Thomé Brochado and Camilo Daleles Rennó
Remote Sens. 2024, 16(22), 4332; https://doi.org/10.3390/rs16224332 - 20 Nov 2024
Viewed by 266
Abstract
Digital elevation models (DEM) are widely used in many hydrologic applications, providing key information about the topography, which is a major driver of water flow in a landscape. Several open access DEMs with near-global coverage are currently available, however, they represent the elevation of the earth’s surface including all its elements, such as vegetation cover and buildings. These features introduce a positive elevation bias that can skew the water flow paths, impacting the extraction of hydrological features and the accuracy of hydrodynamic models. Many attempts have been made to reduce the effects of this bias over the years, leading to the generation of improved datasets based on the original global DEMs, such as MERIT DEM and, more recently, FABDEM. However, even after these corrections, the remaining bias still affects flow path delineation in a significant way. Aiming to improve on this aspect, a new vegetation bias correction method is proposed in this work. The method consists of subtracting from the Copernicus DEM elevations their respective forest height but adjusted by correction factors to compensate for the partial penetration of the SAR pulses into the vegetation cover during the Copernicus DEM acquisition process. These factors were calculated by a new approach where the slope around the pixels at the borders of each vegetation patch were analyzed. The forest height was obtained from a global dataset developed for the year 2019. Moreover, to avoid temporal vegetation cover mismatch between the DEM and the forest height dataset, we introduced a process where the latter is automatically adjusted to best match the Copernicus acquisition year. The correction method was applied for regions with different forest cover percentages and topographic characteristics, and the result was compared to the original Copernicus DEM and FABDEM, which was used as a benchmark for vegetation bias correction. The comparison method was hydrology-based, using drainage networks obtained from topographic maps as reference. The new corrected DEM showed significant improvements over both the Copernicus DEM and FABDEM in all tested scenarios. Moreover, a qualitative comparison of these DEMs was also performed through exhaustive visual analysis, corroborating these findings. These results suggest that the use of this new vegetation bias correction method has the potential to improve DEM-based hydrological applications worldwide. Full article
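At its core, the correction described above subtracts the forest height from the Copernicus elevations after scaling it by a penetration-compensation factor. The sketch below applies that idea to small synthetic rasters with a single assumed global factor; in the paper the factors are derived from slope analysis at vegetation-patch borders, which is not reproduced here.

```python
# Sketch of vegetation-bias removal: corrected = DEM - k * forest_height over vegetated
# pixels, with one assumed penetration factor k (the paper estimates correction factors
# from slopes at vegetation-patch borders; synthetic rasters used here).
import numpy as np

rng = np.random.default_rng(3)
ground = np.cumsum(rng.normal(0, 0.2, size=(50, 50)), axis=1) + 100.0  # synthetic terrain (m)
forest_height = np.zeros((50, 50))
forest_height[10:40, 15:45] = 25.0                                     # a 25 m forest patch

k = 0.6                      # assumed fraction of canopy height present in the biased DEM
copernicus_like = ground + k * forest_height                           # biased surface model
corrected = copernicus_like - k * forest_height                        # remove scaled canopy

print("max error before:", np.abs(copernicus_like - ground).max())
print("max error after: ", np.abs(corrected - ground).max())
```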
Figure 1. Position of the study areas overlaid on a natural color Sentinel-2 cloud-free composite of the year 2020 of South America.
Figure 2. Sentinel-2 cloud-free composite of the year 2020 and color representation of Copernicus DEM elevations of the study areas. The numbers in the top left corner of each panel refer to the study area depicted in it.
Figure 3. Comparison between the forest height datasets. The figure presents a natural color Sentinel-2 cloud-free composite of the year 2020 of the entire Area 1, with the subset area marked by the red rectangle (top left); the Sentinel-2 image of the subset area (top center); a grayscale representation of Copernicus DEM elevations in the subset area (top right); and the Sentinel-2 composite overlaid with a color representation of the Potapov et al. [44] (bottom left), Lang et al. [45] (bottom center), and Tolan et al. [46] (bottom right) forest height datasets, where heights equal to zero are transparent.
Figure 4. Effect of forest height overestimation and canopy elevation underestimation on the estimated ground elevation. The illustration represents the difference (Δh1) between the estimated and actual forest heights, the original and corrected DEM elevation profiles (dotted lines), the differences between the actual and estimated canopy elevations (Δh2), and ground elevations (Δh1 + Δh2).
Figure 5. Copernicus DEM vegetation bias correction workflow.
Figure 6. Stream flow paths comparison workflow.
Figure 7. Flow path displacement area calculation. The illustration shows the drainage network overlaid with an initial point from which the reference and DEM-extracted flow paths are traced until they reach the circle with radius r centered around the point. The flow path displacement area, highlighted in gray, is the sum of the areas located between these lines.
Figure 8. Example of flow path selection. The panels present the reference drainage network for Area 1 (left), the set of flow paths extracted from it using a 2000 m radius (center), and the flow paths selected from the latter (right). The lines are shown in yellow in all panels, with the Sentinel-2 composite of the year 2020 in the background.
Figure 9. Comparison between DEMs and vertical profiles in Area 1. The figure presents color representations of Copernicus, FABDEM, and the new corrected DEM elevation data over the study area (top); its natural color Sentinel-2 cloud-free composite of the year 2020, overlaid with the elevation profile lines identified by their respective numbers (bottom left); and charts showing the observed DEM elevations along the profile lines, with the background colored gray in areas covered by vegetation according to the adjusted forest height obtained for the area (bottom right).
Figure 10. Example of a region within Area 1 where the blurring effect was identified. The figure presents a natural color Sentinel-2 cloud-free composite of the year 2020 of the study area, overlaid with a red rectangle highlighting the region featured in the other panels (top left), and color representations of the elevations of Copernicus DEM (top right), FABDEM (bottom left), and the new corrected DEM (bottom right), showing the different levels of degradation of the finer topographic features visible in the original DEM.
Figure 11. Comparison between DEMs and vertical profiles in Area 2 (same panel layout as Figure 9).
Figure 12. Comparison between DEMs and vertical profiles in Area 3 (same panel layout as Figure 9).
Figure 13. Example of a region within Area 3 where the blurring effect was identified (same panel layout as Figure 10).
Figure 14. Comparison between DEMs and vertical profiles in Area 4 (same panel layout as Figure 9).
Figure 15. Comparison of drainage networks extracted from the DEMs. The figure is composed of the natural color Sentinel-2 cloud-free composite of the year 2020 of the study areas, overlaid with red rectangles highlighting the regions featured in the panels below (first row), and Sentinel-2 composites of the highlighted regions, overlaid with the reference drainage lines and the ones extracted from Copernicus DEM, FABDEM, and the new corrected DEM, all in yellow and placed side by side, organized in rows per study area.
58 pages, 18760 KiB  
Article
Research on the Response of Urban Sustainable Development Standards to the United Nations Sustainable Development Goals Based on Knowledge Graphs
by Maomao Yan, Feng Yang, Huiyao Pan and Chao Li
Land 2024, 13(11), 1962; https://doi.org/10.3390/land13111962 - 20 Nov 2024
Viewed by 221
Abstract
In the new era of the vigorous development of digitalization and intelligence, digital technology has widely penetrated various fields. International authoritative standardization bodies, such as the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), proposed a timely new standard concept called Standards Machine Applicable, Readable, and Transferable (SMART) in order to meet this development trend. Its core feature is that the standard can be machine-readable, usable, understandable, and resolvable without human labor so as to achieve the goals of standard formulation, promotion, publicity, and implementation more effectively. Simultaneously, China’s standardization industry is responding to the strategic deployment of “new quality productivity” by actively promoting the digital development of standards and establishing standard information databases, standard formulation management systems, etc., which provide data support and a platform basis for applying new technologies. Advanced technologies such as big data, artificial intelligence, blockchain, and knowledge graphs can be combined with standardization to improve the efficiency of standard development, application accuracy, and implementation effects. To align with these trends, this study focuses on analyzing the responses of national and international standards in the field of urban sustainable development to the United Nations Sustainable Development Goals (UN-SDGs). This study proposes an innovative approach involving the application of knowledge graph technology to the standardization of urban sustainable development and establishing a response correlation between the indicator library for cities’ sustainable development (ILCSD) and SDGs. It also provides additional functions, such as the intelligent extraction of cities’ sustainable characteristic evaluation indicators and aided decision analysis, which greatly enhance the practicability and efficiency of the ILCSD as a technical tool. Based on knowledge graphs, this study analyzes the different responses of important standards in the field of urban sustainable development to the 17 SDGs, accurately identifies weak trends and gaps in standards, and provides a basis for improving the standardization system of urban sustainable development. Simultaneously, by comparing national and international standards and technologies, this study promotes the mutual recognition of standards, which can help China’s urban sustainable development work align with international standards. In addition, the process of establishing and maintaining knowledge graphs facilitates the continuous adoption of new standards through which the indicator library is automatically updated. Finally, in this study, we propose several inspirations for the standardization of urban sustainable development in China, such as an optimization standard system of benchmarking SDGs and a localization application of the original SDG indicators. Full article
(This article belongs to the Section Land Environmental and Policy Impact Assessment)
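The response relationships between ILCSD indicators and the SDGs are stored as a Neo4j knowledge graph populated from Python (cf. Figure 4 below). The fragment below shows how such nodes and a "corresponding_SDG" relationship could be merged with the official Neo4j Python driver; the connection details, label spellings, and the example goal/indicator names are placeholders, not the study's actual data.

```python
# Sketch: merge an SDG goal, an ILCSD indicator, and their response relationship into Neo4j.
# The URI, credentials, label names, and example values below are placeholders.
from neo4j import GraphDatabase

URI, AUTH = "bolt://localhost:7687", ("neo4j", "password")

CYPHER = """
MERGE (g:SDG_goal {name: $goal})
MERGE (i:ILCSD_indicator {name: $indicator})
MERGE (g)-[:corresponding_SDG]->(i)
"""

def add_response(goal: str, indicator: str) -> None:
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        with driver.session() as session:
            session.run(CYPHER, goal=goal, indicator=indicator)

if __name__ == "__main__":
    add_response("Goal 11: Sustainable cities and communities",
                 "Share of public transport trips")   # hypothetical example indicator
```

Because MERGE is idempotent, re-running the import when a standard or indicator is updated simply refreshes the graph, which is how the indicator library can stay automatically up to date.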
Show Figures

Figure 1

Figure 1
<p>The technical roadmap of this study.</p>
Full article ">Figure 2
<p>The framework model for urban standardization. The X-axis includes different stages throughout the whole life cycle (Planning, Construction, O&amp;M, Transformation, and Recycling). The Y-axis refers to different aspects of sustainable cities, and the Z-axis indicates the different stages of the PDCA management process (Plan, Do, Check, and Act).</p>
Full article ">Figure 3
<p>A schematic diagram of the entities and relationships. Situation A shows that these ILCSD indicators can map to all levels of SDG goals, targets and the original SDG indicators. Situation B shows that these ILCSD indicators can only map to the SDG goals and targets. Situation C shows that these ILCSD indicators can only map to the SDG goals. Situation D indicates that there is no relationship yet due to insufficient standards or indicators.</p>
Full article ">Figure 4
<p>Flow chart used for automatic creation of a knowledge graph based on Python.</p>
Full article ">Figure 5
<p>The entities and their relationship types in Neo4j.</p>
Full article ">Figure 6
<p>The number of individual entities in Neo4j.</p>
Full article ">Figure 7
<p>(<b>a</b>) Entity type1: SDG of 9 entity types in Neo4j; (<b>b</b>) Entity type2: SDG goal of 9 entity types in Neo4j; (<b>c</b>) Entity type3: SDG target of 9 entity types in Neo4j; (<b>d</b>) Entity type4: SDG indicator of 9 entity types in Neo4j; (<b>e</b>) Entity type5: ILCSD indicator of 9 entity types in Neo4j; (<b>f</b>) Entity type6: National Standard of 9 entity types in Neo4j; (<b>g</b>) Entity type7: National Standard indicator of 9 entity types in Neo4j; (<b>h</b>) Entity type8: International standard of 9 entity types in Neo4j; (<b>i</b>) Entity type9: International standard indicator of 9 entity types in Neo4j.</p>
Full article ">Figure 7 Cont.
<p>(<b>a</b>) Entity type1: SDG of 9 entity types in Neo4j; (<b>b</b>) Entity type2: SDG goal of 9 entity types in Neo4j; (<b>c</b>) Entity type3: SDG target of 9 entity types in Neo4j; (<b>d</b>) Entity type4: SDG indicator of 9 entity types in Neo4j; (<b>e</b>) Entity type5: ILCSD indicator of 9 entity types in Neo4j; (<b>f</b>) Entity type6: National Standard of 9 entity types in Neo4j; (<b>g</b>) Entity type7: National Standard indicator of 9 entity types in Neo4j; (<b>h</b>) Entity type8: International standard of 9 entity types in Neo4j; (<b>i</b>) Entity type9: International standard indicator of 9 entity types in Neo4j.</p>
Full article ">Figure 7 Cont.
<p>(<b>a</b>) Entity type1: SDG of 9 entity types in Neo4j; (<b>b</b>) Entity type2: SDG goal of 9 entity types in Neo4j; (<b>c</b>) Entity type3: SDG target of 9 entity types in Neo4j; (<b>d</b>) Entity type4: SDG indicator of 9 entity types in Neo4j; (<b>e</b>) Entity type5: ILCSD indicator of 9 entity types in Neo4j; (<b>f</b>) Entity type6: National Standard of 9 entity types in Neo4j; (<b>g</b>) Entity type7: National Standard indicator of 9 entity types in Neo4j; (<b>h</b>) Entity type8: International standard of 9 entity types in Neo4j; (<b>i</b>) Entity type9: International standard indicator of 9 entity types in Neo4j.</p>
Full article ">Figure 7 Cont.
<p>(<b>a</b>) Entity type1: SDG of 9 entity types in Neo4j; (<b>b</b>) Entity type2: SDG goal of 9 entity types in Neo4j; (<b>c</b>) Entity type3: SDG target of 9 entity types in Neo4j; (<b>d</b>) Entity type4: SDG indicator of 9 entity types in Neo4j; (<b>e</b>) Entity type5: ILCSD indicator of 9 entity types in Neo4j; (<b>f</b>) Entity type6: National Standard of 9 entity types in Neo4j; (<b>g</b>) Entity type7: National Standard indicator of 9 entity types in Neo4j; (<b>h</b>) Entity type8: International standard of 9 entity types in Neo4j; (<b>i</b>) Entity type9: International standard indicator of 9 entity types in Neo4j.</p>
Full article ">Figure 7 Cont.
<p>(<b>a</b>) Entity type1: SDG of 9 entity types in Neo4j; (<b>b</b>) Entity type2: SDG goal of 9 entity types in Neo4j; (<b>c</b>) Entity type3: SDG target of 9 entity types in Neo4j; (<b>d</b>) Entity type4: SDG indicator of 9 entity types in Neo4j; (<b>e</b>) Entity type5: ILCSD indicator of 9 entity types in Neo4j; (<b>f</b>) Entity type6: National Standard of 9 entity types in Neo4j; (<b>g</b>) Entity type7: National Standard indicator of 9 entity types in Neo4j; (<b>h</b>) Entity type8: International standard of 9 entity types in Neo4j; (<b>i</b>) Entity type9: International standard indicator of 9 entity types in Neo4j.</p>
Full article ">Figure 7 Cont.
<p>(<b>a</b>) Entity type1: SDG of 9 entity types in Neo4j; (<b>b</b>) Entity type2: SDG goal of 9 entity types in Neo4j; (<b>c</b>) Entity type3: SDG target of 9 entity types in Neo4j; (<b>d</b>) Entity type4: SDG indicator of 9 entity types in Neo4j; (<b>e</b>) Entity type5: ILCSD indicator of 9 entity types in Neo4j; (<b>f</b>) Entity type6: National Standard of 9 entity types in Neo4j; (<b>g</b>) Entity type7: National Standard indicator of 9 entity types in Neo4j; (<b>h</b>) Entity type8: International standard of 9 entity types in Neo4j; (<b>i</b>) Entity type9: International standard indicator of 9 entity types in Neo4j.</p>
Full article ">Figure 7 Cont.
<p>(<b>a</b>) Entity type1: SDG of 9 entity types in Neo4j; (<b>b</b>) Entity type2: SDG goal of 9 entity types in Neo4j; (<b>c</b>) Entity type3: SDG target of 9 entity types in Neo4j; (<b>d</b>) Entity type4: SDG indicator of 9 entity types in Neo4j; (<b>e</b>) Entity type5: ILCSD indicator of 9 entity types in Neo4j; (<b>f</b>) Entity type6: National Standard of 9 entity types in Neo4j; (<b>g</b>) Entity type7: National Standard indicator of 9 entity types in Neo4j; (<b>h</b>) Entity type8: International standard of 9 entity types in Neo4j; (<b>i</b>) Entity type9: International standard indicator of 9 entity types in Neo4j.</p>
Full article ">Figure 8
<p>The number of various relationships in Neo4j.</p>
Full article ">Figure 9
<p>(<b>a</b>) Different relationships between standards and indicators—relationship type1: SDG—divide into_SDG--&gt;SDG goal; (<b>b</b>) different relationships between standards and indicators—relationship type2: SDG goal—subdivided into_SDG--&gt;SDG target; (<b>c</b>) different relationships between standards and indicators—relationship type3: SDG target—include_SDG--&gt;SDG indicator; (<b>d</b>) different relationships between standards and indicators—relationship type4: SDG goal—corresponding_SDG--&gt;ILCSD indicator; (<b>e</b>) different relationships between standards and indicators—relationship type5: SDG target—corresponding_SDG--&gt;ILCSD indicator; (<b>f</b>) different relationships between standards and indicators—relationship type6: SDG indicator—corresponding_SDG--&gt;ILCSD indicator; (<b>g</b>) different relationships between standards and indicators—relationship type7: National Standard—include_GB--&gt;National Standard indicator; (<b>h</b>) different relationships between standards and indicators—relationship type8: National Standard indicator—corresponding_GB--&gt;ILCSD indicator; (<b>i</b>) different relationships between standards and indicators—relationship type9: International standard—include_ISO--&gt;International standard indicator; (<b>j</b>) different relationships between standards and indicators—relationship type10: International standard indicator—corresponding_ISO--&gt;ILCSD indicator.</p>
Figure 9 Cont.">
Full article ">Figure 10
<p>Knowledge graph visualizations of the entities and relationships in response to each of the 17 SDG goals; panels (<b>a</b>)–(<b>q</b>) correspond to Goals 1–17, respectively.</p>
Figure 10 Cont.">
Full article ">
17 pages, 7503 KiB  
Article
Integrating Historical Learning and Multi-View Attention with Hierarchical Feature Fusion for Robotic Manipulation
by Gaoxiong Lu, Zeyu Yan, Jianing Luo and Wei Li
Biomimetics 2024, 9(11), 712; https://doi.org/10.3390/biomimetics9110712 - 20 Nov 2024
Viewed by 241
Abstract
Humans typically make decisions based on past experiences and observations, while in the field of robotic manipulation, the robot’s action prediction often relies solely on current observations, which tends to make robots overlook environmental changes or become ineffective when current observations are suboptimal. To address this pivotal challenge in robotics, inspired by human cognitive processes, we propose a method that integrates historical learning and multi-view attention to improve the performance of robotic manipulation. Based on a spatio-temporal attention mechanism, our method not only combines observations from current and past steps but also integrates historical actions to better perceive changes in robots’ behaviours and their impacts on the environment. We also employ a mutual information-based multi-view attention module to automatically focus on valuable perspectives, thereby incorporating more effective information for decision-making. Furthermore, inspired by the human visual system, which processes both global context and local texture details, we devised a method that merges semantic and texture features, aiding robots in understanding the task and enhancing their capability to handle fine-grained tasks. Extensive experiments in RLBench and real-world scenarios demonstrate that our method effectively handles various tasks and exhibits notable robustness and adaptability. Full article
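The abstract describes fusing current observations with past observations and past actions through spatio-temporal attention. The PyTorch sketch below is a generic illustration of that idea rather than the authors' architecture: embeddings of the last few observations and actions are concatenated into a token sequence, passed through multi-head self-attention, and the current-step token is used to predict the next action. All dimensions and module names are assumptions.

```python
import torch
import torch.nn as nn

class HistoryAttention(nn.Module):
    """Toy spatio-temporal fusion of past observations and actions."""

    def __init__(self, obs_dim=256, act_dim=8, d_model=128, n_heads=4, history=4):
        super().__init__()
        self.obs_proj = nn.Linear(obs_dim, d_model)
        self.act_proj = nn.Linear(act_dim, d_model)
        # Learnable positional embedding over the temporal token axis.
        self.pos = nn.Parameter(torch.zeros(2 * history + 1, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, act_dim)

    def forward(self, past_obs, past_act, cur_obs):
        # past_obs: (B, H, obs_dim), past_act: (B, H, act_dim), cur_obs: (B, obs_dim)
        tokens = torch.cat(
            [self.obs_proj(past_obs), self.act_proj(past_act),
             self.obs_proj(cur_obs).unsqueeze(1)], dim=1) + self.pos
        fused, _ = self.attn(tokens, tokens, tokens)
        # Predict the next action from the current-step token (last position).
        return self.head(fused[:, -1])

model = HistoryAttention()
action = model(torch.randn(2, 4, 256), torch.randn(2, 4, 8), torch.randn(2, 256))
print(action.shape)  # torch.Size([2, 8])
```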
Show Figures

Figure 1
<p>Part (<b>a</b>) shows the trajectory processing modules. Demonstrations are manually collected using a gamepad, and macro steps are then extracted based on keypoint analysis and genetic algorithms. Part (<b>b</b>) extracts hierarchical features from the visual inputs and fuses them by transfusion. The fused visual features are then processed in part (<b>c</b>), where mutual information is used to reduce visual feature redundancy and to calculate the weight of each viewpoint; the multi-view information is then weighted and fused. In part (<b>d</b>), the fused multi-view features are passed through a spatio-temporal attention network, which then outputs the actions for the robot to execute. The output actions are composed of the 3D pose of the end-effector, positional offsets, and the gripper state.</p>
Full article ">Figure 2
<p>The yellow curve represents the original trajectory, with blue points indicating the original trajectory points. The green points are key points identified by detecting moments when the robotic arm pauses or the gripper state changes. The orange point is a key point selected through the genetic algorithm, which further optimizes the key points to minimize the trajectory error.</p>
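According to the caption of Figure 2, candidate key points are taken where the robotic arm pauses or the gripper state changes, and a genetic algorithm then refines the selection. A minimal numpy sketch of that first filtering step might look as follows; the velocity threshold and the data layout are assumptions, and the genetic-algorithm refinement is omitted.

```python
import numpy as np

def extract_keypoints(positions, gripper, vel_thresh=1e-3):
    """Return indices where the end-effector pauses or the gripper toggles.

    positions: (T, 3) array of end-effector xyz per step.
    gripper:   (T,)  array of binary gripper states (0 open, 1 closed).
    """
    vel = np.linalg.norm(np.diff(positions, axis=0), axis=1)   # per-step motion
    paused = np.where(vel < vel_thresh)[0] + 1                 # near-zero motion
    toggled = np.where(np.diff(gripper) != 0)[0] + 1           # gripper change
    keys = sorted(set(paused) | set(toggled) | {0, len(positions) - 1})
    return np.array(keys)

# Tiny synthetic trajectory: the arm pauses at step 3 and closes the gripper at step 5.
pos = np.cumsum(np.array([[0.01, 0, 0]] * 3 + [[0, 0, 0]] + [[0.01, 0, 0]] * 4), axis=0)
grip = np.array([0, 0, 0, 0, 0, 1, 1, 1])
print(extract_keypoints(pos, grip))  # [0 3 5 7]
```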
Full article ">Figure 3
<p>RGB images are processed by both PANet and CLIP models to obtain local texture features (<math display="inline"><semantics> <msub> <mi mathvariant="italic">FR</mi> <mi>l</mi> </msub> </semantics></math>) and global semantic features (<math display="inline"><semantics> <msub> <mi mathvariant="italic">FR</mi> <mi>g</mi> </msub> </semantics></math>). These features are combined with the 2D projection of the end-effector pose to form the RGB-A feature (<math display="inline"><semantics> <mi mathvariant="italic">FR</mi> </semantics></math>). Simultaneously, multi-view point cloud data is processed using the Set Abstraction (SA) module of PointNet++ to extract point cloud features (<math display="inline"><semantics> <mi mathvariant="italic">FP</mi> </semantics></math>). The fusion of these visual and point cloud features enhances the robot’s ability to interact with complex environments.</p>
Full article ">Figure 4
<p>The double-head arrow connects the viewpoints before (red box) and after (green box) the view shift. In the task inserting peg, the perspective shifts from the left shoulder view to the front view at the 2nd step as the robot arm blocks the target object from the left shoulder view. In the task item in drawer, the multi-view attention module considers the front viewpoint more valuable at the 4th and 5th steps. In the task stacking blocks, there are no changes in viewpoint.</p>
Full article ">Figure 5
<p>During the testing phase, experiments are conducted on the picking and lifting task with colors and shapes that were not presented during the training phase.</p>
Full article ">Figure 6
<p>We designed two viewpoints using front and wrist cameras. The viewpoint marked with a green star in the diagram indicates the viewpoint that contains more valuable information. Additionally, the action prediction at each step is based on the observations at the current step, as well as the observations and actions from the past several steps.</p>
Full article ">
14 pages, 26460 KiB  
Article
TF-BAPred: A Universal Bioactive Peptide Predictor Integrating Multiple Feature Representations
by Zhenming Wu, Xiaoyu Guo, Yangyang Sun, Xiaoquan Su and Jin Zhao
Mathematics 2024, 12(22), 3618; https://doi.org/10.3390/math12223618 - 20 Nov 2024
Viewed by 195
Abstract
Bioactive peptides play essential roles in various biological processes and hold significant therapeutic potential. However, predicting the functions of these peptides is challenging due to their diversity and complexity. Here, we develop TF-BAPred, a framework for universal peptide prediction incorporating multiple feature representations. TF-BAPred feeds the original peptide sequences into three parallel modules: a novel feature proposed in this study, called FVG, extracts the global features of each peptide sequence; an automatic feature recognition module based on a temporal convolutional network extracts the temporal features; and a third module integrates multiple widely used features such as AAC, DPC, BPF, RSM, and CKSAAGP. In particular, FVG constructs a fixed-size vector graph to represent the global pattern by capturing the topological structure between amino acids. We evaluated the performance of TF-BAPred and other peptide predictors on different types of peptides, including anticancer peptides, antimicrobial peptides, and cell-penetrating peptides. The benchmarking tests demonstrate that TF-BAPred displays strong generalization and robustness in predicting various types of peptide sequences, highlighting its potential for applications in biomedical engineering. Full article
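Of the handcrafted descriptors listed in the abstract, AAC (amino acid composition) and DPC (dipeptide composition) are standard and easy to reproduce; a plain-Python sketch is shown below for reference. The FVG and TCN modules are specific to the paper and are not reproduced here.

```python
from itertools import product

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def aac(seq):
    """Amino acid composition: frequency of each residue (length-20 vector)."""
    n = len(seq)
    return [seq.count(a) / n for a in AA]

def dpc(seq):
    """Dipeptide composition: frequency of each ordered residue pair (length-400 vector)."""
    pairs = ["".join(p) for p in product(AA, repeat=2)]
    counts = {p: 0 for p in pairs}
    for i in range(len(seq) - 1):
        counts[seq[i:i + 2]] += 1
    total = max(len(seq) - 1, 1)
    return [counts[p] / total for p in pairs]

features = aac("DVADVMYYV") + dpc("DVADVMYYV")
print(len(features))  # 420 = 20 + 400
```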
(This article belongs to the Special Issue Bioinformatics and Mathematical Modelling)
Show Figures

Figure 1
<p>Overview of the TF-BAPred framework. (<b>A</b>) The original peptide sequences are individually input into three channels to extract sequence features from different perspectives. Subsequently, the feature vectors obtained from the three channels are fused and input into a fully connected neural network for classification training and prediction. (<b>B</b>) An example of a fixed-scale vector graph depicting the global structural patterns of each peptide sequence. (<b>C</b>) The framework for temporal feature extraction based on a temporal convolutional network.</p>
Full article ">Figure 2
<p>Example of constructing a fixed-scale vector graph. (<b>a</b>) A peptide sequence <math display="inline"><semantics> <mrow> <mi>S</mi> <mo>=</mo> <mi>DVADVMYYV</mi> </mrow> </semantics></math>. (<b>b</b>) Assuming the amino acid alphabet consists of A, D, M, V, and Y, construct a vector graph based on the alphabet. (<b>c</b>) Generate a feature matrix based on the constructed vector graph.</p>
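One way to read the fixed-scale vector graph in Figure 2 is as a directed graph over the amino-acid alphabet whose edges count transitions between consecutive residues, flattened into a matrix whose size depends only on the alphabet rather than on peptide length. The numpy sketch below follows that reading; it is an interpretation of the figure, not necessarily the authors' exact construction.

```python
import numpy as np

def vector_graph(seq, alphabet="ADMVY"):
    """Build a |alphabet| x |alphabet| matrix of residue-to-residue transition counts."""
    idx = {a: i for i, a in enumerate(alphabet)}
    mat = np.zeros((len(alphabet), len(alphabet)), dtype=float)
    for a, b in zip(seq[:-1], seq[1:]):
        mat[idx[a], idx[b]] += 1.0
    # Normalise so peptides of different lengths are comparable.
    if mat.sum() > 0:
        mat /= mat.sum()
    return mat

# Same toy sequence as in the figure; the matrix shape is fixed by the alphabet.
print(vector_graph("DVADVMYYV"))
```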
Full article ">Figure 3
<p>ROC curves and corresponding AUC values of the TCN and other compared methods on (<b>A</b>) ACP740, (<b>B</b>) ACPmain, (<b>C</b>) Veltri’s dataset, (<b>D</b>) Ma’s dataset, (<b>E</b>) CPP924, and (<b>F</b>) CPPsite3 datasets.</p>
Full article ">Figure 4
<p>Evaluation of TF-BAPred’s generalizability on anticancer peptide datasets of (<b>A</b>) ACP740 and (<b>B</b>) ACPmain, antimicrobial peptide datasets of (<b>C</b>) Veltri’s dataset and (<b>D</b>) Ma’s dataset, and cell-penetrating peptide datasets of (<b>E</b>) CPP924 and (<b>F</b>) CPPsite3.</p>
Full article ">Figure 5
<p>The accuracies of peptide predictors across different ratios of training and testing datasets on the ACP740 dataset.</p>
Full article ">
20 pages, 4297 KiB  
Article
Precision and Efficiency in Dam Crack Inspection: A Lightweight Object Detection Method Based on Joint Distillation for Unmanned Aerial Vehicles (UAVs)
by Hangcheng Dong, Nan Wang, Dongge Fu, Fupeng Wei, Guodong Liu and Bingguo Liu
Drones 2024, 8(11), 692; https://doi.org/10.3390/drones8110692 - 19 Nov 2024
Viewed by 264
Abstract
Dams in their natural environment will gradually develop cracks and other forms of damage. If these are not detected and repaired in time, the structural strength of the dam may be reduced, and it may even collapse. Repairing cracks and defects in dams is therefore very important to ensure their normal operation. Traditional detection methods rely on manual inspection, which consumes a lot of time and labor, while deep learning methods can greatly alleviate this problem. However, previous studies have often focused on how to better detect crack defects, and the corresponding image resolution is usually not particularly high. In this study, targeting the scenario of real-time detection by drones, we propose an automatic detection method for dam crack targets that operates directly on high-resolution remote sensing images. First, for high-resolution remote sensing images, we designed a sliding window processing method and proposed corresponding methods to eliminate redundant detection boxes. Then, we introduced a Gaussian distribution into the loss function to calculate the similarity of predicted boxes and incorporated a self-attention mechanism into the spatial pooling module to further enhance the detection performance on crack targets at various scales. Finally, we proposed a pruning-after-distillation scheme, using the compressed model as the student and the pre-compression model as the teacher, and proposed a joint distillation method that allows more efficient distillation under this teacher-student compression relationship. Ultimately, a high-performance target detection model can be deployed in a more lightweight form for field operations such as UAV patrols. Experimental results show that our method achieves an mAP of 80.4%, with a parameter count of only 0.725 M, providing strong support for future tasks such as UAV field inspections. Full article
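The sliding-window pipeline described in the abstract can be summarised in a short sketch: cut the high-resolution image into overlapping tiles, run the detector on each tile, shift the resulting boxes back into global coordinates, and remove duplicates near tile borders with standard non-maximum suppression. The tile size, overlap, and the placeholder detect function below are assumptions rather than the paper's settings.

```python
import numpy as np

def tile_image(img, tile=640, overlap=128):
    """Yield (x0, y0, patch) tiles covering the image with the given overlap."""
    h, w = img.shape[:2]
    step = tile - overlap
    for y0 in range(0, max(h - overlap, 1), step):
        for x0 in range(0, max(w - overlap, 1), step):
            yield x0, y0, img[y0:y0 + tile, x0:x0 + tile]

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS on [x1, y1, x2, y2] boxes; returns the indices kept."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter + 1e-9)
        order = order[1:][iou < iou_thresh]
    return keep

def detect_large_image(img, detect):
    """`detect(patch) -> (boxes, scores)` stands in for the trained crack detector."""
    all_boxes, all_scores = [], []
    for x0, y0, patch in tile_image(img):
        boxes, scores = detect(patch)
        for (x1, y1, x2, y2), s in zip(boxes, scores):
            all_boxes.append([x1 + x0, y1 + y0, x2 + x0, y2 + y0])  # back to global coords
            all_scores.append(s)
    if not all_boxes:
        return np.empty((0, 4)), np.empty(0)
    b, s = np.array(all_boxes, float), np.array(all_scores, float)
    keep = nms(b, s)
    return b[keep], s[keep]
```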
(This article belongs to the Special Issue Advances in Detection, Security, and Communication for UAV)
Show Figures

Figure 1
<p>Flowchart of the overall scheme of this work.</p>
Full article ">Figure 2
<p>Remote sensing images captured by unmanned aerial vehicles.</p>
Full article ">Figure 3
<p>Schematic diagram of overlapping sliding window cutting.</p>
Full article ">Figure 4
<p>The network composition of YOLO_v5.</p>
Full article ">Figure 5
<p>The structure of the feature extraction backbone in YOLO_v5.</p>
Full article ">Figure 6
<p>Dam cracks vary widely in shape and size.</p>
Full article ">Figure 7
<p>Targets of different sizes are inconsistently sensitive to IOU calculations.</p>
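Figure 7 illustrates how differently IoU reacts to small and large targets, and the abstract mentions introducing a Gaussian distribution to measure the similarity of predicted boxes. One published way to do this is to model each box as a 2D Gaussian and compare the two with a normalized Wasserstein distance, sketched below; this is an illustration of the general idea and may not match the paper's exact formulation, and the constant C is a dataset-dependent assumption.

```python
import math

def gaussian_box_similarity(box_a, box_b, c=12.8):
    """Gaussian-based box similarity in the style of the Normalized Wasserstein Distance.

    Boxes are (cx, cy, w, h); each is treated as a 2D Gaussian with mean (cx, cy)
    and covariance diag((w/2)^2, (h/2)^2).
    """
    (cxa, cya, wa, ha), (cxb, cyb, wb, hb) = box_a, box_b
    w2 = ((cxa - cxb) ** 2 + (cya - cyb) ** 2
          + ((wa - wb) / 2) ** 2 + ((ha - hb) / 2) ** 2)
    return math.exp(-math.sqrt(w2) / c)

# Two small, non-overlapping boxes: IoU is 0, but the Gaussian measure
# still varies smoothly with the distance between them.
print(gaussian_box_similarity((10, 10, 4, 4), (16, 10, 4, 4)))
```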
Full article ">Figure 8
<p>Decomposition of LKA convolutional structures.</p>
Full article ">Figure 9
<p>The convolutional structure of LSKA.</p>
Full article ">Figure 10
<p>Feature fusion module with the incorporation of LSKA module.</p>
Full article ">Figure 11
<p>The overall framework of the joint feature knowledge distillation algorithm.</p>
Full article ">Figure 12
<p>Distillation strategy based on output information.</p>
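Figure 12 illustrates distillation driven by the teacher's outputs. A standard form of that component, softened logits compared with KL divergence at temperature T, is sketched below in PyTorch as a generic illustration rather than the paper's full joint objective.

```python
import torch
import torch.nn.functional as F

def soft_target_kd_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

s = torch.randn(8, 5)   # student predictions for 8 samples, 5 classes
t = torch.randn(8, 5)   # frozen teacher predictions
print(soft_target_kd_loss(s, t))
```

In practice this term is weighted and added to the ordinary detection loss, so the student learns from both the ground truth and the teacher's soft outputs.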
Full article ">Figure 13
<p>Distillation strategy based on feature maps.</p>
Full article ">Figure 14
<p>Detection results of original images of dam cracks.</p>
Full article ">Figure 15
<p>Comparison of detection results for cracks in dams. (<b>a</b>) Original image; (<b>b</b>) results of Yolov5n; (<b>c</b>) results of Yolov5n_ns.</p>
Full article ">Figure 16
<p>Comparison of results and heatmap from different distillation methods. (<b>a</b>) represents the model after pruning; (<b>b</b>) represents the model after local distillation based on feature maps; (<b>c</b>) represents the model with knowledge distillation based on output information; and (<b>d</b>) represents the model with multi-strategy joint distillation algorithm.</p>
Full article ">Figure 17
<p>Comparison of results and heatmap from different distillation methods on another image. (<b>a</b>) represents the model after pruning; (<b>b</b>) represents the model after local distillation based on feature maps; (<b>c</b>) represents the model with knowledge distillation based on output information; and (<b>d</b>) represents the model with multi-strategy joint distillation algorithm.</p>
Full article ">
14 pages, 2453 KiB  
Article
Dead Broiler Detection and Segmentation Using Transformer-Based Dual Stream Network
by Gyu-Sung Ham and Kanghan Oh
Agriculture 2024, 14(11), 2082; https://doi.org/10.3390/agriculture14112082 - 19 Nov 2024
Viewed by 298
Abstract
Improving productivity in industrial farming is crucial for precision agriculture, particularly in the broiler breeding sector, where swift identification of dead broilers is vital for preventing disease outbreaks and minimizing financial losses. Traditionally, the detection process relies on manual identification by farmers, which is both labor-intensive and inefficient. Recent advances in computer vision and deep learning have resulted in promising automatic dead broiler detection systems. In this study, we present an automatic detection and segmentation system for dead broilers that uses transformer-based dual-stream networks. The proposed dual-stream method comprises two streams corresponding to the segmentation and detection networks. In our approach, the detection network supplies location-based features of dead broilers to the segmentation network, helping to prevent the mis-segmentation of live broilers. This integration allows for more accurate identification and segmentation of dead broilers within the farm environment. Additionally, we utilized the self-attention mechanism of the transformer to uncover high-level relationships among the features, thereby enhancing the overall accuracy and robustness. Experiments showed that the proposed approach achieved an average IoU of 88% on the test set, indicating its strong detection capabilities and precise segmentation of dead broilers. Full article
(This article belongs to the Section Digital Agriculture)
Show Figures

Figure 1
<p>Examples of dead broiler dataset. From top to bottom: images of dead broilers, GT masks, and Gaussian heatmaps centered around the locations of the dead broilers.</p>
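The bottom row of Figure 1 pairs each image with a Gaussian heatmap centred on the dead broiler's location, which is the kind of target a detection stream can regress. The sketch below generates such a target for a single annotated centre point; the map size and sigma are arbitrary choices, not values from the paper, and multiple objects would be combined with an elementwise maximum.

```python
import numpy as np

def gaussian_heatmap(h, w, cx, cy, sigma=10.0):
    """Return an (h, w) map with a 2D Gaussian peak of value 1 at (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

hm = gaussian_heatmap(256, 256, cx=100, cy=180)
print(hm.shape, float(hm.max()), float(hm[180, 100]))  # peak of 1.0 at the centre
```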
Full article ">Figure 2
<p>Proposed dual-stream network for the detection and segmentation of dead broiler.</p>
Full article ">Figure 3
<p>Overview of the transformer block with multi-head attention. The figure illustrates the process of recalibrating encoded features using a transformer block, which includes layer normalization, multi-head attention, and an MLP (multi-layer perceptron).</p>
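The block in Figure 3 combines layer normalization, multi-head attention, and an MLP with residual connections. A compact, generic PyTorch version is sketched below using the common pre-norm ordering; the embedding size, head count, and exact placement of normalization are assumptions and may differ from the paper.

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """Pre-norm transformer block: LN -> MHA -> residual, then LN -> MLP -> residual."""

    def __init__(self, dim=256, heads=8, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(), nn.Linear(dim * mlp_ratio, dim))

    def forward(self, x):                # x: (batch, tokens, dim)
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]    # self-attention with residual connection
        x = x + self.mlp(self.norm2(x))  # feed-forward with residual connection
        return x

print(TransformerBlock()(torch.randn(2, 64, 256)).shape)  # torch.Size([2, 64, 256])
```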
Full article ">Figure 4
<p>Training and validation loss graph of our model.</p>
Full article ">Figure 5
<p>Box plots display the distribution of performance metrics (IoU, Precision, Recall, and F-measure) for each segmentation method (U-Net, FCN, LinkNet, DeepLabV3, and the Proposed method).</p>
Full article ">Figure 6
<p>Visualization results of the proposed method. Blue outlines indicate the predicted segmentation boundaries, while green outlines represent the ground truth (GT).</p>
Full article ">Figure 7
<p>Comparison of segmentation and heatmap results. From left to right: original images, ground truth segmentation (GT-seg), output segmentation (Output-seg), ground truth heatmap (GT-heatmap), and output heatmap (Output-heatmap). Output heatmaps guide the model’s focus, enhancing segmentation accuracy.</p>
Full article ">
20 pages, 15268 KiB  
Article
Automatic Reading and Reporting Weather Information from Surface Fax Charts for Ships Sailing in Actual Northern Pacific and Atlantic Oceans
by Jun Jian, Yingxiang Zhang, Ke Xu and Peter J. Webster
J. Mar. Sci. Eng. 2024, 12(11), 2096; https://doi.org/10.3390/jmse12112096 - 19 Nov 2024
Viewed by 296
Abstract
This study aims to improve the intelligence, efficiency, and accuracy of ship safety and security systems by contributing to the development of marine weather forecasting. The accurate and prompt recognition of weather fax charts is very important for navigation safety. This study employed several artificial intelligence (AI) methods, including a vectorization approach and a target recognition algorithm, to automatically detect severe weather information from Japanese and US weather charts. This enabled the expansion of an existing auto-response marine forecasting system’s applications toward the North Pacific and Atlantic Oceans, thus enhancing decision-making capabilities and response measures for ships sailing in actual seas. OpenCV image processing and the YOLOv5s/YOLOv8n algorithms were utilized to perform template matching and to locate warning symbols and weather reports on surface weather charts. After these improvements, the average accuracy of the model significantly increased from 0.920 to 0.928, and the detection time for a single image reached a maximum of 1.2 ms. Additionally, OCR technology was applied to extract text from weather reports and to highlight the marine areas where dense fog and strong wind conditions are likely to occur. Finally, the field tests confirmed that this automatic and intelligent system could assist the navigator within 2–3 min and thus greatly enhance navigation safety in specific areas along the sailing routes, with minor text-based communication costs. Full article
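The abstract combines OpenCV template matching with YOLO detection to locate warning symbols such as "GW", "SW", and "FOG[W]" on the fax charts. A minimal OpenCV template-matching sketch is shown below; the file names and the score threshold are placeholders rather than values from the paper.

```python
import cv2
import numpy as np

# Hypothetical file names: a cropped chart region and a symbol template such as "SW".
chart = cv2.imread("surface_chart.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("sw_symbol.png", cv2.IMREAD_GRAYSCALE)

# Normalised cross-correlation is fairly robust to uniform brightness changes.
result = cv2.matchTemplate(chart, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(result >= 0.8)   # keep locations scoring above the threshold

th, tw = template.shape
for x, y in zip(xs, ys):
    cv2.rectangle(chart, (int(x), int(y)), (int(x) + tw, int(y) + th), 255, 2)
cv2.imwrite("matched_symbols.png", chart)
print(f"{len(xs)} candidate symbol locations found")
```

A YOLO detector would then handle the symbol classes that vary too much in appearance for fixed templates, as described in the article.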
(This article belongs to the Special Issue Ship Performance in Actual Seas)
Show Figures

Figure 1
<p>(<b>a</b>) Weather report and warning symbols in a JMA surface weather fax chart retrieved from imocwx.com (accessed on 14 February 2022). (<b>b</b>) Warning symbols and wind barbs in a 48 h surface forecast chart issued by the US National Weather Service at 1758 UTC on 13 February 2022.</p>
Full article ">Figure 2
<p>JMA (<b>a</b>) original surface fax weather chart, (<b>b</b>) averaged base chart, (<b>c</b>) after binarization, (<b>d</b>) difference between the original (<b>a</b>) and base chart (<b>c</b>), resulting in a pure weather map.</p>
Full article ">Figure 3
<p>Flow chart of the auto-warning system for JMA charts.</p>
Full article ">Figure 4
<p>YOLOv5s-CBAM(SE) network structure diagram; the parts related to CBAM(SE) are marked in gray.</p>
Full article ">Figure 5
<p>YOLOv8n model structure diagram.</p>
Full article ">Figure 6
<p>Comparison of weather briefing text recognition results.</p>
Full article ">Figure 7
<p>Recognition of warning symbols “hPa”, “GW”, “SW”, and “FOG[W]” from the chart <a href="#jmse-12-02096-f002" class="html-fig">Figure 2</a>d.</p>
Full article ">Figure 8
<p>Comparison of detection results of wind barb (interception).</p>
Figure 8 Cont.">
Full article ">Figure 9
<p>Training process visualization: (<b>a</b>) training loss and (<b>b</b>) mAP values for the original and improved YOLOv5s.</p>
Full article ">Figure 10
<p>(<b>a</b>) JMA charts with warning symbols detected; (<b>b</b>) JMA charts with warning areas colored: red and yellow for wind speeds greater than 50 kts and of 35–49 kts, respectively, and green for visibility &lt; 0.3 nm; (<b>c</b>) US charts with wind levels colored.</p>
Full article ">Figure 11
<p>Field tests of the auto-warning system: (<b>upper</b> and <b>middle</b>) US case; (<b>bottom</b>) JMA case.</p>
Full article ">