Search Results (5,754)

Search Parameters:
Keywords = information fusion

39 pages, 9268 KiB  
Article
Structural Health Monitoring and Failure Analysis of Large-Scale Hydro-Steel Structures, Based on Multi-Sensor Information Fusion
by Helin Li, Huadong Zhao, Yonghao Shen, Shufeng Zheng and Rui Zhang
Water 2024, 16(22), 3167; https://doi.org/10.3390/w16223167 - 5 Nov 2024
Abstract
Large-scale hydro-steel structures (LS-HSSs) are vital to hydraulic engineering, supporting critical functions such as water resource management, flood control, power generation, and navigation. However, due to prolonged exposure to severe environmental conditions and complex operational loads, these structures progressively degrade, posing increased risks over time. The absence of effective structural health monitoring (SHM) systems exacerbates these risks, as undetected damage and wear can compromise safety. This paper presents an advanced SHM framework designed to enhance the real-time monitoring and safety evaluation of LS-HSSs. The framework integrates the finite element method (FEM), multi-sensor data fusion, and Internet of Things (IoT) technologies into a closed-loop system for real-time perception, analysis, decision-making, and optimization. The system was deployed and validated at the Luhun Reservoir spillway, where it demonstrated stable and reliable performance for real-time anomaly detection and decision-making. Monitoring results over time were consistent, with stress values remaining below allowable thresholds and meeting safety standards. Specifically, stress monitoring during radial gate operations (with a current water level of 1.4 m) indicated that the dynamic stress values induced by flow vibrations at various points increased by approximately 2 MPa, with no significant impact loads. Moreover, the vibration amplitude during gate operation was below 0.03 mm, confirming the absence of critical structural damage and deformation. These results underscore the SHM system’s capacity to enhance operational safety and maintenance efficiency, highlighting its potential for broader application across water conservancy infrastructure. Full article
(This article belongs to the Special Issue Safety Monitoring of Hydraulic Structures)
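
The fusion equations themselves are not reproduced in this listing, but the core idea the abstract describes, combining redundant sensor readings and checking the result against an allowable stress, is easy to illustrate. A minimal Python sketch, assuming hypothetical readings, noise variances, and a 160 MPa limit that is invented for illustration, not the paper's value:

```python
import numpy as np

# Hypothetical readings of the same stress (MPa) from three co-located
# sensors, each with its own measurement noise variance.
readings = np.array([41.8, 42.5, 42.1])      # MPa
variances = np.array([0.40, 0.90, 0.25])     # MPa^2

# Inverse-variance weighted fusion: the classic minimum-variance
# combination of independent estimates of one quantity.
weights = (1.0 / variances) / np.sum(1.0 / variances)
fused_stress = np.sum(weights * readings)
fused_variance = 1.0 / np.sum(1.0 / variances)

ALLOWABLE_STRESS = 160.0  # MPa, hypothetical allowable threshold
print(f"fused stress: {fused_stress:.2f} +/- {np.sqrt(fused_variance):.2f} MPa")
print("status:", "OK" if fused_stress < ALLOWABLE_STRESS else "ALARM")
```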
22 pages, 46610 KiB  
Article
Autonomous Extraction Technology for Aquaculture Ponds in Complex Geological Environments Based on Multispectral Feature Fusion of Medium-Resolution Remote Sensing Imagery
by Zunxun Liang, Fangxiong Wang, Jianfeng Zhu, Peng Li, Fuding Xie and Yifei Zhao
Remote Sens. 2024, 16(22), 4130; https://doi.org/10.3390/rs16224130 - 5 Nov 2024
Abstract
Coastal aquaculture plays a crucial role in global food security and the economic development of coastal regions, but it also causes environmental degradation in coastal ecosystems. Therefore, the automation, accurate extraction, and monitoring of coastal aquaculture areas are crucial for the scientific management of coastal ecological zones. This study proposes a novel deep learning- and attention-based median adaptive fusion U-Net (MAFU-Net) procedure aimed at precisely extracting individually separable aquaculture ponds (ISAPs) from medium-resolution remote sensing imagery. Initially, this study analyzes the spectral differences between aquaculture ponds and interfering objects such as saltwater fields in four typical aquaculture areas along the coast of Liaoning Province, China. It innovatively introduces a difference index for saltwater field aquaculture zones (DIAS) and integrates this index as a new band into remote sensing imagery to increase the expressiveness of features. A median augmented adaptive fusion module (MEA-FM), which adaptively selects channel receptive fields at various scales, integrates the information between channels, and captures multiscale spatial information to achieve improved extraction accuracy, is subsequently designed. Experimental and comparative results reveal that the proposed MAFU-Net method achieves an F1 score of 90.67% and an intersection over union (IoU) of 83.93% on the CHN-LN4-ISAPS-9 dataset, outperforming advanced methods such as U-Net, DeepLabV3+, SegNet, PSPNet, SKNet, UPS-Net, and SegFormer. This study’s results provide accurate data support for the scientific management of aquaculture areas, and the proposed MAFU-Net method provides an effective method for semantic segmentation tasks based on medium-resolution remote sensing images. Full article
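
The abstract does not give the DIAS formula, but the general pattern it describes, deriving a spectral index and stacking it into the imagery as an extra band, can be sketched. A minimal numpy illustration assuming a generic normalized-difference form between two hypothetical bands, not the paper's actual band combination:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 4-band medium-resolution image (H, W, bands), reflectance in [0, 1].
img = rng.uniform(0.0, 1.0, size=(256, 256, 4))
green, nir = img[..., 1], img[..., 3]

# A generic normalized-difference index between two bands; the actual DIAS
# band combination is defined in the paper, not reproduced here.
eps = 1e-6
index = (green - nir) / (green + nir + eps)

# Append the index as a new band so the network sees it as an input channel.
img_aug = np.concatenate([img, index[..., None]], axis=-1)
print(img_aug.shape)  # (256, 256, 5)
```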
18 pages, 4612 KiB  
Article
MMS-EF: A Multi-Scale Modular Extraction Framework for Enhancing Deep Learning Models in Remote Sensing
by Hang Yu, Weidong Song, Bing Zhang, Hongbo Zhu, Jiguang Dai and Jichao Zhang
Land 2024, 13(11), 1842; https://doi.org/10.3390/land13111842 (registering DOI) - 5 Nov 2024
Abstract
The analysis of land cover using deep learning techniques plays a pivotal role in understanding land use dynamics, which is crucial for land management, urban planning, and cartography. However, due to the complexity of remote sensing images, deep learning models face practical challenges in the preprocessing stage, such as incomplete extraction of large-scale geographic features, loss of fine details, and misalignment issues in image stitching. To address these issues, this paper introduces the Multi-Scale Modular Extraction Framework (MMS-EF) specifically designed to enhance deep learning models in remote sensing applications. The framework incorporates three key components: (1) a multiscale overlapping segmentation module that captures comprehensive geographical information through multi-channel and multiscale processing, ensuring the integrity of large-scale features; (2) a multiscale feature fusion module that integrates local and global features, facilitating seamless image stitching and improving classification accuracy; and (3) a detail enhancement module that refines the extraction of small-scale features, enriching the semantic information of the imagery. Extensive experiments were conducted across various deep learning models, and the framework was validated on two public datasets. The results demonstrate that the proposed approach effectively mitigates the limitations of traditional preprocessing methods, significantly improving feature extraction accuracy and exhibiting strong adaptability across different datasets. Full article
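
The multiscale overlapping segmentation idea, tiling a large scene with overlap and then blending per-tile outputs so stitching seams disappear, can be sketched independently of any specific network. A minimal numpy version with arbitrarily chosen tile and stride sizes:

```python
import numpy as np

def tile_overlap(img, tile=128, stride=96):
    """Extract overlapping tiles; returns tiles and their top-left corners."""
    H, W = img.shape[:2]
    tiles, coords = [], []
    for y in range(0, max(H - tile, 0) + 1, stride):
        for x in range(0, max(W - tile, 0) + 1, stride):
            tiles.append(img[y:y + tile, x:x + tile])
            coords.append((y, x))
    return tiles, coords

def stitch_average(tiles, coords, shape, tile=128):
    """Blend per-tile predictions back together by averaging the overlaps."""
    acc = np.zeros(shape, dtype=np.float64)
    cnt = np.zeros(shape, dtype=np.float64)
    for t, (y, x) in zip(tiles, coords):
        acc[y:y + tile, x:x + tile] += t
        cnt[y:y + tile, x:x + tile] += 1.0
    return acc / np.maximum(cnt, 1.0)

img = np.random.rand(512, 512)
tiles, coords = tile_overlap(img)
restored = stitch_average(tiles, coords, img.shape)
assert np.allclose(restored, img)  # identity tiles round-trip exactly
```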
19 pages, 1397 KiB  
Article
Hierarchical Spectral–Spatial Transformer for Hyperspectral and Multispectral Image Fusion
by Tianxing Zhu, Qin Liu and Lixiang Zhang
Remote Sens. 2024, 16(22), 4127; https://doi.org/10.3390/rs16224127 (registering DOI) - 5 Nov 2024
Abstract
This paper presents the Hierarchical Spectral–Spatial Transformer (HSST) network, a novel approach applicable to both drone-based and broader remote sensing platforms for integrating hyperspectral (HSI) and multispectral (MSI) imagery. The HSST network improves upon conventional multi-head self-attention transformers by integrating cross attention, effectively capturing spectral and spatial features across different modalities and scales. The network’s hierarchical design facilitates the extraction of multi-scale information and employs a progressive fusion strategy to incrementally refine spatial details through upsampling. Evaluations on three prominent hyperspectral datasets confirm the HSST’s superior efficacy over existing methods. The findings underscore the HSST’s utility for applications, including drone operations, where the high-fidelity fusion of HSI and MSI data is crucial. Full article
(This article belongs to the Special Issue Remote Sensing Image Thorough Analysis by Advanced Machine Learning)
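
The cross-attention step the abstract describes, queries from one modality attending to keys and values from the other, follows the standard scaled dot-product form. A single-head numpy sketch with hypothetical token counts and dimensions, not the paper's full architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, d=64, seed=0):
    """Single-head cross attention: queries from one modality,
    keys/values from the other (shapes: (Nq, Dq) and (Nk, Dk))."""
    rng = np.random.default_rng(seed)
    Wq = rng.normal(scale=0.02, size=(q_feats.shape[1], d))
    Wk = rng.normal(scale=0.02, size=(kv_feats.shape[1], d))
    Wv = rng.normal(scale=0.02, size=(kv_feats.shape[1], d))
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))      # (Nq, Nk) attention map
    return attn @ V                           # (Nq, d) fused features

hsi_tokens = np.random.rand(100, 32)   # hypothetical spectral tokens (HSI)
msi_tokens = np.random.rand(400, 16)   # hypothetical spatial tokens (MSI)
fused = cross_attention(hsi_tokens, msi_tokens)
print(fused.shape)  # (100, 64)
```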
18 pages, 1962 KiB  
Article
A Hybrid Semantic Representation Method Based on Fusion Conceptual Knowledge and Weighted Word Embeddings for English Texts
by Zan Qiu, Guimin Huang, Xingguo Qin, Yabing Wang, Jiahao Wang and Ya Zhou
Information 2024, 15(11), 708; https://doi.org/10.3390/info15110708 - 5 Nov 2024
Abstract
The accuracy of traditional topic models may be compromised due to the sparsity of co-occurring vocabulary in the corpus, whereas conventional word embedding models tend to excessively prioritize contextual semantic information and inadequately capture domain-specific features in the text. This paper proposes a hybrid semantic representation method that combines a topic model that integrates conceptual knowledge with a weighted word embedding model. Specifically, we construct a topic model incorporating the Probase concept knowledge base to perform topic clustering and obtain topic semantic representation. Additionally, we design a weighted word embedding model to enhance the contextual semantic information representation of the text. The feature-based information fusion model is employed to integrate the two textual representations and generate a hybrid semantic representation. The hybrid semantic representation model proposed in this study was evaluated based on various English composition test sets. The findings demonstrate that the model presented in this paper exhibits superior accuracy and practical value compared to existing text representation methods. Full article
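
The final fusion step, combining a topic-model view of a text with a weighted word-embedding view, can be sketched directly. A minimal illustration with hypothetical dimensions and random stand-ins for the topic distribution and embeddings; the paper's feature-based fusion model is more elaborate than plain concatenation:

```python
import numpy as np

def weighted_doc_embedding(token_vecs, token_weights):
    """Weighted average of word vectors, e.g. TF-IDF-style weights."""
    w = np.asarray(token_weights, dtype=np.float64)
    w = w / w.sum()
    return (w[:, None] * token_vecs).sum(axis=0)

rng = np.random.default_rng(1)
token_vecs = rng.normal(size=(12, 100))       # 12 tokens, 100-d embeddings
token_weights = rng.uniform(0.1, 1.0, size=12)
topic_vec = rng.dirichlet(np.ones(20))        # 20-topic distribution from the topic model

context_vec = weighted_doc_embedding(token_vecs, token_weights)
# Feature-level fusion: concatenate the two views into one hybrid representation.
hybrid = np.concatenate([topic_vec, context_vec])
print(hybrid.shape)  # (120,)
```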
Figures:
Graphical abstract
Figure 1: Model preprocessing flowchart.
Figure 2: Example of part-of-speech tagging.
Figure 3: Model framework diagram.
Figure 4: Experimental results of the precision comparison.
Figure 5: Experimental results of recall comparison.
Figure 6: Experimental results of F1 comparison.
18 pages, 4937 KiB  
Article
Large-Kernel Central Block Masked Convolution and Channel Attention-Based Reconstruction Network for Anomaly Detection of High-Resolution Hyperspectral Imagery
by Qiong Ran, Hong Zhong, Xu Sun, Degang Wang and He Sun
Remote Sens. 2024, 16(22), 4125; https://doi.org/10.3390/rs16224125 - 5 Nov 2024
Abstract
In recent years, the rapid advancement of drone technology has led to an increasing use of drones equipped with hyperspectral sensors for ground imaging. Hyperspectral data captured via drones offer significantly higher spatial resolution, but this also introduces more complex background details and larger target scales in high-resolution hyperspectral imagery (HRHSI), posing substantial challenges for hyperspectral anomaly detection (HAD). Mainstream reconstruction-based deep learning methods predominantly emphasize spatial local information in hyperspectral images (HSIs), relying on small spatial neighborhoods for reconstruction. As a result, large anomalous targets and background details are often well reconstructed, leading to poor anomaly detection performance, as these targets are not sufficiently distinguished from the background. To address these limitations, we propose a novel HAD network for HRHSI based on large-kernel central block masked convolution and channel attention, termed LKCMCA. Specifically, we first employ the pixel-shuffle technique to reduce the size of anomalous targets without losing image information. Next, we design a large-kernel central block masked convolution to make the network pay more attention to the surrounding background information, enabling better fusion of the information between adjacent bands. This, coupled with an efficient channel attention mechanism, allows the network to capture deeper spectral features, enhancing the reconstruction of the background while suppressing anomalous targets. Furthermore, we introduce an adaptive loss function by down-weighting anomalous pixels based on the mean absolute error. This loss function is specifically designed to suppress the reconstruction of potentially anomalous pixels during network training, allowing our model to be considered an excellent background reconstruction network. By leveraging reconstruction error, the model effectively highlights anomalous targets. Meanwhile, we produced four benchmark datasets specifically for HAD tasks using existing HRHSI data, addressing the current shortage of HRHSI datasets in the HAD field. Extensive experiments demonstrate that our LKCMCA method achieves superior detection performance, outperforming ten state-of-the-art HAD methods on all datasets. Full article
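
Two of the building blocks named above are simple to illustrate: pixel-shuffle (space-to-depth) downsampling, which shrinks anomalous targets spatially without discarding pixels, and a large kernel whose central block is masked out so reconstruction must rely on the surrounding background. A numpy sketch with assumed kernel and block sizes; the paper's actual sizes are not stated in the abstract:

```python
import numpy as np

def pixel_unshuffle(img, r=2):
    """Space-to-depth: (H, W, C) -> (H/r, W/r, C*r*r); shrinks targets
    spatially while keeping every pixel as extra channels."""
    H, W, C = img.shape
    img = img.reshape(H // r, r, W // r, r, C)
    return img.transpose(0, 2, 1, 3, 4).reshape(H // r, W // r, C * r * r)

def central_block_mask(k=13, block=5):
    """Large-kernel mask with the central block zeroed, so the filter only
    sees the surrounding background, not the (possibly anomalous) center."""
    m = np.ones((k, k), dtype=np.float32)
    s = (k - block) // 2
    m[s:s + block, s:s + block] = 0.0
    return m

x = np.random.rand(64, 64, 10)          # hypothetical 10-band HSI patch
print(pixel_unshuffle(x).shape)          # (32, 32, 40)
print(central_block_mask()[4:9, 4:9])    # all zeros in the masked center
```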
Figures:
Figure 1: Flowchart of the proposed LKCMCA methodology.
Figure 2: Pixel-shuffle downsampling with a downsampling factor of 2 and the variation of the 3 × 3 convolution kernel.
Figure 3: Detailed information of the proposed network structure (the number above Conv, e.g., 64, indicates the number of output channels).
Figure 4: Pseudo-color and ground truth maps for the four datasets.
Figure 5: HAD colored maps obtained by different algorithms on the Data1 dataset: (a) ground truth, (b) GRX, (c) LRX, (d) CRDBPSW, (e) FEBPAD, (f) VABS, (g) KIFD, (h) Auto-AD, (i) BSDM, (j) GAED, (k) RGAE, and (l) LKCMCA.
Figures 6-8: HAD colored maps obtained by the same algorithms on the Data2, Data3, and Data4 datasets.
Figure 9: ROC curves of different algorithms on the four datasets: (a) Data1, (b) Data2, (c) Data3, and (d) Data4.
Figure 10: Separable boxplots of different algorithms on the four datasets: (a) Data1, (b) Data2, (c) Data3, and (d) Data4.
Figure 11: HAD colored maps obtained on the four datasets using the L1 loss function and the proposed loss function.
Figure 12: Separable boxplots of the HAD results on the four datasets using the L1 loss function and the proposed loss function: (a) Data1, (b) Data2, (c) Data3, and (d) Data4.
17 pages, 11245 KiB  
Article
Underwater Object Detection Algorithm Based on an Improved YOLOv8
by Fubin Zhang, Weiye Cao, Jian Gao, Shubing Liu, Chenyang Li, Kun Song and Hongwei Wang
J. Mar. Sci. Eng. 2024, 12(11), 1991; https://doi.org/10.3390/jmse12111991 - 5 Nov 2024
Abstract
Due to the complexity and diversity of underwater environments, traditional object detection algorithms face challenges in maintaining robustness and detection accuracy when applied underwater. This paper proposes an underwater object detection algorithm based on an improved YOLOv8 model. First, the introduction of CIB building blocks into the backbone network, along with the optimization of the C2f structure and the incorporation of large-kernel depthwise convolutions, effectively enhances the model’s receptive field. This improvement increases the capability of detecting multi-scale objects in complex underwater environments without adding a computational burden. Next, the incorporation of a Partial Self-Attention (PSA) module at the end of the backbone network enhances model efficiency and optimizes the utilization of computational resources while maintaining high performance. Finally, the integration of the Neck component from the Gold-YOLO model improves the neck structure of the YOLOv8 model, facilitating the fusion and distribution of information across different levels, thereby achieving more efficient information integration and interaction. Experimental results show that YOLOv8-CPG significantly outperforms the traditional YOLOv8 in underwater environments. Precision and Recall show improvements of 2.76% and 2.06%. Additionally, mAP50 and mAP50-95 metrics have increased by 1.05% and 3.55%, respectively. Our approach provides an efficient solution to the difficulties encountered in underwater object detection. Full article
(This article belongs to the Special Issue Intelligent Measurement and Control System of Marine Robots)
Figures:
Figure 1: Structure diagram of the CIB: (a) CIB deployment; (b) internal structure of the CIB.
Figure 2: Structure diagram of the CIBC2f.
Figure 3: Structure diagram of the low-order collection-distribution mechanism.
Figure 4: Some examples from CoopKnowledge's A dataset.
Figure 5: Experimental accuracy data results of YOLOv8s-CPG: (a) precision data; (b) detection accuracy data.
Figure 6: Ablation experimental data results of YOLOv8s-CIB+PSA+GY (YOLOv8s-CPG).
Figure 7: Comparison of model indicators of the YOLO series.
Figure 8: Comparison of YOLOv8-CPG in a real underwater environment.
Figure 9: Comparison of YOLOv8-CPG on the WildFish dataset.
3 pages, 147 KiB  
Editorial
Advances in Uncertain Information Fusion
by Lianmeng Jiao
Entropy 2024, 26(11), 945; https://doi.org/10.3390/e26110945 - 5 Nov 2024
Viewed by 140
Abstract
Information fusion is the combination of information from multiple sources, which aims to draw more comprehensive, specific, and accurate inferences about the world than are achievable from the individual sources in isolation [...] Full article
(This article belongs to the Special Issue Advances in Uncertain Information Fusion)
22 pages, 10749 KiB  
Article
Research on Fault Diagnosis of Rotating Parts Based on Transformer Deep Learning Model
by Zilin Zhang, Yaohua Deng, Xiali Liu and Jige Liao
Appl. Sci. 2024, 14(22), 10095; https://doi.org/10.3390/app142210095 - 5 Nov 2024
Viewed by 187
Abstract
The rotating parts of large and complex equipment are key components that ensure the normal operation of the equipment. Accurate fault diagnosis is crucial for the safe operation of these systems. To simultaneously extract both local and global valuable fault feature information from key components of complex equipment, this study proposes a fault diagnosis network model, named MultiDilatedFormer, which is based on the fusion of transformer and multi-head dilated convolution. The newly designed multi-head dilated convolution module is sequentially integrated into the transformer-encoder architecture, constructing a feature extraction module where the complementary advantages of both components enhance overall performance. Firstly, the sample is expanded into a two-dimensional feature map and then input into the newly designed feature extraction module. Finally, the diagnostic output is performed by the designed patch feature fusion module and classifier module. Additionally, interpretability research is conducted on the proposed model, aiming to understand the decision-making mechanism of the model through visual analysis of the entire decision process. The experimental results on three different datasets indicate that the proposed model achieved high accuracy in fault diagnosis with relatively short data windows. The highest accuracy reached 97.95%, which was up to 10.97% higher than other models. Furthermore, the feasibility of the model is also verified in the actual dataset of the rotating parts of the injection molding machine. The excellent performance of the model on different datasets demonstrates its effectiveness in extracting comprehensive fault feature information and also proves its great potential in practical industrial applications. Full article
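
The multi-head dilated convolution idea, parallel branches whose dilation rates grow so the receptive field widens without deeper stacking, can be sketched for a 1D vibration window. A minimal numpy version with arbitrary kernel and dilation choices, not the paper's exact module:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1D dilated convolution (correlation) for one channel."""
    k = len(kernel)
    span = (k - 1) * dilation
    xp = np.pad(x, (span // 2, span - span // 2))
    return np.array([
        np.dot(xp[i:i + span + 1:dilation], kernel) for i in range(len(x))
    ])

rng = np.random.default_rng(0)
signal = rng.normal(size=256)            # hypothetical vibration window
kernel = rng.normal(size=3)

# Multi-head dilated convolution: each branch sees a wider context than the
# last; stacking the branch outputs mixes local detail with broader context,
# complementing the transformer's global self-attention.
heads = [dilated_conv1d(signal, kernel, d) for d in (1, 2, 4, 8)]
features = np.stack(heads, axis=0)       # (4 heads, 256)
print(features.shape)
```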
Figures:
Figure 1: MultiDilatedFormer model framework.
Figure 2: Schematic diagram of data sliding window sampling.
Figure 3: Multi-head self-attention layer.
Figure 4: Multi-head dilated convolutional layer.
Figure 5: Global average pooling layer.
Figure 6: The accuracy curves of the XJTU-SY dataset.
Figure 7: Normalized confusion matrix of the XJTU-SY dataset: (a) WDCNN; (b) DRCNN; (c) DialetedNN; (d) Vision Transformer; (e) MultiDilatedFormer.
Figure 8: The t-SNE cluster diagram of the XJTU-SY dataset for the same five models (the red circle marks the mixed parts).
Figure 9: Accuracy boxplot of the CWRU dataset.
Figure 10: Normalized confusion matrix of the CWRU dataset for the same five models.
Figure 11: The t-SNE cluster diagram of the CWRU dataset for the same five models.
Figure 12: Model decision-making process visualization: (a) input sample, size (1, 105); (b) after expansion, size (105, 105); (c) input embedding, size (25, 512); (d) positional encoding, size (25, 512); (e) multi-attention, size (25, 512); (f) Multi-DilatedConv, size (25, 512); (g) patch fusion, size (1, 512); (h) output, size (1, 10). The left of each panel is a bar chart, and the right is a heat map.
Figure 13: Visualization of the multi-head mechanism: (a) query of MSL; (b) key of MSL; (c) value of MSL; (d) D1 after MDL.
Figure 14: Three damaged small components: (a) check valve; (b) check ring.
Figure 15: Visualization of fault data.
Figure 16: The t-SNE cluster diagram of the actual scene dataset.
22 pages, 5859 KiB  
Article
A Multi-Active and Multi-Passive Sensor Fusion Algorithm for Multi-Target Tracking in Dense Group Clutter Environments
by Yongquan Zhang, Fan Yang, Wenbo Zhang, Aomen Shang and Zhibin Li
Remote Sens. 2024, 16(22), 4120; https://doi.org/10.3390/rs16224120 - 5 Nov 2024
Viewed by 208
Abstract
Multi-target tracking (MTT) of multi-active and multi-passive sensor (MAMPS) systems in dense group clutter environments is facing significant challenges in measurement fusion. Due to the difference in measurement information characteristics in MAMPS fusion, it is difficult to effectively correlate and fuse different types of sensors’ measurements, leading to difficulty in taking full advantage of various types of sensors to improve target tracking accuracy. To this end, we present a novel MAMPS fusion algorithm, which is based on centralized measurement association fusion (MAF) and distributed deep neural network (DNN) track fusion, named the MAMPS-MAF-DNN algorithm. Firstly, to reduce the impact of the dense group clutter, a clutter pre-processing algorithm is elaborated, which combines the advantages of the CFDP (cluster by finding density peaks) and double threshold screening algorithms. Then, for the single-active and multi-passive sensor (SAMPS) system, a centralized MAF algorithm based on angle information is developed, called the SAMPS-MAF algorithm. Finally, the SAMPS-MAF algorithm is extended to the MAMPS system within the DNN framework, and the complete MAMPS-MAF-DNN algorithm is proposed. Experimental results indicate that, compared to the existing MAF and covariance intersection (CI) fusion algorithms, the proposed MAMPS-MAF-DNN algorithm can fully combine the advantages of multi-active and multi-passive sensors, efficiently reduce the computational complexity, and obviously improve the tracking accuracy. Full article
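
The CFDP (clustering by finding density peaks) step used in the clutter pre-processing computes two quantities per measurement: a local density and the distance to the nearest denser point. A numpy sketch of just that core, following Rodriguez and Laio's formulation, with an invented 2D measurement frame and cutoff distance:

```python
import numpy as np

def density_peaks(points, dc):
    """Core quantities of CFDP: for each point, the local density rho
    (number of neighbors within cutoff dc) and delta, the distance to the
    nearest point of higher density. Cluster centers have large rho AND
    large delta; sparse clutter tends to have small rho."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    rho = (d < dc).sum(axis=1) - 1          # exclude the point itself
    delta = np.empty(len(points))
    for i in range(len(points)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = d[i].max() if higher.size == 0 else d[i, higher].min()
    return rho, delta

rng = np.random.default_rng(2)
# Hypothetical 2D measurement frame: one dense clutter group plus stragglers.
group = rng.normal(loc=[0, 0], scale=0.3, size=(40, 2))
stragglers = rng.uniform(-5, 5, size=(10, 2))
rho, delta = density_peaks(np.vstack([group, stragglers]), dc=0.5)
print("densest point:", int(np.argmax(rho)), "rho =", int(rho.max()))
```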
Figures:
Figure 1: The dense group clutter.
Figure 2: Distribution of γ.
Figure 3: The maximum offset Δα_k^{l,j}.
Figure 4: Schematic of statistics.
Figure 5: The curve of the PDF of Δ_{a,jm}.
Figure 6: Schematic of measurement association.
Figure 7: Distance diagram from P to L.
Figure 8: MAMPS-MAF-DNN algorithm procedure.
Figure 9: Loss of the network in training and testing.
Figure 10: Targets' ground truths.
Figure 11: Simulation results of multi-target tracking in the SASPS system.
Figure 12: Simulation results of multi-target tracking in the SAMPS system.
Figure 13: Simulation results of multi-target tracking in the MAMPS system.
19 pages, 3109 KiB  
Article
Text Command Intelligent Understanding for Cybersecurity Testing
by Junkai Yi, Yuan Liu, Zhongbai Jiang and Zhen Liu
Electronics 2024, 13(21), 4330; https://doi.org/10.3390/electronics13214330 - 4 Nov 2024
Viewed by 286
Abstract
Research on named entity recognition (NER) and command-line generation for network security evaluation tools is relatively scarce, and no mature models for recognition or generation have been developed thus far. Therefore, in this study, the aim is to build a specialized corpus for network security evaluation tools by combining knowledge graphs and information entropy for automatic entity annotation. Additionally, a novel NER approach based on the KG-BERT-BiLSTM-CRF model is proposed. Compared to the traditional BERT-BiLSTM model, the KG-BERT-BiLSTM-CRF model demonstrates superior performance when applied to the specialized corpus of network security evaluation tools. The graph attention network (GAT) component effectively extracts relevant sequential content from datasets in the network security evaluation domain. The fusion layer then concatenates the feature sequences from the GAT and BiLSTM layers, enhancing the training process. Upon successful NER execution, in this study, the identified entities are mapped to pre-established command-line data for network security evaluation tools, achieving automatic conversion from textual content to evaluation commands. This process not only improves the efficiency and accuracy of command generation but also provides practical value for the development and optimization of network security evaluation tools. This approach enables the more precise automatic generation of evaluation commands tailored to specific security threats, thereby enhancing the timeliness and effectiveness of cybersecurity defenses. Full article
(This article belongs to the Special Issue Data-Centric Artificial Intelligence: New Methods for Data Processing)
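
The last step described above, mapping recognized entities onto pre-established command lines, amounts to slotting entity values into templates. A deliberately small sketch; the entity schema, templates, and example sentence are hypothetical and not drawn from the paper's corpus:

```python
# Once NER has labeled the tool action, target, and option entities in a
# sentence, they are slotted into a pre-established command template.
TEMPLATES = {
    "port_scan": "nmap {options} {target}",
    "ping_sweep": "nmap -sn {target}",
}

def entities_to_command(entities: dict) -> str:
    template = TEMPLATES[entities["action"]]
    return template.format(
        options=" ".join(entities.get("options", [])),
        target=entities["target"],
    )

# e.g. hypothetical NER output for:
# "run a SYN scan against 192.0.2.10 on ports 1-1024"
ner_output = {"action": "port_scan", "options": ["-sS", "-p", "1-1024"],
              "target": "192.0.2.10"}
print(entities_to_command(ner_output))  # nmap -sS -p 1-1024 192.0.2.10
```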
Figures:
Figure 1: Structure diagram of the KG-BERT-BiLSTM-CRF model.
Figure 2: Fine-tuning of BERT.
Figure 3: Fusion layer.
Figure 4: Automatic entity annotation workflow diagram.
Figure 5: KG-BERT-BiLSTM-CRF model precision variation diagram.
Figure 6: KG-BERT-BiLSTM-CRF model recall variation diagram.
Figure 7: KG-BERT-BiLSTM-CRF model F1 variation diagram.
Figure 8: Model precision, recall, and F1-score comparison diagram.
Figure 9: Comparison of loss rate, showing the training loss progression for three models over multiple training epochs: (a) the average loss for all epochs across BERT-Bi-LSTM (blue dashed line), KG-BERT-Bi-LSTM-CRF (red dotted line), and BERT (green solid line); (b) a zoomed-in view of the average loss between 0 and 0.5 in the initial training phase.
18 pages, 6257 KiB  
Article
Enhanced Disease Detection for Apple Leaves with Rotating Feature Extraction
by Zhihui Qiu, Yihan Xu, Chen Chen, Wen Zhou and Gang Yu
Agronomy 2024, 14(11), 2602; https://doi.org/10.3390/agronomy14112602 - 4 Nov 2024
Viewed by 284
Abstract
Leaf diseases such as Mosaic disease and Black Rot are among the most common diseases affecting apple leaves, significantly reducing apple yield and quality. Detecting leaf diseases is crucial for the prevention and control of these conditions. In this paper, we propose incorporating rotated bounding boxes into deep learning-based detection, introducing the ProbIoU loss function to better quantify the difference between model predictions and real results in practice. Specifically, we integrated the Plant Village dataset with an on-site dataset of apple leaves from an orchard in Weifang City, Shandong Province, China. Additionally, data augmentation techniques were employed to expand the dataset and address the class imbalance issue. We utilized the EfficientNetV2 architecture with inverted residual structures (FusedMBConv and S-MBConv modules) in the backbone network to build sparse features using a top-down approach, minimizing information loss. The inclusion of the SimAM attention mechanism effectively captures both channel and spatial attention, expanding the receptive field and enhancing feature extraction. Furthermore, we introduced depth-wise separable convolution and the CAFM in the neck network to improve feature fusion capabilities. Finally, experimental results demonstrate that our model outperforms other detection models, achieving 93.3% mAP@0.5, 88.7% Precision, and 89.6% Recall. This approach provides a highly effective solution for the early detection of apple leaf diseases, with the potential to significantly improve disease management in apple orchards. Full article
(This article belongs to the Section Precision and Digital Agriculture)
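
ProbIoU-style losses for rotated boxes model each box (cx, cy, w, h, θ) as a 2D Gaussian and compare boxes through a distance between the Gaussians. A numpy sketch of that quantity (Hellinger distance via the Bhattacharyya coefficient); the exact loss shaping used in the paper may differ:

```python
import numpy as np

def box_to_gaussian(cx, cy, w, h, theta):
    """Model a rotated box as a 2D Gaussian: mean at the center, covariance
    from the box extents rotated by theta (w^2/12 is the variance of a
    uniform distribution over the box width)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = np.diag([w ** 2 / 12.0, h ** 2 / 12.0])
    return np.array([cx, cy]), R @ S @ R.T

def probiou_distance(box1, box2):
    """Hellinger distance between the two box Gaussians, the quantity that
    ProbIoU-style losses are built on (small = similar boxes)."""
    mu1, S1 = box_to_gaussian(*box1)
    mu2, S2 = box_to_gaussian(*box2)
    S = 0.5 * (S1 + S2)
    dmu = (mu1 - mu2)[:, None]
    bhat = (0.125 * dmu.T @ np.linalg.inv(S) @ dmu
            + 0.5 * np.log(np.linalg.det(S)
                           / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2))))
    return float(np.sqrt(max(1.0 - np.exp(-bhat.item()), 0.0)))

print(probiou_distance((0, 0, 4, 2, 0.0), (0, 0, 4, 2, 0.0)))   # 0.0 (identical)
print(probiou_distance((0, 0, 4, 2, 0.0), (1, 0, 4, 2, 0.3)))   # > 0
```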
Figures:
Figure 1: Study area location and corresponding natural color image.
Figure 2: Sample images from the dataset: (a) Alternaria Blotch, (b) Black Rot, (c) Brown Spot, (d) Gray Spot, (e) Mosaic, (f) Rust, (g) Scab, and (h) healthy apple leaves.
Figure 3: Methods for representing object detection bounding boxes: (a) horizontal box representation; (b) oriented box.
Figure 4: Comparison of detection results using different bounding box methods: (a) detection with a horizontal bounding box; (b) detection with a rotating bounding box.
Figure 5: Overall network structure.
Figure 6: The structure of convolution blocks: (a) FusedMBConv; (b) S-MBConv.
Figure 7: Traditional convolution and depth-wise separable convolution.
Figure 8: The structure of DWConv.
Figure 9: The structure of CAFM.
Figure 10: Comparison of testing results using different models: (a) original testing images with annotations; (b) detection result of Faster-RCNN; (c) detection result of YOLOv5; (d) detection result of YOLOv8; (e) detection results of our proposed model.
Figure 11: Comparison of RMSE counting results for different models.
18 pages, 3921 KiB  
Article
Image Dehazing Enhancement Strategy Based on Polarization Detection of Space Targets
by Shuzhuo Miao, Zhengwei Li, Han Zhang and Hongwen Li
Appl. Sci. 2024, 14(21), 10042; https://doi.org/10.3390/app142110042 - 4 Nov 2024
Viewed by 305
Abstract
In view of the fact that the technology of polarization detection performs better at identifying targets through clouds and fog, the recognition ability of the space target detection system under haze conditions will be improved by applying the technology. However, due to the low ambient brightness and limited target radiation information during space target detection, the polarization information of space target is seriously lost, and the advantages of polarization detection technology in identifying targets through clouds and fog cannot be effectively exerted under the condition of haze detection. In order to solve the above problem, a dehazing enhancement strategy specifically applied to polarization images of space targets is proposed. Firstly, a hybrid multi-channel interpolation method based on regional correlation analysis is proposed to improve the calculation accuracy of polarization information during preprocessing. Secondly, an image processing method based on full polarization information inversion is proposed to obtain the degree of polarization of the image after inversion and the intensity of the image after dehazing. Finally, the image fusion method based on discrete cosine transform is used to obtain the dehazing polarization fusion enhancement image. The effectiveness of the proposed image processing strategy is verified by carrying out simulated and real space target detection experiments. Compared with other methods, by using the proposed image processing strategy, the quality of the polarization images of space targets obtained under the haze condition is significantly improved. Our research results have important practical implications for promoting the wide application of polarization detection technology in the field of space target detection. Full article
(This article belongs to the Section Aerospace Science and Engineering)
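
The full-polarization quantities underlying such processing follow the standard linear Stokes formalism: four intensity images taken behind polarizers at 0°, 45°, 90°, and 135° give S0, S1, and S2, and from them the degree and angle of linear polarization. A numpy sketch on random stand-in frames; the paper's inversion and discrete-cosine-transform fusion steps are not reproduced here:

```python
import numpy as np

def stokes_from_four(I0, I45, I90, I135):
    """Linear Stokes parameters from intensity images taken behind
    polarizers at 0°, 45°, 90°, 135° (standard division-of-focal-plane layout)."""
    S0 = 0.5 * (I0 + I45 + I90 + I135)   # total intensity
    S1 = I0 - I90
    S2 = I45 - I135
    dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-9)  # degree of linear polarization
    aop = 0.5 * np.arctan2(S2, S1)                        # angle of polarization
    return S0, dolp, aop

rng = np.random.default_rng(3)
frames = rng.uniform(0.0, 1.0, size=(4, 128, 128))  # hypothetical polarized frames
S0, dolp, aop = stokes_from_four(*frames)
print(S0.shape, float(dolp.mean()))
```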
Figures:
Figure 1: The diagram of the polarization detection system of space targets.
Figure 2: Schematic diagram of the interpolation method proposed in this paper.
Figure 3: Subjective comparison results of different interpolation methods.
Figure 4: The comparison results of Scenes 1-3 before and after degradation.
Figure 5: The image processing flowchart of the proposed method.
Figure 6: Comparison results of polarization images of simulated space targets after dehazing: (a) polarization image of the targets obtained by inversion; (b) intensity image after dehazing.
Figure 7: Schematic diagram of polarization image fusion methods.
Figure 8: Subjective comparison results of the dehazing effects of simulated space targets.
Figure 9: Subjective comparison results of the dehazing effects of the celestial targets and the space targets.
20 pages, 1116 KiB  
Article
Signaling Effects in AI Streamers: Optimal Separation Strategy Under Different Market Conditions
by Ying Yu and Yunpeng Yang
J. Theor. Appl. Electron. Commer. Res. 2024, 19(4), 2997-3016; https://doi.org/10.3390/jtaer19040144 - 3 Nov 2024
Viewed by 457
Abstract
The fusion of livestreaming e-commerce and AI technology is booming, and many firms have started to replace human streamers with AI streamers. Despite their popularity, the acceptance of AI streamers by consumers varies widely and the signaling effects of AI streamers still remain unclear. We build an analytical model and compare scenarios where the acceptance level is either exogenously given or endogenously determined, highlighting the implications for firms’ optimal separation strategy. Our findings suggest that in markets with moderate information asymmetry, using both price and acceptance level as joint signals can be more profitable for high-quality firms. Conversely, in highly asymmetric markets, firms must incur additional costs to distinguish their high-quality products, regardless of the signaling strategy employed. Our paper provides strategic insights for firms aiming to leverage AI streamers in diverse market conditions. Full article
Figures:
Figure 1: Timing of the model.
Figure 2: The high-quality firm's profit variation with the fraction of uninformed consumers.
Figure 3: The high-quality firm's profit variation with the fraction of uninformed consumers under different marginal costs for acceptance improvement.
17 pages, 4993 KiB  
Article
NFSA-DTI: A Novel Drug–Target Interaction Prediction Model Using Neural Fingerprint and Self-Attention Mechanism
by Feiyang Liu, Huang Xu, Peng Cui, Shuo Li, Hongbo Wang and Ziye Wu
Int. J. Mol. Sci. 2024, 25(21), 11818; https://doi.org/10.3390/ijms252111818 - 3 Nov 2024
Viewed by 616
Abstract
Existing deep learning methods have shown outstanding performance in predicting drug–target interactions. However, they still have limitations: (1) the over-reliance on locally extracted features by some single encoders, with insufficient consideration of global features, and (2) the inadequate modeling and learning of local crucial interaction sites in drug–target interaction pairs. In this study, we propose a novel drug–target interaction prediction model called the Neural Fingerprint and Self-Attention Mechanism (NFSA-DTI), which effectively integrates the local information of drug molecules and target sequences with their respective global features. The neural fingerprint method is used in this model to extract global features of drug molecules, while the self-attention mechanism is utilized to enhance CNN’s capability in capturing the long-distance dependencies between the subsequences in the target amino acid sequence. In the feature fusion module, we improve the bilinear attention network by incorporating attention pooling, which enhances the model’s ability to learn local crucial interaction sites in the drug–target pair. The experimental results on three benchmark datasets demonstrated that NFSA-DTI outperformed all baseline models in predictive performance. Furthermore, case studies illustrated that our model could provide valuable insights for drug discovery. Moreover, our model offers molecular-level interpretations. Full article
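
The bilinear attention map with attention pooling described above scores every (drug atom, protein subsequence) pair and lets the strongest pairs dominate the joint representation. A heavily simplified numpy sketch with random features and an assumed single bilinear weight matrix, not the paper's exact network:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(4)
drug = rng.normal(size=(30, 64))      # 30 atom-level drug features
protein = rng.normal(size=(200, 64))  # 200 subsequence-level protein features
W = rng.normal(scale=0.05, size=(64, 64))

# Bilinear interaction map: score every (atom, subsequence) pair; high
# scores flag candidate local binding sites.
interaction = drug @ W @ protein.T           # (30, 200)

# Attention pooling over the map instead of plain sum-pooling, so a few
# strong interaction sites dominate the joint representation.
attn = softmax(interaction.ravel()).reshape(interaction.shape)
drug_summary = (attn.sum(axis=1)[:, None] * drug).sum(axis=0)        # (64,)
protein_summary = (attn.sum(axis=0)[:, None] * protein).sum(axis=0)  # (64,)
joint = np.concatenate([drug_summary, protein_summary])
print(joint.shape)  # (128,)
```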
Figures:
Figure 1: Ablation study on the Human and BioSNAP datasets.
Figure 2: (a) The 2D visualization result of mycophenolic acid obtained from NFSA-DTI; the orange highlights indicate possible local binding sites, with darker color and larger area indicating greater likelihood. (b,c) The 2D and 3D diagrams of the interaction between mycophenolic acid and inosine monophosphate dehydrogenase from the PDB online database, drawn with the software Molecular Operating Environment (MOE 2019.0102) [40].
Figure 3: Learning curves of NFSA-DTI when changing some hyperparameters on the validation set of the BindingDB dataset.
Figure 4: (A) The framework of NFSA-DTI, in four stages: (I) the target protein's amino acid sequence is transformed into a feature matrix, while the drug molecule's SMILES is processed into graph structure data; (II) a 3-layer CNN processes the two-dimensional feature matrix and obtains the protein representation after the self-attention enhancing unit, while a 3-layer NFGNN processes the graph structure data and obtains the drug representation; (III) interactions between protein and drug representations are computed via a bilinear attention mechanism, producing a bilinear attention map; (IV) the joint representation obtained after pooling is input to a fully connected layer to compute the prediction score p. (B) The framework of ESACM: three 1D convolutional layers with corresponding 1D batch normalization layers, a self-attention enhancing unit, and a linear layer; within the self-attention enhancing unit, query, key, and value are computed from the input matrix and their weight matrices (Stage I), then the similarity between query and key gives the attention weights and the output is a weighted summation of the values (Stage II). (C) The flowchart of NFGNN: the target node and its neighbor nodes in the molecular graph are integrated and encoded into numerical features, the same is done for subsequent nodes to obtain the final neural fingerprint, and after each iteration of the message passing mechanism the neural fingerprint serves as a fixed auxiliary input for updating the graph after each NFlayer.
Figure 5: The flowchart of the bilinear attention network: (Step 1) a bilinear interaction matrix is derived from the protein and drug representations; (Step 2) a joint representation is obtained via bilinear pooling and attention pooling.