Search Results (13,472)

Search Parameters:
Keywords = mapping accuracy

20 pages, 2332 KiB  
Article
Fine Estimation of Water Quality in the Yangtze River Basin Based on a Geographically Weighted Random Forest Regression Model
by Fuliang Deng, Wenhui Liu, Mei Sun, Yanxue Xu, Bo Wang, Wei Liu, Ying Yuan and Lei Cui
Remote Sens. 2025, 17(4), 731; https://doi.org/10.3390/rs17040731 (registering DOI) - 19 Feb 2025
Abstract
Water quality evaluation usually relies on limited state-controlled monitoring data, making it challenging to fully capture variations across an entire basin over time and space. The fine estimation of water quality in a spatial context presents a promising solution to this issue; however, traditional analyses often ignore spatial non-stationarity between variables. To solve the above-mentioned problems in water quality mapping research, we took the Yangtze River as our study subject and attempted to use a geographically weighted random forest regression (GWRFR) model to couple massive station observation data and auxiliary data to carry out a fine estimation of water quality. Specifically, we first utilized state-controlled sections’ water quality monitoring data as input for the GWRFR model to train and map six water quality indicators at a 30 m spatial resolution. We then assessed various geographical and environmental factors contributing to water quality and identified spatial differences. Our results show accurate predictions for all indicators: ammonia nitrogen (NH3-N) had the lowest accuracy (R2 = 0.61, RMSE = 0.13), and total nitrogen (TN) had the highest (R2 = 0.74, RMSE = 0.48). The mapping results reveal total nitrogen as the primary pollutant in the Yangtze River basin. Chemical oxygen demand and the permanganate index were mainly influenced by natural factors, while total nitrogen and total phosphorus were impacted by human activities. The spatial distribution of critical influencing factors shows significant clustering. Overall, this study demonstrates the fine spatial distribution of water quality and provides insights into the influencing factors that are crucial for the comprehensive management of water environments. Full article
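The "geographically weighted" part of the GWRFR model can be illustrated in a few lines: each prediction location weights the surrounding monitoring stations with a spatial kernel, so nearby observations dominate. A minimal Python sketch, assuming toy coordinates and a Gaussian kernel; the paper's actual method fits a random forest at each location rather than the weighted mean used here, and all names and the bandwidth are illustrative.

```python
import numpy as np

def gaussian_weights(d, bandwidth):
    """Spatial kernel: nearby stations get more influence."""
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def gw_predict(xy_train, y_train, xy_query, bandwidth=1.0):
    """Geographically weighted prediction as a locally weighted mean.

    A real GWRFR would fit a random forest per location; the weighted
    average here only demonstrates the spatial-weighting idea.
    """
    preds = []
    for q in xy_query:
        d = np.linalg.norm(xy_train - q, axis=1)
        w = gaussian_weights(d, bandwidth)
        preds.append(np.sum(w * y_train) / np.sum(w))
    return np.array(preds)

# Stations at x=0 and x=10 with values 0 and 1: a query near x=0
# should be pulled almost entirely toward 0.
xy = np.array([[0.0, 0.0], [10.0, 0.0]])
y = np.array([0.0, 1.0])
p = gw_predict(xy, y, np.array([[1.0, 0.0]]), bandwidth=2.0)
print(float(p[0]))
```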
19 pages, 7319 KiB  
Article
A Dual-Branch U-Net for Staple Crop Classification in Complex Scenes
by Jiajin Zhang, Lifang Zhao and Hua Yang
Remote Sens. 2025, 17(4), 726; https://doi.org/10.3390/rs17040726 - 19 Feb 2025
Abstract
Accurate information on crop planting and spatial distribution is critical for understanding and tracking long-term land use changes. The method of using deep learning (DL) to extract crop information has been applied in large-scale datasets and plain areas. However, current crop classification methods face some challenges, such as poor image time continuity, difficult data acquisition, rugged terrain, fragmented plots, and diverse planting conditions in complex scenes. In this study, we propose the Complex Scene Crop Classification U-Net (CSCCU), which aims to improve the mapping accuracy of staple crops in complex scenes by combining multi-spectral bands with spectral features. CSCCU features a dual-branch structure: the main branch concentrates on image feature extraction, while the auxiliary branch focuses on spectral features. In our method, we use the hierarchical feature-level fusion mechanism. Through the hierarchical feature fusion of the shallow feature fusion module (SFF) and the deep feature fusion module (DFF), feature learning is optimized and model performance is improved. We conducted experiments using GaoFen-2 (GF-2) images in Xiuwen County, Guizhou Province, China, and established a dataset consisting of 1000 image patches of size 256, covering seven categories. In our method, the corn and rice accuracies are 89.72% and 88.61%, and the mean intersection over union (mIoU) is 85.61%, which is higher than the compared models (U-Net, SegNet, and DeepLabv3+). Our method provides a novel solution for the classification of staple crops in complex scenes using high-resolution images, which can help to obtain accurate information on staple crops in larger regions in the future. Full article
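The hierarchical feature-level fusion described above — combining image-branch and spectral-branch feature maps — reduces, at its core, to concatenating the two branches along the channel axis and mixing channels with a learned projection. A NumPy sketch with illustrative shapes and a random stand-in for the learned weights (the real SFF/DFF modules are trained convolutional blocks):

```python
import numpy as np

def fuse(main_feat, aux_feat, seed=0):
    """Feature-level fusion: concatenate the two branches along the
    channel axis, then mix channels with a 1x1-style projection
    (random weights here stand in for a learned layer)."""
    cat = np.concatenate([main_feat, aux_feat], axis=0)  # (C1+C2, H, W)
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((main_feat.shape[0], cat.shape[0]))
    return np.einsum('oc,chw->ohw', proj, cat)

main = np.ones((16, 8, 8))  # image-branch feature map (channels first)
aux = np.ones((4, 8, 8))    # spectral-branch feature map
out = fuse(main, aux)
print(out.shape)  # (16, 8, 8)
```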
20 pages, 3079 KiB  
Article
Flow Field Modeling Analysis on Kitchen Environment with Air Conditioning Range Hood
by Xiaoying Huang, Zhihang Shen, Shunyu Zhang, Yongqiang Tan, Ang Li, Bingsong Yu, Yi Jiang, Liang Peng and Zhenlei Chen
Atmosphere 2025, 16(2), 236; https://doi.org/10.3390/atmos16020236 - 19 Feb 2025
Abstract
This study proposes a flow field modeling analysis of kitchen environments with air-conditioning range hoods. The substructure approach is applied to resolve the challenges of low computational efficiency and convergence difficulties associated with the simultaneous consideration of the range hood and the cooling air-conditioning fan impeller rotation models. The presented approach effectively enhances computational efficiency while ensuring accuracy. A flow field analysis of the air-conditioning substructure was performed in Fluent to obtain the velocity contour plot at the air-conditioning outlet monitoring surface. The data were then mapped to the full kitchen hood model to enable a comprehensive flow field analysis of the kitchen setup. The results show that the proposed substructure-based method to analyze the flow field in kitchens with air-conditioning hoods is computationally efficient, achieving an alignment accuracy above 95% across four measurement points. These findings establish a strong foundation for future comfort assessments and the optimization of kitchens with air-conditioning hoods. Full article
(This article belongs to the Section Air Pollution Control)
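The mapping step described above — carrying the velocity field from the substructure's outlet monitoring surface into the full kitchen model — is essentially interpolation between two point sets. A nearest-neighbour sketch with made-up points and velocities (production couplings typically use conservative interpolation; all names here are illustrative):

```python
import numpy as np

def map_boundary(src_pts, src_vel, dst_pts):
    """Transfer a scalar velocity field from source surface points to
    destination points by nearest-neighbour lookup."""
    out = np.empty(len(dst_pts))
    for i, p in enumerate(dst_pts):
        j = np.argmin(np.linalg.norm(src_pts - p, axis=1))
        out[i] = src_vel[j]
    return out

src_pts = np.array([[0.0, 0.0], [1.0, 0.0]])  # monitoring-surface points
src_vel = np.array([2.0, 5.0])                # velocities there (m/s)
dst_pts = np.array([[0.1, 0.0], [0.9, 0.0]])  # full-model inlet points
mapped = map_boundary(src_pts, src_vel, dst_pts)
print(mapped)  # [2. 5.]
```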
25 pages, 25542 KiB  
Article
Automatic Mapping of 10 m Tropical Evergreen Forest Cover in Central African Republic with Sentinel-2 Dynamic World Dataset
by Wenqiong Zhao, Xinyan Zhong, Xiaodong Li, Xia Wang, Yun Du and Yihang Zhang
Remote Sens. 2025, 17(4), 722; https://doi.org/10.3390/rs17040722 - 19 Feb 2025
Abstract
Tropical evergreen forests represent the richest biodiversity in terrestrial ecosystems, and the fine spatial-temporal resolution mapping of these forests is essential for the study and conservation of this vital natural resource. The current methods for mapping tropical evergreen forests frequently exhibit coarse spatial resolution and lengthy production cycles. This can be attributed to the inherent challenges associated with monitoring diverse surface changes and the persistence of cloudy, rainy conditions in the tropics. We propose a novel approach to automatically map annual 10 m tropical evergreen forest covers from 2017 to 2023 with the Sentinel-2 Dynamic World dataset in the biodiversity-rich and conservation-sensitive Central African Republic (CAR). The Copernicus Global Land Cover Layers (CGLC) and Global Forest Change (GFC) products were used first to track stable evergreen forest samples. Then, initial evergreen forest cover maps were generated by determining the threshold of evergreen forest cover for each of the yearly median forest cover probability maps. From 2017 to 2023, the annual modified 10 m tropical evergreen forest cover maps were finally produced from the initial evergreen forest cover maps and NEFI (Non-Evergreen Forest Index) images with the estimated thresholds. The results produced by the proposed method achieved an overall accuracy of >94.10% and a Cohen’s Kappa of >87.63% across all years (F1-Score > 94.05%), which represents a significant improvement over the performance of previous methods, including the CGLC evergreen forest cover maps and yearly median forest cover probability maps based on Sentinel-2 Dynamic World. Our findings demonstrate that the proposed method provides detailed spatial characteristics of evergreen forests and time-series change in the Central African Republic, with substantial consistency across all years. Full article
(This article belongs to the Section Forest Remote Sensing)
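The thresholding step described above — estimating an evergreen-forest threshold from stable samples and applying it to a yearly median probability map — can be sketched as follows. The percentile rule and all values are illustrative assumptions, not the thresholds estimated in the paper.

```python
import numpy as np

def forest_mask(prob_map, sample_probs, pct=5):
    """Derive a threshold from stable forest samples (here, their 5th
    percentile probability, an illustrative rule) and apply it to the
    yearly median forest cover probability map."""
    t = np.percentile(sample_probs, pct)
    return prob_map >= t

samples = np.array([0.8, 0.85, 0.9, 0.95])   # stable-sample probabilities
prob = np.array([[0.2, 0.83], [0.9, 0.5]])   # toy probability map
mask = forest_mask(prob, samples)
print(mask.astype(int))
```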
Show Figures

Figure 1. Study site and dataset. (a) Geolocation of the Central African Republic in Africa, with evergreen forest sourced from the CGLS-LC100 land cover map in 2019; (b) zoomed CGLS-LC100 land cover map in 2019, highlighting the evergreen forest class only; (c) monthly dynamics of forest cover probabilities in the Sentinel-2 Dynamic World dataset, in which A, B and C are typical evergreen forest samples, and D, E and F are non-forest or non-evergreen-forest samples. These six samples were selected in cloud-free areas of the monthly Sentinel-2 Dynamic World images in 2020.
Figure 2. Flowchart of the proposed method.
Figure 3. Monthly and yearly median forest cover probability maps in the Central African Republic based on Sentinel-2 near-real-time Dynamic World data in 2020. The red box in January denotes the area zoomed in Figure 4.
Figure 4. The subarea of the monthly and yearly median forest cover probability maps in Figure 3, based on Sentinel-2 near-real-time Dynamic World data in 2020.
Figure 5. Evergreen forest cover sample points in the Dynamic World forest cover probability map and the NEFI image. (a) Sample points in the Dynamic World forest cover probability map of 2020; (b) sample points in the Non-Evergreen Forest Index (NEFI) image of 2020; (c) statistical histogram of sample points in the forest cover probability map; (d) statistical histogram of sample points in the NEFI image.
Figure 6. Evergreen forest cover maps for different products and methods. (a) CGLS-LC100 evergreen forest cover map in 2020; (b) evergreen forest cover map generated from the yearly median Dynamic World forest cover probability in 2020 using only threshold T1, filtered by GFC; (c) modified evergreen forest cover map in 2020. Subarea 1 and Subarea 2 show two zoomed areas in (a–c) and the corresponding Google Earth RGB images, acquired on 23 January 2014 for Subarea 1 and 29 July 2012 for Subarea 2.
Figure 7. Annual evergreen forest cover maps from 2017 to 2023 produced by the proposed method. Subarea 1 and Subarea 2 show zoomed maps of localized evergreen forest cover by year, with the Google Earth image for reference.
Figure 8. Evergreen forest cover change maps from 2017 to 2023. (a) Annual evergreen forest cover decrease year map from 2018 to 2023 (red labeling); (b) annual evergreen forest cover increase year map from 2018 to 2023 (green labeling). The shaded colors indicate the year in which the first increase or decrease occurred relative to the 2017 baseline evergreen forests. Four columns (southeastern, central-south, central, and southwestern) show annual decreases and increases, aligning with the red boxes in (a,b).
Figure 9. Frequency map of evergreen forest cover for the years 2017 to 2023. Label numbers represent the frequency of occurrences of evergreen forest cover at the pixel scale.
Figure 10. Comparison of evergreen forest cover mapping with different composites of yearly median and mean Dynamic World Sentinel-2 forest cover probability images. (a) Evergreen forest cover map using the mean composite; (b) evergreen forest cover map using the median composite; (c) Dynamic World forest cover probability median map; (d) Google Earth RGB imagery; (e) overall accuracy of evergreen forest cover maps produced from the yearly mean, median, and integration of yearly mean and NEFI image composites in 2020.
25 pages, 12971 KiB  
Article
A Semi-Supervised Diffusion-Based Framework for Weed Detection in Precision Agricultural Scenarios Using a Generative Attention Mechanism
by Ruiheng Li, Xuaner Wang, Yuzhuo Cui, Yifei Xu, Yuhao Zhou, Xuechun Tang, Chenlu Jiang, Yihong Song, Hegan Dong and Shuo Yan
Agriculture 2025, 15(4), 434; https://doi.org/10.3390/agriculture15040434 - 19 Feb 2025
Abstract
The development of smart agriculture has created an urgent demand for efficient and accurate weed recognition and detection technologies. However, the diverse and complex morphology of weeds, coupled with the scarcity of labeled data in agricultural scenarios, poses significant challenges to traditional supervised learning methods. To address these issues, a weed detection model based on a semi-supervised diffusion generative network is proposed. This model integrates a generative attention mechanism and semi-diffusion loss to enable the efficient utilization of both labeled and unlabeled data. Experimental results demonstrate that the proposed method outperforms existing approaches across multiple evaluation metrics, achieving a precision of 0.94, recall of 0.90, accuracy of 0.92, and mAP@50 and mAP@75 of 0.92 and 0.91, respectively. Compared to traditional methods such as DETR, precision and recall are improved by approximately 10% and 8%, respectively. Additionally, compared to the enhanced YOLOv10, mAP@50 and mAP@75 are increased by 1% and 2%, respectively. The proposed semi-supervised diffusion weed detection model provides an efficient and reliable solution for weed recognition and introduces new research perspectives for the application of semi-supervised learning in smart agriculture. This framework establishes both theoretical and practical foundations for addressing complex target detection challenges in the agricultural domain. Full article
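The reported precision (0.94) and recall (0.90) follow the standard detection definitions; the counts in the sketch below are made up purely to show the arithmetic.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical detection counts, chosen only to illustrate the formulas.
p, r = precision_recall(tp=90, fp=10, fn=10)
print(p, r)  # 0.9 0.9
```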
21 pages, 684 KiB  
Article
A High Performance Air-to-Air Unmanned Aerial Vehicle Target Detection Model
by Hexiang Hao, Yueping Peng, Zecong Ye, Baixuan Han, Xuekai Zhang, Wei Tang, Wenchao Kang and Qilong Li
Drones 2025, 9(2), 154; https://doi.org/10.3390/drones9020154 - 19 Feb 2025
Abstract
In air-to-air UAV target detection tasks, existing algorithms suffer from low precision, low recall and high dependence on device processing power, which makes it difficult to detect small UAV targets efficiently. To solve these problems, this paper proposes a high-precision model, ATA-YOLOv8. In this paper, we analyze the problem of small UAV target detection from the perspective of the effective receptive field. The proposed model is evaluated using two air-to-air UAV image datasets, MOT-FLY and Det-Fly, and compared with YOLOv8n and other SOTA algorithms. The experimental results show that the mAP50 of ATA-YOLOv8 is 94.9% and 96.4% on the MOT-FLY and Det-Fly datasets, respectively, which is 25% and 5.9% higher than the mAP of YOLOv8n, while maintaining a model size of 5.1 MB. The methods in this paper improve the accuracy of UAV target detection in air-to-air scenarios. The proposed model's small size, fast speed and high accuracy make real-time air-to-air UAV detection on edge-computing devices possible. Full article
Show Figures

Figure 1. The framework of ATA-YOLOv8.
Figure 2. The structure of the cross-stage detail enhance block.
Figure 3. The structure of the Efficient Multi-Scale Attention Module.
Figure 4. The structure of cross-stage-partial_partial multi-scale feature aggregation.
Figure 5. The structure of the Omni-Kernel Module.
Figure 6. The structure of the Detail Enhanced Rep Shared Convolutional Detection Head, including the shared block. VC, ADC, CDC, VDC and HDC denote the five parallel convolution layers: standard convolution, angle differential convolution, center differential convolution, vertical differential convolution and horizontal differential convolution, respectively.
Figure 7. The mAP50, GFLOPs and parameters of the ablation experiments.
Figure 8. Effective receptive fields of the 13th, 17th, 18th and 21st layers of the network, corresponding to (a–d), respectively.
Figure 9. Heat maps using HiResCAM: (a) original image; (b) YOLOv8; (c) YOLOv8 + CSDE; (d) YOLOv8 + CSDE + CSP_PMSA + OmniKernel; (e) ATA-YOLOv8.
Figure 10. Visualization of the network's effective receptive fields.
25 pages, 4205 KiB  
Article
Method of Dynamic Modeling and Robust Optimization for Chain Transmission Mechanism with Time-Varying Load Uncertainty
by Taisu Liu, Yuan Liu, Peitong Liu and Xiaofei Du
Machines 2025, 13(2), 166; https://doi.org/10.3390/machines13020166 - 19 Feb 2025
Abstract
Time-varying driving loads and uncertain structural parameters affect the transmission accuracy of chain transmission mechanisms. To enhance the transmission accuracy and placement consistency of these mechanisms, a robust optimization design method based on Karhunen–Loeve expansion and Polynomial Chaos Expansion (KL-PCE) is proposed. First, a dynamic model of the chain transmission mechanism, considering multiple contact modes, is established, and the model’s accuracy is verified through experiments. Then, based on the KL-PCE method, a mapping relationship between uncertain input parameters and output responses is established. A robust optimization design model for the chain transmission process is formulated, with transmission accuracy and consistency as objectives. Finally, case studies are used to verify the effectiveness of the proposed method. Thus, the transmission accuracy of the chain transmission mechanism is improved, providing a theoretical foundation for the design of chain transmission mechanisms under time-varying load uncertainties and for improving the accuracy of other complex mechanisms. Full article
(This article belongs to the Special Issue Advancements in Mechanical Power Transmission and Its Elements)
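The PCE half of the KL-PCE method builds a polynomial surrogate mapping uncertain inputs to output responses. A least-squares fit in a single standard-normal variable, using probabilists' Hermite polynomials, can be sketched as below; the paper additionally expands the time-varying load with Karhunen–Loeve modes, and the linear toy response here is an assumption made for illustration.

```python
import numpy as np

def fit_pce(xi, y, order=2):
    """Least-squares Polynomial Chaos Expansion in one standard-normal
    variable, with probabilists' Hermite polynomials He_0..He_order."""
    He = [np.ones_like(xi), xi, xi**2 - 1.0][: order + 1]
    A = np.stack(He, axis=1)          # design matrix of basis evaluations
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

rng = np.random.default_rng(1)
xi = rng.standard_normal(200)         # samples of the uncertain input
y = 2.0 + 3.0 * xi                    # toy linear response
c = fit_pce(xi, y)                    # recovers [2, 3, 0]
print(np.round(c, 3))
```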
Show Figures

Figure 1. Schematic diagram of the chain transmission mechanism (in the plane).
Figure 2. Topological relationship of the chain transmission mechanism.
Figure 3. Schematic diagram of the chain system.
Figure 4. Geometrical diagram of the sprocket tooth groove.
Figure 5. Contact relationship between the roller and the positioning curve of the sprocket tooth groove.
Figure 6. Contact relationship between the roller and the top curve of the sprocket tooth groove.
Figure 7. Contact relationship between adjacent chain links.
Figure 8. Test schematic diagram of the propellant transport process.
Figure 9. The torque curve for the sprocket.
Figure 10. Comparison of simulated and experimental displacement for the modular charge.
Figure 11. The torque design curve for the sprocket.
Figure 12. Test result of the propellant transport mechanism's PCE model.
Figure 13. The Pareto optimal solution set of the propellant transport process.
Figure 14. PDF of the modular charge displacement before and after optimization.
Figure A1. Position relation of adjacent rigid bodies.
Figure A2. Schematic diagram of virtual body constraints: (a) the closed-loop mechanism; (b) disconnecting hinge 4; (c) adding the virtual body.
25 pages, 5090 KiB  
Article
Research on Intelligent Verification of Equipment Information in Engineering Drawings Based on Deep Learning
by Zicheng Zhang and Yurou He
Electronics 2025, 14(4), 814; https://doi.org/10.3390/electronics14040814 - 19 Feb 2025
Abstract
This paper focuses on the crucial task of automatic recognition and understanding of table structures in engineering drawings and document processing. Given the importance of tables in information display and the urgent need for automated processing of tables in the digitalization process, an intelligent verification method is proposed. This method integrates multiple key techniques: YOLOv10 is used for table object recognition, achieving a precision of 0.891, a recall rate of 0.899, mAP50 of 0.922, and mAP50-95 of 0.677 in table recognition, demonstrating strong target detection capabilities; the improved LORE algorithm is adopted to extract table structures, breaking through the limitations of the original algorithm by segmenting large-sized images, with a table extraction accuracy rate reaching 91.61% and significantly improving the accuracy of handling complex tables; RapidOCR is utilized to achieve text recognition and cell correspondence, solving the problem of text-cell matching; for equipment name semantic matching, a method based on BERT is introduced and calculated using a comprehensive scoring method. Meanwhile, an improved cuckoo search algorithm is proposed to optimize the adjustment factors, avoiding local optima through sine optimization and the catfish effect. Experiments show the accuracy of equipment name matching in semantic similarity calculation approaches 100%. Finally, the paper provides a concrete system practice to prove the effectiveness of the algorithm. In conclusion, through experimental comparisons, this method exhibits excellent performance in table area location, structure recognition, and semantic matching and is of great significance and practical value in advancing table data processing technology in engineering drawings. Full article
(This article belongs to the Section Artificial Intelligence)
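The equipment-name semantic matching described above scores candidate pairs by embedding similarity; with BERT embeddings this is commonly cosine similarity, as sketched below with toy 2-D vectors standing in for real BERT embeddings.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy vectors: identical names score 1.0, orthogonal names score 0.0.
a = np.array([1.0, 0.0])
b = np.array([1.0, 0.0])
c = np.array([0.0, 1.0])
print(cosine(a, b), cosine(a, c))  # 1.0 0.0
```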
Show Figures

Figure 1. The framework of the intelligent verification method.
Figure 2. The framework of YOLOv10.
Figure 3. Illustration of the improved LORE algorithm.
Figure 4. First-last layer average pooling.
Figure 5. Improved cuckoo search algorithm flowchart.
Figure 6. Iteration curve of algorithm training effectiveness.
Figure 7. Display of recognition results.
Figure 8. Comparison of the recognition process of the proposed algorithm with the original LORE algorithm.
Figure 9. Iteration curves of the three functions for CS and ICS: (a) function F1; (b) function F2; (c) function F3.
Figure 10. Iteration curves of the three algorithms.
Figure 11. Schematic diagram of the system process, model and components.
Figure 12. Screenshot of matching results in the system.
18 pages, 33036 KiB  
Article
Three-Dimensional Magnetotelluric Forward Modeling Using Multi-Task Deep Learning with Branch Point Selection
by Fei Deng, Hongyu Shi, Peifan Jiang and Xuben Wang
Remote Sens. 2025, 17(4), 713; https://doi.org/10.3390/rs17040713 - 19 Feb 2025
Abstract
Magnetotelluric (MT) forward modeling is a key technique in magnetotelluric sounding, and deep learning has been widely applied to MT forward modeling. In three-dimensional (3-D) problems, although existing methods can predict forward modeling results with high accuracy, they often use multiple networks to simulate multiple forward modeling parameters, resulting in low efficiency. We apply multi-task learning (MTL) to 3-D MT forward modeling to achieve simultaneous inference of apparent resistivity and impedance phase, effectively improving overall efficiency. Furthermore, through comparative analysis of feature map differences in various decoder layers of the network, we identify the optimal branching point for multi-task learning decoders. This enhances the feature extraction capabilities of the network and improves the prediction accuracy of forward modeling parameters. Additionally, we introduce an uncertainty-based loss function to dynamically balance the learning weights between tasks, addressing the shortcomings of traditional loss functions. Experiments demonstrate that compared with single-task networks and existing multi-task networks, the proposed network (MT-FeatureNet) achieves the best results in terms of Structural Similarity Index Measure (SSIM), Mean Relative Error (MRE), and Mean Absolute Error (MAE). The proposed multi-task learning model not only improves the efficiency and accuracy of 3-D MT forward modeling but also provides a novel approach to the design of multi-task learning network structures. Full article
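The uncertainty-based loss mentioned above is commonly implemented in the style of Kendall et al.: each task's loss is scaled by a learned precision and penalized by its log-variance, so noisier tasks are automatically down-weighted. A sketch with made-up loss values (in training, the log-variances would be learnable parameters):

```python
import numpy as np

def uncertainty_weighted_loss(losses, log_vars):
    """Multi-task loss with learned task uncertainty:
    L = sum_i exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2).
    A larger s_i shrinks the effective weight exp(-s_i) of task i."""
    losses = np.asarray(losses, dtype=float)
    s = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-s) * losses + s))

# Equal raw losses, but task 2 claims higher uncertainty (s=1), so it
# contributes exp(-1)*1 + 1 instead of a full 1.0.
total = uncertainty_weighted_loss([1.0, 1.0], [0.0, 1.0])
print(round(total, 4))  # 2.3679
```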
Show Figures

Figure 1. Comparison between (a) single-task networks and (b) multi-task networks.
Figure 2. U-Net single-task network model.
Figure 3. U-Net single-task network model.
Figure 4. Feature maps of layers (a) A, (b) B and (c) C.
Figure 5. Loss per epoch comparison.
Figure 6. Comparison of single-anomaly results.
Figure 7. Comparison of the results of the two anomalies.
Figure 8. Comparison of the results of the three anomalies.
Figure 9. Loss per epoch comparison.
23 pages, 10921 KiB  
Article
A Weakly Supervised and Self-Supervised Learning Approach for Semantic Segmentation of Land Cover in Satellite Images with National Forest Inventory Data
by Daniel Moraes, Manuel L. Campagnolo and Mário Caetano
Remote Sens. 2025, 17(4), 711; https://doi.org/10.3390/rs17040711 - 19 Feb 2025
Abstract
National Forest Inventories (NFIs) provide valuable land cover (LC) information but often lack spatial continuity and an adequate update frequency. Satellite-based remote sensing offers a viable alternative, employing machine learning to extract thematic data. State-of-the-art methods such as convolutional neural networks rely on fully pixel-level annotated images, which are difficult to obtain. Although reference LC datasets have been widely used to derive annotations, NFIs consist of point-based data, providing only sparse annotations. Weakly supervised and self-supervised learning approaches help address this issue by reducing dependence on fully annotated images and leveraging unlabeled data. However, their potential for large-scale LC mapping needs further investigation. This study explored the use of NFI data with deep learning and weakly supervised and self-supervised methods. Using Sentinel-2 images and the Portuguese NFI, which covers other LC types beyond forest, as sparse labels, we performed weakly supervised semantic segmentation with a convolutional neural network to create an updated and spatially continuous national LC map. Additionally, we investigated the potential of self-supervised learning by pretraining a masked autoencoder on 65,000 Sentinel-2 image chips and then fine-tuning the model with NFI-derived sparse labels. The weakly supervised baseline achieved a validation accuracy of 69.60%, surpassing Random Forest (67.90%). The self-supervised model achieved 71.29%, performing on par with the baseline using half the training data. The results demonstrated that integrating both learning approaches enabled successful countrywide LC mapping with limited training data. Full article
(This article belongs to the Section Earth Observation Data)
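The sparse-label training described in the abstract can be sketched as a cross-entropy that simply ignores unlabeled pixels, so only the NFI-derived annotations carry gradient. This is a minimal NumPy illustration, not the paper's implementation; the ignore marker, class count, and function name are assumptions.

```python
import numpy as np

IGNORE = 255  # illustrative marker for pixels without an NFI-derived label

def sparse_ce_loss(logits, labels, ignore=IGNORE):
    """Cross-entropy averaged over labeled pixels only.

    logits: (N, C) class scores; labels: (N,) ints, with `ignore`
    marking the unlabeled pixels that contribute no gradient.
    """
    mask = labels != ignore
    if not mask.any():
        return 0.0
    z = logits[mask] - logits[mask].max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(mask.sum()), labels[mask]].mean())

# A 3 x 3 window with a single labeled center pixel, mirroring the
# photo-point setup: only that one pixel contributes to the loss.
logits = np.zeros((9, 4))        # 9 pixels, 4 classes, uniform scores
labels = np.full(9, IGNORE)
labels[4] = 2
print(round(sparse_ce_loss(logits, labels), 4))  # ln(4) = 1.3863
```

Framework implementations achieve the same effect with a masking option in the loss (e.g., an ignore index), which is what makes training on point-based NFI labels practical.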
Figure 1. Study area and location of sample areas used for model training and validation.
Figure 2. Example of NFI photo-points: (a) with matching point-patch labels; (b) located at the interface between distinct land covers; and (c) with mismatching point-patch labels.
Figure 3. Illustration of distinctly labeled training data. High-resolution image (a), dense labels used in typical fully supervised methods (b) and sparse labels used in our weakly supervised approach (c). Colored and grey pixels correspond to labeled and unlabeled pixels, respectively. The labels in (c) are derived from the photo-point, seen in the center of the 3 × 3 window.
Figure 4. Network architecture of our ConvNext-V2 Atto U-Net. The figure also exhibits the ConvNext-V2 block. LN, GRN and GELU stand for Layer Normalization, Global Response Normalization and Gaussian Error Linear Unit, respectively. Conv K × K refers to a convolutional layer with a kernel size of K × K.
Figure 5. MAE architecture, illustrating the reconstruction of masked patches. Image representations learned at the encoder can be transferred and applied to different downstream tasks. Each patch corresponds to 8 × 8 pixels.
Figure 6. Overall accuracy of the baseline and self-supervised pretrained models. The values represent the average of 10 runs with a 95% confidence interval and were computed on the validation split.
Figure 7. Validation split accuracy of the three tested models with distinct training set sizes. The reported values are the average of 10 runs with a 95% confidence interval.
Figure 8. Model performance per land cover class measured by the F1-score. For other coniferous, no F1-score was reported for Random Forest, as the model did not predict any sampling units belonging to this class.
Figure 9. Example of land cover maps produced by Random Forest, ConvNext-V2 baseline and ConvNext-V2 self-supervised pretrained models.
Figure 10. Land cover map of Portugal (2023).
Figure A1. Example of 30 × 30 m windows used for training a Random Forest classifier for the homogeneity filter. Annotations as non-homogeneous or homogeneous considered not only the high-resolution images (seen in the figure) but also Sentinel-2 images.
34 pages, 42799 KiB  
Article
YOLO-DentSeg: A Lightweight Real-Time Model for Accurate Detection and Segmentation of Oral Diseases in Panoramic Radiographs
by Yue Hua, Rui Chen and Hang Qin
Electronics 2025, 14(4), 805; https://doi.org/10.3390/electronics14040805 - 19 Feb 2025
Abstract
Panoramic radiography is vital in dentistry, where accurate detection and segmentation of diseased regions aid clinicians in fast, precise diagnosis. However, the current methods struggle with accuracy, speed, feature extraction, and suitability for low-resource devices. To overcome these challenges, this research introduces a unique YOLO-DentSeg model, a lightweight architecture designed for real-time detection and segmentation of oral dental diseases, which is based on an enhanced version of the YOLOv8n-seg framework. First, the C2f(Channel to Feature Map)-Faster structure is introduced in the backbone network, achieving a lightweight design while improving the model accuracy. Next, the BiFPN(Bidirectional Feature Pyramid Network) structure is employed to enhance its multi-scale feature extraction capabilities. Then, the EMCA(Enhanced Efficient Multi-Channel Attention) attention mechanism is introduced to improve the model’s focus on key disease features. Finally, the Powerful-IOU(Intersection over Union) loss function is used to optimize the detection box localization accuracy. Experiments show that YOLO-DentSeg achieves a detection precision (mAP50(Box)) of 87%, segmentation precision (mAP50(Seg)) of 85.5%, and a speed of 90.3 FPS. Compared to YOLOv8n-seg, it achieves superior precision and faster inference while decreasing the model size, computational load, and parameter count by 44.9%, 17.5%, and 44.5%, respectively. YOLO-DentSeg enables fast, accurate disease detection and segmentation, making it practical for devices with limited computing power and ideal for real-world dental applications. Full article
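The Powerful-IoU loss used here builds on the plain intersection-over-union between a predicted and a ground-truth box. The sketch below shows only that base quantity, not the paper's full Powerful-IoU penalty term; the box format (x1, y1, x2, y2) is an assumption.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) tuples."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7, two boxes overlapping in one unit square
```

IoU-family losses (CIoU, Powerful-IoU, etc.) add penalty terms on top of 1 - IoU to speed convergence when boxes do not overlap or differ in shape.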
Figure 1. Schematic diagram of oral disease detection and segmentation.
Figure 2. YOLO-DentSeg model structure diagram.
Figure 3. PConv schematic.
Figure 4. Comparison of Faster-Block and Bottleneck structures.
Figure 5. C2f-Faster schematic.
Figure 6. FPN, PANet, and BiFPN structures.
Figure 7. EMCA schematic of attention mechanisms.
Figure 8. Schematics of CIOU and PowerIOU. (a) The structure of the original YOLOv8 boundary box loss function, CIoU (Complete Intersection over Union); (b) The structure of the proposed boundary box loss function, Powerful-IoU.
Figure 9. The images before and after data augmentation.
Figure 10. Comparison of detection and segmentation accuracy averages prior to and following model enhancement.
Figure 11. Experimental curves for ablation experiments.
Figure 12. Adding experimental curves for different attention modules.
Figure 13. Experimental curves with various employed loss functions.
Figure 14. Scatterplots of different model experiments. (A) The relationship between the number of parameters and FPS (Frames Per Second) for each model; (B) The relationship between computational complexity (FLOPs) and FPS for each model; (C) The relationship between FPS and mAP50 (Box) for each model; (D) The relationship between FPS and mAP50 (Seg) for each model.
Figure 15. Detection segmentation results for different models.
22 pages, 2245 KiB  
Article
A Lightweight Drone Detection Method Integrated into a Linear Attention Mechanism Based on Improved YOLOv11
by Sicheng Zhou, Lei Yang, Huiting Liu, Chongqing Zhou, Jiacheng Liu, Shuai Zhao and Keyi Wang
Remote Sens. 2025, 17(4), 705; https://doi.org/10.3390/rs17040705 - 19 Feb 2025
Abstract
The timely and accurate detection of unidentified drones is vital for public safety. However, the unique characteristics of drones in complex environments and the varied postures they may adopt during approach present significant challenges. Additionally, deep learning algorithms often require large models and substantial computational resources, limiting their use on low-capacity platforms. To address these challenges, we propose LAMS-YOLO, a lightweight drone detection method based on linear attention mechanisms and adaptive downsampling. The model’s lightweight design, inspired by CPU optimization, reduces parameters using depthwise separable convolutions and efficient activation functions. A novel linear attention mechanism, incorporating an LSTM-like gating system, enhances semantic extraction efficiency, improving detection performance in complex scenarios. Building on insights from dynamic convolution and multi-scale fusion, a new adaptive downsampling module is developed. This module efficiently compresses features while retaining critical information. Additionally, an improved bounding box loss function is introduced to enhance localization accuracy. Experimental results demonstrate that LAMS-YOLO outperforms YOLOv11n, achieving a 3.89% increase in mAP and a 9.35% reduction in parameters. The model also exhibits strong cross-dataset generalization, striking a balance between accuracy and efficiency. These advancements provide robust technical support for real-time drone monitoring. Full article
(This article belongs to the Section Engineering Remote Sensing)
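The parameter saving from the depthwise separable convolutions behind this kind of lightweight backbone is easy to count: a single k × k convolution costs C_in · C_out · k² weights, while the depthwise-plus-pointwise pair costs C_in · k² + C_in · C_out. The layer sizes below are illustrative, not taken from LAMS-YOLO.

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (biases omitted)."""
    return c_in * c_out * k * k

def dw_sep_params(c_in, c_out, k):
    """Depthwise k x k (one filter per input channel) plus 1 x 1 pointwise."""
    return c_in * k * k + c_in * c_out

# Illustrative layer: 64 -> 128 channels with 3 x 3 kernels
standard = conv_params(64, 128, 3)     # 73728 weights
separable = dw_sep_params(64, 128, 3)  # 576 + 8192 = 8768 weights
print(separable / standard)            # roughly an 8x reduction
```

This ratio, around k² for large channel counts, is the main reason depthwise separable blocks suit CPU-bound and embedded platforms.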
Figure 1. The structure of YOLOv11.
Figure 2. Improvement scheme at the backbone combined with the PP-LCNet. The dotted box represents optional modules. The Stem denotes the initial feature extraction layer built using a standard 3 × 3 convolution. DepthSepConv means depth-wise separable convolutions, DW means depth-wise convolution, PW means point-wise convolution, GAP means global average pooling.
Figure 3. The structure of the adaptive downsampling module.
Figure 4. The structure of the MILA linear attention mechanism.
Figure 5. The scheme of Shape-IoU loss function.
Figure 6. Optimization techniques in LAMS-YOLO include utilizing PP-LCNet as the backbone. Furthermore, the Adown incorporates the Mamba-inspired linear attention mechanism to create an enhanced feature fusion network.
Figure 7. Comparison of features between birds and drones.
Figure 8. Label volume and distribution for drone detection.
Figure 9. The PR diagram between YOLOv11n and LAMS-YOLO.
Figure 10. Comparison of mAP and model size with different methods.
Figure 11. Heat maps of YOLOv11n and LAMS-YOLO for drones obtained by Grad-CAM. (a) Heat maps generated by YOLOv11n for drones on the self-built dataset. (b) Heat maps generated by LAMS-YOLO for drones on the self-built dataset.
Figure 12. Comparative experiments with different models on the self-built dataset.
Figure 13. Visual comparison of detection results between YOLOv11n and LAMS-YOLO on the self-built dataset. (a) YOLOv11n. (b) LAMS-YOLO.
Figure 14. Visual detection results based on LAMS-YOLO in small drone object detection.
13 pages, 968 KiB  
Article
Sentinel Lymph Node Detection in Cervical Cancer: Challenges in Resource-Limited Settings with High Prevalence of Large Tumours
by Szilárd Leó Kiss, Mihai Stanca, Dan Mihai Căpîlna, Tudor Emil Căpîlna, Maria Pop-Suciu, Botond Istvan Kiss, Szilárd Leó Kiss and Mihai Emil Căpîlna
J. Clin. Med. 2025, 14(4), 1381; https://doi.org/10.3390/jcm14041381 - 19 Feb 2025
Abstract
Background/Objectives: Cervical cancer primarily disseminates through the lymphatic system, with the metastatic involvement of pelvic and para-aortic lymph nodes significantly impacting prognosis and treatment decisions. Sentinel lymph node (SLN) mapping is critical in guiding surgical management. However, resource-limited settings often lack advanced detection tools like indocyanine green (ICG). This study evaluated the feasibility and effectiveness of SLN biopsy using alternative techniques in a high-risk population with a high prevalence of large tumours. Methods: This prospective, observational study included 42 patients with FIGO 2018 stage IA1–IIA1 cervical cancer treated between November 2019 and April 2024. SLN mapping was performed using methylene blue alone or combined with a technetium-99m radiotracer. Detection rates, sensitivity, and false-negative rates were analysed. Additional endpoints included tracer technique comparisons, SLN localization patterns, and factors influencing detection success. Results: SLNs were identified in 78.6% of cases, with bilateral detection in 57.1%. The combined technique yielded higher detection rates (93.3% overall, 80% bilateral) compared to methylene blue alone (70.4% overall, 40.7% bilateral, p < 0.05). The sensitivity and negative predictive values were 70% and 93.87%, respectively. Larger tumours (>4 cm), deep stromal invasion, and prior conization negatively impacted detection rates. False-negative SLNs were associated with larger tumours and positive lymphovascular space invasion. Conclusions: SLN biopsy is feasible in resource-limited settings, with improved detection rates using combined tracer techniques. However, sensitivity remains suboptimal due to a steep learning curve and challenges in high-risk patients. Until a high detection accuracy is achieved, SLN mapping should complement, rather than replace, pelvic lymphadenectomy in high-risk cases. Full article
(This article belongs to the Special Issue Laparoscopy and Surgery in Gynecologic Oncology)
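The reported sensitivity and negative predictive value are standard confusion-matrix quantities. A minimal sketch with illustrative counts (not the study's raw data) that land close to the reported 70% and 93.87%:

```python
def sensitivity(tp, fn):
    """True-positive rate: share of node-positive cases the SLN detected."""
    return tp / (tp + fn)

def npv(tn, fn):
    """Negative predictive value: share of negative SLNs that are truly negative."""
    return tn / (tn + fn)

# Illustrative counts, chosen only to show the arithmetic:
# 7 true positives, 3 false negatives, 46 true negatives
print(sensitivity(7, 3))     # 0.7
print(round(npv(46, 3), 4))  # 0.9388
```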
Figure 1. Localization of the SLNs.
Figure 2. Sensitivity and negative predictive value.
12 pages, 1544 KiB  
Article
Geocoding Applications for Enhancing Urban Water Supply Network Analysis
by Péter Orgoványi, Tamás Hammer and Tamás Karches
Urban Sci. 2025, 9(2), 51; https://doi.org/10.3390/urbansci9020051 - 18 Feb 2025
Abstract
Geospatial tools and geocoding systems play an increasingly significant role in the modernization and operation of municipal water utility networks. This research explored how geocoding systems could improve network management, facilitate leak detection, and enhance hydraulic modeling accuracy. Various geocoding services, including the Google, Bing Maps, and OpenStreetMap APIs, were analyzed using address data from a small Central European municipality. The analysis was performed in February and March of 2024. The accuracy and efficiency of these systems in handling spatial data for domestic water networks were assessed and results showed that geocoding accuracy depended on the quality of the service provider databases and the formatting of input data. Google proved the most reliable, while Bing and OpenStreetMap were less accurate. Additionally, the Location Database developed by Lechner Knowledge Center was used as a reliable local reference for comparison with global services. Geocoding results were integrated into GIS software (Google Earth ver. 7.3.6.9796, QGIS ver. 3.36, ArcGIS ver. 10.8.2) to enable spatial analysis and comparison of geographic coordinates. The findings highlight geocoding’s critical role in efficient water network management, particularly for mapping consumer data and rapidly localizing leaks and breaks. Our findings directly support hydraulic modeling tasks, contributing to sustainable operations and cost-effective interventions. Full article
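Comparing a provider's geocoded coordinates against a reference point, as done here with the Lechner database, reduces to a great-circle distance. A common haversine sketch follows; the coordinates below are illustrative, not taken from the study's municipality.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 lat/lon points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Offset between a reference address point and a provider's geocoded result
ref = (46.5425, 24.5575)   # illustrative coordinates
hit = (46.5427, 24.5579)
print(round(haversine_m(*ref, *hit), 1))  # positional error in metres
```

Thresholding such distances per address is one simple way to score the relative hit rate of each geocoding service.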
Figure 1. Relative hit rate of geocoding services.
Figure 2. Accuracy of geocoding services in relation to the Lechner Knowledge Centre’s Access Point database.
17 pages, 5497 KiB  
Article
High Spatiotemporal Resolution Monitoring of Water Body Dynamics in the Tibetan Plateau: An Innovative Method Based on Mixed Pixel Decomposition
by Yuhang Jing and Zhenguo Niu
Sensors 2025, 25(4), 1246; https://doi.org/10.3390/s25041246 - 18 Feb 2025
Abstract
The Tibetan Plateau, known as the “Third Pole” and the “Water Tower of Asia”, has experienced significant changes in its surface water due to global warming. Accurately understanding and monitoring the spatiotemporal distribution of surface water is crucial for ecological conservation and the sustainable use of water resources. Among existing satellite data, the MODIS sensor stands out for its long time series and high temporal resolution, which make it advantageous for large-scale water body monitoring. However, its spatial resolution limitations hinder detailed monitoring. To address this, the present study proposes a dynamic endmember selection method based on phenological features, combined with mixed pixel decomposition techniques, to generate monthly water abundance maps of the Tibetan Plateau from 2000 to 2023. These maps precisely depict the interannual and seasonal variations in surface water, with an average accuracy of 95.3%. Compared to existing data products, the water abundance maps developed in this study provide better detail of surface water, while also benefiting from higher temporal resolution, enabling effective capture of dynamic water information. The dynamic monitoring of surface water on the Tibetan Plateau shows a year-on-year increase in water area, with an increasing fluctuation range. The surface water abundance products presented in this study not only provide more detailed information for the fine characterization of surface water but also offer a new technical approach and scientific basis for timely and accurate monitoring of surface water changes on the Tibetan Plateau. Full article
(This article belongs to the Special Issue Feature Papers in Remote Sensors 2024)
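Under a two-endmember linear mixing model, a pixel's water abundance is the least-squares projection onto the water-land mixing line. This is a minimal sketch of that decomposition; the endmember spectra and function name are illustrative, not the study's phenology-derived endmembers or its dynamic selection step.

```python
import numpy as np

def water_abundance(pixel, water_em, land_em):
    """Water fraction of a mixed pixel under a two-endmember linear model.

    Solves pixel = a * water_em + (1 - a) * land_em for a in the
    least-squares sense, then clips a to the physical range [0, 1].
    """
    d = water_em - land_em
    a = float(np.dot(pixel - land_em, d) / np.dot(d, d))
    return min(1.0, max(0.0, a))

# Illustrative 3-band endmember spectra: water is dark, land is brighter
water = np.array([0.02, 0.01, 0.005])
land = np.array([0.10, 0.20, 0.30])
mixed = 0.6 * water + 0.4 * land      # synthetic pixel that is 60% water
print(round(water_abundance(mixed, water, land), 3))  # 0.6
```

With more than two endmembers, the same idea generalizes to constrained least squares over an endmember matrix, which is what large-scale abundance mapping pipelines typically solve per pixel.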
Figure 1. Study Area Overview.
Figure 2. Workflow of Water Body Abundance Inversion on the Tibetan Plateau.
Figure 3. Abundance maps and validation results: (a) Abundance results for July 2017; (b) Distribution of classification accuracy, commission rate, and omission rate; (c) Scatter plot of RMSE and ME distribution.
Figure 4. Analysis of Area Trend Over the Year.
Figure 5. Comparison with Other Datasets: (a) Comparison of Area with Other Datasets; (b) Correlation of Abundance Map with Other Datasets.
Figure 6. Interannual Area Change Diagram.
Figure 7. Correlation Analysis with JRC and GSWED Datasets.
Figure 8. Identification Results of Small Water Bodies.
Figure 9. Identification of Linear Water Bodies.
Figure 10. Potential of Abundance Maps in Wetland Classification.