Search Results (13,621)

Search Parameters:
Keywords = mapping accuracy

16 pages, 3109 KiB  
Article
A Machine Learning Classification Approach to Geotechnical Characterization Using Measure-While-Drilling Data
by Daniel Goldstein, Chris Aldrich, Quanxi Shao and Louisa O'Connor
Geosciences 2025, 15(3), 93; https://doi.org/10.3390/geosciences15030093 - 7 Mar 2025
Abstract
Bench-scale geotechnical characterization often suffers from high uncertainty, reducing confidence in geotechnical analysis, because resource development drilling and mapping are expensive. The Measure-While-Drilling (MWD) system uses sensors to collect drilling data from open-pit blast hole drill rigs. Historically, the focus of MWD studies was on penetration rates to identify rock formations during drilling. This study explores the effectiveness of Artificial Intelligence (AI) classification models using MWD data to predict geotechnical categories, including stratigraphic unit, rock/soil strength, rock type, Geological Strength Index, and weathering properties. Feature importance algorithms, Minimum Redundancy Maximum Relevance and ReliefF, identified all MWD responses as influential, leading to their inclusion in Machine Learning (ML) models. ML algorithms tested included Decision Trees (DTs), Support Vector Machines (SVMs), Naive Bayes (NB), Random Forests (RFs), K-Nearest Neighbors (KNNs), and Linear Discriminant Analysis (LDA). KNNs, SVMs, and RFs achieved up to 97% accuracy, outperforming the other models. Prediction performance varied with class distribution, with balanced datasets showing wider accuracy ranges and skewed datasets achieving higher accuracies. The findings demonstrate a robust framework for applying AI to real-time orebody characterization, offering valuable insights for geotechnical engineers and geologists in improving orebody prediction and analysis.
(This article belongs to the Special Issue Digging Deeper: Insights and Innovations in Rock Mechanics)
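As context for the abstract above, here is a minimal sketch (not the authors' code) of how such a classifier comparison looks in scikit-learn, using the MWD response names from Figure 2; the CSV file and column names are hypothetical placeholders.

```python
# Sketch: comparing the classifier families named in the abstract on tabular MWD
# features. File and column names are hypothetical stand-ins.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

df = pd.read_csv("mwd_holes.csv")                    # hypothetical MWD export
X = df[["rop", "tor", "fob", "bap"]]                 # MWD responses (see Figure 2)
y = df["rock_type"]                                  # one of the geotechnical targets

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "DT": DecisionTreeClassifier(),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(n_estimators=200),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```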
Figures:
Figure 1: The MWD data were collected using the following representative drilling rigs: (a) the Terex SKS 12, which drilled 0.229 m production blast holes, and (b) the Epiroc D65, which was used for drilling 0.165 m wall control blast holes.
Figure 2: Distributions of MWD datapoints for (a) rop, (b) tor, (c) fob, and (d) bap.
Figure 3: Pearson Correlation Coefficient plot for MWD data variables.
Figure 4: Distribution of investigated geotechnical categories for (a) stratigraphic unit, (b) rock type, (c) weathering intensity, (d) rock or soil strength, and (e) Geological Strength Index.
Figure 5: MRMR and ReliefF results for MWD response features.
Figure 6: Validation and testing cost scores versus training duration for the investigation of classification-based ML algorithms.
Figure 7: Confusion matrices showing testing accuracies (%) for rock types using (a) DTs, (b) SVMs, (c) KNNs, (d) RFs, (e) LDA, and (f) NB.
17 pages, 7122 KiB  
Article
Multi-Temporal and Multi-Resolution RGB UAV Surveys for Cost-Efficient Tree Species Mapping in an Afforestation Project
by Saif Ullah, Osman Ilniyaz, Anwar Eziz, Sami Ullah, Gift Donu Fidelis, Madeeha Kiran, Hossein Azadi, Toqeer Ahmed, Mohammed S. Elfleet and Alishir Kurban
Remote Sens. 2025, 17(6), 949; https://doi.org/10.3390/rs17060949 - 7 Mar 2025
Abstract
Accurate, cost-efficient vegetation mapping is critical for managing afforestation projects, particularly in resource-limited areas. This study used a consumer-grade RGB unmanned aerial vehicle (UAV) to evaluate the optimal spatial and temporal resolutions (leaf-off and leaf-on) for precise, economically viable tree species mapping. The study, conducted in 2024 in Kasho, Bannu district, Pakistan, used UAV missions at multiple altitudes to capture high-resolution RGB imagery (2, 4, and 6 cm) across three sampling plots. A Support Vector Machine (SVM) classifier with 5-fold cross-validation was assessed using accuracy, Shannon entropy, and cost–benefit analyses. The results showed that the 6 cm resolution achieved a reliable accuracy (R2 = 0.92–0.98) with broader coverage (12.3–22.2 hectares), while the 2 cm and 4 cm resolutions offered higher accuracy (R2 = 0.96–0.99) but limited coverage (4.8–14.2 hectares). The 6 cm resolution also yielded the highest benefit–cost ratio (BCR: 0.011–0.015), balancing cost-efficiency and accuracy. This study demonstrates the potential of consumer-grade UAVs for affordable, high-precision tree species mapping, while also accounting for other land cover types such as bare earth and water, supporting budget-constrained afforestation efforts.
(This article belongs to the Section Forest Remote Sensing)
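The classification step can be illustrated with a short scikit-learn sketch: an RBF-kernel SVM scored with 5-fold cross-validation. This is a stand-in under stated assumptions, not the study's pipeline; the feature arrays are hypothetical.

```python
# Sketch: 5-fold cross-validated SVM over per-pixel (or per-segment) RGB features,
# assuming feature extraction from the orthomosaics happened upstream.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.load("rgb_features_6cm.npy")   # hypothetical: N x 3 (R, G, B) samples
y = np.load("labels_6cm.npy")         # hypothetical: class per sample (EC, PJ, water, ...)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```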
Figures:
Figure 1: The map shows the geographical location of the study area in the Kasho region. RGB UAV images at three resolutions have been captured for a selected sample plot—yellow, which is one of three distinct sample plots for this study—with black rectangles marking the targeted vegetation area used for comparative analysis.
Figure 2: Comparison of leaf-off and leaf-on orthoimages for three sample plots (1–3), highlighting seasonal transitions in vegetation classes—from exposed soil and understory in leaf-off to dense canopy coverage in leaf-on images—where red outlines the study area boundary, yellow marks all sample plots, and light blue highlights the selected sample plots for this study.
Figure 3: Workflow for precise vegetation mapping and benefit–cost ratio (BCR) analysis.
Figure 4: Total time and area coverage efficiency across different resolutions, with median and standard deviation indicated via error bars.
Figure 5: Bar graphs showing the area distribution of vegetation classes across different resolutions (2, 4, and 6 cm) in leaf-on and leaf-off conditions.
Figure 6: Precise mapping of vegetation and non-vegetation classes, where W = water, BL = barren land, EC = Eucalyptus camaldulensis, PJ = Prosopis juliflora, AA = Ammophila arenaria, and JA = Juncus acutus.
Figure 7: Pearson correlation between accuracy, class coverage, and entropy gain/loss across resolutions, where the shape of the points denotes the sample plot number, and the color of the crosses indicates the resolution of the corresponding sample plot.
Figure 8: SHAP summary plot of feature contributions to BCR in UAV-based vegetation mapping.
Figure 9: Effect of resolution and seasonal condition on BCR, analyzed by two-way ANOVA, highlighting a significant impact of resolution compared to the effect of condition (α = 0.005).
16 pages, 1104 KiB  
Article
Detection of Fractured Endodontic Instruments in Periapical Radiographs: A Comparative Study of YOLOv8 and Mask R-CNN
by İrem Çetinkaya, Ekin Deniz Çatmabacak and Emir Öztürk
Diagnostics 2025, 15(6), 653; https://doi.org/10.3390/diagnostics15060653 - 7 Mar 2025
Abstract
Background/Objectives: Accurate localization of fractured endodontic instruments (FEIs) in periapical radiographs (PAs) remains a significant challenge. This study aimed to evaluate the performance of YOLOv8 and Mask R-CNN in detecting FEIs and root canal treatments (RCTs) and compare their diagnostic capabilities with those of experienced endodontists. Methods: A data set of 1050 annotated PAs was used. Mask R-CNN and YOLOv8 models were trained and evaluated for FEI and RCT detection. Metrics including accuracy, intersection over union (IoU), mean average precision at 0.5 IoU (mAP50), and inference time were analyzed. Observer agreement was assessed using inter-class correlation (ICC), and comparisons were made between AI predictions and human annotations. Results: YOLOv8 achieved an accuracy of 97.40%, a mAP50 of 98.9%, and an inference time of 14.6 ms, outperforming Mask R-CNN in speed and mAP50. Mask R-CNN demonstrated an accuracy of 98.21%, a mAP50 of 95%, and an inference time of 88.7 ms, excelling in detailed segmentation tasks. Comparative analysis revealed no statistically significant differences in diagnostic performance between the models and experienced endodontists. Conclusions: Both YOLOv8 and Mask R-CNN demonstrated high diagnostic accuracy and reliability, comparable to experienced endodontists. YOLOv8’s rapid detection capabilities make it particularly suitable for real-time clinical applications, while Mask R-CNN excels in precise segmentation. This study establishes a strong foundation for integrating AI into dental diagnostics, offering innovative solutions to improve clinical outcomes. Future research should address data diversity and explore multimodal imaging for enhanced diagnostic capabilities.
(This article belongs to the Special Issue Advances in Medical Image Processing, Segmentation and Classification)
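For readers unfamiliar with the tooling, the YOLOv8 half of such a comparison is typically driven through the ultralytics Python package. The sketch below is a generic training/inference loop; the dataset config and image name are hypothetical placeholders, not the authors' files.

```python
# Sketch of a YOLOv8 detection workflow with the ultralytics package.
# "fei_rct.yaml" and the radiograph filename are hypothetical stand-ins.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                                 # pretrained backbone
model.train(data="fei_rct.yaml", epochs=100, imgsz=640)    # FEI + RCT classes

results = model("periapical_radiograph.png")               # inference on one PA
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())  # class, confidence, bbox
```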
Figures:
Figure 1: Representative examples of Mask R-CNN’s performance on periapical radiographs (PAs) for detecting fractured endodontic instruments (FEI) and root canal treatments (RCT). The bounding boxes and associated confidence scores highlight the model’s ability to accurately identify and localize objects. Panels (A1–E1) represent the ground truth annotations marked with blue boxes for FEI and red boxes for RCT, while panels (A2–E2) depict the segmentations generated by the Mask R-CNN model, where FEI is marked with red boxes and RCT with pink boxes.
Figure 2: Flowchart of the Mask R-CNN architecture. A CNN extracts feature maps from the input image. The Region Proposal Network generates candidate regions, which are processed through RoI (Region of Interest) Align to ensure accurate spatial alignment. The extracted features are passed through FC (Fully Connected) layers for classification and bounding box regression. Additionally, Conv (Convolutional) layers are used for mask prediction.
Figure 3: Flowchart of the YOLO architecture.
Figure 4: Saliency map outputs for FEI and RCT detection using YOLO and Mask R-CNN. (A1–D1) Raw periapical radiographs; (A2–D2) corresponding saliency maps. (A) YOLO-based saliency map for FEI detection, (B) YOLO-based saliency map for RCT detection, (C) Mask R-CNN-based saliency map for FEI detection, and (D) Mask R-CNN-based saliency map for RCT detection. The red boxes indicate the regions identified by the models as containing FEI or RCT, highlighting the areas of interest detected by the respective deep learning approaches.
Figure 5: Comparison of training and validation losses for YOLOv8 (top) and Mask R-CNN (bottom) models. The YOLOv8 graphs depict box loss (A) and class loss (B), illustrating a steady decrease in both training and validation losses with minimal divergence, indicating strong generalization and effective performance in object localization and classification. In contrast, the Mask R-CNN graph (C) shows the total loss across training and validation, with training loss decreasing rapidly and validation loss stabilizing with slight fluctuations, reflecting its ability to perform detailed segmentation tasks. Overall, YOLOv8 demonstrates faster convergence and smoother loss reduction, while Mask R-CNN exhibits robustness in tasks requiring precise segmentation.
19 pages, 13823 KiB  
Article
Autonomous Agricultural Robot Using YOLOv8 and ByteTrack for Weed Detection and Destruction
by Ardin Bajraktari and Hayrettin Toylan
Machines 2025, 13(3), 219; https://doi.org/10.3390/machines13030219 - 7 Mar 2025
Abstract
Automating agricultural machinery presents a significant opportunity to lower costs and enhance efficiency in both current and future field operations. The detection and destruction of weeds in agricultural areas via robots can be given as an example of this process. Deep learning algorithms can accurately detect weeds in agricultural fields. Additionally, robotic systems can effectively eliminate these weeds. However, the high computational demands of deep learning-based weed detection algorithms pose challenges for their use in real-time applications. This study proposes a vision-based autonomous agricultural robot that leverages the YOLOv8 model in combination with ByteTrack to achieve effective real-time weed detection. A dataset of 4126 images was used to create YOLO models, with 80% of the images designated for training, 10% for validation, and 10% for testing. Six different YOLO object detectors were trained and tested for weed detection. Among these models, YOLOv8 stands out, achieving a precision of 93.8%, a recall of 86.5%, and a mAP@0.5 detection accuracy of 92.1%. With an object detection speed of 18 FPS and the advantages of the ByteTrack integrated object tracking algorithm, YOLOv8 was selected as the most suitable model. Additionally, the YOLOv8-ByteTrack model, developed for weed detection, was deployed on an agricultural robot with autonomous driving capabilities integrated with ROS. This system facilitates real-time weed detection and destruction, enhancing the efficiency of weed management in agricultural practices.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
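For orientation, the detector-tracker coupling described above is exposed directly in the ultralytics Python package, which ships a ByteTrack configuration; the weights file and video source below are hypothetical placeholders.

```python
# Sketch: pairing a trained YOLOv8 detector with ByteTrack via model.track().
from ultralytics import YOLO

model = YOLO("weed_yolov8.pt")  # hypothetical weed-detection weights
for result in model.track(source="field_camera.mp4", tracker="bytetrack.yaml", stream=True):
    if result.boxes.id is None:  # no confirmed tracks in this frame
        continue
    for track_id, xyxy in zip(result.boxes.id.int().tolist(), result.boxes.xyxy.tolist()):
        print(f"weed track {track_id}: {xyxy}")  # stable IDs let the robot target each weed once
```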
Figures:
Figure 1: Machine vision-based weeding robots: (a) the Bonirob, (b) the ARA, (c) the AVO, (d) the Laserweeder.
Figure 2: Overview of the autonomous agricultural robot.
Figure 3: Block diagram of the autonomous agricultural robot.
Figure 4: Position of the autonomous agricultural robot.
Figure 5: Flowchart of the autonomous navigation part.
Figure 6: YOLOv5 architecture [49].
Figure 7: YOLOv8 architecture [49].
Figure 8: ByteTrack workflow [55].
Figure 9: Types of weeds: (a) dandelion weeds, (b) Heliotropium indicum, (c) young field thistle Cirsium arvense, (d) Cirsium arvense, (e) Plantago lanceolata, (f) Eclipta, (g) Urtica dioica.
Figure 10: Results for the YOLOv5 model on an image.
Figure 11: (a) Results of the YOLOv5 pruned and quantized with transfer learning; (b) results of the YOLOv5 pruned and quantized.
Figure 12: Performance curves of YOLOv5: (a) metrics/precision curves, (b) metrics/recall curves.
Figure 13: Performance curves of YOLOv5: (a) metrics/mAP@0.5, (b) metrics/mAP@0.5:0.95.
Figure 14: Performance results of YOLOv8.
7 pages, 2689 KiB  
Case Report
Cryptic KMT2A::AFDN Fusion Due to AFDN Insertion into KMT2A in a Patient with Acute Monoblastic Leukemia
by Qing Wei, Gokce A. Toruner, Beenu Thakral, Keyur P. Patel, Naveen Pemmaraju, Sa A. Wang, Rashmi Kanagal-Shamanna, Guilin Tang, Ghayas C. Issa, Sanam Loghavi, L Jeffrey Medeiros and Courtney DiNardo
Genes 2025, 16(3), 317; https://doi.org/10.3390/genes16030317 - 7 Mar 2025
Abstract
Background: KMT2A rearrangements occur in ~10% of acute myeloid leukemia (AML) cases and are critical for classification, risk stratification, and use of targeted therapy. However, insertions involving the KMT2A gene can evade detection using chromosomal analysis and/or fluorescence in situ hybridization (FISH). Methods: We present a case of a 22-year-old woman with acute monoblastic leukemia harboring a cryptic KMT2A::AFDN fusion identified by RNA sequencing. Initial FISH showed a 3′ KMT2A deletion, while conventional karyotyping and the automated bioinformatic pipeline for optical genome mapping (OGM) did not identify the canonical translocation. Results: To resolve these discrepancies, metaphase KMT2A FISH (break-apart fusion probe) was performed to assess whether KMT2A was translocated to another chromosome. However, the results did not support this possibility, as the fusion signal remained on the normal chromosome 11, with the 5′ KMT2A signal localized to the derivative chromosome 11. A subsequent manual review of the OGM data revealed a cryptic ~300 kb insertion of AFDN into the 3′ region of KMT2A, reconciling the discrepancies between chromosomal analysis, FISH, and RNA fusion results. Conclusions: This case highlights the importance of integrating multiple testing modalities with expert review when there is a discrepancy. Our findings emphasize the need for a comprehensive approach to genomic assessment to enhance diagnostic accuracy and guide therapeutic decision-making.
(This article belongs to the Special Issue Clinical Molecular Genetics in Hematologic Diseases)
Figures:
Figure 1: Morphologic and immunophenotypic findings in peripheral blood and bone marrow specimens. (A,B): Peripheral blood (A) and bone marrow aspirate (B) smears show large blasts with open chromatin, variably conspicuous nucleoli, round to indented nuclear membranes, and moderate basophilic cytoplasm. No Auer rods were identified (×1000). (C): The bone marrow biopsy specimen shows a hypercellular bone marrow with sheets of large blasts displaying a starry-sky appearance (×400). (D): Immunohistochemical analysis shows that the blasts are positive for lysozyme (×400). (E–I): Flow cytometric immunophenotypic analysis shows that the blasts are positive for CD34, CD117, CD4, CD33, CD38, CD64, CD123, HLA-DR, TdT, and MPO (dim, ~5%).
Figure 2: Chromosomal analysis, interphase FISH, RNA fusion panel, and metaphase FISH results. (A): Chromosomal analysis reveals a complex karyotype, 47,XX,+8,del(9)(q21q31)[11]/47,idem,inv(11)(q14q23)[8]. (B): Interphase FISH using a KMT2A break-apart probe shows one intact yellow fusion signal and one green signal, indicating a 3′ KMT2A deletion. (C): RNA fusion panel identifies fusion transcripts between exon 8 of KMT2A and exon 2 of AFDN. (D): Metaphase FISH reveals a normal chromosome 11 with a yellow fusion signal (yellow arrow) and a derivative chromosome 11 with only a 5′ KMT2A green signal (green arrow) and loss of 3′ KMT2A.
Figure 3: OGM results. (A) The OGM circos plot reveals trisomy 8, a deletion on the long arm of chromosome 9, and an interchromosomal translocation between chromosomes 9 and 10. Additionally, within chromosome 11q23, one inversion (marked by a blue dot), one deletion (marked by a red dot), and several intrachromosomal rearrangements are detected. (B) Initial review of OGM near the KMT2A locus highlights a 3′ end deletion of KMT2A, indicated by a thick red arrow and line on the reference chromosome 11, corresponding to the red dot in the circos plot (A). Furthermore, an inversion involving the CBL gene is denoted by a thick blue arrow and line on the reference chromosome 11, corresponding to the blue dot in the circos plot (A). (C) Detailed manual review of the insertion event reveals an insertion encompassing exons 2–11 of the AFDN gene, depicted by yellow bars within the red box on consensus map 2. This pattern of yellow bars matches that observed in (D). (D) Reference map and consensus map of chromosome 4 further illustrate the pattern of the AFDN (exon 2–11) insertion. Throughout these figures, the OGM data are represented with specific visual elements to aid interpretation. The blue lines depict the alignment of the sample’s OGM map data (consensus map) to the reference genome. Within these blue lines, blue bars identify regions of consistent alignment or matched segments, while yellow bars pinpoint regions where structural variations—such as deletions or insertions—have been detected.
21 pages, 2488 KiB  
Article
Classification of Mycena and Marasmius Species Using Deep Learning Models: An Ecological and Taxonomic Approach
by Fatih Ekinci, Guney Ugurlu, Giray Sercan Ozcan, Koray Acici, Tunc Asuroglu, Eda Kumru, Mehmet Serdar Guzel and Ilgaz Akata
Sensors 2025, 25(6), 1642; https://doi.org/10.3390/s25061642 - 7 Mar 2025
Abstract
Fungi play a critical role in ecosystems, contributing to biodiversity and providing economic and biotechnological value. In this study, we developed a novel deep learning-based framework for the classification of seven macrofungi species from the genera Mycena and Marasmius, leveraging their unique ecological and morphological characteristics. The proposed approach integrates a custom convolutional neural network (CNN) with a self-organizing map (SOM) adapted for supervised learning and a Kolmogorov–Arnold Network (KAN) layer to enhance classification performance. The experimental results demonstrate significant improvements in classification metrics when using the CNN-SOM and CNN-KAN architectures. Additionally, advanced pretrained models such as MaxViT-S and ResNetV2-50 achieved high accuracy rates, with MaxViT-S achieving 98.9% accuracy. Statistical analyses using the chi-square test confirmed the reliability of the results, emphasizing the importance of validating evaluation metrics statistically. This research represents the first application of SOM in fungal classification and highlights the potential of deep learning in advancing fungal taxonomy. Future work will focus on optimizing the KAN architecture and expanding the dataset to include more fungal classes, further enhancing classification accuracy and ecological understanding.
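The chi-square validation the abstract mentions can be illustrated with scipy: test whether two models' correct/incorrect counts differ beyond chance. The counts below are invented for illustration, not taken from the paper.

```python
# Sketch of a chi-square check on classifier results: a 2x2 contingency table of
# correct vs. incorrect predictions for two models. Counts are invented.
from scipy.stats import chi2_contingency

#              correct  incorrect
table = [[989, 11],   # e.g., MaxViT-S on 1000 test images (invented counts)
         [962, 38]]   # e.g., a weaker baseline (invented counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")  # small p -> the accuracy gap is unlikely to be chance
```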
Figures:
Figure 1: The proposed CNN-SOM architecture.
Figure 2: A macroscopic overview of Mycena species.
Figure 3: A macroscopic overview of Marasmius species.
25 pages, 23442 KiB  
Article
Evaluating the Accuracy of Land-Use Change Models for Predicting Vegetation Loss Across Brazilian Biomes
by Macleidi Varnier and Eliseu José Weber
Land 2025, 14(3), 560; https://doi.org/10.3390/land14030560 - 7 Mar 2025
Abstract
Land-use change models are used to predict future land-use scenarios. Various methods for predicting changes can be found in the literature, which can be divided into two groups: baseline models and machine-learning-based models. Baseline models use clear change logics, such as proximity or distance from spatial objects, while machine-learning-based models use computational methods and spatial variables to identify patterns that explain the occurrence of changes. Of these two groups, machine-learning-based models are much more widely used, even though their formulation is considerably more complex. However, the lack of studies comparing the performance of models from these two groups makes it impossible to determine the superiority of one over the other. Therefore, this article aims to evaluate and compare the accuracy of baseline and machine-learning-based models for study areas in three Brazilian biomes. Four baseline models (Euclidean distance from anthropic uses, Euclidean distance from vegetation suppressions, null change model, and random change model) and four machine-learning-based models (TerrSet artificial neural network, TerrSet SimWeight, Weights of Evidence–Dinamica Ego, and Random Forest) were trained considering the environmental context of the period from 1995 to 2000. The objective was to predict natural vegetation suppression from 2000 to the years 2005, 2010, 2015, and 2020. The predicted maps were evaluated by comparing them with reference land-use maps using rigorous accuracy methods. The results show that, regardless of the underlying method, the models presented similar performance in all situations. The results and discussions provide a contribution to understanding the strengths and weaknesses of various change models in different environmental contexts.
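The evaluations in this entry lean on the Figure of Merit, a standard score for land-change predictions; as a reference, here is a minimal sketch of its usual (Pontius-style) definition, with toy change maps standing in for real rasters.

```python
# Sketch of the Figure of Merit: hits / (hits + misses + false alarms),
# computed from boolean change rasters. Arrays below are random toy maps.
import numpy as np

def figure_of_merit(observed: np.ndarray, predicted: np.ndarray) -> float:
    hits = np.sum(observed & predicted)            # change correctly predicted as change
    misses = np.sum(observed & ~predicted)         # change predicted as persistence
    false_alarms = np.sum(~observed & predicted)   # persistence predicted as change
    return hits / (hits + misses + false_alarms)

rng = np.random.default_rng(42)
observed = rng.random((100, 100)) < 0.10    # toy reference vegetation-suppression map
predicted = rng.random((100, 100)) < 0.10   # toy model output
print(f"FoM = {figure_of_merit(observed, predicted):.3f}")
```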
Figures:
Figure 1: Methodology flowchart.
Figure 2: (A) Biome boundaries and study areas. (B) State boundaries and study areas. (C) Grid, study areas, and land use and land cover in 2000 and 2020.
Figure 3: (A) ROC and (B) TOC [11].
Figure 4: TOC of the models for the study area in the Amazon biome.
Figure 5: TOC of the models for the study area in the Cerrado biome.
Figure 6: TOC of the models for the study area in the Pampa biome.
Figure 7: Probability surface with the highest average AUC value over the prediction periods for each study area. (A) Amazon: distance from anthropic uses from 2000. (B) Cerrado: Random Forest. (C) Pampa: Random Forest.
Figure 8: Evaluation of the predicted maps for the study area in the Amazon biome using the three-map comparison method.
Figure 9: Evaluation of the predicted maps for the study area in the Cerrado biome using the three-map comparison method.
Figure 10: Evaluation of the predicted maps for the study area in the Pampa biome using the three-map comparison method. Table 4 presents the Figure of Merit values for all models and prediction periods in the study area of the Amazon biome. The models that achieved the highest Figure of Merit values for the prediction of 2005 were distance from anthropogenic uses and distance from vegetation suppression, while the models with the lowest values were random and Weights of Evidence–Dinamica Ego. For the 2010 and 2015 predictions, the highest values were found for the distance from anthropogenic uses and ANN-TerrSet models, while the lowest values were obtained for the random and Dinamica EGO models. For the 2020 prediction, the highest values were found for the distance from anthropogenic uses and Random Forest models, while the lowest values were found for the random and Weights of Evidence–Dinamica Ego models.
Figure 11: Three-map evaluation, Amazon biome: (A) 2005, (B) 2010, (C) 2015, (D) 2020. The model presented for all periods is based on Euclidean distance from anthropogenic uses in 2000.
Figure 12: Three-map evaluation, Cerrado biome: (A) 2005, (B) 2010, (C) 2015, (D) 2020. The model presented for all periods is ANN-TerrSet.
Figure 13: Three-map evaluation, Pampa biome: (A) 2005, (B) 2010, (C) 2015, (D) 2020. The model presented for 2005 and 2010 is ANN-TerrSet, and for 2015 and 2020, it is Random Forest.
16 pages, 20081 KiB  
Article
YOLO-ACE: Enhancing YOLO with Augmented Contextual Efficiency for Precision Cotton Weed Detection
by Qi Zhou, Huicheng Li, Zhiling Cai, Yiwen Zhong, Fenglin Zhong, Xiaoyu Lin and Lijin Wang
Sensors 2025, 25(5), 1635; https://doi.org/10.3390/s25051635 - 6 Mar 2025
Abstract
Effective weed management is essential for protecting crop yields in cotton production, yet conventional deep learning approaches often falter in detecting small or occluded weeds and can be restricted by large parameter counts. To tackle these challenges, we propose YOLO-ACE, an advanced extension of YOLOv5s, which was selected for its optimal balance of accuracy and speed, making it well suited for agricultural applications. YOLO-ACE integrates a Context Augmentation Module (CAM) and Selective Kernel Attention (SKAttention) to capture multi-scale features and dynamically adjust the receptive field, while a decoupled detection head separates classification from bounding box regression, enhancing overall efficiency. Experiments on the CottonWeedDet12 (CWD12) dataset show that YOLO-ACE achieves notable mAP@0.5 and mAP@0.5:0.95 scores—95.3% and 89.5%, respectively—surpassing previous benchmarks. Additionally, we tested the model’s transferability and generalization across different crops and environments using the CropWeed dataset, where it achieved a competitive mAP@0.5 of 84.3%, further showcasing its robust ability to adapt to diverse conditions. These results confirm that YOLO-ACE combines precise detection with parameter efficiency, meeting the exacting demands of modern cotton weed management.
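As a rough illustration of the SKAttention idea named in the abstract, here is a selective-kernel-style attention block in PyTorch: parallel branches with different kernel sizes are fused by softmax weights computed from globally pooled features. This is a generic sketch, not the authors' exact module.

```python
# Generic selective-kernel-style attention block (sketch, not the paper's module).
import torch
import torch.nn as nn

class SKAttention(nn.Module):
    def __init__(self, channels: int, kernels=(3, 5), reduction: int = 8):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, k, padding=k // 2),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for k in kernels
        ])
        hidden = max(channels // reduction, 8)
        self.squeeze = nn.Linear(channels, hidden)
        self.selectors = nn.ModuleList([nn.Linear(hidden, channels) for _ in kernels])

    def forward(self, x):
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # (B, K, C, H, W)
        u = feats.sum(dim=1)                           # fuse all branches
        s = u.mean(dim=(2, 3))                         # global average pooling -> (B, C)
        z = torch.relu(self.squeeze(s))                # compact descriptor
        attn = torch.stack([sel(z) for sel in self.selectors], dim=1)  # (B, K, C)
        attn = torch.softmax(attn, dim=1)[..., None, None]             # weights over kernels
        return (feats * attn).sum(dim=1)               # kernel-wise weighted sum

x = torch.randn(2, 64, 40, 40)
print(SKAttention(64)(x).shape)  # torch.Size([2, 64, 40, 40])
```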
Figures:
Figure 1: (a) Diagram of the algorithm structure of YOLOv5s; (b) diagram of the algorithm of YOLO-ACE; (c) components in the algorithmic structure.
Figure 2: YOLO-ACE module integration flowchart.
Figure 3: Network architecture diagram of the Context Augmentation Module.
Figure 4: Fusion methods of CAM: (a) and (b) show direct feature map integration via weighting and concatenation, respectively, while (c) employs an adaptive fusion—combining convolution, splicing, and softmax—to merge information from three channels.
Figure 5: Selective kernel attention.
Figure 6: Examples of the 12 categories of weeds in CottonWeedDet12.
Figure 7: Convergence curves of YOLO-ACE: (a) mAP at IoU = 0.5; (b) mAP across IoU thresholds from 0.5 to 0.95.
Figure 8: Comparative analysis of cotton weed detection under challenging conditions: (a) weeds exhibiting diverse shapes, sizes, and dimensions; (b) YOLOv5s failing to detect small and occluded targets; (c) YOLO-ACE demonstrating enhanced detection of small and occluded targets.
Figure 9: Comparative heatmap visualizations of YOLOv5 variants with module integrations—none, decoupled head (DH), SKAttention (SK), Context Augmentation Module (CAM), and full integration (YOLO-ACE).
Figure 10: Robust weed detection under variable conditions: (a–d) present detection outcomes under diverse lighting conditions and viewing angles.
Figure 11: Analysis of YOLO-ACE detection failures: (a,b) reveal that severe occlusion or overlap can lead to missed weed detections due to inherent ambiguities; (c,d) show that enhanced feature extraction may misclassify subtle plant features as weeds, resulting in false positives.
19 pages, 6875 KiB  
Article
Estimation of Forest Canopy Height Using ATLAS Data Based on Improved Optics and EEMD Algorithms
by Guanran Wang, Ying Yu, Mingze Li, Xiguang Yang, Hanyuan Dong and Xuebing Guan
Remote Sens. 2025, 17(5), 941; https://doi.org/10.3390/rs17050941 - 6 Mar 2025
Abstract
The Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) mission uses a micropulse photon-counting lidar system for mapping, which provides technical support for capturing forest parameters and carbon stocks over large areas. However, the current algorithm is greatly affected by slope, and extraction of the forest canopy height in areas with steep terrain is poor. In this paper, an improved algorithm is proposed to reduce the influence of topography on canopy height estimation and obtain higher accuracy of forest canopy height. First, an improved clustering algorithm based on ordering points to identify the clustering structure (OPTICS) was developed and used to remove the noisy photons; the photon points were then divided into canopy photons and ground photons based on mean filtering and smooth filtering, and the pseudo-signal photons were removed according to the distance between the two photons. Finally, the photon points were classified and interpolated again to obtain the canopy height. The results show that the improved algorithm was more effective in estimating ground elevation and canopy height, and the result was better in areas with less noise. The root mean square error (RMSE) values of the ground elevation estimates are within 1.15 m for daytime data and 0.67 m for nighttime data. The estimated RMSE values for vegetation height ranged from 2.29 m to 3.83 m. The improved algorithm can provide a good basis for forest height estimation, and its DEM and CHM accuracy improved by 36.48% and 55.93%, respectively.
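A flavor of the OPTICS denoising step can be given with scikit-learn's standard OPTICS (the paper itself develops an improved, elliptical-neighborhood variant); the photon cloud below is synthetic, standing in for real ATL03 photons.

```python
# Sketch: OPTICS-based photon denoising on a toy along-track/elevation cloud.
import numpy as np
from sklearn.cluster import OPTICS

rng = np.random.default_rng(0)
n = 5000
along_track_m = rng.uniform(0, 1000, n)
is_signal = rng.random(n) < 0.7
elevation_m = np.where(is_signal,
                       300 + 0.02 * along_track_m + rng.normal(0, 2, n),  # sloped signal band
                       rng.uniform(250, 450, n))                          # background noise

# Compressing the along-track axis makes the circular neighborhood effectively elliptical.
photons = np.column_stack([along_track_m / 10.0, elevation_m])
labels = OPTICS(min_samples=15, max_eps=5.0).fit_predict(photons)
signal = photons[labels != -1]          # OPTICS labels sparse noise photons as -1
print(f"kept {len(signal)} of {n} photons")
```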
Figures:
Figure 1: The graph on the left represents the study area. The upper picture on the right shows the forest cover in the area, and the lower figure shows the appearance of the drone sensor used in this experiment.
Figure 2: Schematic diagram of the ICESat-2/ATLAS track.
Figure 3: Search neighborhood shape diagram. The circle and the ellipse represent the original neighborhood shape and the improved neighborhood shape, respectively.
Figure 4: Core distance and reachability distance schematic diagram. The red arrow indicates the distance from the center point to different locations. The circle and the ellipse represent the original neighborhood shape and the improved neighborhood shape, respectively.
Figure 5: The decomposition results of initial ground photons using EMD. (a) Original signal; (b) IMF 1; (c) IMF 2; (d) IMF 3; (e) IMF 4; (f) IMF 5; (g) IMF 6; (h) IMF 7; (i) IMF 8; (j) the final residual.
Figure 6: Canopy photon recognition algorithm flow.
Figure 7: The final results of the noise-removal algorithm (low signal-to-noise ratio, complex terrain).
Figure 8: The final results of the noise-removal algorithm (high signal-to-noise ratio, relatively uncomplicated terrain).
Figure 9: The final ground photon extraction and the ground surface generation.
Figure 10: (a–d) The DSM and DEM results for high signal-to-noise ratio and the DSM and DEM results for low signal-to-noise ratio, respectively.
Figure 11: (a1–a3) The DEM, DSM, and CHM error box plots in the high signal-to-noise ratio region and (b1–b3) the DEM, DSM, and CHM error box plots in the low signal-to-noise ratio region, respectively. I stands for improved algorithm and O stands for original algorithm.
Figure 12: Scatter plot of the change in the orbital direction of the residual extension satellite due to slope.
Figure 13: Scatter plot of orbital direction change in residual extension satellite affected by forest cover.
13 pages, 3490 KiB  
Article
QSA-QConvLSTM: A Quantum Computing-Based Approach for Spatiotemporal Sequence Prediction
by Wenbin Yu, Zongyuan Chen, Chengjun Zhang and Yadang Chen
Information 2025, 16(3), 206; https://doi.org/10.3390/info16030206 - 6 Mar 2025
Abstract
The ability to capture long-distance dependencies is critical for improving the prediction accuracy of spatiotemporal prediction models. Traditional ConvLSTM models face inherent limitations in this regard, along with the challenge of information decay, which negatively impacts prediction performance. To address these issues, this paper proposes a QSA-QConvLSTM model, which integrates quantum convolution circuits and quantum self-attention mechanisms. The quantum self-attention mechanism maps query, key, and value vectors using variational quantum circuits, effectively enhancing the ability to model long-distance dependencies in spatiotemporal data. Additionally, the use of quantum convolution circuits improves the extraction of spatial features. Experiments on the Moving MNIST dataset demonstrate the superiority of the QSA-QConvLSTM model over existing models, including ConvLSTM, TrajGRU, PredRNN, and PredRNN v2, with MSE and SSIM scores of 44.3 and 0.906, respectively. Ablation studies further verify the effectiveness and necessity of the quantum convolution circuits and quantum self-attention modules, providing an efficient and accurate approach to quantized modeling for spatiotemporal prediction tasks.
(This article belongs to the Special Issue Quantum Information Processing and Machine Learning)
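The variational mapping of query/key/value vectors described in the abstract can be sketched in PennyLane (a framework choice assumed here, not stated in the paper): a classical vector is angle-encoded into qubits, passed through trainable entangling layers, and read out as Pauli-Z expectations.

```python
# Sketch of a variational quantum circuit producing a "quantum" feature vector,
# in the spirit of the paper's query/key/value mappings (framework assumed).
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqc(x, weights):
    qml.AngleEmbedding(x, wires=range(n_qubits))              # encode classical features
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))  # trainable rotations + CNOT ring
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weights = np.random.uniform(0, np.pi, (n_layers, n_qubits))
query = vqc(np.array([0.1, 0.5, 0.9, 0.3]), weights)  # one token's mapped feature vector
print(query)
```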
Figures:
Graphical abstract.
Figure 1: Flowchart of quantum self-attention mechanism.
Figure 2: Quantization of linear mapping.
Figure 3: Variational quantum circuit diagram.
Figure 4: Quantization of convolution map.
Figure 5: QSA-ConvLSTM model.
Figure 6: Multi-layer quantum circuit.
Figure 7: Curves of evaluation metric variations.
Figure 8: Display of prediction samples: (a) prediction example 1, (b) prediction example 2.
27 pages, 10829 KiB  
Article
Potentiality Delineation of Groundwater Recharge in Arid Regions Using Multi-Criteria Analysis
by Heba El-Bagoury, Mahmoud H. Darwish, Sedky H. A. Hassan, Sang-Eun Oh, Kotb A. Attia and Hanaa A. Megahed
Water 2025, 17(5), 766; https://doi.org/10.3390/w17050766 - 6 Mar 2025
Abstract
This study integrates morphometric analysis, remote sensing, and GIS with the analytical hierarchical process (AHP) to identify high-potential groundwater recharge areas in Wadi Abadi, Egyptian Eastern Desert, supporting sustainable water resource management. Groundwater recharge primarily comes from rainfall and Nile River water, particularly for Quaternary aquifers. The analysis focused on the Quaternary and Nubian Sandstone aquifers, evaluating 16 influencing parameters, including elevation, slope, rainfall, lithology, soil type, and land use/land cover (LULC). The drainage network was derived from a 30 m-resolution Digital Elevation Model (DEM). ArcGIS 10.8 was used to classify the basin into 13 sub-basins, with layers reclassified and weighted using a raster calculator. The groundwater potential map revealed that 24.95% and 29.87% of the area fall into the very low and moderate potential categories, respectively, while the low, high, and very high potential zones account for 18.62%, 17.65%, and 8.91%. Data from 41 observation wells were used to verify the potential groundwater resources. In this study, the ROC curve was applied to assess the accuracy of the GWPZ models generated through different methods. The validation results indicated that approximately 87% of the wells corresponded accurately with the designated zones on the GWPZ map, confirming its reliability. Over-pumping in the southwest has significantly lowered water levels in the Quaternary aquifer. This study provides a systematic approach for identifying groundwater recharge zones, offering insights that can support resource allocation, well placement, and aquifer sustainability in arid regions. This study also underscores the importance of recharge assessment for shallow aquifers, even in hyper-arid environments.
(This article belongs to the Special Issue Advance in Groundwater in Arid Areas)
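The AHP step at the heart of this workflow reduces to an eigenvector computation on a pairwise comparison matrix plus a consistency check. A minimal numpy sketch follows; the 3-criterion matrix is invented for illustration.

```python
# Sketch of AHP weighting: principal eigenvector of a pairwise comparison matrix,
# normalized to criterion weights, with Saaty's consistency ratio as a sanity check.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],    # slope vs. rainfall vs. drainage density (invented)
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # normalized criterion weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
print("weights:", w.round(3), "CR:", round(ci / ri, 3))  # CR < 0.1 is acceptable
```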
Figures:
Figure 1: (a) Egypt Landsat satellite image; (b) study area by Google Earth image, 2024.
Figure 2: Geological map of the Wadi Abadi Basin (after CONCO, 1987 [63]).
Figure 3: The flowchart of approaches and methodology.
Figure 4: Geographical distribution of the main groundwater aquifers and forty-two drilled wells of the Wadi Abadi basin.
Figure 5: Hydrogeological cross-section (A–A′) of the Nubia Sandstone aquifer at the study area.
Figure 6: (a) Digital elevation model; (b) slope; (c) aspect; (d) rainfall distribution; (e) lithology; (f) soil types; and (g) LULC.
Figure 7: (a) Stream order and number; (b) stream length; (c) bifurcation ratio; (d) drainage density; (e) length of overland flow; (f) stream frequency; (g) drainage texture; (h) elongation ratio; and (i) relief ratio.
Figure 8: A groundwater potential zone (GWPZ) map associated with observation wells, illustrating the classes of potential recharge zoning at Wadi Abadi.
37 pages, 7441 KiB  
Review
Hexahedral Projections: A Comprehensive Review and Ranking
by Aleksandar Dimitrijević and Peter Strobl
ISPRS Int. J. Geo-Inf. 2025, 14(3), 122; https://doi.org/10.3390/ijgi14030122 - 6 Mar 2025
Abstract
Hexahedral projections—mapping the Earth’s surface onto the faces of a circumscribed cube—have drawn scientific interest for over half a century. During this time, numerous projections with diverse characteristics have been developed. This paper provides the most comprehensive review of these projections to date, offering a detailed examination of the processes involved in projecting the Earth onto a cube, with a focus on distortion and accuracy. A numerical and graphical analysis of the characteristics of hexahedral projections is presented, serving as the foundation for a composite hierarchical metric based on ranking. This metric is used to rank hexahedral projections according to individual criteria, groups of criteria, and overall performance.
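Of the families reviewed, the central (gnomonic) mapping is the simplest to state: a point on the sphere is projected along the radius onto a circumscribed cube face. A minimal sketch follows; the face-indexing convention is an arbitrary choice, not taken from the paper.

```python
# Sketch of central (gnomonic) projection of a sphere point onto a cube face.
import numpy as np

def lonlat_to_cube(lon_deg: float, lat_deg: float):
    lon, lat = np.radians(lon_deg), np.radians(lat_deg)
    v = np.array([np.cos(lat) * np.cos(lon),        # unit vector on the sphere
                  np.cos(lat) * np.sin(lon),
                  np.sin(lat)])
    axis = int(np.argmax(np.abs(v)))                # dominant axis picks the cube face
    face = 2 * axis + (0 if v[axis] > 0 else 1)     # 0..5 face index (one convention)
    p = v / np.abs(v[axis])                         # central projection onto the face plane
    u, w = np.delete(p, axis)                       # the two in-face coordinates in [-1, 1]
    return face, u, w

print(lonlat_to_cube(45.0, 30.0))
```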
Figures:
Figure 1: Hexahedral projection: projecting the Earth’s surface onto a cube.
Figure 2: The complete transformation pipeline of geodata from the geodetic ellipsoidal to the hexahedral coordinate system through six stages: (a) ellipsoid, (b) sphere, (c) graticule system rotation, (d) cube side delimitation, (e) mapping to the front side, and (f) projection onto the plane.
Figure 3: Deviation of approximations relative to the exact authalic latitude. Comparisons were made between series approximations with three terms (Adams 3 and Karney 3) and a series with six terms (Karney 6). Deviations are presented in meters for the authalic sphere.
Figure 4: Error in geodetic latitude after conversion to authalic latitude and back to geodetic latitude, i.e., after successive application of the forward and inverse transformations. The error is expressed in meters for the authalic sphere case. Comparisons were made between cases where a closed-form equation (indicated by CF in the name) is used for the forward transformation and cases where an approximation is applied for both forward and inverse transformations. The number in the method name denotes the number of coefficients used in the series expansions.
Figure 5: The edges of the cube projected onto the globe (a) and in the Plate Carrée projection (b).
Figure 6: Comparison of graticule systems in RAN, ADC, MSC, and QSC projections. In the conformal projection (RAN), all lines remain inherently smooth. Relaxed projections (ADC) preserve line smoothness with more uniform spacing along the side edges. Compromise projections (MSC) display sharp line breaks at the cube edges, while in equal-area projections (QSC), line breaks are visible along both the edges and diagonals.
Figure 7: The projection is defined only for one quarter of the page—specifically, the right-angled triangle with its vertex at the center of the page (white triangle on the right). The remaining three quarters (gray triangles indicated by arrows) are obtained through mirroring.
Figure 8: Discontinuities along certain cube edges in the ARV projection. Red indicates edges where discrepancies exist between adjacent sides. The left image shows a magnified view of the discontinuity crossing the North American continent, while the right image displays the locations of the discontinuous edges on the cube.
Figure 9: Appearance of side S0 in the TAN projection for selected values of parameter p: (a) p = 0.1, (b) p = 0.5, (c) p = 1.0, (d) p = 1.5, and (e) p = 2.0.
Figure 10: Layout of the graticule system and continental outlines on the S0 side of selected hexahedral projections: (a) RAN, (b) ADC, (c) TAN (p = 0.5), (d) MSC, and (e) QSC.
Figure 11: Tissot’s indicatrices overlaid on a cube face for selected hexahedral projections: (a) RAN, (b) ADC, (c) TAN (p = 0.5), (d) MSC, and (e) QSC. The shapes along the face diagonals in the QSC projection were obtained using a numerical method, as Tissot’s formulas cannot be applied at discontinuities.
Figure 12: Distortion spatial distribution diagrams for the EVR projection, with variations in color-coding: (a) grayscale, (b) red-to-white-to-blue gradient, (c) black-to-blue-to-white-to-red-to-black gradient, and (d) discretized grayscale with isolines.
Figure 13: Distortion spatial distribution diagrams for the S2Q projection: (a) maximum angular deviation (ω), (b) angular distortion (α), (c) areal scale (σ), and (d) grid oversampling factor (GOF).
Figure 14: Normalized dependence of areal distortion metrics on the projection parameter p for the TAN projection.
Figure 15: Normalized dependence of angular distortion metrics on the projection parameter p for the TAN projection.
Figure 16: Normalized dependence of GOF on the projection parameter p for the TAN projection.
18 pages, 13360 KiB  
Article
The Relationships Between Vegetation Changes and Groundwater Table Depths for Woody Plants in the Sangong River Basin, Northwest China
by Han Wu, Jie Bai, Junli Li, Ran Liu, Jin Zhao and Xuanlong Ma
Remote Sens. 2025, 17(5), 937; https://doi.org/10.3390/rs17050937 - 6 Mar 2025
Abstract
Woody plants serve as crucial ecological barriers surrounding oases in arid and semi-arid regions, playing a vital role in maintaining the stability and supporting sustainable development of oases. However, their sparse distribution poses significant challenges for accurately mapping their spatial extent using medium-resolution remote sensing imagery. In this study, we utilized high-resolution Gaofen (GF-2) and Landsat 5/7/8 satellite images to quantify the relationship between vegetation growth and groundwater table depths (GTD) in a typical inland river basin from 1988 to 2021. Our findings are as follows: (1) Based on the D-LinkNet model, the distribution of woody plants was accurately extracted with an overall accuracy (OA) of 96.06%. (2) Approximately 95.33% of the desert areas had fractional woody plant coverage (FWC) values of less than 10%. (3) The difference between fractional woody plant coverage and fractional vegetation cover proved to be an effective indicator for delineating the range of the desert-oasis ecotone. (4) The optimal GTD for Haloxylon ammodendron and Tamarix ramosissima was determined to be 5.51 m and 3.36 m, respectively. Understanding the relationship between woody plant growth and GTD is essential for effective ecological conservation and water resource management in arid and semi-arid regions.
(This article belongs to the Section Ecological Remote Sensing)
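Finding (4) suggests a peaked, lognormal-shaped response of EVI to groundwater table depth, as Figure 10 below illustrates; here is a hedged curve-fitting sketch with scipy, using an assumed functional form and toy data rather than the authors' samples.

```python
# Sketch: fitting a lognormal-shaped EVI-vs-GTD curve and reading off the peak.
# Functional form and data are assumptions for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def lognormal_curve(d, a, mu, sigma):
    return a * np.exp(-((np.log(d) - mu) ** 2) / (2 * sigma ** 2))

gtd = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0])         # depth (m), toy values
evi = np.array([0.10, 0.16, 0.21, 0.23, 0.24, 0.22, 0.17, 0.12])  # toy EVI samples

(a, mu, sigma), _ = curve_fit(lognormal_curve, gtd, evi, p0=(0.2, np.log(5), 0.5))
print(f"EVI peaks at GTD of about {np.exp(mu):.2f} m")  # the 'optimal' depth under this form
```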
Figures:
Figure 1. (a) Location of the study area; (b) groundwater contour maps.
Figure 2. Technical workflow chart.
Figure 3. Schematic diagram for calculating the time-series enhanced vegetation index (EVI) for woody plants, combining GF-2 and Landsat satellite images.
Figure 4. Detailed comparison of woody plant mapping using three models at four sample sites; panels (a-d) correspond to the four sample sites. Red represents extracted patches of woody plants.
Figure 5. (a) Maps of fractional woody plant cover (FWC) in the middle and lower reaches of the SRB; (b) maps of fractional vegetation cover (FVC) in the same area; (c) statistical distribution of FWC; (d) statistical distribution of FVC.
Figure 6. (a) Change curves of FVC and FWC; (b) differences between FVC and FWC within 15 km of the oasis.
Figure 7. Spatiotemporal variations (a), statistical distribution (b), and annual time series (c) of the EVI from 1988 to 2021 in the middle and lower reaches of the SRB.
Figure 8. Impact of GTD on EVI for (a,b) APOL, (c,d) APOU, and (e,f) ADFO in the middle and lower reaches of the SRB. The pink-shaded region shows the 95% confidence interval of the regression.
Figure 9. Impact of GTD and precipitation (PRE) on EVI for (a,b) H. ammodendron and (c,d) T. ramosissima in the lower reaches of the SRB. The pink-shaded region shows the 95% confidence interval of the regression.
Figure 10. Lognormal distribution fit between EVI and GTD for H. ammodendron (red) and T. ramosissima (green) in the lower reaches of the SRB.
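Finding the optimal GTD in point (4) amounts to fitting a lognormal curve to the EVI-GTD relationship (Figure 10) and reading off the depth where the curve peaks. A minimal sketch of such a fit in Python, assuming hypothetical binned data; the parameterization, sample values, and starting guesses are illustrative, not the authors' exact procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

# Lognormal-shaped response of vegetation vigor (EVI) to groundwater
# table depth (GTD). Parameter names (a, mu, sigma) are illustrative.
def lognormal_response(gtd, a, mu, sigma):
    return a / (gtd * sigma * np.sqrt(2 * np.pi)) * np.exp(
        -(np.log(gtd) - mu) ** 2 / (2 * sigma ** 2))

# Hypothetical sample data: mean EVI of woody-plant pixels binned by GTD (m).
gtd = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 10.0])
evi = np.array([0.08, 0.12, 0.16, 0.19, 0.21, 0.20, 0.17, 0.14, 0.10])

params, _ = curve_fit(lognormal_response, gtd, evi, p0=(1.0, 1.5, 0.5))
a, mu, sigma = params
# For this parameterization the curve peaks at exp(mu - sigma^2),
# i.e., the "optimal" GTD at which EVI is maximized.
optimal_gtd = np.exp(mu - sigma ** 2)
print(f"Fitted optimal groundwater table depth: {optimal_gtd:.2f} m")
```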
11 pages, 4983 KiB  
Article
High-Sensitivity Magnetic Field Sensor Based on an Optoelectronic Oscillator with a Mach–Zehnder Interferometer
by Mingjian Zhu, Pufeng Gao, Shiyi Cai, Naihan Zhang, Beilei Wu, Yan Liu, Bin Yin and Muguang Wang
Sensors 2025, 25(5), 1621; https://doi.org/10.3390/s25051621 - 6 Mar 2025
Viewed by 116
Abstract
A high-sensitivity magnetic field sensor based on an optoelectronic oscillator (OEO) with a Mach–Zehnder interferometer (MZI) is proposed and experimentally demonstrated. The sensor is built around a fiber MZI whose lower arm is wound around a magnetostrictive transducer. Due to the magnetostrictive effect, a magnetic field variation induces an optical phase shift between the two orthogonal light waves transmitted in the upper and lower arms of the MZI. The polarization-dependent property of a Mach–Zehnder modulator (MZM) transforms this magnetostrictive phase shift into a phase difference between the sidebands and the optical carrier, which is mapped to the oscillating frequency once the OEO loop is closed. High-sensitivity magnetic field sensing is thus achieved by observing the frequency shift of the radio frequency (RF) signal. Temperature-induced cross-sensitivity is mitigated through precise length matching of the MZI arms. In the experiment, a magnetic field sensitivity of 6.824 MHz/mT is achieved over the range of 25 mT to 25.3 mT, and the sensing accuracy measured by an electrical spectrum analyzer (ESA) in "maxhold" mode is 0.002 mT. The proposed structure offers excellent magnetic field detection performance and a temperature-insensitive solution with broad application prospects. Full article
(This article belongs to the Special Issue Advances in Microwave Photonics)
Figures:
Figure 1. Schematic layout of the OEO-based magnetic field sensing system with enhanced sensitivity. Points a–e: the optical signals output from the PBS, PBC, MZM, PS-FBG, and Pol, respectively.
Figure 2. Optical spectral transformation of the OEO. (a–e) Optical spectra of the signals emitted from the PBS, PBC, MZM, PS-FBG, and Pol, respectively; red and blue arrows: two optical signals with specific amplitudes and polarization states transmitted along the fiber.
Figure 3. Schematic of the fiber's various regions within the solenoid.
Figure 4. (a) Optical spectrum at the Pol output; (b) electrical spectrum of the OEO's 1.07536 GHz oscillation signal.
Figure 5. (a) Spectra of temperature stability testing for the sensing system; (b) variation of oscillation frequency with temperature.
Figure 6. (a) Superposition spectrum of the oscillating signals as the magnetic field incrementally rises; (b) variation of oscillation frequency with magnetic field.
Figure 7. Frequency stability measurement for 5 min in "maxhold" mode.
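With the reported sensitivity, converting an observed RF frequency shift back into a magnetic field change is a linear map. A short sketch using the abstract's numbers (6.824 MHz/mT sensitivity, 1.07536 GHz oscillation frequency from Figure 4b); the measured frequency below is a hypothetical value:

```python
# Converting an OEO frequency shift into a magnetic field reading,
# using the sensitivity reported in the abstract (6.824 MHz/mT).
SENSITIVITY_HZ_PER_MT = 6.824e6  # Hz per mT

def field_change_from_shift(f_measured_hz, f_reference_hz):
    """Magnetic field change (mT) implied by an RF frequency shift."""
    return (f_measured_hz - f_reference_hz) / SENSITIVITY_HZ_PER_MT

f0 = 1.07536e9       # reference oscillation frequency (Hz), from Figure 4b
f = f0 + 2.047e6     # hypothetical measured frequency (Hz)
print(f"Delta B = {field_change_from_shift(f, f0):.3f} mT")  # ~0.300 mT

# The quoted 0.002 mT accuracy corresponds to resolving a shift of
# 0.002 mT * 6.824 MHz/mT ~= 13.6 kHz on the spectrum analyzer.
```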
18 pages, 4613 KiB  
Article
Virtual and Real Occlusion Processing Method of Monocular Visual Assembly Scene Based on ORB-SLAM3
by Hanzhong Xu, Chunping Chen, Qingqing Yin, Chao Ma and Feiyan Guo
Machines 2025, 13(3), 212; https://doi.org/10.3390/machines13030212 - 6 Mar 2025
Viewed by 85
Abstract
Acquiring depth information in aero-engine assembly scenes with monocular vision is difficult, which complicates mixed reality (MR) virtual-real occlusion processing. To address this, we propose an ORB-SLAM3-based virtual-real occlusion processing method for monocular assembly scenes. The method optimizes ORB-SLAM3 feature matching and depth point reconstruction with the MNSTF algorithm, which expresses the structure and texture information of local images and thereby handles feature point extraction and matching in weakly textured and textureless scenes. The sparse depth map is then densified using bicubic interpolation, and a complete depth map of the real scene is built by combining it with the depth information of the 3D model in the process model. Finally, by comparing the depth values of each pixel in the real and virtual scene depth maps, the occlusion relationship of the assembly scene is displayed correctly. The method was validated on an aero-engine piping connector assembly scenario against Holynski's method and a Kinect-based method. Virtual-real occlusion accuracy improved by an average of 2.2 and 3.4 pixels over the two baselines, respectively, and the proposed method runs at 42.4 FPS, real-time improvements of 77.4% and 87.6%, respectively. These results show that the method performs well in both the accuracy and the timeliness of virtual-real occlusion, and that it can effectively address occlusion processing for monocular vision in mixed reality-assisted assembly. Full article
Figures:
Figure 1. Monocular vision-based occlusion processing flow framework.
Figure 2. Schematic diagram of occlusion relationships.
Figure 3. Epipolar geometry constraints on feature point pairs.
Figure 4. Key points and camera positions.
Figure 5. Comparison of tracking and positioning performance of different algorithms. (a) Comparison of trajectory translations; (b) comparison of trajectory rotations.
Figure 6. RGB image and depth map of the assembly scene. (a) Original RGB images; (b) densified depth image; (c–f) densified depth maps for different assembly scenes.
Figure 7. Rendering process for virtual-real occlusion of assembly objects.
Figure 8. Fusion of the virtual model of an aero-engine external accessory exhaust pipe bolt with the real scene.
Figure 9. Comparison of edge errors in depth images. (a) Schematic diagram of the accessory connecting tube and its contour extraction; (b) depth map sampling point error.
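The final step the abstract describes is a per-pixel comparison of the real and virtual depth maps, drawing a virtual pixel only where the virtual object is nearer to the camera. A minimal sketch of that comparison step, assuming float depth maps and an inf-for-empty convention; the array names are illustrative, not the paper's implementation:

```python
import numpy as np

# Per-pixel depth test for virtual-real occlusion: a virtual pixel is
# drawn only where the virtual object is closer than the real scene.
def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Blend a rendered virtual layer into the real frame.

    real_depth / virt_depth: float32 depth maps in meters, same shape;
    pixels with no virtual content carry depth np.inf.
    """
    virtual_in_front = virt_depth < real_depth          # per-pixel test
    out = real_rgb.copy()
    out[virtual_in_front] = virt_rgb[virtual_in_front]  # occlude real pixels
    return out

# Hypothetical 2x2 example: the virtual object occludes only pixel (0, 0).
real_d = np.array([[2.0, 1.0], [1.5, 0.8]], dtype=np.float32)
virt_d = np.array([[1.2, np.inf], [np.inf, np.inf]], dtype=np.float32)
real_c = np.zeros((2, 2, 3), dtype=np.uint8)
virt_c = np.full((2, 2, 3), 255, dtype=np.uint8)
print(composite(real_c, real_d, virt_c, virt_d)[0, 0])  # [255 255 255]
```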