Search Results (723)

Search Parameters:
Keywords = terrestrial images

25 pages, 25542 KiB  
Article
Automatic Mapping of 10 m Tropical Evergreen Forest Cover in Central African Republic with Sentinel-2 Dynamic World Dataset
by Wenqiong Zhao, Xinyan Zhong, Xiaodong Li, Xia Wang, Yun Du and Yihang Zhang
Remote Sens. 2025, 17(4), 722; https://doi.org/10.3390/rs17040722 - 19 Feb 2025
Abstract
Tropical evergreen forests represent the richest biodiversity in terrestrial ecosystems, and the fine spatial-temporal resolution mapping of these forests is essential for the study and conservation of this vital natural resource. The current methods for mapping tropical evergreen forests frequently exhibit coarse spatial resolution and lengthy production cycles. This can be attributed to the inherent challenges associated with monitoring diverse surface changes and the persistence of cloudy, rainy conditions in the tropics. We propose a novel approach to automatically map annual 10 m tropical evergreen forest covers from 2017 to 2023 with the Sentinel-2 Dynamic World dataset in the biodiversity-rich and conservation-sensitive Central African Republic (CAR). The Copernicus Global Land Cover Layers (CGLC) and Global Forest Change (GFC) products were used first to track stable evergreen forest samples. Then, initial evergreen forest cover maps were generated by determining the threshold of evergreen forest cover for each of the yearly median forest cover probability maps. From 2017 to 2023, the annual modified 10 m tropical evergreen forest cover maps were finally produced from the initial evergreen forest cover maps and NEFI (Non-Evergreen Forest Index) images with the estimated thresholds. The results produced by the proposed method achieved an overall accuracy of >94.10% and a Cohen’s Kappa of >87.63% across all years (F1-Score > 94.05%), which represents a significant improvement over the performance of previous methods, including the CGLC evergreen forest cover maps and yearly median forest cover probability maps based on Sentinel-2 Dynamic World. Our findings demonstrate that the proposed method provides detailed spatial characteristics of evergreen forests and time-series change in the Central African Republic, with substantial consistency across all years. Full article
(This article belongs to the Section Forest Remote Sensing)
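The core classification step in this abstract is a per-pixel thresholding of the yearly median forest cover probability map, refined with a second threshold on the NEFI image. A minimal numpy sketch of that two-threshold masking is shown below; the array values, threshold numbers, and the stand-in index refinement are hypothetical, since the paper derives its thresholds from stable forest samples and its NEFI formulation is not given here.

```python
import numpy as np

def initial_evergreen_mask(prob_map: np.ndarray, prob_threshold: float) -> np.ndarray:
    """Binary evergreen-forest mask from a yearly median forest cover
    probability map (values in [0, 1])."""
    return prob_map >= prob_threshold

def refine_with_index(mask: np.ndarray, index_img: np.ndarray, index_threshold: float) -> np.ndarray:
    """Keep only pixels whose auxiliary index stays below a second threshold --
    a stand-in for the paper's NEFI-based refinement, whose exact
    formulation is not reproduced here."""
    return mask & (index_img < index_threshold)

# Hypothetical example: a 3x3 probability map and index image.
prob = np.array([[0.97, 0.40, 0.88],
                 [0.92, 0.10, 0.95],
                 [0.55, 0.93, 0.99]])
nefi = np.array([[0.05, 0.60, 0.20],
                 [0.10, 0.80, 0.04],
                 [0.50, 0.07, 0.02]])

mask = refine_with_index(initial_evergreen_mask(prob, 0.90), nefi, 0.30)
print(mask.astype(int))
```
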
Figure 1. Study site and dataset. (a) Geolocation of the Central African Republic in Africa, with evergreen forest sourced from the CGLS-LC100 land cover map in 2019; (b) zoomed CGLS-LC100 land cover map in 2019, highlighting the evergreen forest class only; (c) monthly dynamics of the forest cover probabilities in the Sentinel-2 Dynamic World dataset, in which A, B and C refer to typical evergreen forest samples, and D, E and F refer to non-forest or non-evergreen forest samples. These six samples were selected in cloud-free areas of the monthly Sentinel-2 Dynamic World images in 2020.
Figure 2. Flowchart of the proposed method.
Figure 3. Monthly and yearly median forest cover probability maps in the Central African Republic based on Sentinel-2 near real-time Dynamic World data in 2020. The red box in January denotes the area zoomed in Figure 4.
Figure 4. Subarea of the monthly and yearly median forest cover probability maps in Figure 3, based on Sentinel-2 near real-time Dynamic World data in 2020.
Figure 5. Evergreen forest cover sample points in the Dynamic World forest cover probability map and the NEFI image. (a) Sample points in the Dynamic World forest cover probability map of 2020; (b) sample points in the Non-Evergreen Forest Index (NEFI) image of 2020; (c) statistical histogram of the sample points in the forest cover probability map; (d) statistical histogram of the sample points in the NEFI image.
Figure 6. Evergreen forest cover maps for different products and methods. (a) CGLS-LC100 evergreen forest cover map for 2020; (b) evergreen forest cover map generated from the yearly median Dynamic World forest cover probability in 2020 using only threshold T1, filtered by GFC; (c) modified evergreen forest cover map for 2020. Subarea 1 and Subarea 2 show two zoomed areas in (a-c) and the corresponding Google Earth RGB images, acquired on 23 January 2014 for Subarea 1 and 29 July 2012 for Subarea 2.
Figure 7. Annual evergreen forest cover maps from 2017 to 2023 produced by the proposed method. Subarea 1 and Subarea 2 show zoomed maps of localized evergreen forest cover by year, with the Google Earth image for reference.
Figure 8. Evergreen forest cover change maps from 2017 to 2023. (a) Year of the first annual evergreen forest cover decrease from 2018 to 2023 (red labeling); (b) year of the first annual evergreen forest cover increase from 2018 to 2023 (green labeling). The different shaded colors indicate the year in which the first increase or decrease occurred relative to the 2017 baseline evergreen forest. Four columns (southeastern, central-south, central, and southwestern) show annual evergreen forest cover decreases and increases, aligning with the red boxes in (a,b).
Figure 9. Frequency map of evergreen forest cover for 2017 to 2023. Label numbers represent the frequency of occurrence of evergreen forest cover at the pixel scale.
Figure 10. Comparison of evergreen forest cover mapping with different composites of yearly median and mean Dynamic World Sentinel-2 forest cover probability images. (a) Evergreen forest cover map using the mean composite; (b) evergreen forest cover map using the median composite; (c) Dynamic World forest cover probability median map; (d) RGB Google Earth imagery; (e) overall accuracy assessment of evergreen forest cover maps produced from the yearly mean, median, and the integration of the yearly mean and NEFI image composites in 2020.

24 pages, 8896 KiB  
Article
A Prediction of Estuary Wetland Vegetation with Satellite Images
by Min Yang, Bin Guo, Ning Gao, Yang Yu, Xiaoli Song and Yanfeng Gu
J. Mar. Sci. Eng. 2025, 13(2), 287; https://doi.org/10.3390/jmse13020287 - 4 Feb 2025
Abstract
Estuarine wetlands are the transition zone between marine, freshwater, and terrestrial ecosystems and are more ecologically fragile. In recent years, the spread of exotic vegetation, specifically Spartina alterniflora, in the Yellow River estuary wetlands has significantly encroached upon the habitats of native species such as Phragmites australis, Suaeda glauca Bunge, and Tamarix chinensis Lour. With advances in land prediction modeling, predicting wetland vegetation distribution can aid management and decision-making for ecological restoration. We selected the core area as the study object and coupled the hydrological model MIKE 21 with the PLUS model to predict the potential future distribution of invasive and dominant species in the region. (1) Based on the fine classification results from satellite images of GF1/G2/G5, we gained an understanding of the changes in wetland vegetation types in the core area of the reserve in 2018 and 2020. (2) Using public data such as ERA5 and GEO as input for basic environmental data, using MIKE 21 to provide high-spatial-resolution hydrodynamic parameters for the PLUS model as an environmental driver, we modeled the spatial distribution of various wetland vegetation in the Yellow River estuary wetland in Dongying under different artificial restoration measures. (3) We predicted the 2022 distribution of typical vegetation in the region, used the classification results of GF6 as the actual distribution, compared the spatial distribution with the actual distribution, and obtained a kappa coefficient of 0.78; the predicted values of the model are highly consistent with the true values. This study combines the fine classification results of vegetation based on hyperspectral remote sensing, the construction of a coupled model, and the prediction effect of typical species, providing a reference for constructing and optimizing the vegetation prediction model of estuarine wetlands. It also allows scientific and effective decision-making for the management of ecological restoration of delta wetlands. Full article
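The validation reported above (a kappa coefficient of 0.78 against the GF6-derived distribution) uses Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch of computing such a score from two flattened class maps follows; the class labels and values are made up, not the paper's data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical flattened class maps (0 = water, 1 = Spartina, 2 = Phragmites, 3 = Suaeda).
predicted = np.array([0, 1, 1, 2, 2, 2, 1, 0, 3, 3])
observed  = np.array([0, 1, 2, 2, 2, 2, 1, 0, 3, 1])

# Cohen's kappa = (p_o - p_e) / (1 - p_e), i.e. observed agreement corrected
# for chance agreement.
kappa = cohen_kappa_score(observed, predicted)
print(f"kappa = {kappa:.2f}")
```
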
Figure 1. Research area: the Yellow River estuary wetlands.
Figure 2. Schematic diagram of the coupled models.
Figure 3. Grid range of MIKE 21.
Figure 4. Schematic diagram of the ecological restoration area of the Yellow River estuary delta.
Figure 5. Processing flow of the MIKE 21-PLUS coupled model under artificial ecological restoration measures.
Figure 6. Hydrodynamic simulation of the MIKE 21 model. (a) Hydrodynamic simulation within the restoration area; (b) current velocity without a tidal creek; (c) flow velocity under tidal creek conditions.
Figure 7. Changes in the distribution of features in the Yellow River estuary, 2018-2022.
Figure 8. Model realization process for mowing and replanting.
Figure 9. Comparison of observed and modeled salinity in Laizhou Bay (salinity data from the August 2020 environmental survey of Laizhou Bay by the Beihai Bureau of the Ministry of Natural Resources).
Figure 10. Comparison of simulation results based on MIKE 21-PLUS with actual results.
Figure 11. Comparison of simulation results between natural and artificial restoration scenarios in the restoration area.
Figure 12. Environmental drivers of the evolution of the distribution of Spartina alterniflora, Suaeda glauca Bunge, and reed (Phragmites australis).
Figure 13. Comparison of F1-scores for simulating the 2022 vegetation distribution in the Yellow River estuary wetland using MIKE 21-PLUS and PLUS.

22 pages, 29748 KiB  
Article
An Integrated Method for Inverting Beach Surface Moisture by Fusing Unmanned Aerial Vehicle Orthophoto Brightness with Terrestrial Laser Scanner Intensity
by Jun Zhu, Kai Tan, Feijian Yin, Peng Song and Faming Huang
Remote Sens. 2025, 17(3), 522; https://doi.org/10.3390/rs17030522 - 3 Feb 2025
Abstract
Beach surface moisture (BSM) is crucial to studying coastal aeolian sand transport processes. However, traditional measurement techniques fail to accurately monitor moisture distribution with high spatiotemporal resolution. Remote sensing technologies have garnered widespread attention for providing rapid and non-contact moisture measurements, but a single method has inherent limitations. Passive remote sensing is challenged by complex beach illumination and sediment grain size variability. Active remote sensing represented by LiDAR (light detection and ranging) exhibits high sensitivity to moisture, but requires cumbersome intensity correction and may leave data holes in high-moisture areas. Using machine learning, this research proposes a BSM inversion method that fuses UAV (unmanned aerial vehicle) orthophoto brightness with intensity recorded by TLSs (terrestrial laser scanners). First, a back propagation (BP) network rapidly corrects original intensity with in situ scanning data. Second, beach sand grain size is estimated based on the characteristics of the grain size distribution. Then, by applying nearest point matching, intensity and brightness data are fused at the point cloud level. Finally, a new BP network coupled with the fusion data and grain size information enables automatic brightness correction and BSM inversion. A field experiment at Baicheng Beach in Xiamen, China, confirms that this multi-source data fusion strategy effectively integrates key features from diverse sources, enhancing the BP network predictive performance. This method demonstrates robust predictive accuracy in complex beach environments, with an RMSE of 2.63% across 40 samples, efficiently producing high-resolution BSM maps that offer values in studying aeolian sand transport mechanisms. Full article
(This article belongs to the Section Ocean Remote Sensing)
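Both the intensity correction and the final moisture inversion described above rely on back propagation (BP) networks, i.e. small multilayer perceptrons trained on field samples. The sketch below shows that general pattern with scikit-learn's MLPRegressor on synthetic data; the feature set (corrected intensity, brightness, grain size), the network size, and the split are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Hypothetical samples: [corrected TLS intensity, image brightness, grain size] -> moisture (%).
X = rng.uniform(size=(200, 3))
y = 10.0 * X[:, 0] - 4.0 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0, 0.3, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A small back-propagation (MLP) regressor standing in for the paper's BP network.
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"RMSE on held-out samples: {rmse:.2f}")
```
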
Figure 1. Field deployment at Baicheng Beach and the surface moisture sampling points (green). The wind rose was generated based on the average wind frequency for July 2024.
Figure 2. (a) Samples with different moisture levels with a Spyder standard gray card. (b) Samples with different moisture levels with a Spyder 24-color standard color card (from left to right and top to bottom, the sample moistures are 5.87%, 8.27%, 5.39%, 5.28%, 4.36%, 4.35%, 5.94%, 4.21%, 3.09%, 7.17%, 4.61%, and 5.31%).
Figure 3. Workflow of the proposed method.
Figure 4. Process of feature parameter extraction: (a) extraction of sample information; (b) acquisition of feature parameters; (c) Gaussian fitting of the histograms of the color and intensity parameters.
Figure 5. (a) Original intensity distribution. (b) Corrected intensity distribution.
Figure 6. (a) Characteristics of the sediment grain size distribution. (b) Sediment average grain size vs. distance from the sampling point to the beach berm.
Figure 7. Correlation coefficient matrix between the feature parameters and moisture content.
Figure 8. (a) Distribution of beach surface moisture and estimation errors. (b) Measured vs. estimated moisture of the samples.
Figure 9. (a) Relationship between original intensity and distance. (b) Relationship between corrected intensity and distance. (c) Relationship between original intensity and incidence angle. (d) Relationship between corrected intensity and incidence angle.
Figure 10. Relationship between feature parameters and moisture content under different grain size conditions. (a) V (from the HSV color space) vs. moisture. (b) Intensity vs. moisture.
Figure 11. (a) Distribution of beach surface moisture based on intensity. (b) Distribution of beach surface moisture based on brightness.

14 pages, 4345 KiB  
Article
Morphological and Transcriptome Analysis of the Near-Threatened Orchid Habenaria radiata with Petals Shaped Like a Flying White Bird
by Seiji Takeda, Yuki Nishikawa, Tsutomu Tachibana, Takumi Higaki, Tomoaki Sakamoto and Seisuke Kimura
Plants 2025, 14(3), 393; https://doi.org/10.3390/plants14030393 - 28 Jan 2025
Abstract
Orchids have evolved flowers with unique morphologies through coevolution with pollinators, such as insects. Among the floral organs, the lip (labellum), one of the three petals, exhibits a distinctive shape and plays a crucial role in attracting pollinators and facilitating pollination in many orchids. The lip of the terrestrial orchid Habenaria radiata is shaped like a flying white bird and is believed to attract and provide a platform for nectar-feeding pollinators, such as hawk moths. To elucidate the mechanism of lip morphogenesis, we conducted time-lapse imaging of blooming flowers to observe the extension process of the lip and analyzed the cellular morphology during the generation of serrations. We found that the wing part of the lip folds inward in the bud and fully expands in two hours after blooming. The serrations of the lip were initially formed through cell division and later deepened through polar cell elongation. Transcriptome analysis of floral buds revealed the expression of genes involved in floral organ development, cell division, and meiosis. Additionally, genes involved in serration formation are also expressed in floral buds. This study provides insights into the mechanism underlying the formation of the unique lip morphology in Habenaria radiata. Full article
(This article belongs to the Section Plant Development and Morphogenesis)
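One concrete analysis step behind this abstract (and Figure 4A below) is selecting genes expressed more than twice as much in floral buds as in leaves. A hedged pandas sketch of that fold-change filter is shown below; the gene names and expression values are invented, and the paper's normalization and differential-expression pipeline are not reproduced.

```python
import pandas as pd

# Hypothetical normalized expression values per gene and sample.
expr = pd.DataFrame(
    {"leaf": [10.0, 3.0, 50.0, 0.5],
     "bud_3mm": [25.0, 2.0, 40.0, 4.0],
     "bud_4mm": [30.0, 7.0, 45.0, 3.5],
     "bud_5mm": [22.0, 8.0, 60.0, 5.0]},
    index=["geneA", "geneB", "geneC", "geneD"],
)

# Genes expressed more than twice as much in every bud stage as in leaves,
# mirroring the >2-fold criterion used for the Venn diagram.
pseudo = 1e-6  # avoid division by zero for unexpressed genes
fold = expr[["bud_3mm", "bud_4mm", "bud_5mm"]].div(expr["leaf"] + pseudo, axis=0)
upregulated = fold[(fold > 2.0).all(axis=1)]
print(upregulated.index.tolist())
```
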
Figure 1. Flowering and withering of Habenaria radiata flowers captured by interval shooting (time-lapse). (A-G) Flowering and withering of flowers; the date (dd/mm) and time of capture are shown. Note that the upper flower, which bloomed later, withered earlier than the lower flower (G), probably due to the loss of pollinium (arrowheads in (D,E)) and subsequent pollination. (H1-H12, I1-I6) Side (H1-H12) and front (I1-I6) views of lip unfolding; the capture time is shown in each panel.
Figure 2. Lip development. (A-E) Floral buds at different stages; arrowheads in (B-D) indicate the growing spur. (F-J) Lip inside the floral buds shown in (A-E). The lengths of the floral buds are 1 mm (A,F), 2 mm (B,G), 4 mm (C,H), 6 mm (D,I), and 7 mm (E,J). Scale bars: A, F = 0.5 mm; B, G = 1 mm; C, D, E, H, I, J = 2 mm.
Figure 3. Cell shape changes during the development of lip serration. (A) Confocal laser microscopy images of petal margin cells; petals from early to late stages (A1-A5) were excised, stained, and observed. (B1-B5) Distribution of cell area. (C1-C5) Elongation direction of each cell; the direction of serration elongation was set as 90 degrees, with angles relative to this direction shown in different colors. (D1-D5) Scatter plots of cell area and elongation direction for each stage. Up to time point 3, cell proliferation occurs; from time point 4 onward, the serrations deepen due to polarized cell elongation. Scale bars: 50 µm.
Figure 4. Transcriptome analysis of floral buds. (A) Venn diagram showing genes expressed more than twice as much in floral buds of 3 mm, 4 mm, and 5 mm sizes as in leaves. (B) Self-organizing map (SOM) analysis of the genes expressed in floral buds. The letters represent clusters with similar expression patterns, and the numbers indicate the number of genes in each cluster. Clusters G, H, and I show an increase in expression during bud development, while clusters A, B, and C show a decrease.
Figure 5. RT-PCR of floral homeotic and MADS genes in Habenaria radiata. Underlined genes are reported for the first time in this work. HrACTIN was used as the control.
Figure 6. Overexpression of HrAP2 and HrAG2 in Arabidopsis thaliana. (A) ap2-3 mutant. (B) 35S:HrAP2 plant in the ap2-3 background; the arrowhead indicates the petaloid stamen. Note that sepal-like organs are generated in the first whorl, and more stamens are produced than in the ap2-3 mutant. (C) ag-1 flower. (D) 35S:HrAG2 plant in the ag-1 background; note that stamen-like organs are generated. (E,F) 35S:HrAG2 plants in the AG-1 (wild-type sibling) background. (E) Stamens are generated instead of petals in the second whorl. (F) Vegetative phenotype showing hyponastic growth in leaves, resulting in curled leaves. Scale bars: A, B, C, D, E = 1 mm; F = 5 mm.
Figure 7. Digital gene expression (DGE) of genes involved in serration formation in floral buds.

18 pages, 6072 KiB  
Article
Application of UAV Photogrammetry and Multispectral Image Analysis for Identifying Land Use and Vegetation Cover Succession in Former Mining Areas
by Volker Reinprecht and Daniel Scott Kieffer
Remote Sens. 2025, 17(3), 405; https://doi.org/10.3390/rs17030405 - 24 Jan 2025
Viewed by 314
Abstract
Variations in vegetation indices derived from multispectral images and digital terrain models from satellite imagery have been successfully used for reclamation and hazard management in former mining areas. However, low spatial resolution and the lack of sufficiently detailed information on surface morphology have restricted such studies to large sites. This study investigates the application of small, unmanned aerial vehicles (UAVs) equipped with multispectral sensors for land cover classification and vegetation monitoring. The application of UAVs bridges the gap between large-scale satellite remote sensing techniques and terrestrial surveys. Photogrammetric terrain models and orthoimages (RGB and multispectral) obtained from repeated mapping flights between November 2023 and May 2024 were combined with an ALS-based reference terrain model for object-based image classification. The collected data enabled differentiation between natural forests and areas affected by former mining activities, as well as the identification of variations in vegetation density and growth rates on former mining areas. The results confirm that small UAVs provide a versatile and efficient platform for classifying and monitoring mining areas and forested landslides. Full article
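The vegetation monitoring described above tracks standard band-ratio indices (NDVI and NDRE) over the mining, landslide, and forest zones. A minimal numpy sketch of computing those indices from multispectral bands is given below; the band arrays are hypothetical, and the paper's sensor bands and preprocessing are not reproduced.

```python
import numpy as np

def normalized_difference(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    """Generic normalized difference (a - b) / (a + b) with a zero-division guard."""
    denom = band_a + band_b
    out = np.zeros_like(denom, dtype=float)
    np.divide(band_a - band_b, denom, out=out, where=denom != 0)
    return out

# Hypothetical reflectance bands cut from a multispectral UAV orthoimage.
rng = np.random.default_rng(1)
nir = rng.uniform(0.2, 0.6, (4, 4))
red = rng.uniform(0.0, 0.2, (4, 4))
red_edge = rng.uniform(0.1, 0.4, (4, 4))

ndvi = normalized_difference(nir, red)        # NDVI = (NIR - Red) / (NIR + Red)
ndre = normalized_difference(nir, red_edge)   # NDRE = (NIR - RedEdge) / (NIR + RedEdge)
print(round(float(ndvi.mean()), 3), round(float(ndre.mean()), 3))
```
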
Figure 1. (A) Overview of the study site ("Trassbruch Gossendorf") based on the digital elevation model; (B) oblique photograph. The former mining and mine dump areas, access roads and the landslide area are highlighted in (A).
Figure 2. (A) Study site with the boundaries of the former mining, mine dump and landslide-affected areas. (B) Subset on the southern slope, visualizing the segmentation and the effect of the 0.5 m buffer around the sampling points and the typical tree crown dimension (diameter ~2-3 m).
Figure 3. Python-based OBIA workflow, including a summary of each processing step.
Figure 4. Classified map datasets for all four classification periods. (A) November 2023 (sunny, oblique flight); (B) December 2023 (overcast, nadir flight); (C) April 2024 (overcast, nadir flight); (D) May 2024 (sunny, nadir flight). [X] = area prone to misclassification (Zone A2); [Y] = old mine dump (Zone B1) that was only partially cleared for operation.
Figure 5. (A) Parameter variation during the cross-validation process (global and class performance metrics). (B) Classification metrics for all flight epochs, including combined confusion matrices. (C) Confusion matrices derived from the holdout dataset. The confusion matrices were standardized in the horizontal direction, and the corresponding sample numbers are given in square brackets.
Figure 6. Time series of the mean NDVI, NDRE, height above the rDTM (dDTM) and height above the rDSM (dDSM) extracted from the former mining zones (mine dump, mine), the landslide area and the natural forest.

21 pages, 2750 KiB  
Article
Comparison of Tree Typologies Mapping Using Random Forest Classifier Algorithm of PRISMA and Sentinel-2 Products in Different Areas of Central Italy
by Eros Caputi, Gabriele Delogu, Alessio Patriarca, Miriam Perretta, Giulia Mancini, Lorenzo Boccia, Fabio Recanatesi and Maria Nicolina Ripa
Remote Sens. 2025, 17(3), 356; https://doi.org/10.3390/rs17030356 - 22 Jan 2025
Viewed by 353
Abstract
The continuous development of satellite imagery, coupled with advancements in machine learning technologies, allows detailed mapping of terrestrial landscapes. This study evaluates the classification performance of tree typologies using Sentinel-2 and PRISMA data, focusing on central Italy’s different areas. The purpose is to assess the role of spectral and spatial resolution in land cover classification, contributing to forest management and conservation efforts. Random Forest Classifier was applied to classify tree typologies across two study areas: the Roman Coastal region and the Lake Vico Basin. Ground truth (GT) data, collected from a trial citizen survey campaign, were used for training and validation. PRISMA datasets, particularly when processed with PCA, consistently outperformed Sentinel-2. The PRISMA PCA dataset achieved the highest overall accuracy with 71.09% for the Roman Coastal region and 87.15% for the Lake Vico Basin, emphasizing the value of spectral resolution. However, Sentinel-2 showed comparative strength in spatially heterogeneous areas. Tree typologies with more uniform distribution, such as hazelnut and chestnut, achieved higher classification accuracy compared to mixed-species forests. The study assesses that Sentinel-2 remains a viable alternative where spatial resolution is critical also considering the limited PRISMA images’ availability. Moreover, the work explores the potential of combining satellites and accurate GT for improved land cover mapping. Full article
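The classification pipeline summarized above, PCA compression of the PRISMA spectral bands followed by a Random Forest classifier trained on ground-truth pixels, maps directly onto a scikit-learn pipeline. The sketch below uses synthetic spectra and invented class labels; the number of components, the number of trees, and the resulting accuracy are not the paper's.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical data: 500 ground-truth pixels x 200 spectral bands, 4 tree typologies.
X = rng.normal(size=(500, 200))
y = rng.integers(0, 4, size=500)
X += y[:, None] * 0.3  # give each class a slight spectral offset so it is learnable

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# PCA to a handful of components, then a Random Forest classifier.
clf = make_pipeline(PCA(n_components=10),
                    RandomForestClassifier(n_estimators=200, random_state=0))
clf.fit(X_train, y_train)
print(f"overall accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2%}")
```
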
Figure 1. On the left, the location of the study areas within national and regional borders; on the right, views of the areas from aerial photos with the study area perimeters (red lines) in (a) the Lake Vico Basin and (b) the Roman Coastal area.
Figure 2. Flow chart with the main operations of the work.
Figure 3. Spectral signatures (normalized using the min/max method) extracted and graphed from the GT in the Roman Coastal datasets. In the BSD, the highlighted wavelengths are the intervals removed with the BBD algorithms, corresponding to the water (~1400 nm and ~1900 nm) and CH4 (~2400 nm) absorption regions.
Figure 4. Tree typology distribution maps in the Roman Coastal area obtained from classification with different datasets: (a) Sentinel-2, (b) PRISMA BSD, and (c) PRISMA PCA.
Figure 5. Spectral signatures (normalized using the min/max method) extracted and graphed from the GT in the Lake Vico Basin datasets. In the BSD, the highlighted wavelengths are the intervals removed with the BBD algorithms, corresponding to the water (~1400 nm and ~1900 nm) and CH4 (~2400 nm) absorption regions.
Figure 6. Tree typology distribution maps for the Lake Vico Basin area obtained from classification of the (a) Sentinel-2, (b) PRISMA, and (c) PRISMA PCA datasets.

19 pages, 3375 KiB  
Article
Enhancing Cross-Modal Camera Image and LiDAR Data Registration Using Feature-Based Matching
by Jennifer Leahy, Shabnam Jabari, Derek Lichti and Abbas Salehitangrizi
Remote Sens. 2025, 17(3), 357; https://doi.org/10.3390/rs17030357 - 22 Jan 2025
Viewed by 370
Abstract
Registering light detection and ranging (LiDAR) data with optical camera images enhances spatial awareness in autonomous driving, robotics, and geographic information systems. The current challenges in this field involve aligning 2D-3D data acquired from sources with distinct coordinate systems, orientations, and resolutions. This paper introduces a new pipeline for camera–LiDAR post-registration to produce colorized point clouds. Utilizing deep learning-based matching between 2D spherical projection LiDAR feature layers and camera images, we can map 3D LiDAR coordinates to image grey values. Various LiDAR feature layers, including intensity, bearing angle, depth, and different weighted combinations, are used to find correspondence with camera images utilizing state-of-the-art deep learning matching algorithms, i.e., SuperGlue and LoFTR. Registration is achieved using collinearity equations and RANSAC to remove false matches. The pipeline’s accuracy is tested using survey-grade terrestrial datasets from the TX5 scanner, as well as datasets from a custom-made, low-cost mobile mapping system (MMS) named Simultaneous Localization And Mapping Multi-sensor roBOT (SLAMM-BOT) across diverse scenes, in which both outperformed their baseline solutions. SuperGlue performed best in high-feature scenes, whereas LoFTR performed best in low-feature or sparse data scenes. The LiDAR intensity layer had the strongest matches, but combining feature layers improved matching and reduced errors. Full article
(This article belongs to the Special Issue Remote Sensing Satellites Calibration and Validation)
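The matching described above operates on 2D spherical-projection feature layers (intensity, bearing angle, depth) generated from the LiDAR point cloud. A minimal numpy sketch of building such an intensity layer, mapping each 3D point to an azimuth/elevation pixel, is shown below; the image size, vertical field of view, and point data are illustrative assumptions.

```python
import numpy as np

def spherical_intensity_image(points: np.ndarray, intensity: np.ndarray,
                              height: int = 64, width: int = 1024,
                              v_fov: tuple = (-25.0, 15.0)) -> np.ndarray:
    """Project LiDAR points (N x 3, sensor frame) onto an azimuth/elevation grid,
    keeping the intensity of the last point falling into each cell."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                                  # [-pi, pi]
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-9)))  # degrees

    col = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    row = (v_fov[1] - elevation) / (v_fov[1] - v_fov[0]) * (height - 1)
    row = np.clip(row, 0, height - 1).astype(int)

    img = np.zeros((height, width), dtype=float)
    img[row, col] = intensity                                   # last write wins per cell
    return img

# Hypothetical scan: 10,000 random points with random intensities.
rng = np.random.default_rng(0)
pts = rng.normal(size=(10_000, 3)) * [10.0, 10.0, 2.0]
img = spherical_intensity_image(pts, rng.uniform(0, 1, 10_000))
print(img.shape, float(img.max()))
```
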
Figure 1. General flowchart of the methodology.
Figure 2. Proposed optical and LiDAR data integration method.
Figure 3. Camera-to-ground coordinate system transformations. The rotational extrinsic parameters of the LiDAR sensor are represented by the angles (ω, φ, κ), which describe the orientation of the camera in 3D space. The camera's principal point is denoted by (xp, yp), and f represents the focal length. The ground coordinates are represented by (X, Y, Z), corresponding to the real-world position in the ground reference system.
Figure 4. The experimental scenes employed in this study. The six scenes were acquired in outdoor and indoor environments, representing different object arrangements, lighting conditions, and spatial compositions.
Figure 5. Comparison of a single frame (left) vs. densified aggregated frames (right).
Figure 6. Comparison of different images: (a) optical; (b) bearing angle; (c) intensity; and (d) depth.
Figure 7. Before (left) and after (right) attempts to remedy the range dispersions in the SLAMM-BOT depth image.
Figure 8. Viable matches from the intensity image (top) vs. false matches from the depth image (bottom). The color scheme represents match confidence, with red representing high confidence and blue representing low confidence.

43 pages, 19436 KiB  
Article
Quantification of Forest Regeneration on Forest Inventory Sample Plots Using Point Clouds from Personal Laser Scanning
by Sarah Witzmann, Christoph Gollob, Ralf Kraßnitzer, Tim Ritter, Andreas Tockner, Lukas Moik, Valentin Sarkleti, Tobias Ofner-Graff, Helmut Schume and Arne Nothdurft
Remote Sens. 2025, 17(2), 269; https://doi.org/10.3390/rs17020269 - 14 Jan 2025
Viewed by 470
Abstract
The presence of sufficient natural regeneration in mature forests is regarded as a pivotal criterion for their future stability, ensuring seamless reforestation following final harvesting operations or forest calamities. Consequently, forest regeneration is typically quantified as part of forest inventories to monitor its occurrence and development over time. Light detection and ranging (LiDAR) technology, particularly ground-based LiDAR, has emerged as a powerful tool for assessing typical forest inventory parameters, providing high-resolution, three-dimensional data on the forest structure. Therefore, it is logical to attempt a LiDAR-based quantification of forest regeneration, which could greatly enhance area-wide monitoring, further supporting sustainable forest management through data-driven decision making. However, examples in the literature are relatively sparse, with most relevant studies focusing on an indirect quantification of understory density from airborne LiDAR data (ALS). The objective of this study is to develop an accurate and reliable method for estimating regeneration coverage from data obtained through personal laser scanning (PLS). To this end, 19 forest inventory plots were scanned with both a personal and a high-resolution terrestrial laser scanner (TLS) for reference purposes. The voxelated point clouds obtained from the personal laser scanner were converted into raster images, providing either the canopy height, the total number of filled voxels (containing at least one LiDAR point), or the ratio of filled voxels to the total number of voxels. Local maxima in these raster images, assumed to be likely to contain tree saplings, were then used as seed points for a raster-based tree segmentation, which was employed to derive the final regeneration coverage estimate. The results showed that the estimates differed from the reference in a range of approximately −10 to +10 percentage points, with an average deviation of around 0 percentage points. In contrast, visually estimated regeneration coverages on the same forest plots deviated from the reference by between −20 and +30 percentage points, approximately −2 percentage points on average. These findings highlight the potential of PLS data for automated forest regeneration quantification, which could be further expanded to include a broader range of data collected during LiDAR-based forest inventory campaigns. Full article
(This article belongs to the Section Forest Remote Sensing)
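The rasterization step described above, voxelating the PLS point cloud and deriving per ground cell the number of filled voxels or their ratio to all voxels in the vertical column, can be sketched in a few lines of numpy. The voxel size, height cap, and point cloud below are hypothetical, and the subsequent seed-point detection and crown segmentation are not reproduced.

```python
import numpy as np

def filled_voxel_rasters(points: np.ndarray, voxel: float = 0.25, max_height: float = 5.0):
    """From height-normalized points (x, y, z), build two rasters per (x, y) cell:
    the count of filled voxels and the ratio of filled voxels to all voxels
    in the vertical column up to max_height."""
    keep = (points[:, 2] >= 0) & (points[:, 2] <= max_height)
    p = points[keep]
    ijk = np.floor(p / voxel).astype(int)
    ijk[:, :2] -= ijk[:, :2].min(axis=0)        # shift x/y indices to start at 0

    nx, ny = ijk[:, 0].max() + 1, ijk[:, 1].max() + 1
    nz = int(np.ceil(max_height / voxel))

    filled = np.zeros((nx, ny, nz), dtype=bool)
    filled[ijk[:, 0], ijk[:, 1], np.clip(ijk[:, 2], 0, nz - 1)] = True

    count = filled.sum(axis=2)                  # filled voxels per column
    ratio = count / nz                          # filled / total voxels per column
    return count, ratio

# Hypothetical plot: 50,000 points over a 10 m x 10 m area, heights 0-3 m.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 10, 50_000),
                       rng.uniform(0, 10, 50_000),
                       rng.uniform(0, 3, 50_000)])
count, ratio = filled_voxel_rasters(pts)
print(count.shape, float(ratio.max()))
```
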
Graphical abstract
Figure 1. GeoSLAM Zeb Horizon (left) and RIEGL VZ-600i (right) during fieldwork.
Figure 2. Screenshot from the GeoAce app (ITS Geo Solutions GmbH, Jena, Germany) during fieldwork. The green crosses mark the positions of marked trees, while the red dot marks the plot center. The positions of the surveyed trees were visualized in CloudCompare to avoid clipping trees smaller or larger than the predefined threshold.
Figure 3. Schematic illustration of the general workflow.
Figure 4. Quality measures for regeneration quantification as functions of the voxel resolution, class threshold, and cloth resolution. The different class thresholds and cloth resolutions are represented by different colors and line types, respectively.
Figure 5. Quality measures for regeneration quantification as functions of the thresholds for tree detection and segmentation. The different detection thresholds are represented by different colors, as described in the legend.
Figure 6. Comparison of deviations achieved with M1-M5 across all parameter combinations.
Figure 7. Comparison of deviations achieved with M1-M5 with optimized parameter combinations (gray) and deviations of the visual estimations (green; Op 1 to Op 3 represent the deviations achieved by the three different operators).
Figure 8. Comparison of regeneration coverages. The results from the visual estimates are depicted in grey and those from the best LiDAR-based methods (M2 and M5) in blue and green, respectively.
Figure 9. Plot-wise depiction of estimated and reference regeneration coverages. Since the visual estimates (grey bars) are averaged across the estimates of all three operators, the error bars represent the highest and lowest estimates, respectively. The reference coverages are depicted in black, whereas the coverages derived from M2 and M5 are depicted in blue and green, respectively.
Figure 10. Depiction of Plot 7. (a) Point cloud of this plot. The red lines in (b,c) represent the outlines of the manually cropped reference tree crowns; the blue lines represent the outlines of the tree crowns as segmented with M2 (b) and M5 (c), respectively. The colors of the pixels and color scales in (b,c) represent height (in meters) and voxel density, respectively.
Figures A1-A19. Illustration of the calculated crown areas for each method on Plots 1-4 and 6-20 (one appendix figure per plot). In each figure, the raster image in the background is the one on which the final crown segmentation was performed; the red lines represent the reference crown areas and the blue lines the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A20. Schematic illustration of methods M1, M2, M3, and M5, starting from the step of treetop detection.

20 pages, 9822 KiB  
Article
Bridging Disciplines with Photogrammetry: A Coastal Exploration Approach for 3D Mapping and Underwater Positioning
by Ali Alakbar Karaki, Ilaria Ferrando, Bianca Federici and Domenico Sguerso
Remote Sens. 2025, 17(1), 73; https://doi.org/10.3390/rs17010073 - 28 Dec 2024
Viewed by 560
Abstract
Conventional methodologies often struggle in accurately positioning underwater habitats and elucidating the complex interactions between terrestrial and aquatic environments. This study proposes an innovative methodology to bridge the gap between these domains, enabling integrated 3D mapping and underwater positioning. The method integrates UAV (Uncrewed Aerial Vehicles) photogrammetry for terrestrial areas with underwater photogrammetry performed by a snorkeler. The innovative aspect of the proposed approach relies on detecting the snorkeler positions on orthorectified images as an alternative to the use of GNSS (Global Navigation Satellite System) positioning, thanks to an image processing tool. Underwater camera positions are estimated through precise time synchronization with the UAV frames, producing a georeferenced 3D model that seamlessly joins terrestrial and submerged landscapes. This facilitates the understanding of the spatial context of objects on the seabed and presents a cost-effective and comprehensive tool for 3D coastal mapping, useful for coastal management to support coastal resilience. Full article
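The underwater positioning step hinges on synchronizing the underwater camera clock with the UAV frames so that each underwater frame inherits a snorkeler position extracted from the orthorectified imagery. A minimal sketch of that synchronization and interpolation is given below; the timestamps, coordinates, and clock offset are hypothetical.

```python
import numpy as np

# Hypothetical snorkeler positions extracted from orthorectified UAV frames
# (UTM easting/northing in metres) and their UAV timestamps (seconds).
uav_t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
uav_xy = np.array([[500000.0, 4910000.0],
                   [500001.5, 4910000.8],
                   [500003.0, 4910001.7],
                   [500004.2, 4910003.0],
                   [500005.0, 4910004.6]])

# Hypothetical underwater frame timestamps and an assumed camera clock offset.
uw_t = np.array([1.3, 3.1, 5.7, 7.2])
clock_offset = 0.4                 # underwater clock ahead of the UAV clock by 0.4 s
t_sync = uw_t - clock_offset

# Linear interpolation of easting/northing at each synchronized underwater time.
east = np.interp(t_sync, uav_t, uav_xy[:, 0])
north = np.interp(t_sync, uav_t, uav_xy[:, 1])
for t, e, n in zip(uw_t, east, north):
    print(f"underwater frame at t={t:.1f}s -> E={e:.2f}, N={n:.2f}")
```
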
Figure 1. The UAV (red) and snorkeler (green) paths and their intersections (black circles) over the study area.
Figure 2. Methodology steps: (1) UAV and underwater image acquisition; (2) generation of the orthophoto over the study area; (3) snorkeler position extraction; (4) UAV and underwater frame time synchronization; (5) underwater photogrammetry processing; (6) seamless 3D georeferenced model. The arrows indicate the sequence of steps and the data flow between aerial (green) and underwater (red) processes and time synchronization (blue).
Figure 3. Distribution of GCPs (1 to 18) over the study area (UAV orthophoto as background).
Figure 4. Study area location: the red dot indicates the study area, Sestri Levante, located on the east coast of the Liguria region (Italy).
Figure 5. Original DSM with surface water distortion (a) and modified DSM with a flat water surface set to zero (b).
Figure 6. The 3D model and orthophoto.
Figure 7. UAV image, orthorectified image, and orthorectified image overlaid on the DSM (right). The DSM is color-coded by elevation, from blue at the lowest elevations (zero) to red at the highest (as shown in Figure 5).
Figure 8. The interactive Python application for extracting snorkeler coordinates and frames from the orthophotos.
Figure 9. Manual camera calibration with the estimated parameters.
Figure 10. Histogram of the frequencies of the overall error (in meters) of the x and y coordinates.
Figure 11. Scatter plot of the x and y errors over two categories: "M" (blue) and "B" (green), referring to a snorkeler position in the middle or at the boundary of the orthophoto, respectively.
Figure 12. Merged model (emerged and submerged) with georeferenced underwater features: (a) rocky and sandy patches; (b) seagrass; (c) dead seagrass.
Figure 13. Virtual model of Silence Bay in Sestri Levante (Italy).
Figure 14. Virtual model showing the underwater seagrass.

27 pages, 14735 KiB  
Article
Traditional and New Sensing Techniques Combination for the Identification of the Forgotten “New Flour-Weighing House” in Valencia, Spain
by Antonio Gómez-Gil, Giacomo Patrucco and José Luis Lerma
Appl. Sci. 2024, 14(24), 11962; https://doi.org/10.3390/app142411962 - 20 Dec 2024
Viewed by 576
Abstract
In the city of Valencia (Spain), there existed from the Middle Ages until the mid-nineteenth century a building that fulfilled a municipal strategic function: The “new flour-weighing house”. Its purpose was to distribute food to the population and collect the corresponding indirect municipal taxes. Today, the existence of this building is not remembered, neither by scientists nor by citizens, and its importance, location and appearance are unknown. The building investigated, behind which the medieval façade of the “flour-weighing house” is hidden, is the Colomina Palace. In the investigation, its growth phases have been detected, and an idea of its structural organisation has been obtained. Research and investigation have been carried out by consulting historical, cartographic and archival material, together with advanced geomatics techniques, including close-range photogrammetry, terrestrial laser scanning and thermography. The fuse of colour and thermal imagery, together with point clouds and 3D models, help to visualise and check the different spatial transformations of the current “Colomina Palace”, adapting the sequence from medieval times into present. The methodology proposed in this study avoids the need to carry out destructive tests and the processing of permits, which speeds up decision-making and historical architectural reconstruction. Full article
(This article belongs to the Special Issue Application of Digital Technology in Cultural Heritage)
Show Figures

Figure 1
<p>Corner of <span class="html-italic">Harina Street</span> and <span class="html-italic">Almudín Street</span>. A construction method different from that of a house can be observed, both in the corner and in the wall, which is made up of ashlars: (<b>a</b>) View from <span class="html-italic">Calle de la Harina</span>, with the corner of the old flour-weighing house on the left and the <span class="html-italic">Almudín</span> gate in the background; (<b>b</b>) The still-existing corner of the old flour-weighing house.</p>
Full article ">Figure 2
<p>Location of the different buildings involved in the wheat handling process, based on Fortea’s plan (1738). Adaptation by the authors (the street names are reported in their original form in Spanish).</p>
Full article ">Figure 3
<p><span class="html-italic">Almudín</span> plan. Image CTAV.</p>
Full article ">Figure 4
<p>Medieval shipyards or <span class="html-italic">Atarazanas</span> in the <span class="html-italic">Grao</span> (Valencia).</p>
Full article ">Figure 5
<p><span class="html-italic">Silos</span> or Wheat warehouse of Burjassot (Valencia).</p>
Full article ">Figure 6
<p><span class="html-italic">Almudín</span> (Valencia).</p>
Full article ">Figure 7
<p>Palace of the Marquise of Colomina, Valencia.</p>
Full article ">Figure 8
<p>Plan by A. Mancelli, 1608, the flour-weighing house is numbered 95.</p>
Full article ">Figure 9
<p>Plan by V. Tosca, 1703; the flour-weighing house is numbered 95.</p>
Full article ">Figure 10
<p>Plan by J. Fortea, 1738; the flour-weighing house appears numbered 90.</p>
Full article ">Figure 11
<p>Plan by the Army Corps of Engineers, 1869 (the street names are reported in their original form in Spanish).</p>
Full article ">Figure 12
<p>Plan by the army general staff corps, 1883.</p>
Full article ">Figure 13
<p>Plan of the unbuilt proposal dated June 1863, with two floors. (<b>a</b>) East façade; (<b>b</b>) South façade. Master builder, Manuel Ferrando. A.H.M.V.</p>
Full article ">Figure 14
<p>Plan of built proposal dated June 1863, with three floors. (<b>a</b>) East façade; (<b>b</b>) South façade. Master builder, Manuel Ferrando. A.H.M.V.</p>
Full article ">Figure 15
<p>Plan of the first level of the framework state prior to the intervention, 1995. Architect Francisco Esquembre. A. I. A. V.</p>
Full article ">Figure 16
<p>Flowchart of the proposed workflow.</p>
Full article ">Figure 17
<p>Point cloud after registration of the eight TLS scans.</p>
Full article ">Figure 18
<p>Oriented block of images (relative orientation) and the sparse cloud of tie points.</p>
Full article ">Figure 19
<p>Southern façade: radiometric difference between (<b>a</b>) TLS point cloud textured from the images acquired from the embedded camera and (<b>b</b>) TLS point cloud textured from the oriented photogrammetric block composed of visible images.</p>
Full article ">Figure 20
<p>East façade anomalies. In the graphical scale located on the left side of the image, it is possible to see the correspondence between the false-colour representation and the temperature trend. In this case, the thermal range observed in this TIR image spans from 14.4 °C (dark purple) to 19.2 °C (white).</p>
Full article ">Figure 21
<p>South façade anomalies. In the graphical scale located on the left side of the image, it is possible to see the correspondence between the false-colour representation and the temperature trend. In this case, the thermal range observed in this TIR image spans from 15.6 °C (dark purple) to 18.4 °C (white).</p>
Full article ">Figure 22
<p>(<b>a</b>) 3D mesh (derived from the LiDAR point cloud) with the oriented thermal images; (<b>b</b>) Oriented thermal image projected onto the 3D mesh. The temperature of the thermal image is represented using a false-color visualization (Iron palette).</p>
Full article ">Figure 23
<p>(<b>a</b>) 3D mesh after the first phase of the texturization; (<b>b</b>) Editing of the texture. Each colour is associated with a different TIR image used during the mosaicking process. Therefore, each area (represented by the solid colour) is thermally mapped based on the corresponding TIR image.</p>
Full article ">Figure 24
<p>Thermal orthophoto (eastern façade).</p>
Full article ">Figure 25
<p>Thermal orthophoto (southern façade).</p>
Full article ">Figure 26
<p>Current state of the east façade.</p>
Full article ">Figure 27
<p>Current state of the south façade.</p>
Full article ">Figure 28
<p>Plan of the building’s growth phases from 1517 to 1863 (Esquembre project).</p>
Full article ">Figure 29
<p>Drawn south façade anomalies.</p>
Full article ">Figure 30
<p>East façade anomalies visualized on CAD drawing.</p>
Full article ">Figure 31
<p>East façade in 1517.</p>
Full article ">Figure 32
<p>3D hypothesis of the appearance of the east façade in 1517.</p>
Full article ">Figure 33
<p>East façade before 1704.</p>
Full article ">Figure 34
<p>South façade hypothesis before 1704.</p>
Full article ">Figure 35
<p>3D Hypothesis of the appearance of the building before 1704.</p>
Full article ">
24 pages, 46652 KiB  
Article
Hyperspectral Reconstruction Method Based on Global Gradient Information and Local Low-Rank Priors
by Chipeng Cao, Jie Li, Pan Wang, Weiqiang Jin, Runrun Zou and Chun Qi
Remote Sens. 2024, 16(24), 4759; https://doi.org/10.3390/rs16244759 - 20 Dec 2024
Viewed by 473
Abstract
Hyperspectral compressed imaging is a novel imaging detection technology based on compressed sensing theory that can quickly acquire spectral information of terrestrial objects in a single exposure. Reconstruction algorithms are then used to recover hyperspectral data from the low-dimensional measurement images. However, hyperspectral images from different scenes often exhibit high-frequency data sparsity, and existing deep reconstruction algorithms struggle to establish accurate mapping models, leading to detail loss in the reconstruction results. To address this, we propose a hyperspectral reconstruction method based on global gradient information and local low-rank priors. First, to improve the prior model’s efficiency in utilizing information of different frequencies, we design a gradient sampling strategy and training framework based on decision trees, leveraging changes in the loss function gradient information to enhance the model’s predictive capability for data of varying frequencies. Second, utilizing the local low-rank prior characteristics of the representative coefficient matrix, we develop a sparse sensing denoising module to effectively improve the local smoothness of point predictions. Finally, by establishing a regularization term for the reconstruction process based on the semantic similarity between the denoised results and prior spectral data, we ensure spatial consistency and spectral fidelity in the reconstruction results. Experimental results indicate that the proposed method achieves better detail recovery across different scenes, demonstrates improved generalization performance for reconstructing information of various frequencies, and yields higher reconstruction quality. Full article
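The abstract does not state the reconstruction objective explicitly; as a hedged sketch in our own notation (not the authors'), compressive hyperspectral reconstruction with a prior term of the kind described here is commonly posed as a regularized inverse problem,

$$\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}} \; \tfrac{1}{2}\,\lVert \mathbf{y}-\boldsymbol{\Phi}\mathbf{x}\rVert_2^2 \;+\; \lambda\, R(\mathbf{x}),$$

where $\mathbf{y}$ is the coded 2D measurement, $\boldsymbol{\Phi}$ the sensing operator of the coded-aperture system, $\mathbf{x}$ the vectorized hyperspectral cube, and $R(\cdot)$ the prior term, which in this paper would be built from the denoised prediction and its semantic similarity to prior spectral data; $\lambda$ balances data fidelity against the prior.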
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Structural composition of the DCCHI system and data structure of SD-CASSI detector sampling.</p>
Full article ">Figure 2
<p>Reconstruction algorithm framework.</p>
Full article ">Figure 3
<p>RGB images from the KAIST, Harvard, and hyperspectral remote sensing datasets.</p>
Full article ">Figure 4
<p>Selected spectral curves of the pixel point with coordinates (180, 70), showing a visual comparison of different methods in the spectral dimension, together with a comparison of the pseudocolor images and local spatial detail information at different wavelengths.</p>
Full article ">Figure 5
<p>Selected spectral curves of the pixel point with coordinates (150, 100), showing a visual comparison of different methods in the spectral dimension, together with a comparison of the pseudocolor images and local spatial detail information at different wavelengths.</p>
Full article ">Figure 6
<p>Comparison of the spectral consistency of reconstruction results with different methods on the PaviaU hyperspectral remote sensing dataset at sample point coordinates (180, 110), along with a comparison of the spatial detail information of the reconstruction results at different wavelengths.</p>
Full article ">Figure 7
<p>Comparison of the spectral consistency of reconstruction results with different methods on the PaviaC hyperspectral remote sensing dataset at sample point coordinates (190, 50), along with a comparison of the spatial detail information of the reconstruction results at different wavelengths.</p>
Full article ">Figure 8
<p>Comparison of spectral reconstruction results for different crops.</p>
Full article ">Figure 9
<p>Impact of hyperparameter settings on reconstruction quality.</p>
Full article ">Figure 10
<p>Comparison of pseudocolor images generated from the predictions of different prior models for the Harvard Scene 04 hyperspectral data at the 5th, 12th, and 25th bands, along with the spectral differences of the predictions at different wavelengths.</p>
Full article ">Figure 11
<p>Comparison of pseudocolor images generated from the predictions of different prior models for the PaviaU hyperspectral remote sensing data at the 3rd, 13th, and 26th bands, along with the spectral differences of the predictions at different wavelengths.</p>
Full article ">Figure 12
<p>Variation in reconstruction quality with increasing iteration count under the same solving framework for different regularization constraint methods.</p>
Full article ">
19 pages, 4990 KiB  
Article
A 3D Surface Reconstruction Pipeline for Plant Phenotyping
by Lina Stausberg, Berit Jost, Lasse Klingbeil and Heiner Kuhlmann
Remote Sens. 2024, 16(24), 4720; https://doi.org/10.3390/rs16244720 - 17 Dec 2024
Viewed by 588
Abstract
Plant phenotyping plays a crucial role in crop science and plant breeding. However, traditional methods often involve time-consuming and manual observations. Therefore, it is essential to develop automated, sensor-driven techniques that can provide objective and rapid information. Various methods rely on camera systems, including RGB, multi-spectral, and hyper-spectral cameras, which offer valuable insights into plant physiology. In recent years, 3D sensing systems such as laser scanners have gained popularity due to their ability to capture structural plant parameters that are difficult to obtain using spectral sensors. Unlike images, point clouds are not structured and require pre-processing steps to extract precise information and handle noise or missing points. One approach is to generate mesh-based surface representations using triangulation. A key challenge in the 3D surface reconstruction of plants is the pre-processing of point clouds, which involves removing non-plant noise from the scene, segmenting point clouds from populations to individual plants, and further dividing individual plants into their respective organs. In this study, we do not focus on the segmentation aspect but rather on the other pre-processing steps, such as denoising, whose parameters depend on the data type. We present an automated pipeline for converting high-resolution point clouds into surface models of plants. The pipeline incorporates additional pre-processing steps such as outlier removal, denoising, and subsampling to ensure the accuracy and quality of the reconstructed surfaces. Data were collected using three different sensors: a handheld scanner, a terrestrial laser scanner (TLS), and a mobile mapping platform, under varying conditions from controlled laboratory environments to complex field settings. The investigation includes five different plant species, each with distinct characteristics, to demonstrate the potential of the pipeline. As a next step, phenotypic traits such as leaf area, leaf area index (LAI), and leaf angle distribution (LAD) were calculated to further illustrate the pipeline’s potential and effectiveness. The pipeline is based on the Open3D framework and is available open source. Full article
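Since the pipeline is stated to build on Open3D, a minimal sketch of the pre-processing and meshing steps it describes (outlier removal, denoising/subsampling, ball-pivoting surface reconstruction) might look as follows; the file name, voxel size, and ball radii are illustrative assumptions, not the authors' settings.

```python
import open3d as o3d

# Illustrative point-cloud-to-surface sketch (not the authors' exact code).
pcd = o3d.io.read_point_cloud("plant_scan.ply")  # hypothetical input file

# Outlier removal: drop points whose mean neighbor distance is unusually large
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Subsampling to a uniform density (voxel size would depend on the sensor)
pcd = pcd.voxel_down_sample(voxel_size=0.002)

# Normals are required by the ball-pivoting algorithm
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))

# Ball-pivoting reconstruction with a few ball radii (values in meters, assumed)
radii = o3d.utility.DoubleVector([0.002, 0.004, 0.008])
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)

# Total mesh surface area, usable as a proxy for plant or leaf area
print("surface area [m^2]:", mesh.get_surface_area())
```

The choice of radii controls which gaps the pivoting ball can bridge, which is why point density (and hence the subsampling step) has to be tuned per sensor and scene.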
(This article belongs to the Section Environmental Remote Sensing)
Show Figures

Figure 1
<p>Workflow of the automated pipeline. In red, the input and output files of the pipeline are shown; in blue, the intermediate steps to get there are shown; and, in green, the optional parameter calculation from the surface model is shown.</p>
Full article ">Figure 2
<p>Visual representation of the ball-pivoting algorithm [<a href="#B24-remotesensing-16-04720" class="html-bibr">24</a>]. Starting with the seed triangle, which is characterized by three points within the point cloud, the algorithm simulates the rotation of a ball along the axis established by any two out of the three points. At each juncture where the ball intersects with three distinct points, a novel triangle is instantiated.</p>
Full article ">Figure 3
<p>Extraction of the leaf area and leaf angles (inclination angle <math display="inline"><semantics> <mi>θ</mi> </semantics></math> and azimuth angle <math display="inline"><semantics> <mi>φ</mi> </semantics></math>). The meshed surface is reconstructed using the ball-pivoting algorithm and, afterwards, the normal vectors of the triangles are calculated. From the area of the single triangles, we are able to compute the area of the whole plant or leaf, while the normal vectors are used to calculate the leaf angles.</p>
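To make the leaf-area and leaf-angle computation described in this caption concrete, the short sketch below derives per-triangle areas, inclination angles θ, and azimuth angles φ from a reconstructed mesh. It is our own illustration rather than the authors' code, and it assumes the z-axis is vertical.

```python
import numpy as np
import open3d as o3d

def leaf_area_and_angles(mesh: o3d.geometry.TriangleMesh):
    """Total surface area plus per-triangle inclination/azimuth angles (degrees)."""
    v = np.asarray(mesh.vertices)
    t = np.asarray(mesh.triangles)
    e1 = v[t[:, 1]] - v[t[:, 0]]              # triangle edge vectors
    e2 = v[t[:, 2]] - v[t[:, 0]]
    cross = np.cross(e1, e2)
    area = 0.5 * np.linalg.norm(cross, axis=1)                  # per-triangle area
    n = cross / np.linalg.norm(cross, axis=1, keepdims=True)    # unit normals
    # Leaf inclination from horizontal = angle between the normal and the vertical
    theta = np.degrees(np.arccos(np.clip(np.abs(n[:, 2]), 0.0, 1.0)))
    # Azimuth of the normal, measured in the horizontal plane
    phi = np.degrees(np.arctan2(n[:, 1], n[:, 0])) % 360.0
    return area.sum(), theta, phi
```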
Full article ">Figure 4
<p>Sensors used for the different measurements. The handheld scanner produces high-resolution point clouds under laboratory conditions, without noise or outliers. The flatbed scanner provides reference measurements for the plants scanned with the handheld scanner, underlining the potential of our pipeline. The TLS and the mobile mapping platform both produce high-resolution point clouds under field conditions. Additionally, the mobile mapping platform produces geo-referenced point clouds, making it possible to identify the measured plants over the whole growing season.</p>
Full article ">Figure 5
<p>High-resolution point clouds of the four reference plants measured under laboratory conditions. The handheld scanner produces a complete point cloud of each plant with a minimum point-to-point resolution of <math display="inline"><semantics> <mrow> <mn>12</mn> <mo> </mo> <mrow> <mi mathvariant="normal">μ</mi> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math>.</p>
Full article ">Figure 6
<p>Example point cloud of soybeans measured with a mobile mapping system under field conditions.</p>
Full article ">Figure 7
<p>Measurement setup for the TLS measurements. The blue rectangles represent the measured plots, and the red circles indicate the 5 target positions. The purple circles indicate the 15 scanner positions.</p>
Full article ">Figure 8
<p>Example point cloud of soybeans measured with a TLS under field conditions.</p>
Full article ">Figure 9
<p>Surface models of the four reference plants generated using our pipeline.</p>
Full article ">Figure 10
<p>Examples of the reconstructed surface of the single soybean plant (green model) and the calculated leaf area (blue line) of a single soybean plant over the measurement time.</p>
Full article ">Figure 11
<p>(<b>a</b>) Exemplary point cloud of one soybean plant in the field. (<b>b</b>) Exemplary mesh of one soybean plant in the field.</p>
Full article ">Figure 12
<p>Leaf angle distributions of all measured soybean plots. The two columns represent the two varieties (olive green: Minngold; dark green: Eiko); the upper row shows the measurements captured in the morning, while the lower row shows the midday measurements. Each line in the figure describes the leaf angle distribution for one plot in the field.</p>
Full article ">
22 pages, 7862 KiB  
Article
Comparison Between Thermal-Image-Based and Model-Based Indices to Detect the Impact of Soil Drought on Tree Canopy Temperature in Urban Environments
by Takashi Asawa, Haruki Oshio and Yumiko Yoshino
Remote Sens. 2024, 16(23), 4606; https://doi.org/10.3390/rs16234606 - 8 Dec 2024
Viewed by 796
Abstract
This study aimed to determine whether the canopy–air temperature difference (ΔT), an existing simple normalizing index, can be used to detect an increase in canopy temperature induced by soil drought in urban parks, regardless of the unique energy balance and three-dimensional (3D) structure of urban trees. Specifically, we used a thermal infrared camera to measure the canopy temperature of Zelkova serrata trees and compared the temporal variation of ΔT to that of environmental factors, including solar radiation, wind speed, vapor pressure deficit, and soil water content. Normalization based on a 3D energy-balance model was also performed and used for comparison with ΔT. To represent the 3D structure, a 3D tree model derived from terrestrial light detection and ranging was used as the input spatial data. The temporal variation in ΔT was similar to that of the index derived using the energy-balance model, which considered the 3D structure of trees and 3D radiative transfer, with a correlation coefficient of 0.85. In conclusion, the thermal-image-based ΔT performed comparably to an index based on the 3D energy-balance model and detected the increase in canopy temperature caused by the reduction in soil water content for Z. serrata trees in an urban environment. Full article
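For reference (our notation, not taken from the paper), the index is simply the instantaneous difference between the radiometric canopy temperature extracted from the thermal images and the measured air temperature,

$$\Delta T = T_{\mathrm{canopy}} - T_{\mathrm{air}},$$

so that soil drought, by reducing transpirational cooling, would be expected to shift ΔT towards larger values.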
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
Show Figures

Figure 1
<p>Overview of the study site: (<b>a</b>) map and aerial photographs; (<b>b</b>) target trees. Aerial photographs were obtained through the Geospatial Information Authority of Japan in June 2019.</p>
Full article ">Figure 2
<p>Schematic diagram of the study site, including the measurement points.</p>
Full article ">Figure 3
<p>Photographs of the measurement points: (<b>a</b>) Point A; (<b>b</b>) Point B; (<b>c</b>) Point C.</p>
Full article ">Figure 4
<p>Areas used to acquire (<b>a</b>) leaf temperature for calculating ΔT, (<b>b</b>) input values for numerical simulation, and (<b>c</b>) the normalized index α. In (<b>a</b>,<b>b</b>), the areas are indicated by white lines. In (<b>a</b>), a shaded portion used to detect low-quality thermal images is represented by a black square. In (<b>c</b>), the voxels used to calculate the mean value of the normalized index α are highlighted. A visible image of the target tree is shown in (<b>d</b>) for reference.</p>
Full article ">Figure 5
<p>Schematic of the input parameters for the FLiESvox model. Input parameters are shown in blue and their sources in black. Some parameters were set for each wavelength region, i.e., ultraviolet (UV), visible (VIS), and near-infrared (NIR).</p>
Full article ">Figure 6
<p>Temporal variation of all measured data (all times): (<b>a</b>) SWC and rainfall; (<b>b</b>) air temperature; (<b>c</b>) global solar radiation; (<b>d</b>) wind speed; (<b>e</b>) VPD; (<b>f</b>) ΔT. The SWC is the mean of the data obtained at depths of 15 and 35 cm at points far from the side ditch (<a href="#remotesensing-16-04606-f003" class="html-fig">Figure 3</a>b). For ΔT, the box-and-whisker plot of the light blue line shows the data from 10:30 to 12:30, which were used for the analysis.</p>
Full article ">Figure 7
<p>Relationship between ΔT and environmental factors: (<b>a</b>,<b>e</b>) SWC; (<b>b</b>,<b>f</b>) global solar radiation; (<b>c</b>,<b>g</b>) wind speed; (<b>d</b>,<b>h</b>) VPD. Each circle in the graph represents an individual measurement: (<b>a</b>–<b>d</b>) all data were obtained between 10:30 and 12:30 local standard time (LST); (<b>e</b>–<b>h</b>) data were obtained between 10:30 and 12:30 LST when the global solar radiation exceeded 800 W m<sup>−2</sup>. The SWC is the mean of data obtained at depths of 15 cm and 35 cm on points far from the side ditch (<a href="#remotesensing-16-04606-f003" class="html-fig">Figure 3</a>b). Each graph shows a correlation coefficient (R) and linear regression line (broken gray line).</p>
Full article ">Figure 8
<p>Relationship between ΔT and SWC from 10:30 to 12:30 LST for different conditions of solar radiation, VPD, and wind speed. For solar radiation, there are three classes: 0–200 W m<sup>−2</sup>, 200–800 W m<sup>−2</sup>, and higher; for VPD, there are three classes: 0–0.2 kPa, 0.2–0.4 kPa, and higher; and for wind speed, there are two classes: 0–1.5 m s<sup>−1</sup> and higher. The SWC is the mean of data obtained at depths of 15 cm and 35 cm on points far from the side ditch (<a href="#remotesensing-16-04606-f003" class="html-fig">Figure 3</a>b). The dotted line represents the SWC value corresponding to the permanent wilting point. Filled circles indicate that both SWC values at the two depths are lower than the permanent wilting point. When the number of samples is greater than 5, the regression line and correlation coefficient (R) are shown in this Figure.</p>
Full article ">Figure 9
<p>Temporal variation in the mean value of measured data between 10:30 and 12:30 local standard time (LST) under conditions of global solar radiation greater than 800 W m<sup>−2</sup>: (<b>a</b>) SWC; (<b>b</b>) global solar radiation; (<b>c</b>) wind speed; (<b>d</b>) VPD; (<b>e</b>) ΔT; (<b>f</b>) ΔT corrected for the effect of wind speed (ΔT<sub>cor</sub>); (<b>g</b>) the normalized index α; (<b>h</b>) overlaid plots of SWC, solar radiation, wind speed, and ΔT; (<b>i</b>) overlaid plots of SWC, ΔT, ΔT<sub>cor</sub>, and α. In (<b>h</b>,<b>i</b>), the values normalized by the minimum and maximum values are plotted for each item.</p>
Full article ">Figure 10
<p>(<b>a</b>–<b>d</b>) Relationship between ΔT and environmental factors, (<b>e</b>–<b>h</b>) relationship between ΔT corrected for wind speed effect (ΔT<sub>cor</sub>) and environmental factors, and (<b>i</b>–<b>l</b>) relationship between the normalized index α and environmental factors: (<b>a</b>,<b>e</b>,<b>i</b>) SWC; (<b>b</b>,<b>f</b>,<b>j</b>) global solar radiation; (<b>c</b>,<b>g</b>,<b>k</b>) wind speed; and (<b>d</b>,<b>h</b>,<b>l</b>) VPD. The SWC is the mean of data obtained at depths of 15 and 35 cm on points distant from the side ditch (<a href="#remotesensing-16-04606-f003" class="html-fig">Figure 3</a>b). Each graph shows a correlation coefficient (R), regression equation, normalized root mean squared error (E), and regression line (broken gray line).</p>
Full article ">Figure 11
<p>Relationship between ΔT and wind speed for data obtained under conditions where it was anticipated that SWC would have no effect on the canopy temperature. Each graph shows a correlation coefficient (R), regression equation, and regression line (broken gray line).</p>
Full article ">Figure 12
<p>Relationship between ΔT and normalized index α. The graph shows a correlation coefficient (R) and linear regression line (gray broken line).</p>
Full article ">Figure 13
<p>Temporal variation in the maximum ΔT (ΔT<sub>max</sub>) between 10:30 and 12:30.</p>
Full article ">
24 pages, 40976 KiB  
Article
Monitoring Anthropogenically Disturbed Parcels with Soil Erosion Dynamics Change Based on an Improved SegFormer
by Zhenqiang Li, Jialin Li, Jie Li, Zhangxuan Li, Kuncheng Jiang, Yuyang Ma and Chuli Hu
Remote Sens. 2024, 16(23), 4494; https://doi.org/10.3390/rs16234494 - 29 Nov 2024
Viewed by 540
Abstract
Amidst burgeoning socioeconomic development, anthropogenic activities have exacerbated soil erosion. This erosion, characterized by its brief duration, high frequency, and considerable environmental degradation, presents a major challenge to ecological systems. Therefore, it is imperative to regulate and remediate erosion–prone, anthropogenically disturbed parcels, with dynamic change detection (CD) playing a crucial role in enhancing management efficiency. Currently, traditional methods for change detection, such as field surveys and visual interpretation, suffer from time inefficiencies, complexity, and high resource consumption. Meanwhile, despite advancements in remote sensing technology that have improved the temporal and spatial resolution of images, the complexity and heterogeneity of terrestrial cover types continue to limit large–scale dynamic monitoring of anthropogenically disturbed soil erosion parcels (ADPSE) using remote sensing techniques. To address this, we propose a novel ISegFormer model, which integrates the SegFormer network with a pseudo–residual multilayer perceptron (PR–MLP), cross–scale boundary constraint module (CSBC), and multiscale feature fusion module (MSFF). The PR–MLP module improves feature extraction by capturing spatial contextual information, while the CSBC module enhances boundary prediction through high– and low–level semantic guidance. The MSFF module fuses multiscale features with attention mechanisms, boosting segmentation precision for diverse change types. Model performance is evaluated using metrics, such as precision, recall, F1–score, intersection over union (IOU), and mean intersection over union (mIOU). The results demonstrate that our improved model performs exceptionally well in dynamic monitoring tasks for ADPSE. Compared to five other models, our model achieved an mIOU of 72.34% and a Macro–F1 score of 83.55% across twelve types of ADPSE changes, surpassing the other models by 1.52–2.48% in mIOU and 2.25–3.64% in Macro–F1 score. This work provides a theoretical and methodological foundation for policy–making in soil and water conservation departments. Full article
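For reference, mIoU and Macro–F1 scores of the kind reported here are typically computed from a per-class confusion matrix, as in the short sketch below; this is our own illustration, and the row/column convention of the matrix is an assumption, not taken from the paper.

```python
import numpy as np

def miou_and_macro_f1(cm: np.ndarray):
    """Per-class IoU/F1 and their macro means.

    cm[i, j] = number of pixels with true class i predicted as class j.
    """
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp          # predicted as class c but actually other classes
    fn = cm.sum(axis=1) - tp          # class c pixels predicted as something else
    iou = tp / (tp + fp + fn + 1e-12)
    precision = tp / (tp + fp + 1e-12)
    recall = tp / (tp + fn + 1e-12)
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    return iou.mean(), f1.mean(), iou, f1
```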
Show Figures

Figure 1
<p>Distribution of sample areas in Hubei Province.</p>
Full article ">Figure 2
<p>Network structure of the improved SegFormer.</p>
Full article ">Figure 3
<p>Network structure of SegFormer.</p>
Full article ">Figure 4
<p>Structure of the PR–MLP.</p>
Full article ">Figure 5
<p>The structure of the CSBC.</p>
Full article ">Figure 6
<p>Structure of the MSFF.</p>
Full article ">Figure 7
<p>Structure of the AIB.</p>
Full article ">Figure 8
<p>Segmentation results by different models on the test dataset.</p>
Full article ">Figure 9
<p>Ablation experiment segmentation results of roads under construction. (<b>a</b>) 2020 image. (<b>b</b>) 2021 image. (<b>c</b>) Ground truth. (<b>d</b>) Results predicted by SegFormer. (<b>e</b>) Results predicted by PR–MLP–SegFormer. (<b>f</b>) Results predicted by CSBC module added to PR–MLP–SegFormer. (<b>g</b>) Results predicted by MSFF module added to PR–MLP–SegFormer. (<b>h</b>) Results predicted by our improved model.</p>
Full article ">Figure 10
<p>Ablation experiment segmentation results of roads under construction. (<b>a</b>) 2020 image. (<b>b</b>) 2021 image. (<b>c</b>) Ground truth. (<b>d</b>) Results predicted by SegFormer. (<b>e</b>) Results predicted by PR–MLP–SegFormer. (<b>f</b>) Results predicted by CSBC module added to PR–MLP–SegFormer. (<b>g</b>) Results predicted by MSFF module added to PR–MLP–SegFormer. (<b>h</b>) Results predicted by our improved model.</p>
Full article ">Figure 11
<p>Spatial distribution of change categories.</p>
Full article ">
16 pages, 6176 KiB  
Article
Influence of the Inclusion of Off-Nadir Images on UAV-Photogrammetry Projects from Nadir Images and AGL (Above Ground Level) or AMSL (Above Mean Sea Level) Flights
by Francisco Agüera-Vega, Ezequiel Ferrer-González, Patricio Martínez-Carricondo, Julián Sánchez-Hermosilla and Fernando Carvajal-Ramírez
Drones 2024, 8(11), 662; https://doi.org/10.3390/drones8110662 - 10 Nov 2024
Viewed by 914
Abstract
UAV-SfM techniques are in constant development to address the challenges of accurate and precise mapping in terrains with complex morphologies. In contrast with the traditional photogrammetric processes, where only nadir images were considered, the combination of those with oblique imagery, also called off-nadir, has emerged as an optimal solution to achieve higher accuracy in these kinds of landscapes. UAV flights at a constant height above ground level (AGL) have also been considered a possible alternative to improve the resulting 3D point clouds compared to those obtained from constant height above mean sea level (AMSL) flights. The aim of this study is to evaluate the effect of incorporating oblique images as well as the type of flight on the accuracy and precision of the point clouds generated through UAV-SfM workflows for terrains with complex geometries. For that purpose, 58 scenarios with different camera angles and flight patterns for the oblique images were considered, 29 for each type of flight (AMSL and AGL). The 3D point cloud derived from each of the 58 scenarios was compared with a reference 3D point cloud acquired with a terrestrial laser scanner (TLS). The results obtained confirmed that both incorporating oblique images and using AGL flight mode have a positive effect on the mapping. Combination of nadir image blocks, obtained from an AGL crosshatch flight plan, with supplemental oblique images collected with a camera angle of between 20° and 35° yielded the best accuracy and precision records. Full article
(This article belongs to the Collection Feature Papers of Drones Volume II)
Show Figures

Figure 1
<p>(<b>a</b>) Study area location, marked by a black dot; (<b>b</b>) study area situation, marked by a rectangle with coordinates of its vertices; (<b>b<sub>1</sub></b>) panoramic view, taken from UAV; (<b>c</b>) location and detailed definition of the shape of the study area; (<b>d</b>) scale color contour map, where TLS and ground control points (GCPs) of photogrammetric projects are marked. Coordinates refer to UTM (Zone 30, European Terrestrial Reference System 89 (ETRS89)), (EGM08 geoid model).</p>
Full article ">Figure 1 Cont.
<p>(<b>a</b>) Study area location, marked by a black dot; (<b>b</b>) study area situation, marked by a rectangle with coordinates of its vertices; (<b>b<sub>1</sub></b>) panoramic view, taken from UAV; (<b>c</b>) location and detailed definition of the shape of the study area; (<b>d</b>) scale color contour map, where TLS and ground control points (GCPs) of photogrammetric projects are marked. Coordinates refer to UTM (Zone 30, European Terrestrial Reference System 89 (ETRS89)), (EGM08 geoid model).</p>
Full article ">Figure 2
<p>Oblique-image patterns (<b>a</b>) cross-centered in the study area, CROSS; (<b>b</b>) outside box, BOX EX; (<b>c</b>) inside box, BOX IN; (<b>d</b>) outside and inside box, BOX EX + IN; (<b>e</b>) NS curve, CUR NS; (<b>f</b>) EW curve, CUR EW; (<b>g</b>) NS and EW curves, CUR NS + EW; (<b>h</b>) Notes.</p>
Full article ">Figure 3
<p>Reference cloud acquisition and processing workflow.</p>
Full article ">Figure 4
<p>General description of the M3C2 algorithm and the user-defined parameters: D (normal scale), d (projection scale), and h (cylinder height). (<b>Step 1</b>): D represents the diameter of a sphere centered on the point currently studied (j). All points of this cloud included in the sphere are used to fit a plane, whose director vector (N) defines the normal orientation of the cloud at that point. (<b>Step 2</b>): d and h are defined. The cylinder axis contains the director vector calculated in the first step. The points of each cloud contained in the cylinder (green dots for reference cloud and magenta for compared cloud), are projected onto the cylinder axis, and the distance of each projection to point j is calculated. Two sets of values are defined, one corresponding to the reference cloud and another corresponding to the compared cloud. The distance between the mean values (a<sub>1</sub> and a<sub>2</sub>) derived from each set corresponds to the distance between the clouds at that point.</p>
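As a rough illustration of the two steps described in this caption (not the authors' implementation, and omitting the roughness- and registration-uncertainty handling of the full M3C2 method), the signed cloud-to-cloud distance at a core point j could be sketched as below; the default values of D, d, and h are placeholders, not the parameters used in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def m3c2_distance(j, ref_cloud, cmp_cloud, D=0.10, d=0.05, h=0.30):
    """Simplified M3C2 distance at core point j (arrays are (N, 3); j is length 3).

    Step 1: estimate the normal from reference-cloud points inside a sphere of
    diameter D centred on j. Step 2: project the points of each cloud that fall
    inside a cylinder (diameter d, height h, axis along the normal through j)
    onto the axis and return the difference of the mean projected positions.
    """
    ref_tree = cKDTree(ref_cloud)
    # Step 1: plane fit -> normal = eigenvector of the smallest covariance eigenvalue
    idx = ref_tree.query_ball_point(j, D / 2.0)
    nbrs = ref_cloud[idx]
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
    normal = np.linalg.eigh(cov)[1][:, 0]

    def mean_projection(cloud):
        rel = cloud - j
        along = rel @ normal                                    # signed distance along axis
        radial = np.linalg.norm(rel - np.outer(along, normal), axis=1)
        inside = (np.abs(along) <= h / 2.0) & (radial <= d / 2.0)
        return along[inside].mean() if inside.any() else np.nan

    # Positive values mean the compared cloud lies "above" the reference along the normal
    return mean_projection(cmp_cloud) - mean_projection(ref_cloud)
```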
Full article ">Figure 5
<p>M3C2-calculated mean distance (accuracy) between the TLS reference cloud and clouds obtained from different UAV-SfM image configurations: (<b>a</b>) AMSL; (<b>b</b>) AGL.</p>
Full article ">Figure 6
<p>M3C2-calculated standard deviation (precision) between the TLS reference cloud and clouds obtained from different UAV-SfM image configurations: (<b>a</b>) AMSL; (<b>b</b>) AGL.</p>
Full article ">Figure 7
<p>M3C2-calculated distances between the TLS reference cloud and clouds obtained from the reference flights (AMSL and AGL), AMSL + CROSS at 22.5° (best accuracy and precision for AMSL and oblique image combinations), AGL + BOX IN at 33.75° (best accuracy for AGL and oblique image combinations), and AGL + CUR NS + CUR EW at 33.75° (best precision for AGL and oblique image combinations). The left column represents the distribution maps of all the points. The middle column represents the distribution of points where distances take positive values, and the right column shows the points where negative distances were measured. Dimensions on the color scale are in m.</p>
Full article ">
Back to TopTop