Search Results (428)

Search Parameters:
Keywords = terrestrial LiDAR

19 pages, 5510 KiB  
Article
Unveiling Population Structure Dynamics of Populus euphratica Riparian Forests Along the Tarim River Using Terrestrial LiDAR
by Alfidar Arkin, Asadilla Yusup, Ümüt Halik, Abdulla Abliz, Ailiya Ainiwaer, Aolei Tian and Maimaiti Mijiti
Forests 2025, 16(2), 368; https://doi.org/10.3390/f16020368 - 18 Feb 2025
Viewed by 98
Abstract
The Populus euphratica desert riparian forest, predominantly distributed along the Tarim River in northwestern China, has experienced significant degradation due to climate change and anthropogenic activities. Despite its ecological importance, systematic assessments of P. euphratica stand structure across the entire Tarim River remain scarce. This study employed terrestrial laser scanning (TLS) to capture high-resolution 3D structural data from 2741 individual trees across 30 plots within six transects, covering the 1300 km mainstream of the Tarim River. ANOVA, PCA, and RDA were applied to examine tree structure variation and environmental influences. Results revealed a progressive decline in key structural parameters from the upper to the lower reaches of the river, with pronounced degradation in the lower reaches: stand density decreased from 440 to 257 trees per hectare, mean stand height declined from 9.3 m to 5.6 m, mean crown diameter decreased from 4.1 m to 3.8 m, canopy cover dropped from 62% to 42%, and the leaf area index fell from 0.51 to 0.29. Age class distributions varied along the river, indicating population growth in the upper reaches, stability in the middle reaches, and decline in the lower reaches. Abiotic factors, including groundwater depth, soil salinity, soil moisture, and precipitation, exhibited strong correlations with stand structural parameters (p < 0.05, R2 ≥ 0.69). The findings highlight significant spatial variations in tree structure, with healthier growth in the upper reaches and degradation in the lower reaches, and emphasize the urgent need for targeted conservation strategies. This comprehensive quantification of P. euphratica stand structure and its environmental drivers offers valuable insights into the dynamics of desert riparian forest ecosystems, contributes to understanding forest development processes, and provides a scientific basis for formulating effective conservation strategies to sustain these vital desert ecosystems and for monitoring regional environmental change.
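The RDA step mentioned in the abstract can be reproduced as a constrained ordination: a PCA on the portion of the stand-structure matrix explained by the abiotic factors. Below is a minimal sketch in Python; the variable roles mirror the abstract, but all values are synthetic placeholders, not the authors' data.

```python
# Minimal RDA sketch: regress standardized structure metrics on abiotic
# factors, then run PCA on the fitted values (the "constrained" variance).
# All data below are synthetic placeholders, not the study's measurements.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
Y = rng.normal(size=(30, 4))  # per-plot structure: height, CD, cover, LAI
X = rng.normal(size=(30, 4))  # per-plot abiotics: groundwater, salinity, ...

Y_std = StandardScaler().fit_transform(Y)
X_std = StandardScaler().fit_transform(X)

Y_hat = LinearRegression().fit(X_std, Y_std).predict(X_std)
site_scores = PCA(n_components=2).fit_transform(Y_hat)

# Fraction of total structural variance captured by the environmental
# constraints (an RDA summary statistic).
explained = np.trace(np.cov(Y_hat.T)) / np.trace(np.cov(Y_std.T))
print(f"constrained variance fraction: {explained:.2f}")
```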
Figures
Figure 1: Sketch map of research transects along the main stream of the Tarim River. (a) Map of China; (b) topography of Xinjiang, with the study area marked by the red rectangle; (c) detailed map of the study area, covering land use types, river distribution, and sampling site locations.
Figure 2: Extraction of tree structural attributes using the TLS point cloud. (a) TLS data acquisition in the sample plot (50 × 50 m); raw data. (b) Classification of tree and ground point clouds, segmentation of individual trees, and measurement of tree height (H), crown diameter (CD), and diameter at breast height (DBH), all performed on the point cloud data.
Figure 3: Distribution patterns of P. euphratica tree structural attributes (H, DBH, and CD) in the upper, middle, and lower reaches of the Tarim River. (a–c) Tree height (H); (d–f) diameter at breast height (DBH); (g–i) crown diameter (CD). Fewer trees are counted in (d–f) because the TLS was unable to obtain the DBH of a few trees.
Figure 4: Comparison of P. euphratica tree structural attributes (H, DBH, and CD) across the upper, middle, and lower reaches of the Tarim River.
Figure 5: Age class distribution of P. euphratica across the upper, middle, and lower reaches of the Tarim River.
Figure 6: Correlation between P. euphratica basic structural parameters and abiotic factors.
Figure 7: Redundancy analysis and principal components analysis of P. euphratica stand structures and abiotic factors. Arrows denote variable vectors: the longer the arrow, the greater the variable's contribution to the principal components, and its direction indicates the variable's orientation in the ordination space. Lettered circles (A–I) are sample points; clustered circles suggest similar features, while dispersed circles imply differences, revealing potential structure or grouping in the data.
Figure 8: Response of P. euphratica structural parameters to increasing distance from the river channel.
26 pages, 93658 KiB  
Article
Sustainable Digital Innovation for Regional Museums Through Cost-Effective Digital Reconstruction and Exhibition Co-Design: A Case Study of the Ryushi Memorial Museum
by Yaotian Ai, Xinru Zhu and Kayoko Nohara
Sustainability 2025, 17(4), 1598; https://doi.org/10.3390/su17041598 - 14 Feb 2025
Viewed by 322
Abstract
While national museums focus on broader national narratives, regional museums function as vital community hubs, establishing deeper local connections and facilitating intimate interactions between local residents and their heritage. These regional museums face dual challenges in their sustainable digital transformation: technical barriers arising from the high costs of traditional digitization methods such as Terrestrial Laser Scanning (TLS), and humanistic challenges, including preserving distinctive multi-directional communication and balancing professionalism and authority with collaborative community engagement in the digitization process. This study addresses these challenges through a case study of the Ryushi Memorial Museum in Ota City, Tokyo. We present a comprehensive approach that integrates technical innovation with community engagement: (1) a cost-effective workflow combining photogrammetry with iPad LiDAR technology for spatial reconstruction, demonstrated through the digital reconstruction of the museum’s Atelier and Jibutsudo (family hall for worshipping Buddha); and (2) a new Exhibition Co-Design framework that coordinates diverse stakeholders to create digital exhibitions while balancing professional guidance with community participation. Through questionnaire surveys and semi-structured interviews with museum volunteers, we demonstrate how this approach enhances community engagement by enabling volunteers to incorporate their local knowledge into digital exhibitions while maintaining professionalism and authority. This cost-effective model for spatial reconstruction and community-driven digital design can serve as a reference for other regional museums, helping them achieve sustainable digital innovation in the digital age.
(This article belongs to the Special Issue Cultural Heritage Conservation and Sustainable Development)
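The merging and duplicate removal described in the abstract was done in CloudCompare; a minimal scripted equivalent using Open3D is sketched below. The file names, the ICP correspondence distance, and the 5 mm voxel size are illustrative assumptions, not values from the paper.

```python
# Sketch: align an iPad-LiDAR cloud to a photogrammetry cloud, merge them,
# and thin duplicated geometry. File names and thresholds are placeholders.
import open3d as o3d

photogrammetry = o3d.io.read_point_cloud("jibutsudo_photogrammetry.ply")
ipad_lidar = o3d.io.read_point_cloud("jibutsudo_ipad_lidar.ply")

# Refine the alignment of the two sources with point-to-point ICP.
icp = o3d.pipelines.registration.registration_icp(
    ipad_lidar, photogrammetry, max_correspondence_distance=0.05)
ipad_lidar.transform(icp.transformation)

# Merge, then collapse near-duplicate points onto a 5 mm voxel grid.
merged = photogrammetry + ipad_lidar
merged = merged.voxel_down_sample(voxel_size=0.005)
o3d.io.write_point_cloud("jibutsudo_merged.ply", merged)
```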
Figures
Figure 1: Flowchart of the methods used in this research.
Figure 2: Atelier of the Ryushi Memorial Museum. Source: Ryushi Memorial Museum website (https://www.ota-bunka.or.jp/facilities/ryushi/park/, accessed on 3 August 2024).
Figure 3: Jibutsudo in the Ryushi Memorial Museum. Photograph taken by the author on 12 December 2022.
Figure 4: ColorChecker Passport Photo 2 in the Atelier of the Ryushi Memorial Museum. Photograph taken by the author on 5 July 2022.
Figure 5: A part of the point cloud of the Atelier of the Ryushi Memorial Museum.
Figure 6: Digital reconstruction of the Jibutsudo using photogrammetry exclusively, without color correction.
Figure 7: Pre-calibration photograph of the Jibutsudo, demonstrating uncorrected color representation. Photograph taken by the author on 22 October 2022.
Figure 8: Post-calibration photograph of the Jibutsudo, showing enhanced color accuracy after DCP application.
Figure 9: Consolidated point cloud of the Jibutsudo after merging and duplicate removal in CloudCompare.
Figure 10: Attempted digital reconstruction of a portion of the Atelier using Context Capture.
Figure 11: Attempted digital reconstruction of a portion of the Atelier using Agisoft Metashape.
Figure 12: Final outcome of the triangulation process.
Figure 13: Digital reconstruction of the Jibutsudo using photogrammetry.
Figure 14: Digital reconstruction of the Jibutsudo using point cloud data.
Figure 15: Jibutsudo model refinement using DPModeler 2.3 and MeshLab 2022.02.
Figure 16: Completed model of the Atelier.
Figure 17: Completed model of the Jibutsudo.
Figure 18: Jibutsudo (family hall for worshipping Buddha) model on Sketchfab. Screenshot taken on 6 August 2024 via https://skfb.ly/oMTJ7.
Figure 19: Framework for Exhibition Co-Design at the Ryushi Memorial Museum.
Figure 20: The results of the volunteers’ current views on the Ryushi Memorial Museum.
Figure 21: A part of the SCAT (Steps for Coding and Theorization) analysis form.
22 pages, 29748 KiB  
Article
An Integrated Method for Inverting Beach Surface Moisture by Fusing Unmanned Aerial Vehicle Orthophoto Brightness with Terrestrial Laser Scanner Intensity
by Jun Zhu, Kai Tan, Feijian Yin, Peng Song and Faming Huang
Remote Sens. 2025, 17(3), 522; https://doi.org/10.3390/rs17030522 - 3 Feb 2025
Viewed by 528
Abstract
Beach surface moisture (BSM) is crucial to studying coastal aeolian sand transport processes. However, traditional measurement techniques fail to accurately monitor moisture distribution with high spatiotemporal resolution. Remote sensing technologies have garnered widespread attention for providing rapid and non-contact moisture measurements, but any single method has inherent limitations: passive remote sensing is challenged by complex beach illumination and sediment grain size variability, while active remote sensing represented by LiDAR (light detection and ranging) exhibits high sensitivity to moisture but requires cumbersome intensity correction and may leave data holes in high-moisture areas. Using machine learning, this research proposes a BSM inversion method that fuses UAV (unmanned aerial vehicle) orthophoto brightness with intensity recorded by TLSs (terrestrial laser scanners). First, a back propagation (BP) network rapidly corrects the original intensity with in situ scanning data. Second, beach sand grain size is estimated from the characteristics of the grain size distribution. Then, by applying nearest point matching, intensity and brightness data are fused at the point cloud level. Finally, a new BP network coupled with the fusion data and grain size information enables automatic brightness correction and BSM inversion. A field experiment at Baicheng Beach in Xiamen, China, confirms that this multi-source data fusion strategy effectively integrates key features from diverse sources, enhancing the BP network's predictive performance. The method demonstrates robust predictive accuracy in complex beach environments, with an RMSE of 2.63% across 40 samples, efficiently producing high-resolution BSM maps that offer value for studying aeolian sand transport mechanisms.
(This article belongs to the Section Ocean Remote Sensing)
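The nearest-point-matching fusion and BP-network inversion described in the abstract can be sketched as below; a scikit-learn MLP stands in for the paper's BP network, and all arrays are synthetic placeholders rather than the Baicheng Beach data.

```python
# Sketch: join orthophoto brightness to TLS points by nearest neighbor,
# then fit a small MLP (a BP network) from the fused features to moisture.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
tls_xyz = rng.uniform(0, 50, size=(5000, 3))     # TLS point coordinates
tls_intensity = rng.uniform(0, 1, size=5000)     # corrected intensity
ortho_xyz = rng.uniform(0, 50, size=(8000, 3))   # orthophoto point positions
ortho_brightness = rng.uniform(0, 1, size=8000)

# Nearest point matching: attach each TLS point's brightness from the
# closest orthophoto point.
_, idx = cKDTree(ortho_xyz).query(tls_xyz, k=1)
features = np.column_stack([tls_intensity, ortho_brightness[idx]])

# Synthetic moisture labels; in the paper these come from in situ samples.
moisture = 10 * features[:, 0] + 5 * features[:, 1] + rng.normal(0, 0.5, 5000)

bp = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
bp.fit(features, moisture)
print("training R^2:", round(bp.score(features, moisture), 3))
```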
Figures
Figure 1: Field deployment at Baicheng Beach and the surface moisture sampling points (green). The wind rose was generated from the average wind frequency for July 2024.
Figure 2: (a) Samples with different moisture levels with a Spyder standard gray card. (b) Samples with different moisture levels with a Spyder 24-color standard color card (from left to right, the moisture levels of the samples from top to bottom are 5.87%, 8.27%, 5.39%, 5.28%, 4.36%, 4.35%, 5.94%, 4.21%, 3.09%, 7.17%, 4.61%, and 5.31%).
Figure 3: The workflow of the proposed method.
Figure 4: The process of feature parameter extraction: (a) extraction of sample information; (b) acquisition of feature parameters; (c) Gaussian fitting of the histograms of the color and intensity parameters.
Figure 5: (a) Original intensity distribution. (b) Corrected intensity distribution.
Figure 6: (a) Characteristics of the sediment grain size distribution. (b) Sediment average grain size vs. distance from the sampling point to the beach berm.
Figure 7: Correlation coefficient matrix between the feature parameters and moisture content.
Figure 8: (a) Distribution of beach surface moisture and estimation errors. (b) Measured vs. estimated moisture of the samples.
Figure 9: (a) Original intensity vs. distance. (b) Corrected intensity vs. distance. (c) Original intensity vs. incidence angle. (d) Corrected intensity vs. incidence angle.
Figure 10: Relationship between feature parameters and moisture content under different grain size conditions: (a) V (from the HSV color space) vs. moisture; (b) intensity vs. moisture.
Figure 11: Distribution of beach surface moisture based on (a) intensity and (b) brightness.
23 pages, 9203 KiB  
Article
Improved Cylinder-Based Tree Trunk Detection in LiDAR Point Clouds for Forestry Applications
by Shaobo Ma, Yongkang Chen, Zhefan Li, Junlin Chen and Xiaolan Zhong
Sensors 2025, 25(3), 714; https://doi.org/10.3390/s25030714 - 24 Jan 2025
Viewed by 518
Abstract
The application of LiDAR technology in extracting individual trees and stand parameters plays a crucial role in forest surveys. Accurate identification of individual tree trunks is a critical foundation for subsequent parameter extraction. For LiDAR-acquired forest point cloud data, existing two-dimensional (2D) plane-based algorithms for tree trunk detection often suffer from spatial information loss, resulting in reduced accuracy, particularly for tilted trees. While cylinder fitting algorithms provide a three-dimensional (3D) solution for trunk detection, their performance in complex forest environments remains limited due to sensitivity to parameters such as distance thresholds. To address these challenges, this study proposes an improved individual tree trunk detection algorithm, Random Sample Consensus Cylinder Fitting (RANSAC-CyF), specifically optimized for detecting cylindrical tree trunks. Validated in three forest plots of varying complexity in Tianhe District, Guangzhou, the algorithm demonstrated significant advantages in inlier rate, detection success rate, and robustness for tilted trees. The study showed the following results: (1) The average differences between the inlier rates of tree trunks and non-tree points in the three sample plots using RANSAC-CyF were 0.59, 0.63, and 0.52, respectively, significantly higher than those of the Least Squares Circle Fitting (LSCF) and Random Sample Consensus Circle Fitting (RANSAC-CF) algorithms (p < 0.05). (2) RANSAC-CyF required only 2 and 8 clusters to achieve a 100% detection success rate in Plot 1 and Plot 2, while the other algorithms needed 26 and 40 clusters. (3) The effective distance threshold range of RANSAC-CyF was more than twice that of the comparison algorithms, maintaining stable inlier rates above 0.9 across all tilt angles. (4) RANSAC-CyF still achieved good detection performance in the challenging Plot 3, where both comparison algorithms failed. These findings highlight the RANSAC-CyF algorithm's superior accuracy, robustness, and adaptability in complex forest environments, significantly improving the efficiency and precision of individual tree trunk detection for forestry surveys and ecological research.
(This article belongs to the Special Issue Application of LiDAR Remote Sensing and Mapping)
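The paper's RANSAC-CyF implementation is not reproduced here, but the general idea of a RANSAC cylinder fit for a trunk cluster can be sketched as follows: estimate the trunk axis (here via PCA, an assumption of this sketch), project points onto the plane perpendicular to that axis, and RANSAC-fit a circle there. The 2 cm distance threshold is illustrative.

```python
# Sketch of RANSAC cylinder fitting for a single trunk cluster (not the
# authors' code). Axis from PCA; circle RANSAC in the orthogonal plane.
import numpy as np

def ransac_cylinder(points, dist_thresh=0.02, n_iters=500, seed=0):
    rng = np.random.default_rng(seed)
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]                      # dominant direction = trunk axis
    proj = centered @ vt[1:3].T       # 2D projection onto the normal plane
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        p1, p2, p3 = proj[rng.choice(len(proj), 3, replace=False)]
        a, b = p2 - p1, p3 - p1
        d = 2 * (a[0] * b[1] - a[1] * b[0])
        if abs(d) < 1e-9:             # collinear sample, skip
            continue
        # Circumcenter of the three sampled points, relative to p1.
        ux = (b[1] * (a @ a) - a[1] * (b @ b)) / d
        uy = (a[0] * (b @ b) - b[0] * (a @ a)) / d
        center = p1 + np.array([ux, uy])
        radius = np.linalg.norm(p1 - center)
        residuals = np.abs(np.linalg.norm(proj - center, axis=1) - radius)
        inliers = residuals < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (axis, center, radius)
    return best_model, best_inliers   # inlier rate = best_inliers.mean()
```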
Figures
Figure 1: Schematic map of the study area.
Figure 2: Element distribution maps for Plot 1 (a), Plot 2 (b), and Plot 3 (c).
Figure 3: Visualization of the preprocessed point cloud data, colored by height, for (a) Plot 1, (b) Plot 2, and (c) Plot 3.
Figure 4: Cylindrical model expression.
Figure 5: Distribution of the average inlier rate differences between trunk and non-trunk point clouds with varying surface normal weights.
Figure 6: Cylindrical fitting pseudo-code.
Figure 7: Individual tree identification pseudo-code.
Figure 8: Fitting results for the three algorithms: (a) LSCF, Plot 1; (b) RANSAC-CF, Plot 1; (c) RANSAC-CyF, Plot 1; (d) LSCF, Plot 2; (e) RANSAC-CF, Plot 2; (f) RANSAC-CyF, Plot 2; (g) LSCF, Plot 3; (h) RANSAC-CF, Plot 3; (i) RANSAC-CyF, Plot 3.
Figure 9: Success rates in achieving perfect individual tree detection with different numbers of samples for the three algorithms: (a) Plot 1; (b) Plot 2; (c) Plot 3.
Figure 10: Distribution of discrimination values with varying distance thresholds for the three algorithms: (a) Plot 1; (b) Plot 2; (c) Plot 3.
Figure 11: Changes in the fitting inlier rates of the three algorithms at different tilt angles: (a) tree 18, Plot 1; (b) tree 28, Plot 1.
Figure 12: Plot point clouds: (a) 18, Plot 1, complete trunk; (b) 21, Plot 3, incomplete trunk; (c) 10, Plot 1, target ball; (d) 4, Plot 2, wall; (e) 34, Plot 2, shrub; (f) 26, Plot 2, shrub.
Figure 13: Comparison of fitting results for selected point clouds by the three algorithms: (I) complete trunk point cloud (16, Plot 1); (II) incomplete trunk point cloud (28, Plot 1); (III) wall point cloud (4, Plot 2); (IV) target ball point cloud (20, Plot 1); (V, VI) shrub point clouds (34 and 18, Plot 2).
Figure 14: Comparison of fitting results for tilted trunk point clouds by the three algorithms: (I–III: LSCF, RANSAC-CF, RANSAC-CyF) complete trunk point cloud (18, Plot 1); (IV–VI: LSCF, RANSAC-CF, RANSAC-CyF) incomplete trunk point cloud (28, Plot 1).
19 pages, 3375 KiB  
Article
Enhancing Cross-Modal Camera Image and LiDAR Data Registration Using Feature-Based Matching
by Jennifer Leahy, Shabnam Jabari, Derek Lichti and Abbas Salehitangrizi
Remote Sens. 2025, 17(3), 357; https://doi.org/10.3390/rs17030357 - 22 Jan 2025
Viewed by 718
Abstract
Registering light detection and ranging (LiDAR) data with optical camera images enhances spatial awareness in autonomous driving, robotics, and geographic information systems. The current challenges in this field involve aligning 2D-3D data acquired from sources with distinct coordinate systems, orientations, and resolutions. This paper introduces a new pipeline for camera–LiDAR post-registration to produce colorized point clouds. Utilizing deep learning-based matching between 2D spherical-projection LiDAR feature layers and camera images, we map 3D LiDAR coordinates to image grey values. Various LiDAR feature layers, including intensity, bearing angle, depth, and different weighted combinations, are used to find correspondences with camera images using state-of-the-art deep learning matching algorithms, i.e., SuperGlue and LoFTR. Registration is achieved using collinearity equations, with RANSAC removing false matches. The pipeline's accuracy is tested on survey-grade terrestrial datasets from the TX5 scanner, as well as datasets from a custom-made, low-cost mobile mapping system (MMS) named Simultaneous Localization And Mapping Multi-sensor roBOT (SLAMM-BOT), across diverse scenes; in both cases the pipeline outperformed its baseline solutions. SuperGlue performed best in high-feature scenes, whereas LoFTR performed best in low-feature or sparse data scenes. The LiDAR intensity layer yielded the strongest matches, but combining feature layers improved matching and reduced errors.
(This article belongs to the Special Issue Remote Sensing Satellites Calibration and Validation)
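The 2D spherical-projection LiDAR feature layers that feed matchers like SuperGlue and LoFTR can be built roughly as sketched below; the image size and vertical field of view are illustrative choices, not the paper's settings.

```python
# Sketch: project a LiDAR cloud into 2D depth and intensity images via
# spherical coordinates (azimuth -> columns, elevation -> rows).
import numpy as np

def spherical_projection(xyz, intensity, h=64, w=1024,
                         fov_up_deg=15.0, fov_down_deg=-15.0):
    depth = np.linalg.norm(xyz, axis=1)
    yaw = np.arctan2(xyz[:, 1], xyz[:, 0])                 # [-pi, pi]
    pitch = np.arcsin(xyz[:, 2] / np.maximum(depth, 1e-9))
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).clip(0, h - 1).astype(int)
    depth_img = np.zeros((h, w), dtype=np.float32)
    inten_img = np.zeros((h, w), dtype=np.float32)
    order = np.argsort(-depth)   # write far points first, near points last
    depth_img[v[order], u[order]] = depth[order]
    inten_img[v[order], u[order]] = intensity[order]
    return depth_img, inten_img
```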
Figures
Figure 1: General flowchart of the methodology.
Figure 2: Proposed optical and LiDAR data integration method.
Figure 3: Camera-to-ground coordinate system transformations. The rotational extrinsic parameters of the LiDAR sensor are represented by the angles (ω, φ, κ), which describe the orientation of the camera in 3D space. The camera's principal point is denoted by (x_p, y_p), and f is the focal length. The ground coordinates (X, Y, Z) correspond to the real-world position in the ground reference system.
Figure 4: The experimental scenes employed in this study. The six scenes were acquired in outdoor and indoor environments, representing different object arrangements, lighting conditions, and spatial compositions.
Figure 5: Comparison of a single frame (left) vs. densified aggregated frames (right).
Figure 6: Comparison of different images: (a) optical; (b) bearing angle; (c) intensity; (d) depth.
Figure 7: Before (left) and after (right) attempts to remedy the range dispersions in the SLAMM-BOT depth image.
Figure 8: Viable matches from the intensity image (top) vs. false matches from the depth image (bottom). The color scheme represents match confidence, with red representing high confidence and blue representing low confidence.
44 pages, 24354 KiB  
Article
Estimating Subcanopy Solar Radiation Using Point Clouds and GIS-Based Solar Radiation Models
by Daniela Buchalová, Jaroslav Hofierka, Jozef Šupinský and Ján Kaňuk
Remote Sens. 2025, 17(2), 328; https://doi.org/10.3390/rs17020328 - 18 Jan 2025
Viewed by 575
Abstract
This study explores advanced methodologies for estimating subcanopy solar radiation using LiDAR (Light Detection and Ranging)-derived point clouds and GIS (Geographic Information System)-based models, with a focus on evaluating the impact of different LiDAR data types on model performance. The research compares two modeling approaches, r.sun and the Point Cloud Solar Radiation Tool (PCSRT), in capturing solar radiation dynamics beneath tree canopies. The models were applied to two contrasting environments: a forested area and a built-up area. The r.sun model, based on raster data, and the PCSRT model, which uses voxelized point clouds, were evaluated for their accuracy and efficiency in simulating solar radiation. Data were collected using terrestrial laser scanning (TLS), unmanned laser scanning (ULS), and aerial laser scanning (ALS) to capture the structural complexity of the canopies. Results indicate that the choice of LiDAR data significantly affects model outputs. PCSRT, with its voxel-based approach, provides higher precision in heterogeneous forest environments. Among the LiDAR types, ULS data provided the most accurate solar radiation estimates, closely matching in situ pyranometer measurements, owing to its high-resolution coverage of canopy structures. TLS offered detailed local data but was limited in spatial extent, while ALS, despite its broader coverage, showed lower precision due to insufficient point density under dense canopies. These findings underscore the importance of selecting appropriate LiDAR data for modeling solar radiation, particularly in complex environments.
(This article belongs to the Section Remote Sensing for Geospatial Science)
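PCSRT's voxel-based approach starts from a voxelization of the point cloud; a minimal version of that step is sketched below, with an assumed 0.25 m voxel size and a synthetic cloud standing in for the TLS/ULS/ALS data.

```python
# Sketch: quantize point coordinates to a regular 3D grid and count points
# per occupied voxel. Voxel size and the input cloud are placeholders.
import numpy as np

def voxelize(points, voxel_size=0.25):
    origin = points.min(axis=0)
    ijk = np.floor((points - origin) / voxel_size).astype(int)
    voxels, counts = np.unique(ijk, axis=0, return_counts=True)
    return origin, voxels, counts

rng = np.random.default_rng(2)
cloud = rng.uniform(0, 10, size=(100_000, 3))
origin, voxels, counts = voxelize(cloud)
print(f"{len(voxels)} occupied voxels, max {counts.max()} points per voxel")
```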
Figures
Figure 1: Locations of the study areas. (A) Forested area; (B) built-up area; (C) side view of the forested area; (D) side view of the built-up area (Jesenná Street). Green lines indicate canopy areas.
Figure 2: Data collection methods used in the study areas: TLS (terrestrial laser scanning), ALS (aerial laser scanning), and ULS (unmanned laser scanning).
Figure 3: TLS positions in (A) the forested area and (B) the built-up area.
Figure 4: TLS point cloud density in the forested area; (A) total points (vegetation and ground), (B) ground points.
Figure 5: TLS point cloud density in the built-up area; (A) total points (vegetation and ground), (B) ground points.
Figure 6: ULS point cloud density in the forested area; (A) total points (vegetation and ground), (B) ground points.
Figure 7: ALS point cloud density in the forested area; (A) total points (vegetation and ground), (B) ground points.
Figure 8: ALS point cloud density in the built-up area; (A) total points, (B) ground points.
Figure 9: Localization of the pyranometers in the forested area; (A–D) detailed photos of the pyranometers in locations A–D.
Figure 10: Localization of the pyranometer in the built-up area; (A) detailed photo of the pyranometer in location A.
Figure 11: Selected polygons for detailed data analysis in the forested area. P1: high vegetation; P2: meadow; P3: low vegetation; P4: high vegetation with canopy gaps.
Figure 12: Comparison of TLS, ALS, and ULS data from the top and side views of polygon 1, 10 × 10 m, high vegetation.
Figure 13: Comparison of TLS, ALS, and ULS data from the top and side views of polygon 2, 10 × 10 m, meadow.
Figure 14: Comparison of TLS, ALS, and ULS data from the top and side views of polygon 3, 10 × 10 m, low vegetation.
Figure 15: Comparison of TLS, ALS, and ULS data from the top and side views of polygon 4, 10 × 10 m, high vegetation with a canopy gap.
Figure 16: Selected polygons for detailed data analysis in the built-up area. P1: high vegetation; P2: roof; P3: parking lot; P4: high vegetation.
Figure 17: Comparison of TLS and ALS data from the top and side views of polygon 1, 10 × 10 m, high vegetation.
Figure 18: Comparison of TLS and ALS data from the top and side views of polygon 2, 10 × 10 m, roof.
Figure 19: Comparison of TLS and ALS data from the top and side views of polygon 3, 10 × 10 m, parking lot.
Figure 20: Comparison of TLS and ALS data from the top and side views of polygon 4, 10 × 10 m, high vegetation.
Figure 21: Estimated subcanopy solar radiation by PCSRT in the forested area, ALS data; 27 September 2023, 12:00 noon.
Figure 22: Estimated subcanopy solar radiation by PCSRT in the forested area, ULS data; 27 September 2023, 12:00 noon.
Figure 23: Estimated subcanopy solar radiation by PCSRT in the forested area, TLS data; 27 September 2023, 12:00 noon.
Figure 24: Estimated subcanopy solar radiation by r.sun, ULS data; 27 September 2023, 10 a.m.; white line: computing region for LPI.
Figure 25: Estimated subcanopy solar radiation by r.sun, ALS data; 27 September 2023, 10 a.m.; white line: computing region for LPI.
Figure 26: Estimated subcanopy solar radiation by r.sun, TLS data; 27 September 2023, 10 a.m.; white line: computing region for LPI.
Figure 27: Estimated subcanopy solar radiation by PCSRT in the built-up area, ALS data; 27 September 2023, 12:00 noon.
Figure 28: Estimated subcanopy solar radiation by PCSRT in the built-up area, TLS data; 27 September 2023, 12:00 noon.
Figure 29: Estimated subcanopy solar radiation by r.sun, ALS data; 28 September 2023, 10 a.m.; white line: computing region for LPI.
Figure 30: Estimated subcanopy solar radiation by r.sun, TLS data; 28 September 2023, 10 a.m.; white line: computing region for LPI.
Figure 31: Solar irradiance difference maps between the r.sun and PCSRT models using the TLS and ALS data, built-up area; (A) r.sun TLS minus r.sun ALS, (B) PCSRT TLS minus PCSRT ALS, (C) r.sun TLS minus PCSRT TLS, (D) r.sun ALS minus PCSRT ALS.
Figure 32: Solar irradiance difference maps between the r.sun and PCSRT models using the ULS, ALS, and TLS data, forested area; (A) r.sun ULS minus r.sun ALS, (B) r.sun ULS minus r.sun TLS, (C) r.sun TLS minus r.sun ALS, (D) PCSRT ULS minus PCSRT ALS, (E) PCSRT ULS minus PCSRT TLS, (F) PCSRT TLS minus PCSRT ALS, (G) r.sun ULS minus PCSRT ULS, (H) r.sun TLS minus PCSRT TLS, (I) r.sun ALS minus PCSRT ALS.
Figure 33: Solar irradiance difference histograms between the r.sun and PCSRT models using the TLS and ALS data, built-up area; panels (A–D) as in Figure 31.
Figure 34: Solar irradiance difference histograms between the r.sun and PCSRT models using the ULS, ALS, and TLS data, forested area; panels (A–I) as in Figure 32.
13 pages, 3746 KiB  
Article
NeRF-Accelerated Ecological Monitoring in Mixed-Evergreen Redwood Forest
by Adam Korycki, Cory Yeaton, Gregory S. Gilbert, Colleen Josephson and Steve McGuire
Forests 2025, 16(1), 173; https://doi.org/10.3390/f16010173 - 17 Jan 2025
Viewed by 630
Abstract
Forest mapping provides critical observational data needed to understand the dynamics of forest environments. Notably, tree diameter at breast height (DBH) is a metric used to estimate forest biomass and carbon dioxide (CO2) sequestration. Manual methods of forest mapping are labor intensive and time consuming, a bottleneck for large-scale mapping efforts. Automated mapping relies on acquiring dense forest reconstructions, typically in the form of point clouds. Terrestrial laser scanning (TLS) and mobile laser scanning (MLS) generate point clouds using expensive LiDAR sensing and have been used successfully to estimate tree diameter. Neural radiance fields (NeRFs) are an emergent technology enabling photorealistic, vision-based reconstruction by training a neural network on a sparse set of input views. In this paper, we present a comparison of MLS and NeRF forest reconstructions for the purpose of trunk diameter estimation in a mixed-evergreen redwood forest. In addition, we propose an improved DBH-estimation method using convex-hull modeling. Using this approach, we achieved an RMSE of 1.68 cm (2.81%), consistently outperforming standard cylinder modeling approaches.
(This article belongs to the Special Issue Applications of Artificial Intelligence in Forestry: 2nd Edition)
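One plausible reading of the convex-hull DBH idea (a sketch, not necessarily the authors' exact slice-stacking formulation) is: take the trunk points in a thin slice around breast height, project them to the XY plane, and derive a diameter from the hull perimeter. The 1.37 m breast height and 5 cm slice half-width below are assumptions of this sketch.

```python
# Sketch: convex-hull DBH from a breast-height slice of trunk points.
# Breast height (1.37 m) and the slice half-width are assumptions.
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_dbh(trunk_xyz, ground_z=0.0, slice_half_width=0.05):
    z = trunk_xyz[:, 2] - ground_z
    slice_xy = trunk_xyz[np.abs(z - 1.37) < slice_half_width, :2]
    hull = ConvexHull(slice_xy)
    # For a 2D hull, scipy's .area is the perimeter; for a near-circular
    # cross-section, perimeter = pi * diameter.
    return hull.area / np.pi

# Synthetic trunk ring (~30 cm diameter) to exercise the function.
rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, 2000)
r = 0.15 + rng.normal(0, 0.003, 2000)
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta),
                       rng.uniform(1.2, 1.5, 2000)])
print(f"estimated DBH: {100 * convex_hull_dbh(pts):.1f} cm")
```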
Figures
Figure 1: NeRF scene representation flow. Sparse images with corresponding poses are sampled using ray tracing to generate a 5D input vector comprising location (x, y, z) and viewing direction (θ, φ). A cascaded MLP learns the weights to map this 5D vector to an output color (r, g, b) and volume density σ. Volume rendering composites the learned rays into novel views.
Figure 2: Quadruped robot creating a dense LiDAR-inertial reconstruction in a forest environment (left). LIO-SAM visualization of the estimated trajectory (turquoise), loop-closure events (yellow), and tightly aligned LiDAR scans (magenta) (right).
Figure 3: TreeTool process applied to a forest NeRF reconstruction: (A) ground segmentation, (B) trunk segmentation, and (C) trunk modeling. Our tree modeling approach treats trees as stacks of convex-hull slices, which outperformed other approaches by 3–4× in DBH estimation accuracy.
Figure 4: Forest reconstructions produced by SLAM (bottom row) and NeRF (top row) methods for both datasets, with the corresponding data collection trajectories plotted alongside. In dataset A, we illustrate the effectiveness of segmentation between ground points (orange) and trees (violet). A z-axis color gradient enhances the visualization of the dataset B reconstructions, as this region included more complex ground-level vegetation. The figure also compares a zoomed-in section of a tree trunk: the NeRF reconstruction is approximately 4× denser than SLAM and of higher surface quality.
Figure 5: Four comparisons of the RANSAC and convex-hull modeling approaches. Deltas between the manual DBH and each modeling approach are given on the top line. RANSAC cylinder modeling consistently under-fits well-represented trunk projections; convex-hull DBH estimation outperformed RANSAC by 3–4×.
31 pages, 31280 KiB  
Article
Three-Dimensional Digital Documentation for the Conservation of the Prambanan Temple Cluster Using Guided Multi-Sensor Techniques
by Anindya Sricandra Prasidya, Irwan Gumilar, Irwan Meilano, Ikaputra Ikaputra, Rochmad Muryamto and Erlyna Nour Arrofiqoh
Heritage 2025, 8(1), 32; https://doi.org/10.3390/heritage8010032 - 16 Jan 2025
Viewed by 674
Abstract
The Prambanan Temple cluster is a world heritage site of significant value for humanity, with a multiple-zone cluster arrangement of highly ornamented towering temples and a Hindu architectural pattern design. It lies near the Opak Fault, at the foothills of Mount Merapi, on an unstable ground layer, and is surrounded by human activity in Yogyakarta, Indonesia. The site's vulnerability makes 3D digital documentation necessary for its conservation, but its complexity poses difficulties. This work aimed to address this challenge by using architectural pattern design (APD) to guide multi-sensor line-ups for documentation. First, APDs were established from the literature to derive the associated detail levels; then, multiple sensors and modes of light detection and ranging (Lidar) scanning and photogrammetry were utilized according to their detail requirements; and finally, point cloud data were processed, integrated, assessed, and validated by verifying the presence of the APD. The internal and external quality of each sensor's result showed root mean squared errors in the millimeter-to-centimeter range, with the terrestrial laser scanner (TLS) achieving the best accuracy, followed by aerial close-range and terrestrial-mode photogrammetry, and nadiral Lidar and photogrammetry. Two relative cloud-distance analyses of every point cloud model against the reference model (TLS) returned mean distance values in the millimeter-to-centimeter range. Furthermore, the point cloud models from the different sensors visually complemented each other. We therefore conclude that our approach is promising for complex heritage documentation. These results provide a solid foundation for future analyses, particularly in assessing structural vulnerabilities and informing conservation strategies.
(This article belongs to the Special Issue 3D Reconstruction of Cultural Heritage and 3D Assets Utilisation)
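The C2C (cloud-to-cloud) distance check mentioned in the abstract reduces to nearest-neighbor distances against the reference TLS model; a minimal sketch with synthetic stand-in clouds:

```python
# Sketch: C2C distances from an evaluated cloud to the reference (TLS)
# cloud via a k-d tree. Both clouds here are synthetic placeholders.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
reference_tls = rng.uniform(0, 20, size=(50_000, 3))
evaluated = reference_tls[:20_000] + rng.normal(0, 0.01, (20_000, 3))

distances, _ = cKDTree(reference_tls).query(evaluated, k=1)
print(f"mean C2C distance: {1000 * distances.mean():.1f} mm, "
      f"95th percentile: {1000 * np.percentile(distances, 95):.1f} mm")
```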
Figures
Figure 1: The Prambanan Temple cluster’s location and its concentric layout arrangement.
Figure 2: The proposed workflow. Solid arrows represent the main data flow; dashed arrows represent the supporting data flow.
Figure 3: Data acquisition concept for the three scale levels. Darker colors represent larger scale levels; lighter colors represent smaller scale levels. Google Earth and SketchUp 3D Warehouse provide the background images for the left and center illustrations, respectively.
Figure 4: Distribution of GDCPs (a) and FDCPs (b).
Figure 5: Summary of the quality assessment of each sensor processing result.
Figure 6: Orthophoto of the site covering the first and second courtyards.
Figure 7: Registered and georeferenced 3D point clouds from (a) TLS 2020 and (b) TLS 2023.
Figure 8: Point cloud model from aerial UAV Lidar.
Figure 9: Dense point cloud model from CR-UAVP integrated with terrestrial photogrammetry.
Figure 10: Point cloud results from multiple sensors and their combination (example: the Brahma Temple): (a) nadiral UAV Lidar; (b) TLS; (c) CR-UAV photogrammetry; (d) terrestrial photogrammetry; (e) combination of the point clouds from each sensor. The true color and texture of the temple are shown in (b–d), while (a) displays only a scalar color scale based on the Z coordinate.
Figure 11: C2C and M3C2 Euclidean distance analysis results for the six main temples of interest.
Figure 12: C2C results for the corridor part of the Shiva Temple.
Figure 13: Four sample measurements captured on the base part (Bhurloka) of a single temple.
Figure 14: Rectangular-based planimetric proportions of the Garuda, Nandhi, and Hamsha Temples.
Figure 15: Parameters of the Cartesian–cruciform-based planimetric proportion.
Figure 16: Cartesian–cruciform planimetric proportion of the Shiva, Vishnu, and Brahma Temples (Bhurloka part).
Figure 17: Cartesian–cruciform planimetric proportion of the Shiva, Vishnu, and Brahma Temples (Bhuvarloka part).
Figure 18: Cartesian–cruciform planimetric proportion of the Garuda, Nandhi, and Hamsha Temples (Bhuvarloka part).
Figure 19: Cartesian–cruciform planimetric proportion of the Shiva, Vishnu, and Brahma Temples (Svarloka part).
Figure 20: Cartesian–cruciform planimetric proportion of the Garuda, Nandhi, and Hamsha Temples (Svarloka part).
43 pages, 19436 KiB  
Article
Quantification of Forest Regeneration on Forest Inventory Sample Plots Using Point Clouds from Personal Laser Scanning
by Sarah Witzmann, Christoph Gollob, Ralf Kraßnitzer, Tim Ritter, Andreas Tockner, Lukas Moik, Valentin Sarkleti, Tobias Ofner-Graff, Helmut Schume and Arne Nothdurft
Remote Sens. 2025, 17(2), 269; https://doi.org/10.3390/rs17020269 - 14 Jan 2025
Viewed by 648
Abstract
The presence of sufficient natural regeneration in mature forests is regarded as a pivotal criterion for their future stability, ensuring seamless reforestation following final harvesting operations or forest calamities. Consequently, forest regeneration is typically quantified as part of forest inventories to monitor its occurrence and development over time. Light detection and ranging (LiDAR) technology, particularly ground-based LiDAR, has emerged as a powerful tool for assessing typical forest inventory parameters, providing high-resolution, three-dimensional data on forest structure. It is therefore logical to attempt a LiDAR-based quantification of forest regeneration, which could greatly enhance area-wide monitoring and further support sustainable forest management through data-driven decision making. However, examples in the literature are relatively sparse, with most relevant studies focusing on indirect quantification of understory density from airborne laser scanning (ALS) data. The objective of this study is to develop an accurate and reliable method for estimating regeneration coverage from data obtained through personal laser scanning (PLS). To this end, 19 forest inventory plots were scanned with both a personal and, for reference purposes, a high-resolution terrestrial laser scanner (TLS). The voxelated point clouds obtained from the personal laser scanner were converted into raster images providing either the canopy height, the total number of filled voxels (containing at least one LiDAR point), or the ratio of filled voxels to the total number of voxels. Local maxima in these raster images, assumed likely to contain tree saplings, were then used as seed points for a raster-based tree segmentation, which was employed to derive the final regeneration coverage estimate. The results showed that the estimates differed from the reference by approximately −10 to +10 percentage points, with an average deviation of around 0 percentage points. In contrast, visually estimated regeneration coverages on the same forest plots deviated from the reference by between −20 and +30 percentage points, approximately −2 percentage points on average. These findings highlight the potential of PLS data for automated forest regeneration quantification, which could be further expanded to include a broader range of data collected during LiDAR-based forest inventory campaigns.
(This article belongs to the Section Forest Remote Sensing)
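The raster conversion and seed-point detection described in the abstract can be sketched as below: collapse the points into a 2D canopy-height raster and take local maxima as candidate sapling seeds. The 0.25 m cell size and 5-cell window are illustrative, not the paper's tuned parameters.

```python
# Sketch: rasterize points to a per-cell maximum-height image and find
# local maxima as segmentation seeds. Cell size and window are assumptions.
import numpy as np
from scipy import ndimage

def height_raster_and_seeds(points, cell=0.25, window=5):
    origin = points[:, :2].min(axis=0)
    ij = np.floor((points[:, :2] - origin) / cell).astype(int)
    raster = np.zeros(tuple(ij.max(axis=0) + 1))
    # Per-cell maximum height (a simple canopy height model).
    np.maximum.at(raster, (ij[:, 0], ij[:, 1]), points[:, 2])
    # A cell is a seed if it equals the maximum of its neighborhood.
    local_max = raster == ndimage.maximum_filter(raster, size=window)
    seeds = np.argwhere(local_max & (raster > 0))
    return raster, seeds
```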
Figures
Graphical abstract.
Figure 1: GeoSLAM Zeb Horizon (left) and RIEGL VZ-600i (right) during fieldwork.
Figure 2: Screenshot from the GeoAce app (ITS Geo Solutions GmbH, Jena, Germany) during fieldwork. Green crosses mark the positions of marked trees; the red dot marks the plot center. The positions of the surveyed trees were visualized in CloudCompare to avoid clipping trees smaller or larger than the predefined threshold.
Figure 3: Schematic illustration of the general workflow.
Figure 4: Quality measures for regeneration quantification as functions of voxel resolution, class threshold, and cloth resolution. Different class thresholds and cloth resolutions are represented by different colors and line types, respectively.
Figure 5: Quality measures for regeneration quantification as functions of the thresholds for tree detection and segmentation. Different detection thresholds are represented by different colors, as described in the legend.
Figure 6: Comparison of deviations achieved with M1–M5 across all parameter combinations.
Figure 7: Comparison of deviations achieved with M1–M5 using optimized parameter combinations (gray) and deviations of the visual estimations (green; Op 1 to Op 3 are the three different operators).
Figure 8: Comparison of regeneration coverages: visual estimates in grey; the best LiDAR-based methods (M2 and M5) in blue and green, respectively.
Figure 9: Plot-wise depiction of estimated and reference regeneration coverages. The visual estimates (grey bars) are averaged across all three operators, with error bars giving the highest and lowest estimates. Reference coverages are shown in black; coverages derived from M2 and M5 are shown in blue and green, respectively.
Figure 10: Depiction of Plot 7. (a) The point cloud of the plot. In (b,c), red lines outline the manually cropped reference tree crowns and blue lines outline the tree crowns as segmented with M2 (b) and M5 (c). The pixel colors and color scales in (b,c) represent height (in meters) and voxel density, respectively.
Figures A1–A19: Calculated crown areas for each method, one figure per plot (Plots 1–4 and 6–20). The raster image in the background is the one on which the final crown segmentation was performed. Red lines represent the reference crown areas; blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are given for comparison. Pixel colors and color scales represent height in meters (M1, M3, and M4), voxel count (M2), or voxel density (M5).
Full article ">Figure A20
<p>Schematical illustration of methods M1, M2, M3, and M5, starting from the step of treetop detection.</p>
Full article ">
23 pages, 6882 KiB  
Article
Estimation of Individual Tree Structure and Wood Density Parameters for Ginkgo biloba Using Terrestrial LiDAR and Resistance Drill Data
by Ting Li, Xin Shen, Kai Zhou and Lin Cao
Remote Sens. 2025, 17(1), 99; https://doi.org/10.3390/rs17010099 - 30 Dec 2024
Viewed by 559
Abstract
Individual tree structure and wood density are important indicators of forest quality and key parameters for biomass calculation. Exploring the extraction accuracy of individual tree structure parameters based on LiDAR technology, as well as the correlations among individual tree structure parameters, resistance [...] Read more.
Individual tree structure and wood density are important indicators of forest quality and key parameters for biomass calculation. Exploring the extraction accuracy of individual tree structure parameters based on LiDAR technology, as well as the correlations among individual tree structure parameters, resistance value, and wood density, can provide new ideas for predicting wood density. Taking a 23-year-old Ginkgo plantation as the research object, tree QSMs (Quantitative Structure Models) were constructed based on terrestrial and backpack LiDAR point clouds, and individual tree structure parameters were extracted. The accuracy of estimating structure parameters from the two types of point clouds was compared. A wood density prediction model was constructed using principal component analysis based on the resistance, diameter, tree height, and crown width. Accuracy verification showed that the individual tree structure parameters (DBH, tree height, and crown width) extracted from the QSMs based on TLS and BLS all had R2 > 0.8. The estimation accuracy of DBH based on TLS was slightly higher than that based on BLS, while the estimation accuracy of tree height and crown width based on TLS was slightly lower than that based on BLS. BLS has great potential for accurately obtaining forest structure information, improving the efficiency of forest information collection, and supporting forest resource monitoring, forest carbon sink estimation, and forest ecological research. The feasibility of predicting wood basic density from wood resistance alone (R2 = 0.51) and from resistance combined with DBH, tree height, and crown width (R2 = 0.49) was relatively high. Accurate and non-destructive estimation of the wood characteristics of standing timber can guide forest cultivation and management and promote the sustainable management and utilization of forests. Full article
(This article belongs to the Section Forest Remote Sensing)
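As a rough illustration of the kind of model the abstract describes, the sketch below chains principal component analysis with linear regression to predict wood basic density from resistance, DBH, tree height, and crown width. It is a minimal sketch on synthetic placeholder data (all column values and coefficients are invented), not the authors' pipeline or measurements.

```python
# Minimal sketch: PCA + linear regression for wood basic density.
# All numbers below are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 30                                 # 30 sample trees, as in the abstract
X = np.column_stack([
    rng.normal(250, 40, n),            # drilling resistance (relative units)
    rng.normal(18, 3, n),              # DBH (cm)
    rng.normal(12, 2, n),              # tree height (m)
    rng.normal(4, 0.8, n),             # crown width (m)
])
# Synthetic basic density (g/cm^3), loosely driven by resistance
y = 0.30 + 0.0005 * X[:, 0] + rng.normal(0, 0.02, n)

# Standardize, reduce the correlated predictors, then regress
model = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
model.fit(X, y)
print(f"training R^2: {model.score(X, y):.2f}")
```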
Show Figures

Figure 1

Figure 1
<p>Technical roadmap of the research. BLS: Backpack Laser Scanning; TLS: Terrestrial Laser Scanning; QSM: Quantitative Structure Model; DBH: diameter at breast height; H: tree height; RMSE: Root Mean Square Error.</p>
Full article ">Figure 2
<p>(<b>a</b>) The individual tree scanned by TLS and BLS and drilled by the Resistograph. (<b>b</b>) The photo of the sample plot. (<b>c</b>) The data acquisition route of BLS (the five-pointed star marks the starting point of the BLS scanning route; the red part represents high terrain; the blue part represents low terrain).</p>
Full article ">Figure 3
<p>(<b>a</b>) The operation of the Resistograph in field measurement; (<b>b</b>) the operation of the growth cone in field measurement; (<b>c</b>) wood core samples soaked in water; (<b>d</b>) wood core samples during oven drying treatment.</p>
Full article ">Figure 4
<p>The results of individual tree segmentation. (<b>a</b>) Top view of the result of individual tree segmentation; (<b>b</b>) 3D view of the result of individual tree segmentation.</p>
Full article ">Figure 5
<p>Scatterplots of individual tree structure parameters: reference values versus values extracted from the QSM based on TLS. (<b>a</b>) Scatterplot of DBH; (<b>b</b>) scatterplot of height; (<b>c</b>) scatterplot of crown width.</p>
Full article ">Figure 6
<p>Box plots of the reference values of branch parameters. (<b>a</b>) Box plot of the number of branches for the first-, second-, and third-order branches; (<b>b</b>) box plot of branch length for the first-, second-, and third-order branches; (<b>c</b>) box plot of branch diameter for the first-, second-, and third-order branches; (<b>d</b>) box plot of branch angle for the first-, second-, and third-order branches.</p>
Full article ">Figure 7
<p>Quantitative structure models of three sample trees based on the TLS point cloud and BLS point cloud. (<b>a</b>) TLS point cloud; (<b>b</b>) quantitative structure models based on the TLS point cloud; (<b>c</b>) BLS point cloud; (<b>d</b>) quantitative structure models based on the BLS point cloud. Groups (<b>A</b>–<b>C</b>) correspond to three individual trees.</p>
Full article ">Figure 8
<p>The number of branch levels of each tree: reference values versus values extracted from the QSMs based on TLS and BLS.</p>
Full article ">Figure 9
<p>The number of branches at each branch level: reference values versus values extracted from the QSMs based on TLS and BLS. Panels (<b>a</b>–<b>l</b>) correspond to 12 individual trees.</p>
Full article ">Figure 10
<p>The length of branches at each branch level: reference values versus values extracted from the QSMs based on TLS and BLS. Panels (<b>a</b>–<b>l</b>) correspond to 12 individual trees.</p>
Full article ">Figure 11
<p>The angle of branches at each branch level: reference values versus values extracted from the QSMs based on TLS and BLS. Panels (<b>a</b>–<b>l</b>) correspond to 12 individual trees.</p>
Full article ">Figure 12
<p>(<b>a</b>) The correlation analysis between resistance and basic density of 30 sample trees; (<b>b</b>) the correlation analysis between resistance and basic density of three groups (small, medium, and large) divided by diameter.</p>
Full article ">
28 pages, 1683 KiB  
Article
Energy-Saving Geospatial Data Storage—LiDAR Point Cloud Compression
by Artur Warchoł, Karolina Pęzioł and Marek Baścik
Energies 2024, 17(24), 6413; https://doi.org/10.3390/en17246413 - 20 Dec 2024
Viewed by 1011
Abstract
In recent years, the growth of digital data has been unimaginable. This also applies to geospatial data. One of the largest data types is LiDAR point clouds. Their large volumes on disk, both at the acquisition and processing stages, and in the final [...] Read more.
In recent years, the growth of digital data has been unimaginable. This also applies to geospatial data. One of the largest data types is LiDAR point clouds. Their large volumes on disk, both at the acquisition and processing stages and in the final versions, translate into a high demand for disk space and therefore electricity. It is therefore clear that lossless compression of these datasets is a good way to reduce energy consumption, lower the carbon footprint of the activity, and promote sustainability in the digitization of the industry. In this article, a new format for point clouds, 3DL, is presented, and its effectiveness is compared with 21 available formats that can contain LiDAR data. A total of 404 processes were carried out to validate the 3DL file format. The validation was based on four LiDAR point clouds stored in LAS files: two files derived from ALS (airborne laser scanning), one in a local coordinate system and the other in PL-2000, and two obtained by TLS (terrestrial laser scanning), with the same georeferencing (local and national PL-2000). During the research, each LAS file was saved in 101 different ways across 22 different formats, and the results were then compared in several ways (by coordinate system, by ALS versus TLS data, by both data types within a single coordinate system, and by processing time). The validated solution (3DL) achieved CR (compression rate) results of around 32% for ALS data and around 42% for TLS data, while the best existing solutions reached 15% for ALS and 34% for TLS. On the other hand, the worst method produced a file of up to 424.92% of the original size (ALS_PL2000). This significant reduction in file size translates into substantially lower energy consumption during the storage of LiDAR point clouds, their transmission over the internet, and copy/transfer operations. For all solutions, rankings were developed according to the CR and CT (compression time) parameters. Full article
(This article belongs to the Special Issue Low-Energy Technologies in Heavy Industries)
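To make the two comparison metrics concrete, the sketch below measures a compression rate (CR, output size as a percentage of the source LAS size, so lower is better and values above 100% mean the file grew) and a compression time (CT) for one external converter. The tool invocation and file names are hypothetical placeholders, not the paper's benchmark harness.

```python
# Minimal sketch: CR (%) and CT (s) for one compression run.
import os
import subprocess
import time

def compression_stats(src_las: str, cmd: list[str], out_file: str) -> tuple[float, float]:
    """Run one external converter and report CR (%) and CT (s) for its output."""
    t0 = time.perf_counter()
    subprocess.run(cmd, check=True)    # the compressor itself is external
    ct = time.perf_counter() - t0
    cr = 100.0 * os.path.getsize(out_file) / os.path.getsize(src_las)
    return cr, ct

# Hypothetical invocation (tool and paths are placeholders):
# cr, ct = compression_stats(
#     "als_pl2000.las",
#     ["laszip", "-i", "als_pl2000.las", "-o", "als_pl2000.laz"],
#     "als_pl2000.laz",
# )
# print(f"CR = {cr:.2f}%, CT = {ct:.2f} s")
```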
Show Figures

Figure 1

Figure 1
<p>Workflow of tasks in the LiDAR point cloud data storage strategy.</p>
Full article ">Figure 2
<p>Visualisation by RGB values of the point clouds used for the survey. (<b>A</b>) Top view and (<b>C</b>) vertical cross-section of an ALS point cloud; (<b>B</b>) top view and (<b>D</b>) vertical cross-section of a TLS point cloud.</p>
Full article ">
14 pages, 18753 KiB  
Article
Assessing Forest Resources with Terrestrial and Backpack LiDAR: A Case Study on Leaf-On and Leaf-Off Conditions in Gari Mountain, Hongcheon, Republic of Korea
by Chiung Ko, Jintack Kang, Jeongmook Park and Minwoo Lee
Forests 2024, 15(12), 2230; https://doi.org/10.3390/f15122230 - 18 Dec 2024
Viewed by 540
Abstract
In the Republic of Korea, the digital transformation of forest data has emerged as a critical priority at the governmental level. To support this effort, numerous case studies have been conducted to collect and analyze forest data. This study evaluated the accuracy of forest [...] Read more.
In the Republic of Korea, the digital transformation of forest data has emerged as a critical priority at the governmental level. To support this effort, numerous case studies have been conducted to collect and analyze forest data. This study evaluated the accuracy of forest resource assessment methods using terrestrial laser scanning (TLS) and backpack personal laser scanning (BPLS) under leaf-on and leaf-off conditions in the Gari Mountain Forest Management Complex, Hongcheon, Republic of Korea. The research was conducted across six sample plots representing low, medium, and high stand densities, dominated by Larix kaempferi and Pinus koraiensis. Conventional field survey methods and LiDAR technologies were used to compare key forest attributes such as tree height and volume. The results revealed that leaf-off LiDAR data exhibited higher accuracy in capturing tree height and canopy structures, particularly in high-density plots. In contrast, during the leaf-on season, measurements of the understory vegetation and lower canopy were hindered by foliage obstruction, reducing precision. Seasonal differences significantly impacted LiDAR measurement accuracy, with leaf-off data providing a clearer and more reliable representation of forest structures. This study underscores the necessity of considering seasonal conditions to improve the accuracy of LiDAR-derived metrics. It offers valuable insights for enhancing forest inventory practices and advancing the application of remote sensing technologies in forest management. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
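A minimal sketch of the residual analysis behind Figures 8 and 9 might look like the following: bias and RMSE of LiDAR-derived tree heights against field references, grouped by equipment (TLS/BPLS) and season (leaf-on/leaf-off). The column names and values are illustrative assumptions, not the study's data layout.

```python
# Minimal sketch: per-equipment, per-season bias and RMSE of tree heights.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "equipment": ["TLS", "TLS", "BPLS", "BPLS"] * 3,
    "season":    ["leaf-on", "leaf-off"] * 6,
    "h_field":   rng.normal(22, 3, 12),                    # field height (m)
})
df["h_lidar"] = df["h_field"] + rng.normal(-0.3, 0.8, 12)  # LiDAR height (m)

residuals = (
    df.assign(residual=df["h_lidar"] - df["h_field"])
      .groupby(["equipment", "season"])["residual"]
      .agg(bias="mean", rmse=lambda r: float(np.sqrt((r ** 2).mean())))
)
print(residuals)
```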
Show Figures

Figure 1

Figure 1
<p>Comparison of photos taken during the growing season and dormant season.</p>
Full article ">Figure 2
<p>Map showing the location of sample points within the Gari Mountain Forest Management Complex.</p>
Full article ">Figure 3
<p>Methodology for scanning plots using LiDAR (Left: TLS; Right: BPLS).</p>
Full article ">Figure 4
<p>The overall flow chart of the study.</p>
Full article ">Figure 5
<p>Comparison of diameter at breast height (DBH) point density between equipment types (<b>A1</b>: BPLS_Leaf-On; <b>A2</b>: BPLS_Leaf-Off; <b>B1</b>: TLS_Leaf-On; <b>B2</b>: TLS_Leaf-Off).</p>
Full article ">Figure 6
<p>Differences in trunk occlusion due to understory vegetation between growing and dormant seasons (<b>A1</b>: BPLS_Leaf-On; <b>A2</b>: BPLS_Leaf-Off; <b>B1</b>: TLS_Leaf-On; <b>B2</b>: TLS_Leaf-Off).</p>
Full article ">Figure 7
<p>Comparison of tree height point density between equipment types (<b>A1</b>: BPLS_Leaf-On; <b>A2</b>: BPLS_Leaf-Off; <b>B1</b>: TLS_Leaf-On; <b>B2</b>: TLS_Leaf-Off).</p>
Full article ">Figure 8
<p>Residual analysis of tree height by equipment and measurement period.</p>
Full article ">Figure 9
<p>Residual analysis of diameter at breast height (DBH) by equipment and measurement period.</p>
Full article ">
21 pages, 10310 KiB  
Article
Rapid Mapping: Unmanned Aerial Vehicles and Mobile-Based Remote Sensing for Flash Flood Consequence Monitoring (A Case Study of Tsarevo Municipality, South Bulgarian Black Sea Coast)
by Stelian Dimitrov, Bilyana Borisova, Ivo Ihtimanski, Kalina Radeva, Martin Iliev, Lidiya Semerdzhieva and Stefan Petrov
Urban Sci. 2024, 8(4), 255; https://doi.org/10.3390/urbansci8040255 - 16 Dec 2024
Viewed by 1248
Abstract
This research seeks to develop and test a rapid mapping approach using unmanned aerial vehicles (UAVs) and terrestrial laser scanning to provide precise, high-resolution spatial data for urban areas right after disasters. This mapping aims to support efforts to protect the population and [...] Read more.
This research seeks to develop and test a rapid mapping approach using unmanned aerial vehicles (UAVs) and terrestrial laser scanning to provide precise, high-resolution spatial data for urban areas right after disasters. This mapping aims to support efforts to protect the population and infrastructure while analyzing the situation in affected areas. It focuses on flood-prone regions that lack modern hydrological data and where regular monitoring is absent. The study was conducted in resort villages and adjacent catchments on Bulgaria's southern Black Sea coast, an area dominated by maritime tourism, after a flash flood on 5 September 2023 caused human casualties and severe material damage. The resulting field data, with a spatial resolution of 3 to 5 cm/px, were used to trace the effects of the flood on topographic surface changes and structural disturbances. A flood simulation using the UAV data and a digital elevation model was performed. The appropriateness of contemporary land use forms and of the location of infrastructure within the catchments is discussed, as is the role of spatial data in analyzing the causal factors considered in risk assessment. The results confirm the applicability of rapid mapping in informing the activities of responders in the period of increased vulnerability following a flood. The results were used by Bulgaria's Ministry of Environment and Water to analyze the situation shortly after the disaster. Full article
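The 3 to 5 cm/px resolution quoted above is governed by the standard photogrammetric ground sampling distance relation, GSD = flight height × pixel pitch / focal length. The sketch below evaluates it; the sensor numbers are illustrative assumptions, not the S.O.D.A's published specification.

```python
# Minimal sketch: ground sampling distance for a nadir-looking frame camera.
def gsd_cm_per_px(flight_height_m: float, pixel_pitch_um: float,
                  focal_length_mm: float) -> float:
    """GSD in cm/px: ground footprint of one sensor pixel at nadir."""
    return flight_height_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3) * 100.0

# Illustrative values only: a 2.4 um pixel pitch and 10.6 mm lens flown at 150 m
print(f"{gsd_cm_per_px(150, 2.4, 10.6):.1f} cm/px")   # ~3.4 cm/px
```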
Show Figures

Figure 1

Figure 1
<p>The 24-h precipitation total (mm) for 6 September 2023 in Bulgaria; the map legend reads “24 h валеж” (“24-h precipitation” in Bulgarian). Credit: National Institute of Meteorology and Hydrology Bulgaria.</p>
Full article ">Figure 2
<p>Study area map.</p>
Full article ">Figure 3
<p>S.O.D.A Photogrammetry sensor (<b>a</b>) and EbeeX fixed-wing UAS (<b>b</b>).</p>
Full article ">Figure 4
<p>Microdrones MD LiDAR 1000 (<b>a</b>) and Velodyne PUCK VLP-16 (<b>b</b>).</p>
Full article ">Figure 5
<p>ZEB Horizon Geo SLAM (<b>a</b>) and its accessories (<b>b</b>).</p>
Full article ">Figure 6
<p>Sediment cones below the mouth of the Cherna River.</p>
Full article ">Figure 7
<p>Restoration of the floodplain at the lower end of the Cherna River.</p>
Full article ">Figure 8
<p>Change in the transverse profile of the bed of the Cherna River due to the flood.</p>
Full article ">Figure 9
<p>Visualization of the damage to the bridge on Lisevo Dere, downstream of the village of Izgrev (model from laser scanning).</p>
Full article ">Figure 10
<p>Bridge structure in the central part of the town of Tsarevo (model from laser scanning).</p>
Full article ">Figure 11
<p>The ruined bridge at the lower end of the Cherna River.</p>
Full article ">Figure 12
<p>Digital surface (1) and digital terrain (2) models.</p>
Full article ">Figure 13
<p>Areas with logging and lack of forest vegetation—terrain profile.</p>
Full article ">Figure 14
<p>Cross-section at the collapsed bridge: after the flood in September 2023 and today (June 2024).</p>
Full article ">
14 pages, 17262 KiB  
Article
Analyzing the Accuracy of Satellite-Derived DEMs Using High-Resolution Terrestrial LiDAR
by Aya Hamed Mohamed, Mohamed Islam Keskes and Mihai Daniel Nita
Land 2024, 13(12), 2171; https://doi.org/10.3390/land13122171 - 13 Dec 2024
Viewed by 741
Abstract
The accurate estimation of Digital Elevation Models (DEMs) derived from satellite data is critical for numerous environmental applications. This study evaluates the accuracy and reliability of two satellite-derived elevation models, the ALOS World 3D and SRTM DEMs, specifically for their application in hydrological [...] Read more.
The accurate estimation of Digital Elevation Models (DEMs) derived from satellite data is critical for numerous environmental applications. This study evaluates the accuracy and reliability of two satellite-derived elevation models, the ALOS World 3D and SRTM DEMs, specifically for their application in hydrological modeling. A comparative analysis with Terrestrial Laser Scanning (TLS) measurements assessed the agreement between these datasets. Multiple linear regression models were utilized to evaluate the relationships between the datasets and provide detailed insights into their accuracy and biases. The results indicate significant correlations between the satellite DEMs and TLS measurements, with adjusted R-square values of 0.8478 for ALOS and 0.955 for the SRTM. To quantify the average difference, root mean square error (RMSE) values were calculated as 10.43 m for ALOS and 5.65 m for the SRTM. Additionally, slope and aspect analyses were performed to highlight terrain characteristics across the DEMs. Slope analysis showed a statistically significant negative correlation between SRTM and TLS slopes (R2 = 0.16, p < 4.47 × 10<sup>−10</sup>), indicating a weak relationship, while no significant correlation was observed between ALOS and TLS slopes. Aspect analysis showed significant positive correlations of both ALOS and the SRTM with the TLS aspect, capturing 30.21% of the variance. These findings demonstrate the accuracy of satellite-derived elevation models in representing terrain features relative to high-resolution terrestrial data. Full article
(This article belongs to the Section Land – Observation and Monitoring)
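A minimal sketch of the agreement metrics reported above (RMSE and adjusted R2 between satellite DEM cells and TLS reference cells) could look like this; the elevation arrays are synthetic stand-ins, not the study's grid cells.

```python
# Minimal sketch: RMSE and adjusted R^2 between a DEM and TLS references.
import numpy as np

def rmse(pred: np.ndarray, ref: np.ndarray) -> float:
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def adjusted_r2(pred: np.ndarray, ref: np.ndarray, n_predictors: int = 1) -> float:
    ss_res = np.sum((ref - pred) ** 2)
    ss_tot = np.sum((ref - ref.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    n = ref.size
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)

rng = np.random.default_rng(3)
tls = rng.uniform(400.0, 900.0, 500)     # TLS reference cell elevations (m)
dem = tls + rng.normal(0.0, 5.6, 500)    # satellite DEM with ~5.6 m noise
print(f"RMSE = {rmse(dem, tls):.2f} m, adjusted R^2 = {adjusted_r2(dem, tls):.3f}")
```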
Show Figures

Figure 1

Figure 1
<p>Geographical location of the study area.</p>
Full article ">Figure 2
<p>Summary of data processing and analysis workflow.</p>
Full article ">Figure 3
<p>Analyzing the datasets using a grid-cell-based approach.</p>
Full article ">Figure 4
<p>Slope analysis of satellite-derived products (ALOS and SRTM) using grid cell analysis.</p>
Full article ">Figure 5
<p>Analyzing the TLS slope using a grid-cell-based approach.</p>
Full article ">Figure 6
<p>Aspect analysis of satellite-derived products (ALOS and SRTM).</p>
Full article ">Figure 7
<p>Analyzing the TLS aspect using a grid-cell-based approach.</p>
Full article ">Figure 8
<p>Comparison of satellite-derived DEMs (SRTM and ALOS) with TLS measurements. (<b>a</b>) Elevation values, (<b>b</b>) slope values, and (<b>c</b>) aspect values for each model.</p>
Figure 8 Cont.">
Full article ">
21 pages, 54945 KiB  
Article
Efficient Registration of Airborne LiDAR and Terrestrial LiDAR Point Clouds in Forest Scenes Based on Single-Tree Position Consistency
by Xiaolong Cheng, Xinyu Liu, Yuemei Huang, Wei Zhou and Jie Nie
Forests 2024, 15(12), 2185; https://doi.org/10.3390/f15122185 - 12 Dec 2024
Viewed by 670
Abstract
Airborne LiDAR (ALS) and terrestrial LiDAR (TLS) data integration provides complementary perspectives for acquiring detailed 3D forest information. However, challenges in registration arise due to feature instability, low overlap, and differences in cross-platform point cloud density. To address these issues, this study proposes [...] Read more.
Airborne LiDAR (ALS) and terrestrial LiDAR (TLS) data integration provides complementary perspectives for acquiring detailed 3D forest information. However, challenges in registration arise due to feature instability, low overlap, and differences in cross-platform point cloud density. To address these issues, this study proposes an automatic point cloud registration method based on the consistency of the single-tree position distribution in multi-species and complex forest scenes. In this method, single-tree positions are extracted as feature points using the Stepwise Multi-Form Fitting (SMF) technique. A novel feature point matching method is proposed by constructing a polar coordinate system, which achieves fast horizontal registration. Then, the Z-axis translation is determined through the integration of Cloth Simulation Filtering (CSF) and grid-based methods. Finally, the Iterative Closest Point (ICP) algorithm is employed to perform fine registration. The experimental results demonstrate that the method achieves high registration accuracy across four forest plots of varying complexity, with root-mean-square errors of 0.0423 m, 0.0348 m, 0.0313 m, and 0.0531 m. The registration accuracy is significantly improved compared to existing methods, and the time efficiency is enhanced by an average of 90%. This method offers robust and accurate registration performance in complex and diverse forest environments. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
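The fine-registration step named in the abstract is standard ICP. A minimal sketch using Open3D's stock point-to-point ICP is shown below; the file names are hypothetical, and the authors' coarse alignment (single-tree feature extraction, polar-coordinate matching, and CSF-based Z translation) is replaced here by an identity placeholder.

```python
# Minimal sketch: refine a coarse TLS->ALS alignment with point-to-point ICP.
import numpy as np
import open3d as o3d

als = o3d.io.read_point_cloud("als_plot.pcd")   # hypothetical input files
tls = o3d.io.read_point_cloud("tls_plot.pcd")

coarse_T = np.eye(4)   # stand-in for the coarse horizontal + Z alignment

result = o3d.pipelines.registration.registration_icp(
    tls, als,
    max_correspondence_distance=0.5,            # metres; tune to point density
    init=coarse_T,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print(f"fitness = {result.fitness:.3f}, inlier RMSE = {result.inlier_rmse:.4f} m")
print(result.transformation)                    # 4x4 transform refining coarse_T
```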
Show Figures

Figure 1

Figure 1
<p>Visualization of the original point clouds for the four sample plots: (<b>a</b>) ALS point cloud data; (<b>b</b>) TLS point cloud data.</p>
Full article ">Figure 2
<p>Flowchart of the point cloud registration method.</p>
Full article ">Figure 3
<p>Visualization of the results of fitting the trunk and crown to a single tree.</p>
Full article ">Figure 4
<p>Single-tree segmentation results: (<b>a</b>) single trees that were successfully fitted; (<b>b</b>) single trees where trunk features could not be detected; (<b>c</b>) multiple trees incorrectly identified as a single tree; (<b>d</b>) branches of a single tree incorrectly identified as a separate tree.</p>
Full article ">Figure 5
<p>Construction of a polar coordinate system: (<b>a</b>) based on single-tree feature points of TLS; (<b>b</b>) based on single-tree feature points of ALS.</p>
Full article ">Figure 6
<p>Polar coordinate system matching: (<b>a</b>) two point cloud distributions, where red points represent TLS point cloud data and blue points represent ALS point cloud data; (<b>b</b>) unsuccessful polar coordinate matching; (<b>c</b>) successful polar coordinate matching.</p>
Full article ">Figure 7
<p>Visualization of trunk fitting results: (<b>a</b>) fitting of a regular tree trunk; (<b>b</b>) fitting of a single-tree trunk with a low branching point; (<b>c</b>) fitting of a trunk section with short branches; (<b>d</b>) fitting of a trunk section with long branches; (<b>e</b>) fitting of an irregular tree trunk; (<b>f</b>) fitting of an irregular tree trunk with lower branches.</p>
Full article ">Figure 8
<p>Visualization of crown fitting results: (<b>a</b>) fitting of a regular tree crown; (<b>b</b>) fitting of a regular tree crown with a non-prominent apex; (<b>c</b>) fitting of a tree crown from dense coexisting TLS point clouds; (<b>d</b>) fitting of a tree crown from sparse coexisting TLS point clouds; (<b>e</b>) fitting of a tree crown from coexisting ALS point clouds; (<b>f</b>) fitting of a tree crown with non-prominent features.</p>
Full article ">Figure 9
<p>Feature point extraction results: (<b>a</b>) TLS point cloud data; (<b>b</b>) ALS point cloud data.</p>
Full article ">Figure 10
<p>Visualization of the coarse registration process for the four plots: ALS point cloud data are shown in red and TLS point cloud data are in black.</p>
Full article ">Figure 11
<p>Visualization of registration results and details: ALS point cloud data are shown in red and TLS point cloud data are in black. Each plot in the sample area includes two boxes to show more detailed views. The right-hand side contains zoomed-in images, with “1” and “2” corresponding to the enlarged views within the boxes.</p>
Full article ">Figure 12
<p>Registration accuracy results for different numbers of single-tree feature points.</p>
Full article ">Figure 13
<p>Impact of feature point extraction errors on registration accuracy under different offsets.</p>
Full article ">