Search Results (406)

Search Parameters:
Keywords = terrestrial LiDAR

34 pages, 11382 KiB  
Article
Evaluation of Two-Dimensional DBH Estimation Algorithms Using TLS
by Jorge Luis Compeán-Aguirre, Pablito Marcelo López-Serrano, José Luis Silván-Cárdenas, Ciro Andrés Martínez-García-Moreno, Daniel José Vega-Nieva, José Javier Corral-Rivas and Marín Pompa-García
Forests 2024, 15(11), 1964; https://doi.org/10.3390/f15111964 - 7 Nov 2024
Viewed by 433
Abstract
Terrestrial laser scanning (TLS) has become a vital tool in forestry for accurately measuring tree parameters, such as diameter at breast height (DBH). However, its application in Mexican forests remains underexplored. This study evaluates the performance of five two-dimensional DBH estimation algorithms (Nelder–Mead, least squares, Hough transform, RANSAC, and convex hull) within a temperate Mexican forest and explores their broader applicability across diverse ecosystems, using published point cloud data from various scanning devices. Results indicate that algorithm accuracy is influenced by local factors like point cloud density, occlusion, vegetation, and tree structure. In the Mexican study area, the Nelder–Mead algorithm achieved the highest accuracy (R² = 0.98, RMSE = 1.59 cm, MAPE = 6.12%), closely followed by least squares (R² = 0.98, RMSE = 1.67 cm, MAPE = 6.42%), with different outcomes in other sites. These findings advance DBH estimation methods by highlighting the importance of tailored algorithm selection and environmental considerations, thereby contributing to more accurate and efficient forest management across various landscapes.
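For illustration, the step these algorithms share, fitting a circle to a 2D stem slice, can be sketched in a few lines. This is a minimal example, not the authors' implementation: an algebraic least-squares (Kåsa) circle fit plus a simple RANSAC wrapper, run on a hypothetical cross-section slice.

```python
import numpy as np

def fit_circle_lsq(xy):
    """Algebraic least-squares (Kåsa) circle fit to an (N, 2) point slice."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (cx, cy, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def fit_circle_ransac(xy, n_iter=200, tol=0.01, seed=0):
    """RANSAC wrapper: fit minimal 3-point samples, keep the circle with
    the most points within `tol` metres of its circumference."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        cx, cy, r = fit_circle_lsq(xy[rng.choice(len(xy), 3, replace=False)])
        d = np.abs(np.hypot(xy[:, 0] - cx, xy[:, 1] - cy) - r)
        if (n := (d < tol).sum()) > best_inliers:
            best, best_inliers = (cx, cy, r), n
    return best

# Hypothetical 1.3 m stem slice: noisy 15 cm radius circle + stray returns.
rng = np.random.default_rng(1)
theta = np.linspace(0, 2 * np.pi, 120)
stem = 0.15 * np.column_stack([np.cos(theta), np.sin(theta)])
stem += rng.normal(0, 0.003, stem.shape)
pts = np.vstack([stem, rng.uniform(-0.3, 0.3, (15, 2))])
cx, cy, r = fit_circle_ransac(pts)
print(f"estimated DBH ≈ {200 * r:.1f} cm")
```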
Figures:
Figure 1: Location of the study site in Durango, Mexico.
Figure 2: 3D structure of the tree, cylindrical section, and 2D projection of the diameter. The 3D figures are associated with a color map.
Figure 3: Circle fitting: (a) without outliers, (b) with uniform outliers.
Figure 4: Circle fitting with clustered outliers: (a) full circle, (b) half circle.
Figure 5: Comparison of DBH estimation methods across different regions using tree caliper as reference.
Figure 6: Response of the different fitting algorithms in tree circumference estimation.
Figure 7: Bubble chart of coefficients of determination (R² values) for different fitting methods across all regions.
Figure 8: Heat map of RMSE obtained from five fitting methods in six countries.
Figure 9: Heat map of MAPE obtained from the fitting methods.
Figure 10: Comprehensive summary of RMSE, MAPE, and R² across methods and regions.
Figure A1: Distribution of TLS scanner and trees in the study plot.
Figure A2: Forest inventory tag and fusion target for TLS scans. Black and white squares are LiDAR reference tags.
Figure A3: Tree circumference estimation across regions using different fitting algorithms.
Figure A4: Estimation of tree circumference in different regions using various fitting algorithms.
Figure A5: Scatter plots of diameter estimations in Austria, Guyana, and Indonesia. Points represent diameter data; red line shows R² fit.
Figure A6: Scatter plots of diameter estimations in Mexico, Peru, and Switzerland. Points represent diameter data; red line shows R² fit.
26 pages, 21893 KiB  
Article
An Example of Using Low-Cost LiDAR Technology for 3D Modeling and Assessment of Degradation of Heritage Structures and Buildings
by Piotr Kędziorski, Marcin Jagoda, Paweł Tysiąc and Jacek Katzer
Materials 2024, 17(22), 5445; https://doi.org/10.3390/ma17225445 - 7 Nov 2024
Viewed by 407
Abstract
This article examines the potential of low-cost LiDAR technology for 3D modeling and assessment of the degradation of historic buildings, using a section of the Koszalin city walls in Poland as a case study. Traditional terrestrial laser scanning (TLS) offers high accuracy but is expensive. The study assessed whether more accessible LiDAR options, such as those integrated into mobile devices like the Apple iPad Pro, can serve as viable alternatives. The study was conducted in two phases, first assessing measurement accuracy and then degradation detection, using tools such as the FreeScan Combo scanner and the Z+F 5016 IMAGER TLS. The results show that, while low-cost LiDAR is suitable for small-scale documentation, its accuracy decreases for larger, complex structures compared to TLS. Despite these limitations, the study suggests that low-cost LiDAR can reduce costs and improve access to heritage conservation, although further development of mobile applications is recommended.
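The accuracy-assessment phase of such a study reduces largely to a cloud-to-cloud (C2C) comparison of the low-cost scan against a TLS reference. A minimal sketch of that computation, assuming two already co-registered point clouds (the clouds below are synthetic stand-ins):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(test_pts, ref_pts):
    """Distance from each test point to its nearest reference point,
    the usual C2C measure for comparing two co-registered scans."""
    d, _ = cKDTree(ref_pts).query(test_pts, k=1)
    return d

# Synthetic stand-in clouds: ref = TLS scan, test = mobile scan + 1 cm noise.
rng = np.random.default_rng(0)
ref = rng.uniform(0, 5, (50_000, 3))
test = ref[:20_000] + rng.normal(0, 0.01, (20_000, 3))
d = cloud_to_cloud(test, ref)
print(f"mean C2C: {d.mean() * 1000:.1f} mm, "
      f"95th percentile: {np.percentile(d, 95) * 1000:.1f} mm")
```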
Figures:
Figure 1: Location of the object under study.
Figure 2: City plan with the existing wall sections plotted on a current orthophotomap [17].
Figure 3: Six fragments of walls that survive today, numbered from 1 to 6.
Figure 4: Workflow of the research program.
Figure 5: Dimensions and weights of the equipment used.
Figure 6: Locations of scanner positions.
Figure 7: Point clouds achieved using TLS.
Figure 8: Measurement results from 3DScannerApp for fragments D and M.
Figure 9: Location of selected measurement markers. (a) View of fragment D. (b) View of fragment M.
Figure 10: Cross-section through the acquired point clouds in relation to the reference cloud (green): (a) 3DScannerApp; (b) Pix4DCatch Captured; (c) Pix4DCatch Depth; (d) Pix4DCatch Fused.
Figure 11: Measurement results from the SiteScape application.
Figure 12: Differences between Stages 1 and 2 for city wall fragment D.
Figure 13: Differences between Stages 1 and 2 for city wall fragment M.
Figure 14: Location of selected defects where degradation has occurred.
Figure 15: Defect W1 projected onto the plane.
Figure 16: Cross-sections through defect W1.
Figure 17: Defect W2 projected onto the plane.
Figure 18: Cross-sections through defect W2.
Figure 19: Defect W3 projected onto the plane.
Figure 20: Cross-sections through defect W3.
Figure 21: Defect W4 projected onto the plane.
Figure 22: Cross-sections through defect W4.
Figure 23: Differences between Stages 1 and 2 for measurements taken with a handheld scanner.
Figure 24: Defect W2 projected onto the plane (handheld scanner).
Figure 25: Cross-sections through defect W2 (handheld scanner).
Figure 26: Defect W3 projected onto the plane (handheld scanner).
Figure 27: Cross-sections through defect W3 (handheld scanner).
Figure 28: Defect W4 projected onto the plane (handheld scanner).
Figure 29: Cross-sections through defect W4 (handheld scanner).
Figure 30: Example path of a single measurement with marked sample positions of the device.
Figure 31: Examples of errors created at corners with the device's trajectory marked: (a) SiteScape; (b) 3DScannerApp.
19 pages, 9602 KiB  
Article
Forest Aboveground Biomass Estimation Based on Unmanned Aerial Vehicle–Light Detection and Ranging and Machine Learning
by Yan Yan, Jingjing Lei and Yuqing Huang
Sensors 2024, 24(21), 7071; https://doi.org/10.3390/s24217071 - 2 Nov 2024
Viewed by 596
Abstract
Eucalyptus is a widely planted species in plantation forests because of its outstanding characteristics, such as its fast growth rate and high adaptability. Accurate and rapid prediction of Eucalyptus biomass is important for plantation forest management and for predicting carbon stock in terrestrial ecosystems. In this study, predictive biomass regression equations and machine learning algorithms, including multivariate linear stepwise regression (MLSR), support vector machine regression (SVR), and k-nearest neighbor (KNN), were analyzed and compared for constructing a predictive forest AGB model at the individual tree and stand scales. The models were built on forest parameters extracted by Unmanned Aerial Vehicle-Light Detection and Ranging (UAV LiDAR) and on variables screened by variable projection importance analysis, in order to select the best prediction method. The study found that the prediction accuracy of the natural-logarithm-transformed regression equations (R² = 0.873, RMSE = 0.312 t/ha, RRMSE = 0.0091) outperformed the machine learning algorithms at the individual tree scale. Among the machine learning models, the SVR model was the most accurate (R² = 0.868, RMSE = 7.932 t/ha, RRMSE = 0.231). UAV LiDAR data showed great potential for predicting the AGB of Eucalyptus trees, and the tree height parameter had the strongest correlation with AGB. In summary, combining UAV LiDAR data and machine learning algorithms to construct a predictive forest AGB model achieves high accuracy and provides a solution for carbon stock assessment and forest ecosystem assessment.
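As a rough illustration of the machine-learning side, an SVR model mapping plot-level LiDAR metrics to AGB can be sketched as follows. This is not the authors' pipeline; the feature set and data are hypothetical.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical plot-level LiDAR metrics: mean height, Lorey's height,
# canopy cover, point density; response is field-measured AGB (t/ha).
rng = np.random.default_rng(0)
X = rng.uniform([5, 8, 0.3, 50], [25, 30, 0.95, 400], (80, 4))
y = 2.1 * X[:, 1] + 30 * X[:, 2] + rng.normal(0, 3, 80)   # synthetic AGB

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R²: {scores.mean():.3f}")
```

Scaling the features before the RBF-kernel SVR matters here, since the raw metrics span very different ranges.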
(This article belongs to the Section Radar Sensors)
Figures:
Figure 1: Location of the study area.
Figure 2: Methods of individual tree segmentation; "Total" denotes the total number of individual tree segmentations; "TP" the number of true positives; "FP" the number of false positives; and "FN" the number of false negatives.
Figure 3: Inversion of forest AGB from LiDAR individual tree parameters; (a,b) fitting results of the single variable AvgHA against measured forest AGB; (c,d) fitting results of the single variable LorCHA against measured forest AGB; (e,f) fitting results of the two variables (AvgHA and CE) against measured forest AGB.
Figure 4: (a) Height distribution of individual trees in the study area; (b) spatial distribution of forest AGB based on individual tree parameters.
Figure 5: LiDAR feature variable importance ranking: green indicates significant characteristic variables, red indicates insignificant ones.
Figure 6: R² and RMSE for different kernel functions in the SVR model: (a) R² of the four kernel functions before and after variable screening; (b) RMSE of the four kernel functions before and after variable screening.
Figure 7: R² and RMSE for different k values in KNN models.
Figure 8: Three models for estimating AGB in forest stands: (a) MLR-predicted vs. measured AGB; (b) SVR-predicted vs. measured AGB; (c) KNN-predicted vs. measured AGB.
Figure 9: Spatial distribution of the AGB predicted by the SVR model.
13 pages, 6373 KiB  
Article
Mapping Forest Parameters to Model the Mobility of Terrain Vehicles
by Tomáš Mikita, Marian Rybansky, Dominika Krausková, Filip Dohnal, Ondřej Vystavěl and Sabina Hollmannová
Forests 2024, 15(11), 1882; https://doi.org/10.3390/f15111882 - 25 Oct 2024
Viewed by 418
Abstract
This study aims to evaluate the feasibility of using non-contact data collection methods, specifically UAV (unmanned aerial vehicle)-based and terrestrial laser scanning technologies, to assess forest stand passability, which is crucial for military operations. The research was conducted in a mixed forest stand in the Březina military training area, where tree positions and diameters at breast height (DBHs) were recorded. The study compared the effectiveness of different methods, including UAV RGB imaging, UAV-LiDAR, and handheld mobile laser scanning (HMLS), in detecting tree positions and estimating DBH. The results indicate that HMLS data provided the highest number of detected trees and the most accurate positioning relative to the reference measurements. UAV-LiDAR detected trees better than UAV RGB imaging, though both aerial methods struggled with canopy penetration in densely structured forests. The study also found significant variability in DBH estimation, especially in complex forest stands, highlighting the challenges of accurate tree detection in diverse environments. The findings suggest that while current non-contact methods show promise, further refinement and integration of data sources are necessary to improve their applicability for assessing forest passability in military or rescue contexts.
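The passability rule illustrated in the figures, that cells closer to a thick-stemmed tree than a vehicle's clearance are unpassable, can be sketched with a distance transform. A minimal example; the stem map, the 20 cm DBH cutoff, and the clearance distance are hypothetical values, not the study's.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Hypothetical stem map on a 100 m x 100 m plot: (x, y) in metres, DBH in cm.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, (300, 2))
dbh = rng.uniform(5, 50, 300)

res = 0.5                               # raster resolution (m)
n = int(100 / res)
obstacles = np.zeros((n, n), dtype=bool)
big = dbh > 20                          # trees a vehicle cannot push over
rows = np.clip((xy[big, 1] / res).astype(int), 0, n - 1)
cols = np.clip((xy[big, 0] / res).astype(int), 0, n - 1)
obstacles[rows, cols] = True

# Distance (m) from every cell to the nearest large tree; cells nearer
# than the assumed vehicle half-width + clearance are unpassable.
dist = distance_transform_edt(~obstacles) * res
passable = dist > 1.5
print(f"passable share of plot: {passable.mean():.1%}")
```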
(This article belongs to the Special Issue Modeling of Vehicle Mobility in Forests and Rugged Terrain)
Figures:
Figure 1: Location of research plot.
Figure 2: HMLS trajectory and location of measured trees above orthophoto background.
Figure 3: Location of tachymetrically measured trees (yellow) and detected trees (blue: HMLS; red: RGB-UAV; green: UAV-LiDAR).
Figure 4: Forest stand passability based on distance from trees with higher DBH (red: unpassable; green: passable).
Figure 5: Route planning based on cost distance analysis (black crosses: trees with DBH over 20 cm; yellow dots: smaller trees).
Figure 6: Comparison of point cloud density ((a) HMLS; (b) UAV-RGB; (c) UAV-LiDAR).
Figure 7: TAROS V2 6 × 6 (VOP Cz s. p.) overcoming tree obstacles.
22 pages, 6820 KiB  
Article
Deriving Vegetation Indices for 3D Canopy Chlorophyll Content Mapping Using Radiative Transfer Modelling
by Ahmed Elsherif, Magdalena Smigaj, Rachel Gaulton, Jean-Philippe Gastellu-Etchegorry and Alexander Shenkin
Forests 2024, 15(11), 1878; https://doi.org/10.3390/f15111878 - 25 Oct 2024
Viewed by 726
Abstract
Leaf chlorophyll content is a major indicator of plant health and productivity. Optical remote sensing retrievals of chlorophyll are limited to two-dimensional (2D) estimates and cannot resolve its distribution within the canopy, even though chlorophyll varies substantially across the vertical profile. Multispectral and hyperspectral Terrestrial Laser Scanning (TLS) instruments can produce three-dimensional (3D) chlorophyll estimates but are not widely available. Thus, in this study, 14 chlorophyll vegetation indices were developed using six wavelengths employed in commercial TLS instruments (532 nm, 670 nm, 785 nm, 808 nm, 1064 nm, and 1550 nm). For this, 200 simulations were carried out using the novel bidirectional mode in the Discrete Anisotropic Radiative Transfer (DART) model and a realistic forest stand. The results showed that the Green Normalized Difference Vegetation Index (GNDVI) of the 532 nm and either the 808 nm or the 785 nm wavelength was highly correlated with chlorophyll content (R² = 0.74). The Chlorophyll Index (CI) and Green Simple Ratio (GSR) of the same wavelengths also correlated well (R² = 0.73). This study is a step toward 3D canopy chlorophyll retrieval using commercial TLS instruments, but methods to couple data from the different instruments still need to be developed.
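The indices themselves are simple band ratios of leaf reflectance at the TLS wavelengths. A minimal sketch using the common definitions of GNDVI, CI, and GSR; the reflectance values below are hypothetical.

```python
import numpy as np

def gndvi(green, nir):
    """Green NDVI: (NIR - green) / (NIR + green)."""
    return (nir - green) / (nir + green)

def chlorophyll_index(green, nir):
    """Green chlorophyll index: NIR / green - 1."""
    return nir / green - 1.0

def green_simple_ratio(green, nir):
    """Green simple ratio: NIR / green."""
    return nir / green

# Hypothetical leaf reflectances at 532 nm (green) and 808 nm (NIR).
r532 = np.array([0.08, 0.10, 0.12])
r808 = np.array([0.45, 0.48, 0.50])
print("GNDVI:", gndvi(r532, r808).round(3))
print("CI:   ", chlorophyll_index(r532, r808).round(2))
print("GSR:  ", green_simple_ratio(r532, r808).round(2))
```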
(This article belongs to the Special Issue Growth Models for Forest Stand Development Dynamics)
Figures:
Figure 1: The 3D forest scene reconstructed in the DART model.
Figure 2: The understory spectral profile (DART database); vertical lines represent the six TLS wavelengths of interest.
Figure 3: Wood spectral profile (DART database); vertical lines represent the six TLS wavelengths of interest.
Figure 4: Sensitivity of the tested wavelengths to the chlorophyll content.
Figure 5: Sensitivity of the tested VIs to the chlorophyll content.
Figure 6: Sensitivity of the tested VIs to EWT.
Figure 7: Sensitivity of the tested VIs to the leaf structure parameter (N).
Figure 8: Sensitivity of the tested VIs to LMA.
Figure 9: Sensitivity of the tested VIs to the carotenoids.
Figure 10: Sensitivity of the tested VIs to the brown pigments.
Figure 11: Relationships between tested VIs and chlorophyll content for the LOPEX dataset.
Figure 12: Relationships between tested VIs and chlorophyll content for the ANGERS dataset.
Figure 13: Relationships between tested VIs and chlorophyll content for the LOPEX and ANGERS datasets combined; leaves with N > 2 are highlighted with purple boxes.
19 pages, 3356 KiB  
Article
The First Validation of Aerosol Optical Parameters Retrieved from the Terrestrial Ecosystem Carbon Inventory Satellite (TECIS) and Its Application
by Yijie Ren, Binglong Chen, Lingbing Bu, Gen Hu, Jingyi Fang and Pasindu Liyanage
Remote Sens. 2024, 16(19), 3689; https://doi.org/10.3390/rs16193689 - 3 Oct 2024
Viewed by 460
Abstract
In August 2022, China successfully launched the Terrestrial Ecosystem Carbon Inventory Satellite (TECIS). The primary payload of this satellite is an onboard multi-beam lidar system capable of observing aerosol optical parameters on a global scale. This pioneering study used the Fernald forward integration method to retrieve aerosol optical parameters from the Level 2 data of the TECIS, including the aerosol depolarization ratio, aerosol backscatter coefficient, aerosol extinction coefficient, and aerosol optical depth (AOD). The TECIS-retrieved aerosol optical parameters were validated against CALIPSO Level 1 and Level 2 data, with relative errors within 30%. A comparison of the AOD retrieved from the TECIS with the AERONET and MODIS AOD products yielded correlation coefficients greater than 0.7 and 0.6, respectively. The relative error of the aerosol optical parameter profiles compared with ground-based measurements was within 40% for CALIPSO. Additionally, the correlation coefficients (R²) with MODIS and AERONET AOD were approximately 0.5 to 0.7, indicating the high accuracy of TECIS retrievals. Utilizing the TECIS retrieval results, combined with ground air quality monitoring data and HYSPLIT results, a typical dust transport event from 2 to 7 April 2023 was analyzed. The results indicate that dust was transported from the Taklamakan Desert in Xinjiang, China, to Henan and Anhui provinces, with a gradual decrease in the aerosol depolarization ratio and backscatter coefficient during transport, causing varying degrees of pollution in the downstream regions. This research verifies the accuracy of the retrieval algorithm through multi-source data comparison and demonstrates, for the first time, the potential application of the TECIS in aerosol science. It enables fine-scale regional monitoring of atmospheric aerosols and provides reliable data support for the three-dimensional distribution of global aerosols and related scientific applications.
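A bare-bones sketch of Fernald forward integration, the retrieval method named above, is given below. It follows the standard discrete form of the Fernald (1984) solution; the lidar ratio, calibration value, and synthetic signal are all assumptions, and a real TECIS retrieval involves calibration and quality control far beyond this.

```python
import numpy as np

def fernald_forward(z, X, beta_m, S_a, beta_a0, S_m=8 * np.pi / 3):
    """Discrete Fernald (1984) forward integration for an elastic lidar.

    z       -- range bins (m), ascending from the calibration bin z[0]
    X       -- range-corrected signal P(z) * z**2 (arbitrary units)
    beta_m  -- molecular backscatter profile (m^-1 sr^-1)
    S_a     -- assumed aerosol lidar (extinction-to-backscatter) ratio (sr)
    beta_a0 -- aerosol backscatter at the calibration bin
    Returns the aerosol backscatter profile beta_a(z).
    """
    dz = np.gradient(z)
    # exp(-2 (S_a - S_m) * integral of beta_m from z[0] to z)
    E = np.exp(-2.0 * (S_a - S_m) * np.cumsum(beta_m * dz))
    num = X * E
    den = X[0] / (beta_a0 + beta_m[0]) - 2.0 * S_a * np.cumsum(num * dz)
    return num / den - beta_m

# Hypothetical scene: exponential molecular atmosphere + one aerosol layer.
z = np.arange(100.0, 10_000.0, 30.0)
beta_m = 1.5e-6 * np.exp(-z / 8000.0)
beta_a_true = 4e-6 * np.exp(-(((z - 2000.0) / 600.0) ** 2))
S_a = 50.0
alpha = S_a * beta_a_true + (8 * np.pi / 3) * beta_m
X = (beta_a_true + beta_m) * np.exp(-2.0 * np.cumsum(alpha * np.gradient(z)))
beta_a = fernald_forward(z, X, beta_m, S_a, beta_a_true[0])
aod = np.sum(S_a * beta_a * np.gradient(z))
print(f"retrieved AOD ≈ {aod:.3f}")
```

Extinction follows as α_a = S_a · β_a, and AOD is its range integral, as in the last two lines.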
(This article belongs to the Section Atmospheric Remote Sensing)
Figures:
Figure 1: A flowchart of the TECIS retrieval algorithm.
Figure 2: Trajectory of CALIPSO and TECIS.
Figure 3: Total attenuated backscatter coefficient obtained from (a) TECIS and (b) CALIPSO.
Figure 4: SNR of the total attenuated backscatter coefficient obtained from (a) TECIS and (b) CALIPSO.
Figure 5: A comparison of total attenuated backscatter coefficient mean profiles between TECIS and CALIPSO at 13° to 14°N; the shaded area represents the standard deviation of the two satellites; the blue solid line is TECIS and the red solid line is CALIPSO.
Figure 6: Profiles of (a) TECIS aerosol depolarization ratio, (b) CALIPSO aerosol depolarization ratio, (c) TECIS aerosol backscatter coefficient, (d) CALIPSO aerosol backscatter coefficient, (e) TECIS aerosol extinction coefficient, and (f) CALIPSO aerosol extinction coefficient.
Figure 7: A comparison of aerosol optical parameter mean profiles between TECIS (blue) and CALIPSO (red) at 13° to 14°N; the shaded area represents the standard deviation within the averaging range of the two satellites. (a) Aerosol depolarization ratio; (b) aerosol backscatter coefficient; (c) aerosol extinction coefficient.
Figure 8: Relative error of retrieval results between TECIS and CALIPSO.
Figure 9: TECIS 532 nm AOD retrievals against AERONET AOD during April to June 2023; the dashed line is the linear regression fit; the black line is the 1:1 line.
Figure 10: A scatterplot comparison of TECIS AOD against MODIS AOD during April to June 2023; the color scale represents the fraction of the total data. (a) North Africa; (b) the Middle East; (c) North America; (d) Central Asia.
Figure 11: TECIS 1064 nm total attenuated backscatter coefficient and HYSPLIT backward trajectories from 2 to 7 April 2023 (blue, red, and black represent backward trajectories at heights of 3 km, 2 km, and 1 km, respectively).
Figure 12: Variations in PM10 and PM2.5 concentrations from 2 to 7 April 2023. (a) PM10; (b) PM2.5.
Figure 13: Optical parameters obtained by TECIS inversion from 2 to 7 April 2023. (a,c,e,g,i,k) backscatter coefficient; (b,d,f,h,j,l) depolarization ratio.
19 pages, 8166 KiB  
Article
Assessment of New Techniques for Measuring Volume in Large Wood Chip Piles
by Miloš Gejdoš, Jozef Výbošťok, Juliána Chudá, Daniel Tomčík, Martin Lieskovský, Michal Skladan, Matej Masný and Tomáš Gergeľ
Forests 2024, 15(10), 1747; https://doi.org/10.3390/f15101747 - 3 Oct 2024
Viewed by 608
Abstract
Our work aimed to compare wood chip pile volumes calculated by laser ground scanning, UAV technology, and laser ground measurement, and to determine the accuracy, speed, and economic efficiency of each method. The large chip pile was measured in seven different ways: tape measurement, laser measurement with a Vertex rangefinder, global navigation satellite system (GNSS), handheld mobile laser scanner, terrestrial laser scanner, drone, and a smartphone with a light detection and ranging (LiDAR) sensor. All the methods were compared in terms of accuracy, price, user-friendliness, and the time required to obtain results. The calculated pile volume, depending on the method, varied from 2588 to 3362 m³. The most accurate results were provided by terrestrial laser scanning, which, however, was also the most expensive method and the most demanding in terms of collecting and evaluating the results. From a time and economic point of view, the most effective methods were UAVs and smartphones with LiDAR.
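All of the dense-point-cloud methods ultimately compute volume the same way: grid the cloud, take a surface height per cell, and sum cell area times height above the base plane. A minimal 2.5D sketch with a hypothetical cone-shaped pile (this is not the 3D Survey workflow the paper used):

```python
import numpy as np

def pile_volume(points, cell=0.25, base_z=0.0):
    """Pile volume from an (N, 3) cloud by 2.5D gridding: per-cell maximum
    height above the base plane times the cell area."""
    x, y, z = points.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    surf = np.full((ix.max() + 1, iy.max() + 1), -np.inf)
    np.maximum.at(surf, (ix, iy), z)              # rasterized top surface
    heights = np.clip(surf - base_z, 0.0, None)   # empty cells clip to 0
    return heights.sum() * cell**2

# Hypothetical cone-shaped chip pile: radius 20 m, height 8 m.
rng = np.random.default_rng(0)
xy = rng.uniform(-20, 20, (200_000, 2))
r = np.hypot(xy[:, 0], xy[:, 1])
cloud = np.column_stack([xy, 8.0 * (1.0 - r / 20.0)])[r < 20]
print(f"volume ≈ {pile_volume(cloud):.0f} m³")    # analytic cone: ~3351 m³
```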
Figures:
Figure 1: Study site with wood chip pile in Hriňová, Slovakia.
Figure 2: Tape measurement scheme.
Figure 3: Location of the Vertex Laser Geo laser rangefinder's measured points and corresponding sections.
Figure 4: Location of GNSS Stonex S700A receiver-measured points.
Figure 5: Stonex X120GO SLAM data collection path.
Figure 6: Riegl VZ1000 and Riegl VZ600i scanner positions.
Figure 7: DJI Air 2S flight plan from the Map Pilot Pro app.
Figure 8: Visualizations of the camera positions above the UAV-based point cloud.
Figure 9: iPhone 15 Pro Max data collection path.
Figure 10: Dividing the pile into sections.
Figure 11: The breakpoint-based volume calculation workflow visualization in 3D Survey software (the Vertex Laser Geo laser rangefinder and the GNSS receiver Stonex S700A).
Figure 12: The dense-point-cloud-based volume calculation workflow visualization in 3D Survey software (TLS Riegl VZ1000 and Riegl VZ600i, UAV DJI Air 2S, HMLS Stonex X120GO, iPhone 15 Pro Max).
Figure 13: Height-distribution maps for measurement methods (excluding the tape measurement method). (A) Stonex S700A; (B) iPhone; (C) Vertex; (D) DJI Air 2S; (E) Stonex X120GO; (F) Stonex X120GO RTK; (G) Riegl VZ1000; (H) Riegl VZ600i.
Figure 14: Analytical radar chart for determining the optimal method in terms of price, time, accuracy, and difficulty criteria.
29 pages, 12094 KiB  
Article
Bitemporal Radiative Transfer Modeling Using Bitemporal 3D-Explicit Forest Reconstruction from Terrestrial Laser Scanning
by Chang Liu, Kim Calders, Niall Origo, Louise Terryn, Jennifer Adams, Jean-Philippe Gastellu-Etchegorry, Yingjie Wang, Félicien Meunier, John Armston, Mathias Disney, William Woodgate, Joanne Nightingale and Hans Verbeeck
Remote Sens. 2024, 16(19), 3639; https://doi.org/10.3390/rs16193639 - 29 Sep 2024
Viewed by 1485
Abstract
Radiative transfer models (RTMs) are often used to retrieve biophysical parameters from earth observation data. RTMs with multi-temporal and realistic forest representations enable radiative transfer (RT) modeling of real-world dynamic processes. To achieve more realistic RT modeling for dynamic forest processes, this study presents the 3D-explicit reconstruction of a typical temperate deciduous forest in 2015 and 2022. We demonstrate for the first time the potential of bitemporal 3D-explicit RT modeling from terrestrial laser scanning for the forward modeling and quantitative interpretation of (1) remote sensing (RS) observations of leaf area index (LAI), fraction of absorbed photosynthetically active radiation (FAPAR), and canopy light extinction, and (2) the impact of canopy gap dynamics on the light availability of explicit locations. Results showed that, compared to the 2015 scene, the hemispherical-directional reflectance factor (HDRF) of the 2022 forest scene decreased by a relative 3.8% and the leaf FAPAR increased by a relative 5.4%. At explicit locations where canopy gaps changed significantly between the two scenes, branch damage and gap closure significantly impacted ground light availability only under diffuse light. This study provides the first bitemporal comparison based on 3D RT modeling that uses one of the most realistic bitemporal forest scenes as the structural input. Bitemporal 3D-explicit forest RT modeling allows spatially explicit modeling over time under fully controlled experimental conditions in one of the most realistic virtual environments, delivering a powerful tool for studying canopy light regimes as impacted by forest structural dynamics and for developing RS inversion schemes for forest structural change.
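The study's light-extinction profiles come from full DART ray tracing, but the qualitative behaviour can be mimicked with a simple Beer-Lambert attenuation through the cumulative leaf area above each height. This is a conceptual sketch only, with hypothetical leaf-area-density profiles and an assumed extinction coefficient k; it is not the paper's method.

```python
import numpy as np

def light_availability(z, lad, k=0.5):
    """Percent of incident irradiance reaching height z via Beer-Lambert
    attenuation through the leaf area above (lad in m^2 m^-3, k assumed)."""
    dz = np.gradient(z)
    lai_above = np.cumsum((lad * dz)[::-1])[::-1]   # accumulate from the top
    return 100.0 * np.exp(-k * lai_above)

# Hypothetical leaf-area-density profiles (canopy top ~25 m, densest at 18 m).
z = np.linspace(0.0, 25.0, 251)
lad_2015 = 0.35 * np.exp(-(((z - 18.0) / 4.0) ** 2))
lad_2022 = 0.40 * np.exp(-(((z - 18.0) / 4.0) ** 2))   # slightly denser canopy
for year, lad in (("2015", lad_2015), ("2022", lad_2022)):
    print(year, f"light at ground: {light_availability(z, lad)[0]:.1f}%")
```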
(This article belongs to the Section Forest Remote Sensing)
Figures:
Figure 1: Geographic location and map of Wytham Woods with the plot indicated by 'X' [60].
Figure 2: Spectral properties of different tree species in the plot [2,51]. (a) Reflectance and transmittance of leaves; (b) reflectance of bark.
Figure 3: Locations of canopy gap dynamics and simulated photosynthetically active radiation (PAR) sensors, shown in the TLS point cloud (top view): (a) 2015; (b) 2022.
Figure 4: Vertical profiles of different types of canopy gap dynamics observed by terrestrial laser scanning, and the position of simulated PAR sensors.
Figure 5: Flowchart of research methodology. QSMs of woody structure were reconstructed using leaf-off TLS data.
Figure 6: Segmented TLS leaf-off point cloud of the 1-ha Wytham Woods forest stand (top view): (a) 2015; (b) 2022. Each color represents an individual tree.
Figure 7: The dynamic change of the wood structure of a common ash (Fraxinus excelsior) tree from 2015 to 2022. (a) 2015 leaf-off point cloud; (b) 2022 leaf-off point cloud.
Figure 8: 3D-explicit reconstruction of a sycamore (Acer pseudoplatanus) tree. (a) TLS point cloud colored by height (leaf-off); (b) QSM overlaid with TLS leaf-off point cloud; (c) QSM, with a modeled branch length of 3863.3 m; (d) fully reconstructed tree (QSM + leaves), with an assigned leaf area of 888.2 m².
Figure 9: The 3D-explicit models of the complete 1-ha Wytham Woods forest stand in (a) 2015 and (b) 2022. Leaf colors represent the different tree species present in Wytham Woods; stems and branches of all trees are shown in brown.
Figure 10: Vertical profiles of simulated (a) light extinction, (b) light absorption, and (c) leaf area per meter of height in the 2015 and 2022 forest scenes. The light extinction and absorption results are for the PAR band; illumination zenith angle (IZA) 38.4°, illumination azimuth angle (IAA) 125.2°.
Figure 11: Vertical profiles of simulated (a) light extinction and (b) light absorption in the blue, green, red, and NIR bands for the 2015 and 2022 forest scenes. IZA 38.4°, IAA 125.2°.
Figure 12: Simulated top-of-canopy images of the Wytham Woods forest scenes in 2015 and 2022, under nadir viewing directions and Sentinel-2 RGB bands; IZA 38.4°, IAA 125.2°. (a,b) Ultra-high-resolution images in 2015 and 2022 (spatial resolution: 1 cm); (d,e) 25 cm resolution images in 2015 and 2022; (g,h) 10 m resolution images in 2015 and 2022; (c,f,i) spatial pattern of HDRF variation from 2015 to 2022 (red band).
Figure 13: Light extinction profiles of downward PAR at location 1: (a) diffuse light; (b) midday direct light (IZA 28.4°, IAA 180°); (c) morning direct light (IZA 81.3°, IAA 27.3°). The x axis is local light availability as a percentage of incident solar irradiance; the y axis is the height of the simulated sensors above the ground. (d) The canopy gap dynamic at this location.
Figure 14: As Figure 13, for location 2.
Figure 15: As Figure 13, for location 3.
Figure 16: As Figure 13, for location 4.
23 pages, 6593 KiB  
Article
Multitemporal Quantification of the Geomorphodynamics on a Slope within the Cratère Dolomieu at the Piton de la Fournaise (La Réunion, Indian Ocean) Using Terrestrial LiDAR Data, Terrestrial Photographs, and Webcam Data
by Kerstin Wegner, Virginie Durand, Nicolas Villeneuve, Anne Mangeney, Philippe Kowalski, Aline Peltier, Manuel Stark, Michael Becht and Florian Haas
Geosciences 2024, 14(10), 259; https://doi.org/10.3390/geosciences14100259 - 28 Sep 2024
Viewed by 497
Abstract
In this study, the geomorphological evolution of an inner flank of the Cratère Dolomieu at Piton de la Fournaise, La Réunion, was investigated with the help of terrestrial laser scanning (TLS) data, terrestrial photogrammetric images, and historical webcam photographs. While the TLS data and terrestrial images were recorded during three field surveys, the study was also able to use historical images from webcams installed to monitor volcanic activity inside the crater. Although the webcams were originally intended only for visual monitoring of the area, at certain times they captured image pairs that could be analyzed using structure from motion (SfM) and subsequently processed into digital terrain models (DTMs). With these data, the geomorphological evolution of selected areas of the crater was investigated at high temporal and spatial resolution. Surface changes were detected and quantified on scree slopes in the upper area of the crater as well as on scree slopes at the transition from the slope to the crater floor. Beyond quantification, these changes could be assigned to individual geomorphological processes over time. The webcam photographs were a very important additional source of information here, as they extended the observation period further into the past and made it possible to determine the exact dates on which geomorphological processes were active.
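The change quantification in such work is a DEM of Difference (DoD): subtract co-registered DTMs, mask changes below a level of detection, and convert the remainder to erosion and deposition volumes. A minimal sketch with hypothetical rasters and thresholds:

```python
import numpy as np

def dod_volumes(dtm_old, dtm_new, cell=0.2, lod=0.05):
    """DEM of Difference: per-cell elevation change between co-registered
    DTMs, thresholded by a level of detection (m), converted to volumes."""
    dz = dtm_new - dtm_old
    dz[np.abs(dz) < lod] = 0.0                  # discard sub-LoD noise
    erosion = -dz[dz < 0].sum() * cell**2       # m³ removed
    deposition = dz[dz > 0].sum() * cell**2     # m³ accumulated
    return dz, erosion, deposition

# Hypothetical 0.2 m DTMs of a scree slope with one fresh deposit.
rng = np.random.default_rng(0)
old = rng.normal(0, 0.01, (500, 500))
new = old.copy()
new[200:220, 300:330] += 0.15                   # rockfall deposit, 15 cm thick
_, ero, dep = dod_volumes(old, new)
print(f"erosion: {ero:.2f} m³, deposition: {dep:.2f} m³")
```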
(This article belongs to the Section Natural Hazards)
Figures:
Figure 1: Location of the study area, the Cratère Dolomieu of the PLF. (Source of the overview base map: ASTER DEM. The overview map of the Cratère Dolomieu is a 1 m DEM based on terrestrial laser scanning data acquired in 2014.)
Figure 2: Riegl VZ4000 laser scanner located on the crater rim and dGNSS measurement of tie points (Riegl reflector) (own photographs captured during fieldwork in 2014).
Figure 3: The entire workflow for processing TLS data and digital terrestrial and webcam photographs, with the processing steps shown for each relevant software package (RiSCAN PRO (version 2.4), Agisoft Metashape Pro (version 1.5.5), Laserdata SAGA LIS (versions 3.0.7, 3.1.0)).
Figure 4: Investigated slope and stable areas for ICP adjustment. The location of the AoI is shown in Figure 1 on the overview map of the crater.
Figure 5: Examples of photographs not usable for further SfM processing due to quality issues. (A) Investigated slope completely or partially in clouds. (B) Fogged camera lens. (C) Contamination on the camera lens. (D) Light reflections leading to poor contrast. (E) Ground fog preventing data processing. (F) Strong shadows, especially during summer in the southern hemisphere, causing contrast differences. (G) Fog in the crater. (H) Volcanic eruption on 4 January 2010; moving lava prevented use of this image pair.
Figure 6: Mapped areas with visible surface changes within the different time steps between 2010 and 2016 that lie inside the derivable DTM. Both highlighted profile lines (grey, red) for the years 2010 and 2016 are analyzed in Figure 11.
Figure 7: Derived surface changes (DEMs of difference, DoDs) for the two rockfall hotspots 1 and 2 between 2010 and 2016 (shaded relief in the background derived from the 2016 DTM). Also shown are the positive surface changes [cm] and the accumulated volume [m³] of the two areas for the corresponding periods.
Figure 8: Derived surface changes (DoDs) on two selected debris cones between 2010 and 2016 (shaded relief derived from the 2016 DTM). Also shown are the positive surface changes [cm] and the accumulated volume [m³] of the two areas for the corresponding periods.
Figure 9: Clearly visible linear patterns on debris zone II.
Figure 10: White arrows show visually detectable surface changes (DoDs) in rock zone II between 13 June 2011 and 19 June 2011.
Figure 11: (A) The two lines show the slope development as a swath profile of debris zone II between 2010 and 2016; the location of the profile lines is shown in Figure 6. (B) Statistical range of the slope inclination for the years 2010 to 2016, showing a flattening of approximately 1°.
15 pages, 6660 KiB  
Article
Forest Canopy Height Estimation Combining Dual-Polarization PolSAR and Spaceborne LiDAR Data
by Yao Tong, Zhiwei Liu, Haiqiang Fu, Jianjun Zhu, Rong Zhao, Yanzhou Xie, Huacan Hu, Nan Li and Shujuan Fu
Forests 2024, 15(9), 1654; https://doi.org/10.3390/f15091654 - 19 Sep 2024
Viewed by 801
Abstract
Forest canopy height is a fundamental parameter of forest structure and is critical for understanding terrestrial carbon stocks, global carbon cycle dynamics, and forest productivity. To address the limitations of retrieving forest canopy height using conventional PolInSAR-based methods, we propose a method that estimates forest height by combining single-temporal polarimetric synthetic aperture radar (PolSAR) images with sparse spaceborne LiDAR (forest height) measurements. The core idea is that volume scattering energy variations linked to forest canopy height occur during radar acquisition. Specifically, our methodology begins by employing a semi-empirical inversion model, derived directly from the random volume over ground (RVoG) formulation, to relate forest canopy height to volume scattering energy and wave extinction. PolSAR decomposition techniques are then used to extract the canopy volume scattering energy, and machine learning, assisted by sparse LiDAR samples, is employed to generate a spatially continuous extinction coefficient product. Finally, with the derived inversion model and the resulting model parameters (volume scattering power and extinction coefficient), forest canopy height can be estimated. The performance of the proposed method is illustrated with L-band NASA/JPL UAVSAR data from the AfriSAR campaign over Lopé National Park, Gabon, together with airborne LiDAR data. Compared to high-accuracy airborne LiDAR data, the forest canopy height from the proposed approach exhibited high accuracy (R² = 0.92, RMSE = 6.09 m). The results demonstrate the merit of synergistically combining PolSAR (volume scattering power) and sparse LiDAR (forest height) measurements for forest height estimation. Our approach achieves accuracy comparable to that of a multi-baseline PolInSAR-based inversion method (RMSE = 5.80 m) and surpasses traditional PolSAR-based methods (RMSE = 10.86 m). Given its simplicity and efficiency, the proposed method has potential for large-scale forest height estimation when only single-temporal dual-polarization acquisitions are available.
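The machine-learning step, turning sparse LiDAR-constrained extinction samples into a spatially continuous map, can be sketched as a per-pixel regression from PolSAR features. This is not the authors' implementation; the features, sample sizes, and the synthetic relationship below are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical per-pixel PolSAR features (e.g. volume power, HV/HH ratio,
# entropy, local incidence angle) and a synthetic extinction coefficient.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (5000, 4))
sigma = 0.1 + 0.5 * X[:, 0] + 0.2 * X[:, 2] + rng.normal(0, 0.03, 5000)

# "Sparse LiDAR" constraint: train on a small labelled subset, map the rest.
X_tr, X_te, y_tr, y_te = train_test_split(X, sigma, train_size=500,
                                          random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R² on unsampled pixels: {r2_score(y_te, rf.predict(X_te)):.3f}")
```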
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
Figures:
Figure 1: A flowchart of the proposed forest canopy height estimation methodology.
Figure 2: Geolocation of the study area: (a) optical imagery; (b) digital elevation model (DEM) of the study area. The orange rectangles in (a,b) indicate the coverage of the airborne PolSAR data.
Figure 3: Datasets: (a) multi-looked and geocoded SAR image in Pauli-basis color combination; (b) ICESat-2 ATL08 sampling points; (c) LVIS forest height.
Figure 4: (a) Volume scattering power; (b) extinction coefficient.
Figure 5: Importance ranking of each variable in the extinction coefficient estimation model.
Figure 6: (a) Forest height map derived by the proposed method; (b) validation plots of the forest height inversion, where the color transition from blue to red indicates increasing point density.
Figure 7: (a) Forest height derived via the PolSAR inversion method in [18]; (b) scatterplot of validation results.
13 pages, 20019 KiB  
Article
Determination of Microtopography of Low-Relief Tidal Freshwater Forested Wetlands Using LiDAR
by Tarini Shukla, Wenwu Tang, Carl C. Trettin, Shen-En Chen and Craig Allan
Remote Sens. 2024, 16(18), 3463; https://doi.org/10.3390/rs16183463 - 18 Sep 2024
Viewed by 449
Abstract
The microtopography of tidal freshwater forested wetlands (TFFWs) influences biogeochemical processes affecting carbon and nitrogen dynamics, ecological parameters, and habitat diversity. However, it is challenging to quantify low-relief microtopographic features that may vary by only a few tens of centimeters. We assess fine-scale microtopographic features of a TFFW with high-resolution terrestrial and aerial LiDAR to test a method appropriate for quantifying microtopography in low-relief forested wetlands. Our method uses a combination of water-level and elevation thresholding (WALET) to delineate hollows in terrestrial and aerial LiDAR data. Close-range remote sensing technologies can be used to characterize microtopography in forested regions; however, aerial and terrestrial LiDAR have not previously been used to analyze or compare microtopographic features in TFFW ecosystems. The objectives of this study were therefore (1) to characterize and assess the microtopography of low-relief tidal freshwater forested wetlands and (2) to identify optimal elevation thresholds for widely available aerial LiDAR data to characterize low-relief microtopography. Our results suggest that the WALET method can correctly characterize microtopography in this area of low-relief topography. The characterization method described here provides a basis for advanced applications and for scaling mechanistic models.
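The WALET idea reduces to percentile thresholds on a DEM: an elevation cutoff tied to the mean daily maximum water level (here the 29th percentile of elevations) delineates hollows, and a second cutoff separates fringes from hummocks. A minimal sketch on a hypothetical DEM, not the study's data:

```python
import numpy as np

def walet_classes(dem, hollow_pct=29, hummock_pct=50):
    """WALET-style classification of a low-relief wetland DEM into hollows,
    fringes, and hummocks using elevation percentiles; the hollow percentile
    stands in for the mean daily maximum water level."""
    t_hollow = np.percentile(dem, hollow_pct)
    t_hummock = np.percentile(dem, hummock_pct)
    classes = np.full(dem.shape, "hummock", dtype=object)
    classes[dem < t_hummock] = "fringe"
    classes[dem < t_hollow] = "hollow"
    return classes

# Hypothetical 1 m DEM of a low-relief wetland (elevations in metres).
rng = np.random.default_rng(0)
dem = 0.8 + rng.normal(0, 0.12, (400, 400))
cls = walet_classes(dem)
for name in ("hollow", "fringe", "hummock"):
    print(f"{name}: {(cls == name).mean():.0%} of area")
```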
Figures:
Figure 1: Study site: (A) the study site is located in Francis Marion National Forest in South Carolina, USA; (B) the highlighted location of the study site in the National Forest; (C) tidal freshwater forested low-relief wetland at Huger Creek in Santee Experimental Forest, SC, USA, with a shaded topographic relief map of the study site; (D) a photograph of the study site.
Figure 2: Ground points extracted from terrestrial LiDAR data using cloth simulation filtering in CloudCompare. The inset shows the visible 2 ft × 2 ft GCP in the scan.
Figure 3: Daily maximum water-level fluctuations from 2019 to 2022 at the monitored hollow location. The line at 0.00 indicates the ground surface level based on terrestrial LiDAR data.
Figure 4: (A) Terrestrial-LiDAR-based DEM with a 0.25 m resolution; (B) aerial-LiDAR-based DEM with a 1 m resolution.
Figure 5: Depression delineation from (A) terrestrial LiDAR data at 0.25 m resolution and (B) aerial LiDAR data at 1 m resolution.
Figure 6: Depression delineation using the priority-flood algorithm on aerial LiDAR data with a minimum sink size of 1 m² and minimum sink depth of 0.1 m. The light blue line delineates the boundary of the study site.
Figure 7: (A) Elevation distribution of the terrestrial-LiDAR-based DEM (0.25 m resolution); the vertical line marks the 0.68 m threshold, selected from the mean daily maximum water level and equivalent to the 29th percentile of the elevation values. (B) Elevation distribution of the aerial-LiDAR-based DEM (1 m resolution); the vertical line marks the 0.87 m threshold, the 29th percentile of the elevation values.
Figure 8: Depression delineation based on the mean daily water level: (A) depressions delineated in the terrestrial-LiDAR-based DEM with a threshold of 0.68 m; (B) depressions delineated in the aerial-LiDAR-based DEM with a threshold of 0.87 m.
Figure 9: Microtopography delineation over a larger scale using an elevation threshold. Hollows delineated at the 29th percentile of the elevation distribution cover 29% of the total area; fringes, between the 29th and 50th percentiles, cover 22%; features above the 50th percentile are characterized as hummocks, covering 49%.
23 pages, 5621 KiB  
Article
Enhancing Digital Twins with Human Movement Data: A Comparative Study of Lidar-Based Tracking Methods
by Shashank Karki, Thomas J. Pingel, Timothy D. Baird, Addison Flack and Todd Ogle
Remote Sens. 2024, 16(18), 3453; https://doi.org/10.3390/rs16183453 - 18 Sep 2024
Viewed by 1362
Abstract
Digital twins, used to represent dynamic environments, require accurate tracking of human movement to enhance their real-world application. This paper contributes to the field by systematically evaluating and comparing pre-existing tracking methods to identify strengths, weaknesses, and practical applications within digital twin frameworks. The purpose of this study is to assess the efficacy of existing human movement tracking techniques for digital twins in real-world environments, with the goal of improving spatial analysis and interaction within these virtual models. We compare three approaches using indoor-mounted lidar sensors: (1) a frame-by-frame deep learning model with convolutional neural networks (CNNs), (2) custom algorithms developed using OpenCV, and (3) the off-the-shelf lidar perception software package Percept version 1.6.3. Of these, the deep learning method performed best (F1 = 0.88), followed by Percept (F1 = 0.61) and the custom OpenCV algorithms (F1 = 0.58). Each method had particular strengths and weaknesses; the OpenCV approaches, which rely on frame comparison, were vulnerable to signal instability manifested as "flickering" in the dataset. Subsequent analysis of the spatial distribution of error revealed that both the custom algorithms and Percept took longer to acquire an identification, resulting in increased error near doorways. Percept excelled in scenarios involving stationary individuals. These findings highlight the importance of selecting tracking methods appropriate to the use case. Future work will focus on model optimization, alternative data logging techniques, and innovative approaches to mitigating computational challenges, paving the way for more sophisticated and accessible spatial analysis tools. Integrating complementary sensor types and strategies, such as radar, audio levels, indoor positioning systems (IPSs), and Wi-Fi data, could further improve detection accuracy and validation while maintaining privacy.
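Of the three approaches, the OpenCV frame-differencing variant is the easiest to sketch: difference consecutive rasterized lidar frames, threshold, and extract blobs. A minimal example, not the authors' code; the frames and parameters are hypothetical. Note that a pure difference also fires on a mover's vacated pixels, which is one source of the "flickering" weakness noted above.

```python
import cv2
import numpy as np

def detect_movers(prev_frame, frame, thresh=25, min_area=40):
    """Frame differencing on rasterized lidar frames (8-bit grayscale):
    threshold the absolute difference, then return blob bounding boxes."""
    diff = cv2.absdiff(frame, prev_frame)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)   # merge fragmented blobs
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

# Hypothetical frame pair: one person-sized blob moves 5 px to the right.
prev = np.zeros((240, 320), np.uint8)
curr = prev.copy()
cv2.circle(prev, (100, 120), 6, 255, -1)
cv2.circle(curr, (105, 120), 6, 255, -1)
print(detect_movers(prev, curr))
```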
(This article belongs to the Special Issue Remote Sensing: 15th Anniversary)
Figures:
Figure 1: Overview of Building X's public space: (a) arrangement of the community assembly and the general layout; (b) schematic diagram of the study area.
Figure 2: Schematic flow chart showing data fusion and outputs for each of the detection methods.
Figure 3: Distribution of variation in accuracy metrics for different distance thresholds. All three metrics increase notably as the threshold distance extends up to 2 m, beyond which the improvements begin to saturate.
Figure 4: Sample frames showing detections and confidence levels from the YOLOv5 models trained on (a) maximum elevation (DSM) returns, (b) intensity returns, and (c) their combination as dual bands within the same image. Most detections are the same, but minor differences are evident between methods.
Figure 5: Error scenarios: (a) a missed detection due to the atypical size and shape of a point cloud cluster; (b) a missed detection due to inadequate coverage; (c) a missed detection due to an individual blending into the furniture as they linger; (d) a sample point cloud cluster where a single individual's lidar signature is disassociated; (e) additional false positives from a single individual due to misalignment of sensors. Manually labeled points are shown in blue and detections in orange.
Figure 6: Sample frames showing detections from the OpenCV models using (a) background subtraction and (b) frame differencing.
Figure 7: Error scenarios: (a) a false positive due to raster edge flickering effects; (b) a missed detection due to lower-density lidar coverage; (c) missed detections where individuals linger in the same space for some time and are not detected by the algorithm. Manually labeled points are shown in blue and detections in orange.
Figure 8: Sample frame showing detections by Blickfeld's Percept, with manually labeled points in blue and detections in orange.
Figure 9: Error scenarios: (a) false positives as people in the furniture move; (b) false positives on the staircase; (c) a misalignment error between detections and ground truth points attributed to time differences between raster logs and Percept data; (d) false positives due to multiple signatures of the same object as a result of time misalignments between sensors in Percept. Manually labeled points are shown in blue and detections in orange.
Figure 10: Chart showing a performance summary of the detection methods, including the deep learning approaches for DSM, intensity, and dual band; OpenCV's background subtraction and frame differencing models; and Percept.
Figure 11: Spatial distribution of detection errors across the deep learning (DSM), OpenCV background subtraction, and Percept models, with false positives (FP) in blue and false negatives (FN) in orange. Deep learning has fewer FPs and FNs than OpenCV and Percept.
Figure 12: Kernel density estimation (KDE) map of movement patterns detected by (a) deep learning, (b) OpenCV, and (c) Percept.
Figure 13: Normalized aggregate average detections for each model (deep learning in blue, OpenCV in orange, Percept in green) within the study area for September 2023. Overall trends between methods are very similar, with only minor differences.
18 pages, 24660 KiB  
Article
Fireground Recognition and Spatio-Temporal Scalability Research Based on ICESat-2/ATLAS Vertical Structure Parameters
by Guojun Cao, Xiaoyan Wei and Jiangxia Ye
Forests 2024, 15(9), 1597; https://doi.org/10.3390/f15091597 - 11 Sep 2024
Viewed by 570
Abstract
In the ecological context of global climate change, ensuring the stable carbon sequestration capacity of forest ecosystems, which are among the most important components of terrestrial ecosystems, is crucial. Forest fires are disasters that often burn vegetation and damage forest ecosystems. Accurate recognition of firegrounds is essential for analyzing global carbon emissions and carbon flux, as well as for understanding the contribution of climate change to the succession of forest ecosystems. Fireground recognition commonly relies on remote sensing data such as optical imagery, which has difficulty describing the vertical structural damage to post-fire vegetation, whereas airborne LiDAR is incapable of large-scale observation and is costly. Data from the new generation of satellite-based photon-counting lidar ICESat-2/ATLAS (Advanced Topographic Laser Altimeter System) offer the advantages of large-scale observation and low cost. In this study, ATLAS data were used to extract three groups of significant parameters, namely general, canopy, and topographic parameters, to construct a fireground recognition index system built on vertical structure parameters, particularly the essential canopy parameters, using the random forest (RF) and extreme gradient boosting (XGBoost) machine learning classifiers. Furthermore, the accuracy of the method across different spatio-temporal settings and its scalability to widespread use were explored. The results show that the canopy parameters contributed 79% and 69% of the feature importance in the RF and XGBoost classifiers, respectively, which indicates the feasibility of using ICESat-2/ATLAS vertical structure parameters to identify firegrounds. The overall accuracy of the XGBoost classifier was slightly greater than that of the RF classifier according to 10-fold cross-validation, and all evaluation metrics were greater than 0.8 in independent sample tests under different spatial and temporal conditions, implying the potential of ICESat-2/ATLAS for accurate fireground recognition. Through systematic analysis and comparison, this study demonstrates the feasibility of using ATLAS vertical structure parameters to identify firegrounds and provides a novel and effective way to recognize firegrounds from vertical structure information across different spatial and temporal settings. It is also of practical significance for the economical and effective recognition of large-scale firegrounds and provides guidance for forest ecological restoration. Full article
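As a hedged sketch of the classification step described above (synthetic placeholder data; the feature layout, hyperparameters, and labels are our assumptions, not the authors' pipeline), the following Python fragment trains RF and XGBoost classifiers on footprint-level feature vectors and reports the share of feature importance carried by the canopy parameters:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))      # placeholder ATLAS footprint features
y = rng.integers(0, 2, size=1000)    # placeholder labels: 1 = fireground
canopy_idx = list(range(8, 16))      # assumed columns holding the canopy parameters

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
xgb = XGBClassifier(n_estimators=500, random_state=0).fit(X, y)

for name, model in (("RF", rf), ("XGBoost", xgb)):
    imp = model.feature_importances_
    print(f"{name}: canopy share of importance = {imp[canopy_idx].sum() / imp.sum():.0%}")

On the real data, the paper reports canopy shares of 79% (RF) and 69% (XGBoost) for this quantity.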
Show Figures

Figure 1
<p>Locations of the firegrounds.</p>
Full article ">Figure 2
<p>Study flow chart.</p>
Full article ">Figure 3
<p>Map of woodland distribution. (<b>a</b>) Area01 Shangri-La City; (<b>b</b>) Area02 Lijiang Naxi Autonomous County; (<b>c</b>) Area03 Dali City; (<b>d</b>) Area04 Guangnan County; (<b>e</b>) Area05 Ninglang Yi Autonomous County.</p>
Full article ">Figure 4
<p>Schematic diagram of the overlap between the ATLAS footprint and the fire in Shangri-La. (<b>a</b>) ATLAS orbital spot data; (<b>b</b>) ATLAS intersecting with the fireground.</p>
Full article ">Figure 5
<p>Schematic diagram of ATLAS and fireground overlap. (<b>a</b>) Area01 Shangri-La City; (<b>b</b>) Area02 Lijiang Naxi Autonomous County; (<b>c</b>) Area03 Dali City; (<b>d</b>) Area04 Guangnan County; (<b>e</b>) Area05 Ninglang Yi Autonomous County.</p>
Full article ">Figure 6
<p>Contributions of ATLAS parameters in classifiers. (<b>a</b>) RF classifier. (<b>b</b>) XGBoost classifier.</p>
Full article ">Figure 7
<p>Percent contributions of different types of ATLAS parameters. (<b>a</b>) RF classifier. (<b>b</b>) XGBoost classifier.</p>
Full article ">Figure 8
<p>Schematic diagram of ATLAS spot classification: (<b>a</b>) RF classifier and (<b>b</b>) XGBoost classifier.</p>
Full article ">Figure 9
<p>Schematic diagram of different spatio-temporal classifications of ATLAS spots (RF classifier in (<b>a</b>–<b>d</b>), XGBoost classifier in (<b>e</b>–<b>h</b>)).</p>
Full article ">Figure 10
<p>Schematic diagram of the overlap between fireground and NBR at different times and in different spaces. (<b>a</b>) Area01 Shangri-La City; (<b>b</b>) Area02 Lijiang Naxi Autonomous County; (<b>c</b>) Area03 Dali City; (<b>d</b>) Area04 Guangnan County; (<b>e</b>) Area05 Ninglang Yi Autonomous County.</p>
Full article ">
23 pages, 39653 KiB  
Article
Registration of TLS and ULS Point Cloud Data in Natural Forest Based on Similar Distance Search
by Yuncheng Deng, Jinliang Wang, Pinliang Dong, Qianwei Liu, Weifeng Ma, Jianpeng Zhang, Guankun Su and Jie Li
Forests 2024, 15(9), 1569; https://doi.org/10.3390/f15091569 - 6 Sep 2024
Cited by 1 | Viewed by 667
Abstract
Multiplatform fusion point clouds can effectively compensate for the disadvantages of individual platform point clouds in forest parameter extraction, maximizing the potential of LiDAR technology. However, existing registration algorithms often suffer from insufficient feature extraction and limited registration accuracy. To address these issues, we propose a ULS (Unmanned Aerial Vehicle Laser Scanning)-TLS (Terrestrial Laser Scanning) point cloud data registration method based on Similar Distance Search (SDS). This method enhances coarse registration by accurately retrieving points with similar features, leading to high overlap in the coarse registration stage and thereby improving fine registration precision. (1) The proposed method was tested on four natural forest plots (Pinus densata Mast., Pinus yunnanensis Franch., Picea asperata Mast., and Abies fabri (Mast.) Craib) and demonstrated high registration accuracy. Both coarse and fine registration achieved superior results, significantly outperforming existing algorithms, with notable improvements over the TR algorithm. (2) In addition, the study evaluated the accuracy of individual tree parameter extraction from fusion point clouds versus single-platform point clouds. While ULS point clouds performed slightly better in some metrics, the fused point clouds offered more consistent and reliable results across varying conditions. Overall, the proposed SDS method and the resulting fusion point clouds provide strong technical support for efficient and accurate forest resource management, with significant scientific implications. Full article
(This article belongs to the Special Issue LiDAR Remote Sensing for Forestry)
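For orientation, a coarse-to-fine pipeline of the kind described above might look like the following Python sketch. It is a minimal illustration under stated assumptions, not the paper's SDS implementation: the SDS step is reduced to already-matched tree-top correspondences, Open3D's point-to-point ICP stands in for the fine stage, and the 0.5 m correspondence distance is arbitrary:

import numpy as np
import open3d as o3d

def rigid_fit(src, dst):
    # least-squares R, t with R @ src_i + t ~ dst_i (Kabsch/SVD)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, cd - R @ cs
    return T

def register(tls_pts, uls_pts, tls_tops, uls_tops):
    # coarse: rigid fit on matched tree-top points (e.g., retrieved by SDS)
    T0 = rigid_fit(tls_tops, uls_tops)
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(tls_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(uls_pts))
    # fine: point-to-point ICP initialized with the coarse transform
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_correspondence_distance=0.5, init=T0,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation   # maps TLS coordinates into the ULS frame

The coarse fit gives the high initial overlap that fine registration needs; the better the retrieved correspondences, the smaller the residual left for ICP to remove.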
Show Figures

Figure 1
<p>Study area location.</p>
Full article ">Figure 2
<p>ULS point cloud data acquisition system.</p>
Full article ">Figure 3
<p>ULS-TLS point cloud registration flowchart.</p>
Full article ">Figure 4
<p>Tree height (TH) point detection: (<b>a</b>) TLS tree height (<span class="html-italic">TH<sub>TLS</sub></span>) point detection, (<b>b</b>) ULS tree height (<span class="html-italic">TH<sub>ULS</sub></span>) point detection.</p>
Full article ">Figure 5
<p>Comparison of ULS and TLS tree height point positions before and after coarse registration. ULS (red) and TLS (green). (<b>a</b>) Plot 1 before coarse registration, (<b>b</b>) plot 1 after coarse registration, (<b>c</b>) plot 2 before coarse registration, (<b>d</b>) plot 2 after coarse registration, (<b>e</b>) plot 3 before coarse registration, (<b>f</b>) plot 3 after coarse registration, (<b>g</b>) plot 4 before coarse registration, (<b>h</b>) plot 4 after coarse registration.</p>
Full article ">Figure 6
<p>Overall effect of precise registration for plot 1 (<span class="html-italic">Pinus densata</span> Mast.). (<b>a</b>–<b>c</b>) show the ULS and TLS point clouds before registration, and (<b>d</b>–<b>f</b>) show the point clouds after registration. TLS points are shown in magenta, and ULS points are shown in green.</p>
Full article ">Figure 7
<p>Fine registration: local amplification of plot 1 (<span class="html-italic">Pinus densata</span> Mast.). (<b>a</b>) ULS point cloud, (<b>b</b>) TLS point cloud, (<b>c</b>) combined ULS-TLS point cloud after registration.</p>
Full article ">Figure 8
<p>Overall effect of precise registration for plot 2 (<span class="html-italic">Picea asperata</span> Mast.). (<b>a</b>–<b>c</b>) show the ULS and TLS point clouds before registration, and (<b>d</b>–<b>f</b>) show the point clouds after registration. TLS points are shown in magenta, and ULS points are shown in green.</p>
Full article ">Figure 9
<p>Fine registration: local amplification of plot 2 (<span class="html-italic">Picea asperata</span> Mast.). (<b>a</b>) ULS point cloud, (<b>b</b>) TLS point cloud, (<b>c</b>) combined ULS-TLS point cloud after registration.</p>
Full article ">Figure 10
<p>Overall effect of precise registration for plot 3 (<span class="html-italic">Pinus yunnanensis</span> Franch.). (<b>a</b>–<b>c</b>) show the ULS and TLS point clouds before registration, and (<b>d</b>–<b>f</b>) show the point clouds after registration. TLS points are shown in magenta, and ULS points are shown in green.</p>
Full article ">Figure 11
<p>Fine registration: local amplification of plot 3 (<span class="html-italic">Pinus yunnanensis</span> Franch.). (<b>a</b>) ULS point cloud, (<b>b</b>) TLS point cloud, (<b>c</b>) combined ULS-TLS point cloud after registration.</p>
Full article ">Figure 12
<p>Overall effect of precise registration for plot 4 (<span class="html-italic">Abies fabri</span> (Mast.) Craib). (<b>a</b>–<b>c</b>) show the ULS and TLS point clouds before registration, and (<b>d</b>–<b>f</b>) show the point clouds after registration. TLS points are shown in magenta, and ULS points are shown in green.</p>
Full article ">Figure 13
<p>Fine registration: local amplification of plot 4 (<span class="html-italic">Abies fabri</span> (Mast.) Craib). (<b>a</b>) ULS point cloud, (<b>b</b>) TLS point cloud, (<b>c</b>) combined ULS-TLS point cloud after registration.</p>
Full article ">Figure 14
<p>Plot 1 ULS-TLS distance statistics of nearest points, (<b>a</b>) after coarse registration, (<b>b</b>) after fine registration.</p>
Full article ">Figure 15
<p>Plot 2 ULS-TLS distance statistics of nearest points, (<b>a</b>) after coarse registration, (<b>b</b>) after fine registration.</p>
Full article ">Figure 16
<p>Plot 3 ULS-TLS distance statistics of nearest points, (<b>a</b>) after coarse registration, (<b>b</b>) after fine registration.</p>
Full article ">Figure 17
<p>Plot 4 ULS-TLS distance statistics of nearest points, (<b>a</b>) after coarse registration, (<b>b</b>) after fine registration.</p>
Full article ">Figure 18
<p>Comparison of tree height extraction results before and after point cloud fusion.</p>
Full article ">Figure 19
<p>The original point cloud and the registered point cloud in two coordinate systems (same views).</p>
Full article ">
17 pages, 6447 KiB  
Article
LiDAR-Based Snowfall Level Classification for Safe Autonomous Driving in Terrestrial, Maritime, and Aerial Environments
by Ji-il Park, Seunghyeon Jo, Hyung-Tae Seo and Jihyuk Park
Sensors 2024, 24(17), 5587; https://doi.org/10.3390/s24175587 - 28 Aug 2024
Viewed by 890
Abstract
Studies on autonomous driving have started to focus on snowy environments, and work on acquiring data and removing the noise points caused by snowfall in such environments is in progress. However, research to determine the weather information needed to control unmanned platforms by sensing the degree of snowfall in real time has not yet been conducted. Therefore, in this study, we attempted to determine snowfall information for autonomous driving control in snowy weather conditions. To this end, snowfall data were acquired by LiDAR sensors in various snowy areas in South Korea, Sweden, and Denmark. Snow points, extracted using a snow removal filter (the LIOR filter that we previously developed), were newly classified and defined based on the number of extracted snow particles, the actual snowfall total, and the weather forecast at the time. Finally, we developed an algorithm that extracts only snow in real time and then provides snowfall information to an autonomous driving system. This algorithm is expected to promote driving safety by providing actual controllers with real-time weather information. Full article
(This article belongs to the Special Issue Sensors for Intelligent Vehicles and Autonomous Driving)
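To make the real-time reporting step concrete, here is a minimal Python sketch under stated assumptions: the voxel matches the 5 × 5 × 5 m cell used in the paper, but the count breakpoints and level names are illustrative, not the calibrated values derived from the field data:

import numpy as np

LEVELS = ((0, "none"), (50, "light"), (150, "moderate"),
          (400, "heavy"), (800, "extreme"))   # hypothetical breakpoints

def snowfall_level(snow_points, origin=(0.0, 0.0, 0.0), size=5.0):
    # snow_points: (N, 3) snow returns already isolated by the removal filter
    lo = np.asarray(origin)
    inside = np.all((snow_points >= lo) & (snow_points < lo + size), axis=1)
    count = int(inside.sum())
    level = "none"
    for threshold, name in LEVELS:            # LEVELS is sorted ascending
        if count >= threshold:
            level = name
    return count, level                       # e.g., published once per LiDAR frame

The resulting level can then be consumed by the driving controller, for example to lower speed limits or widen following distances as the count rises.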
Show Figures

Figure 1
<p>Voxel size for extracting snow points. Snow particle extraction was performed on a single 5 × 5 × 5 m voxel.</p>
Full article ">Figure 2
<p>Design of the sensor mounting system.</p>
Full article ">Figure 3
<p>Experimental environment for snow data collection (Sweden). The LiDAR sensor was installed to be exposed to the outdoors and acquired both frontal and lateral outdoor data.</p>
Full article ">Figure 4
<p>Sample data visualized through the ROS RViz. The camera image was used to visually check the noise caused by unidentified objects, such as leaves, plastic bags, and other garbage, not snow.</p>
Full article ">Figure 5
<p>Various cases in which an object may exist within a set voxel.</p>
Full article ">Figure 6
<p>Results of extracting snow from four representative datasets obtained in each country. The numbers 1 to 4 at the bottom indicate the order when sorted by the amount of snowfall. The number of snow particles extracted from the data obtained in Denmark was very small compared with the numbers in South Korea and Sweden, so the <span class="html-italic">y</span>-axis range was limited to 100.</p>
Full article ">Figure 7
<p>Extracted snow data in descending order. The 9th data point is from Sweden, and the 10th to 12th data points are from Denmark. Red, blue, and green boxes indicate data acquired from South Korea, Sweden, and Denmark, respectively.</p>
Full article ">Figure 8
<p>Snow analysis results for three groups classified based on weather forecasts.</p>
Full article ">Figure 9
<p>Result of snow point extraction by snowfall forecast.</p>
Full article ">Figure 10
<p>Defining the snowfall level for five groups.</p>
Full article ">Figure 11
<p>The final result of the snowfall level for five groups.</p>
Full article ">