Search Results (5,900)

Search Parameters: Keywords = LIDAR

19 pages, 2861 KiB  
Article
Autonomous Lunar Rover Localization while Fully Scanning a Bounded Obstacle-Rich Workspace
by Jonghoek Kim
Sensors 2024, 24(19), 6400; https://doi.org/10.3390/s24196400 (registering DOI) - 2 Oct 2024
Abstract
This article addresses the scanning path plan strategy of a rover team composed of three rovers, such that the team explores unknown dark outer space environments. This research considers a dark outer space, where a rover needs to turn on its light and camera simultaneously to measure a limited space in front of the rover. The rover team is deployed from a symmetric base station, and the rover team’s mission is to scan a bounded obstacle-rich workspace, such that there exists no remaining detection hole. In the team, only one rover, the hauler, can locate itself utilizing stereo cameras and Inertial Measurement Unit (IMU). Every other rover follows the hauler, while not locating itself. Since Global Navigation Satellite System (GNSS) is not available in outer space, the localization error of the hauler increases as time goes on. For rover’s location estimate fix, one occasionally makes the rover home to the base station, whose shape and global position are known in advance. Once a rover is near the station, it uses its Lidar to measure the relative position of the base station. In this way, the rover fixes its localization error whenever it homes to the base station. In this research, one makes the rover team fully scan a bounded obstacle-rich workspace without detection holes, such that a rover’s localization error is bounded by letting the rover home to the base station occasionally. To the best of our knowledge, this article is novel in addressing the scanning path plan strategy, so that a rover team fully scans a bounded obstacle-rich workspace without detection holes, while fixing the accumulated localization error occasionally. The efficacy of the proposed scanning and localization strategy is demonstrated utilizing MATLAB-based simulations. Full article
(This article belongs to the Special Issue Intelligent Control and Robotic Technologies in Path Planning)
Figures 1–16 (thumbnails omitted): the three-rover team and symmetric base station of the NASA Space Research Challenge, the guidance-sensor and ScanCircle constructions, and MATLAB simulation results for Scenarios 1 and 2 showing hauler paths, obstacle clearance, homing maneuvers, and localization error, with random-maneuver baselines for comparison.
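As a concrete illustration of the homing-based error correction described in the abstract, the sketch below dead-reckons a drifting position estimate and re-anchors it against the base station, whose global position is known, whenever the rover homes. The noise levels, homing interval, and function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

BASE_STATION = np.array([0.0, 0.0])   # known global position of the base station
DRIFT_STD = 0.05                      # assumed per-step dead-reckoning error (distance units)
LIDAR_STD = 0.01                      # assumed LiDAR relative-ranging noise

def dead_reckon(estimate, true_pos, step):
    """Propagate the true pose and the drifting estimate by one commanded step."""
    true_pos = true_pos + step
    estimate = estimate + step + rng.normal(0.0, DRIFT_STD, size=2)  # odometry drift
    return estimate, true_pos

def home_and_fix(true_pos):
    """Near the station, LiDAR measures the station's position relative to the rover;
    subtracting that from the known station position re-anchors the estimate."""
    relative = (BASE_STATION - true_pos) + rng.normal(0.0, LIDAR_STD, size=2)
    return BASE_STATION - relative    # corrected estimate of the rover position

true_pos = np.array([0.0, 0.0])
estimate = np.array([0.0, 0.0])
for k in range(1, 201):
    step = np.array([0.1, 0.05])                     # arbitrary scanning motion
    estimate, true_pos = dead_reckon(estimate, true_pos, step)
    if k % 50 == 0:                                  # occasional homing every Delta steps
        true_pos = BASE_STATION.copy()               # rover drives back to the station
        estimate = home_and_fix(true_pos)
    print(k, np.linalg.norm(estimate - true_pos))    # localization error stays bounded
```

Between homings the estimate drifts like a random walk; each homing resets the error to the LiDAR measurement noise, which is what keeps it bounded.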
12 pages, 4888 KiB  
Article
Compact Partially End-Pumped Innoslab Laser Based on Micro-Cylindrical Lens Array Homogenizer
by Xinhui Sun, Xiaonan Zhao, Jinxin Chen, Yajun Wu, Yibin Fu, Gang Cheng, Xi Chen, Pan Liu, Linhao Shang, Guangqiang Fan, Huihui Gao, Yan Xiang and Tianshu Zhang
Photonics 2024, 11(10), 932; https://doi.org/10.3390/photonics11100932 - 1 Oct 2024
Viewed by 137
Abstract
We demonstrate a compact, partially end-pumping Innoslab laser based on a micro-cylindrical lens array homogenizer. A dimension of 12 × 0.4 mm2 flat-top pumping line with a Gaussian intensity distribution across the line was simulated by the ray tracing technique. The rate equations considering the asymmetric transverse spatial distributions are theoretically developed. The simulation results are in good agreement with the experimental results. Preliminary data shows that for a pump power of 260 W, a maximum pulse energy of 15.7 mJ was obtained with a pulse width of 8.5 ns at a repetition frequency of 1 kHz. The beam quality M2 factors in the unstable and stable directions were 1.732 and 1.485, respectively. The technology has been successfully applied to temperature and humidity profiling lidar and ozone lidar and has been productized, yielding direct economic value. Full article
(This article belongs to the Section Lasers, Light Sources and Sensors)
Figures 1–14 (thumbnails omitted): experimental setup of the partially end-pumped Innoslab laser, diode-stack absorption spectrum, micro-cylindrical lens array homogenizer with the simulated and measured pump-line intensity, slab-crystal model, and output characteristics (pulse energy, pulse width, far-field pattern, and M2 beam quality) versus pump power and output-coupler reflectivity.
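The rate-equation modeling mentioned in the abstract can be illustrated with a textbook point model of a Q-switched giant pulse. This is a much coarser sketch than the paper's rate equations with asymmetric transverse spatial distributions, and every parameter value below is a placeholder rather than a measured quantity.

```python
# Illustrative point-model rate equations for a Q-switched pulse. Placeholder values,
# NOT the transverse-resolved model developed in the paper.
sigma = 2.8e-23          # stimulated-emission cross-section (m^2), Nd:YAG-like
c     = 3.0e8            # speed of light (m/s)
l     = 0.01             # gain-medium length (m)
L_cav = 0.15             # optical cavity length (m)
loss  = 0.05             # assumed round-trip loss
tau_c = 2 * L_cav / (c * loss)   # cavity photon lifetime (s)
n     = 1.0e24           # initial inversion density (m^-3)
phi   = 1.0              # seed photon density
dt    = 1e-11            # integration step (s)

history = []
for step in range(50_000):                     # 500 ns simulation window
    gain = c * sigma * n * (l / L_cav)         # round-trip-averaged gain rate
    dphi = phi * (gain - 1.0 / tau_c)          # photon density growth/decay
    dn   = -c * sigma * phi * n * (l / L_cav)  # inversion depletion (ideal four-level)
    phi += dphi * dt
    n   += dn * dt
    history.append(phi)

peak = max(history)
fwhm_steps = sum(p > peak / 2 for p in history)
print(f"simulated giant pulse FWHM ~ {fwhm_steps * dt * 1e9:.1f} ns")
```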
11 pages, 3579 KiB  
Article
Design and Validation of a Long-Range Streak-Tube Imaging Lidar System with High Ranging Accuracy
by Chaowei Dong, Zhaodong Chen, Zhigang Fan, Xing Wang, Lansong Cao, Pengfei Hao, Zhiwei Dong, Rongwei Fan and Deying Chen
Appl. Sci. 2024, 14(19), 8835; https://doi.org/10.3390/app14198835 - 1 Oct 2024
Viewed by 321
Abstract
The Streak-Tube Imaging Lidar (STIL) has been widely used in high-precision measurement systems due to its ability to capture detailed spatial and temporal information. In this paper, we proposed a ranging measurement method that integrates a Time-to-Digital Converter (TDC) with a streak camera in a remote STIL system. In this method, the TDC accurately measures the trigger pulse time, while the streak camera captures high time-resolution images of the laser echo, thereby enhancing both measurement accuracy and range. A corresponding ranging model is developed for this method. To validate the system’s performance, an outdoor experiment covering a distance of up to 6 km was conducted. The results demonstrate that the system achieved a distance measurement accuracy of 0.1 m, highlighting its effectiveness in long-range applications. The experiment further confirms that the combination of STIL and TDC significantly enhances accuracy and range, making it suitable for various long-range, high-precision measurement tasks. Full article
(This article belongs to the Special Issue Advances of Laser Technologies and Their Applications)
Figures 1–7 (thumbnails omitted): STIL system structure, streak-camera imaging principle, laser spots reflected from targets at 1 km and 6 km, streak images with extracted centroid feature points, TDC trigger-timing measurements, and the range histogram at roughly 5.8 km.
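A minimal sketch of how a coarse TDC trigger interval and the fine time of arrival read off a streak image could be combined into a range estimate, assuming the simple timing chain below; the paper's calibration and exact ranging model are not reproduced here.

```python
C = 299_792_458.0  # speed of light (m/s)

def stil_range(t_optical_trigger_s, t_streak_camera_trigger_s,
               echo_centroid_row, sweep_s_per_row, t_offset_s=0.0):
    """Illustrative range reconstruction: the TDC gives the coarse interval between the
    optical trigger and the streak-camera trigger, and the streak image gives the fine
    delay of the echo (centroid row x sweep rate). The timing chain and calibration
    offset are assumptions, not the paper's model."""
    dt_trigger = t_streak_camera_trigger_s - t_optical_trigger_s  # coarse TDC interval
    dt_streak = echo_centroid_row * sweep_s_per_row               # fine streak-camera delay
    time_of_flight = dt_trigger + dt_streak + t_offset_s
    return 0.5 * C * time_of_flight

# Example with made-up numbers: ~38,786 ns of total flight time maps to ~5.81 km.
print(stil_range(0.0, 38_700e-9, 856, 0.1e-9))
```

With a total time of flight near 38,786 ns this comes out to roughly 5.8 km, the order of the 6 km outdoor experiment.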
12 pages, 1986 KiB  
Article
A Method of Realizing Adaptive Uniform Illumination by Pyramid Prism for PA-LiDAR
by Shuo Zhang, Baiyang Wu, Yong Bi and Weinan Gao
Micromachines 2024, 15(10), 1232; https://doi.org/10.3390/mi15101232 - 30 Sep 2024
Viewed by 355
Abstract
In this paper, we propose a simple method to generate the uniform illumination using a pyramid prism for Plane Array Laser Imaging Detection and Ranging (PA-LiDAR). The principle of the pyramid prism shaping the Gaussian beam to form a uniform beam was analyzed theoretically. By changing the parameters of the pyramid prism and laser beam, the profile distribution of the output beam can be easily adjusted. Based on the operation mode and illumination requirements of PA-LiDAR, we have developed a set of LiDAR prototypes using a pyramid prism and carried out experimental research on these prototypes. The simulation and experimental results demonstrated that this method can achieve a uniform illumination beam with excellent propagation properties for meeting the technical requirements of PA-LiDAR. This method of uniform illumination has the advantages of being simple, flexible, easily adjustable and convenient to operate. Full article
Figures 1–16 (thumbnails omitted): PA-LiDAR working principle, pyramid-prism beam-shaping geometry and simulation model, flat-top beam profiles for different prism angles, incidence angles, and working distances, the LiDAR prototype and detection sensor, the corner/centre uniformity evaluation regions, the standard deviation of distance values before and after shaping, and before/after beam-shaping comparisons in real scenes.
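The prototype evaluation compares illumination in small patches at the centre and the four corners of the field of view. A sketch of such a corner/centre uniformity check is given below, with the min/max ratio used as an assumed stand-in for the paper's evaluation function.

```python
import numpy as np

def corner_center_uniformity(frame, win=20):
    """Illustrative uniformity check: compare the mean intensity in a win x win patch at
    the image centre (O) with patches at the four corners (A, B, C, D). The min/max
    ratio below is an assumed metric, not the paper's exact evaluation function."""
    h, w = frame.shape
    cy, cx = h // 2, w // 2
    patches = {
        "O": frame[cy - win // 2:cy + win // 2, cx - win // 2:cx + win // 2],
        "A": frame[:win, :win],
        "B": frame[:win, -win:],
        "C": frame[-win:, :win],
        "D": frame[-win:, -win:],
    }
    means = {k: float(v.mean()) for k, v in patches.items()}
    uniformity = min(means.values()) / max(means.values())
    return means, uniformity

# Synthetic flat-top frame: uniformity should be close to 1.0.
frame = np.full((240, 320), 100.0) + np.random.default_rng(1).normal(0, 3, (240, 320))
print(corner_center_uniformity(frame)[1])
```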
13 pages, 13393 KiB  
Review
Evolution of Single Photon Lidar: From Satellite Laser Ranging to Airborne Experiments to ICESat-2
by John J. Degnan
Photonics 2024, 11(10), 924; https://doi.org/10.3390/photonics11100924 - 30 Sep 2024
Viewed by 311
Abstract
In September 2018, NASA launched the ICESat-2 satellite into a 500 km high Earth orbit. It carried a truly unique lidar system, i.e., the Advanced Topographic Laser Altimeter System or ATLAS. The ATLAS lidar is capable of detecting single photons reflected from a wide variety of terrain (land, ice, tree leaves, and underlying terrain) and even performing bathymetric measurements due to its green wavelength. The system uses a single 5-watt, Q-switched laser producing a 10 kHz train of sub-nanosecond pulses, each containing 500 microjoules of energy. The beam is then split into three “strong” and three “weak” beamlets, with the “strong” beamlets containing four times the power of the “weak” beamlets in order to satisfy a wide range of Earth science goals. Thus, ATLAS is capable of making up to 60,000 surface measurements per second compared to the 40 measurements per second made by its predecessor multiphoton instrument, the Geoscience Laser Altimeter System (GLAS) on ICESat-1, which was terminated after several years of operation in 2009. Low deadtime timing electronics are combined with highly effective noise filtering algorithms to extract the spatially correlated surface photons from the solar and/or electronic background noise. The present paper describes how the ATLAS system evolved from a series of unique and seemingly unconnected personal experiences of the author in the fields of satellite laser ranging, optical antennas and space communications, Q-switched laser theory, and airborne single photon lidars. Full article
Figures 1–8 (thumbnails omitted): the prototype NASA SLR2000 system, Sigma Space airborne single-photon lidars, the rotating-wedge scanner with annular compensator, noise editing of a Greenland point cloud, lidar images of the Pacific coastline (including bathymetry to 13 m), downtown Houston, and a fire tower, and MABEL push-broom data over the Greenland ice sheet.
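A toy version of the spatial-correlation idea behind single-photon noise filtering: within each along-track segment, surface photons cluster in height while solar and detector noise spreads out, so keeping only the photons near the modal height bin recovers the surface. Segment length, bin size, and the acceptance window below are illustrative, not ATLAS or Sigma parameter values.

```python
import numpy as np

def density_filter(along_track_m, height_m, seg_len=20.0, bin_m=1.0, keep_m=3.0):
    """Keep photons whose height lies near the modal height bin of their along-track
    segment; spatially uncorrelated noise photons mostly fall outside that window."""
    keep = np.zeros(height_m.size, dtype=bool)
    for s in np.arange(along_track_m.min(), along_track_m.max() + seg_len, seg_len):
        idx = np.where((along_track_m >= s) & (along_track_m < s + seg_len))[0]
        if idx.size == 0:
            continue
        bins = np.floor(height_m[idx] / bin_m)
        vals, counts = np.unique(bins, return_counts=True)
        mode_height = vals[counts.argmax()] * bin_m
        keep[idx] = np.abs(height_m[idx] - mode_height) <= keep_m
    return keep

rng = np.random.default_rng(2)
x = rng.uniform(0, 200, 5000)                    # along-track position (m)
noise = rng.uniform(-50, 50, 5000)               # background photons, spread in height
surface = 0.05 * x + rng.normal(0, 0.3, 5000)    # spatially correlated surface returns
z = np.where(rng.random(5000) < 0.3, surface, noise)
print(density_filter(x, z).sum(), "photons kept of", z.size)
```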
20 pages, 8076 KiB  
Article
In-Motion, Non-Contact Detection of Ties and Ballasts on Railroad Tracks
by S. Morteza Mirzaei, Ahmad Radmehr, Carvel Holton and Mehdi Ahmadian
Appl. Sci. 2024, 14(19), 8804; https://doi.org/10.3390/app14198804 - 30 Sep 2024
Viewed by 249
Abstract
This study aims to develop a robust and efficient system to identify ties and ballasts in motion using a variety of non-contact sensors mounted on a robotic rail cart. The sensors include distance LiDAR sensors and inductive proximity sensors for ferrous materials to collect data while traversing railroad tracks. Many existing tie/ballast health monitoring devices cannot be mounted on Hyrail vehicles for in-motion inspection due to their inability to filter out unwanted targets (i.e., ties or ballasts). The system studied here addresses that limitation by exploring several approaches based on distance LiDAR sensors. The first approach is based on calculating the running standard deviation of the measured distance from LiDAR sensors to tie or ballast surfaces. The second approach uses machine learning (ML) methods that combine two primary algorithms (Logistic Regression and Decision Tree) and three preprocessing methods (six models in total). The results indicate that the optimal configuration for non-contact, in-motion differentiation of ties and ballasts is integrating two distance LiDAR sensors with a Decision Tree model. This configuration provides rapid, accurate, and robust tie/ballast differentiation. The study also facilitates further sensor and inspection research and development in railroad track maintenance. Full article
(This article belongs to the Topic Advances in Non-Destructive Testing Methods, 2nd Volume)
Figures 1–19 (thumbnails omitted): the remotely controlled track cart and sensor installation, laboratory and track-test comparisons of LiDAR and inductive proximity measurements, simulated high-speed tests at 19 and 37 mph, the moving-standard-deviation analysis, the six machine-learning model configurations, model accuracies versus window size for the distance, left–right difference, and difference standard-deviation inputs, and the final Decision Tree model's predictions and confusion matrix.
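A sketch of the first approach described in the abstract: the running standard deviation of the measured LiDAR distance is low over flat tie surfaces and high over rough ballast, so a single threshold separates the two. The window size and threshold below are assumptions, not the paper's tuned values.

```python
import numpy as np

def moving_std(x, window):
    """Running standard deviation over a sliding window (edge-padded to keep length)."""
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x, dtype=float)
    for i in range(x.size):
        out[i] = xp[i:i + window].std()
    return out

def classify_tie_ballast(distances_m, window=40, std_threshold_m=0.01):
    """1 = tie (smooth surface, low local std), 0 = ballast (rough surface, high std)."""
    return (moving_std(distances_m, window) < std_threshold_m).astype(int)

rng = np.random.default_rng(3)
tie = 0.50 + rng.normal(0, 0.002, 200)        # smooth tie surface
ballast = 0.55 + rng.normal(0, 0.03, 200)     # rough ballast surface
profile = np.concatenate([ballast, tie, ballast])
labels = classify_tie_ballast(profile)
print(labels[250], labels[50])                 # expect 1 (tie), 0 (ballast)
```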
24 pages, 27095 KiB  
Article
Examining the Impact of Topography and Vegetation on Existing Forest Canopy Height Products from ICESat-2 ATLAS/GEDI Data
by Yisa Li, Dengsheng Lu, Yagang Lu and Guiying Li
Remote Sens. 2024, 16(19), 3650; https://doi.org/10.3390/rs16193650 - 30 Sep 2024
Viewed by 270
Abstract
Forest canopy height (FCH) is an important variable for estimating forest biomass and ecosystem carbon sequestration. Spaceborne LiDAR data have been used to create wall-to-wall FCH maps, such as the forest tree height map of China (FCHChina), Global Forest Canopy Height 2020 (GFCH2020), and Global Forest Canopy Height 2019 (GFCH2019). However, these products lack comprehensive assessment. This study used airborne LiDAR data from various topographies (e.g., plain, hill, and mountain) to assess the impacts of different topographical and vegetation characteristics on spaceborne LiDAR-derived FCH products. The results show that GEDI–FCH demonstrates better accuracy in plain and hill regions, while ICESat-2 ATLAS–FCH shows superior accuracy in the mountainous region. The difficulty in accurately capturing photons from sparse tree canopies by ATLAS and the geolocation errors of GEDI has led to partial underestimations of FCH products in plain areas. Spaceborne LiDAR FCH retrievals are more accurate in hilly regions, with a root mean square error (RMSE) of 4.99 m for ATLAS and 3.85 m for GEDI. GEDI–FCH is significantly affected by slope in mountainous regions, with an RMSE of 13.26 m. For wall-to-wall FCH products, the availability of FCH data is limited in plain areas. Optimal accuracy is achieved in hilly regions by FCHChina, GFCH2020, and GFCH2019, with RMSEs of 5.52 m, 5.07 m, and 4.85 m, respectively. In mountainous regions, the accuracy of wall-to-wall FCH products is influenced by factors such as tree canopy coverage, forest cover types, and slope. However, some of these errors may stem from directly using current ATL08 and GEDI L2A FCH products for mountainous FCH estimation. Introducing accurate digital elevation model (DEM) data can improve FCH retrieval from spaceborne LiDAR to some extent. This research improves our understanding of the existing FCH products and provides valuable insights into methods for more effectively extracting accurate FCH from spaceborne LiDAR data. Further research should focus on developing suitable approaches to enhance the FCH retrieval accuracy from spaceborne LiDAR data and integrating multi-source data and modeling algorithms to produce accurate wall-to-wall FCH distribution in a large area. Full article
(This article belongs to the Special Issue Lidar for Forest Parameters Retrieval)
Figures 1–13 (thumbnails omitted): the three study areas in Anhui Province, profiles of ICESat-2 ATLAS and GEDI data against ALS references across plain, hill, and mountain terrain, FCH comparisons and error statistics (RMSE, rRMSE, bias) by slope and canopy cover, assessments of FCHChina, GFCH2020, and GFCH2019 by slope, forest type, and canopy cover, FCH improvements obtained with ALS DEM data, and residual distributions versus canopy photon number in the plain area.
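The error statistics used throughout the assessment (RMSE, relative RMSE, and bias against the airborne-LiDAR reference) follow their standard definitions; a small helper is sketched below with made-up sample values.

```python
import numpy as np

def accuracy_metrics(predicted, reference):
    """Standard validation statistics: RMSE, relative RMSE as a percentage of the mean
    reference value, and bias (mean of predicted minus reference). These are the usual
    formulas and are assumed to match the study's usage."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diff = predicted - reference
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    rrmse = 100.0 * rmse / float(np.mean(reference))
    bias = float(np.mean(diff))
    return rmse, rrmse, bias

# Illustrative canopy heights (m): a spaceborne product versus an ALS reference.
gedi_fch = [18.2, 22.5, 9.7, 30.1, 25.4]
als_fch  = [20.0, 21.0, 12.5, 28.0, 24.0]
print(accuracy_metrics(gedi_fch, als_fch))
```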
29 pages, 12094 KiB  
Article
Bitemporal Radiative Transfer Modeling Using Bitemporal 3D-Explicit Forest Reconstruction from Terrestrial Laser Scanning
by Chang Liu, Kim Calders, Niall Origo, Louise Terryn, Jennifer Adams, Jean-Philippe Gastellu-Etchegorry, Yingjie Wang, Félicien Meunier, John Armston, Mathias Disney, William Woodgate, Joanne Nightingale and Hans Verbeeck
Remote Sens. 2024, 16(19), 3639; https://doi.org/10.3390/rs16193639 - 29 Sep 2024
Viewed by 367
Abstract
Radiative transfer models (RTMs) are often used to retrieve biophysical parameters from earth observation data. RTMs with multi-temporal and realistic forest representations enable radiative transfer (RT) modeling for real-world dynamic processes. To achieve more realistic RT modeling for dynamic forest processes, this study presents the 3D-explicit reconstruction of a typical temperate deciduous forest in 2015 and 2022. We demonstrate for the first time the potential use of bitemporal 3D-explicit RT modeling from terrestrial laser scanning on the forward modeling and quantitative interpretation of: (1) remote sensing (RS) observations of leaf area index (LAI), fraction of absorbed photosynthetically active radiation (FAPAR), and canopy light extinction, and (2) the impact of canopy gap dynamics on light availability of explicit locations. Results showed that, compared to the 2015 scene, the hemispherical-directional reflectance factor (HDRF) of the 2022 forest scene relatively decreased by 3.8% and the leaf FAPAR relatively increased by 5.4%. At explicit locations where canopy gaps significantly changed between the 2015 scene and the 2022 scene, only under diffuse light did the branch damage and closing gap significantly impact ground light availability. This study provides the first bitemporal RT comparison based on the 3D RT modeling, which uses one of the most realistic bitemporal forest scenes as the structural input. This bitemporal 3D-explicit forest RT modeling allows spatially explicit modeling over time under fully controlled experimental conditions in one of the most realistic virtual environments, thus delivering a powerful tool for studying canopy light regimes as impacted by dynamics in forest structure and developing RS inversion schemes on forest structural changes. Full article
(This article belongs to the Section Forest Remote Sensing)
Figures 1–16 (thumbnails omitted): location and spectral properties of the Wytham Woods plot, canopy gap dynamics and simulated PAR sensor locations, segmented leaf-off TLS point clouds and QSM-based tree reconstructions for 2015 and 2022, the full 1-ha 3D-explicit scenes, vertical profiles of light extinction and absorption, simulated top-of-canopy images and HDRF change maps, and PAR extinction profiles under diffuse and direct light at the four gap-dynamic locations.
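The reported HDRF and FAPAR changes are relative changes between the 2015 and 2022 simulated scenes. The trivial calculation is shown below with illustrative absolute values chosen only to reproduce the quoted percentages; the absolute numbers themselves are not from the paper.

```python
def relative_change(value_2015, value_2022):
    """Relative change between the two simulated scenes, as a percentage of the 2015
    value; this is the ordinary definition and assumed to match the paper's usage."""
    return 100.0 * (value_2022 - value_2015) / value_2015

# Illustrative values only (the paper reports a 3.8% HDRF decrease and a 5.4% FAPAR increase).
hdrf_2015, hdrf_2022 = 0.052, 0.050
fapar_2015, fapar_2022 = 0.870, 0.917
print(f"HDRF change:  {relative_change(hdrf_2015, hdrf_2022):+.1f}%")
print(f"FAPAR change: {relative_change(fapar_2015, fapar_2022):+.1f}%")
```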
26 pages, 13744 KiB  
Article
When-to-Loop: Enhanced Loop Closure for LiDAR SLAM in Urban Environments Based on SCAN CONTEXT
by Xu Xu, Lianwu Guan, Jianhui Zeng, Yunlong Sun, Yanbin Gao and Qiang Li
Micromachines 2024, 15(10), 1212; https://doi.org/10.3390/mi15101212 - 29 Sep 2024
Viewed by 232
Abstract
Global Navigation Satellite Systems (GNSSs) frequently encounter challenges in providing reliable navigation and positioning within urban canyons due to signal obstruction. Micro-Electro-Mechanical System (MEMS) Inertial Measurement Units (IMUs) offers an alternative for autonomous navigation, but they are susceptible to accumulating errors. To mitigate these influences, a LiDAR-based Simultaneous Localization and Mapping (SLAM) system is often employed. However, these systems face challenges in drift and error accumulation over time. This paper presents a novel approach to loop closure detection within LiDAR-based SLAM, focusing on the identification of previously visited locations to correct time-accumulated errors. Specifically, the proposed method leverages the vehicular drivable area and IMU trajectory to identify significant environmental changes in keyframe selection. This approach differs from conventional methods that only rely on distance or time intervals. Furthermore, the proposed method extends the SCAN CONTEXT algorithm. This technique incorporates the overall distribution of point clouds within a region rather than solely relying on maximum height to establish more robust loop closure constraints. Finally, the effectiveness of the proposed method is validated through experiments conducted on the KITTI dataset with an enhanced accuracy of 6%, and the local scenarios exhibit a remarkable improvement in accuracy of 17%, demonstrating improved robustness in loop closure detection for LiDAR-based SLAM. Full article
Figures 1–21 (thumbnails omitted): navigation error divergence over time, the proposed algorithm workflow, urban driving scenarios from the KITTI dataset, ground and obstacle point extraction, SCAN CONTEXT bins and the improved point-cloud encoding, the keyframe-determination flowchart, MEMS gyroscope and accelerometer data, the tightly integrated SINS/LiDAR SLAM architecture, the experimental hardware, and mapping and positioning comparisons against LOAM and SC-LeGO-LOAM on KITTI dataset 00 and two local datasets.
17 pages, 6147 KiB  
Article
Tactile Simultaneous Localization and Mapping Using Low-Cost, Wearable LiDAR
by John LaRocco, Qudsia Tahmina, John Simonis, Taylor Liang and Yiyao Zhang
Hardware 2024, 2(4), 256-272; https://doi.org/10.3390/hardware2040012 - 29 Sep 2024
Viewed by 266
Abstract
Tactile maps are widely recognized as useful tools for mobility training and the rehabilitation of visually impaired individuals. However, current tactile maps lack real-time versatility and are limited by high manufacturing and design costs. In this study, we introduce a device (ClaySight) that enables automatic tactile map generation, together with a model for wearable devices that use low-cost laser imaging, detection, and ranging (LiDAR) to improve the immediate spatial knowledge of visually impaired individuals. Our system uses LiDAR sensors to (1) produce affordable, low-latency tactile maps, (2) function as a day-to-day wayfinding aid, and (3) provide interactivity using a wearable device. The system comprises a dynamic mapping and scanning algorithm and an interactive handheld 3D-printed device that houses the hardware. Our algorithm accommodates user specifications to dynamically interact with objects in the surrounding area and create map models that can be represented with haptic feedback or alternative tactile systems. Using economical components and open-source software, the ClaySight system has significant potential to enhance independence and quality of life for the visually impaired. Full article
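
As a rough illustration of how a single-point LiDAR reading could drive haptic feedback on a wearable of this kind, the sketch below maps a distance measurement to a motor PWM duty cycle with simple smoothing against noisy readings. The distance limits, smoothing factor, and linear ramp are hypothetical choices, not the paper's actual firmware logic.

```python
class HapticMapper:
    """Convert TF-Luna style distance readings (cm) into a haptic motor intensity."""

    def __init__(self, min_cm=20, max_cm=400, alpha=0.3):
        self.min_cm = min_cm      # closer than this -> full vibration
        self.max_cm = max_cm      # farther than this -> motor off
        self.alpha = alpha        # exponential smoothing factor against sensor noise
        self._smoothed = None

    def update(self, distance_cm):
        """Return a PWM duty cycle in [0, 255] for the haptic motor."""
        if self._smoothed is None:
            self._smoothed = distance_cm
        else:
            self._smoothed = (self.alpha * distance_cm
                              + (1 - self.alpha) * self._smoothed)
        d = min(max(self._smoothed, self.min_cm), self.max_cm)
        # Nearer obstacles produce stronger vibration (linear ramp).
        strength = 1.0 - (d - self.min_cm) / (self.max_cm - self.min_cm)
        return int(round(255 * strength))

# Example: an obstacle approaching from 3 m to 0.6 m raises the duty cycle.
mapper = HapticMapper()
for reading in (300, 250, 180, 120, 60):
    print(reading, "cm ->", mapper.update(reading))
```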
Show Figures

Figure 1: Flowchart for ClaySight workflow with LiDAR scanning and generating haptic feedback.
Figure 2: PCB layout for ClaySight, including labeled component slots. The detailed schematic is provided in the Supplementary Files of the paper.
Figure 3: Data-logging function flowchart.
Figure 4: ClaySight wiring diagram and hardware schematic. Lines denote wired connections between components. The LiDAR and MCU board control the driver that rotates the LiDAR sensor, as well as converting LiDAR output to directly control 4–8 haptic motors. (The detailed schematic is provided in the Supplementary Files of the paper.)
Figure 5: Positions for mounting electronic assemblies on the ClaySight PCB. (The detailed schematic is provided in the Supplementary Files of the paper.)
Figure 6: Soldered PCB with 4 motors mounted and TF-Luna LiDAR attached.
Figure 7: Mounting and assembly for casing, gears, and TF-Luna LiDAR.
Figure 8: Workflow to configure the data logger.
Figure 9: Prototype of the LiDAR sensor-based navigation system.
Figure 10: Wearing the ClaySight device on the wrist.
Figure 11: Mounting ClaySight unit on a tripod for calibration.
Figure 12: Testing the ClaySight device by angling it against a stairwell, with the laser modeled as a red line.
Figure 13: Motor PWM resulting from distance measurements under control and noisy conditions.
28 pages, 2513 KiB  
Article
ROS Gateway: Enhancing ROS Availability across Multiple Network Environments
by Byoung-Youl Song and Hoon Choi
Sensors 2024, 24(19), 6297; https://doi.org/10.3390/s24196297 - 29 Sep 2024
Viewed by 253
Abstract
As the adoption of large-scale model-based AI grows, the field of robotics is undergoing significant changes. The emergence of cloud robotics, where advanced tasks are offloaded to fog or cloud servers, is gaining attention. However, the widely used Robot Operating System (ROS) does not support communication between robot software across different networks. This paper introduces ROS Gateway, a middleware designed to improve the usability and extend the communication range of ROS in multi-network environments, which is important for processing sensor data in cloud robotics. We detail its structure, protocols, and algorithms, highlighting improvements over traditional ROS configurations. The ROS Gateway efficiently handles high-volume data from advanced sensors such as depth cameras and LiDAR, ensuring reliable transmission. Based on the rosbridge protocol and implemented in Python 3, ROS Gateway is compatible with rosbridge-based tools and runs on both x86 and ARM-based Linux environments. Our experiments show that the ROS Gateway significantly improves performance metrics such as topic rate and delay compared to standard ROS setups. We also provide predictive formulas for topic receive rates to guide the design and deployment of robotic applications using ROS Gateway, supporting performance estimation and system optimization. These enhancements are essential for developing responsive and intelligent robotic systems in dynamic environments. Full article
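
Because ROS Gateway is described as rosbridge-protocol-compatible, a client can talk to it with ordinary rosbridge JSON operations over a WebSocket. The snippet below is a minimal sketch of that interaction, assuming the gateway exposes a rosbridge-style endpoint on ws://localhost:9090; the host, port, and topic names are placeholders, and the gateway's own configuration options are not shown.

```python
import json
from websocket import create_connection  # pip install websocket-client

GATEWAY_URL = "ws://localhost:9090"       # assumed gateway endpoint (placeholder)

ws = create_connection(GATEWAY_URL)

# Standard rosbridge "subscribe" operation for a LiDAR point cloud topic.
subscribe_msg = {
    "op": "subscribe",
    "topic": "/points_raw",               # placeholder topic name
    "type": "sensor_msgs/PointCloud2",
    "throttle_rate": 100,                 # optional: minimum ms between messages
}
ws.send(json.dumps(subscribe_msg))

# Publishing works the same way: advertise once, then publish JSON messages.
ws.send(json.dumps({"op": "advertise", "topic": "/cmd_vel",
                    "type": "geometry_msgs/Twist"}))
ws.send(json.dumps({"op": "publish", "topic": "/cmd_vel",
                    "msg": {"linear": {"x": 0.2, "y": 0.0, "z": 0.0},
                            "angular": {"x": 0.0, "y": 0.0, "z": 0.1}}}))

# Receive a few incoming topic messages forwarded by the gateway.
for _ in range(3):
    incoming = json.loads(ws.recv())
    print(incoming.get("topic"), "->", list(incoming.get("msg", {}).keys()))

ws.close()
```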
(This article belongs to the Section Sensors and Robotics)
Show Figures

Figure 1: Example of a ROS-based robotic application in cloud or fog configuration. Subnet α is a subnet to which multiple hosts that constitute cloud or fog computing are connected. Subnet β is a subnet to which robots are connected, or a local host network within the robot. Subnet γ is a subnet to which a control system consisting of multiple hosts in a remote location is connected, though not a subnet in which robots are included.
Figure 2: The Gateway architecture. * The Client Worker is activated when a configuration that enables connectivity to a gateway in a different network is applied. ** Server Workers are initially created with a single process for receiving commands from external sources. Subsequently, a new process is assigned based on the client that establishes a connection.
Figure 3: Comparison of ROS topic transmission rates for different configurations: the use of ROS on a single device (ros2-localhost) and local subnet (ros2-subnet), the use of the Gateway on separate networks (Gateway-pub-json, Gateway-sub-json, Gateway-sub-raw), and the use of rosbridge on separate networks (rosbridge-pub-json, rosbridge-sub-json, rosbridge-sub-raw). The best performance for each configuration is plotted, with error bars representing the worst performance. Higher values indicate superior performance.
Figure 4: Comparison of ROS topic transmission delay for different configurations: the use of ROS on a single device (ros2-localhost) and local subnet (ros2-subnet), the use of the Gateway on separate networks (Gateway-pub-json, Gateway-sub-json, Gateway-sub-raw), and the use of rosbridge on separate networks (rosbridge-pub-json, rosbridge-sub-json, rosbridge-sub-raw). A log scale is used to illustrate the delay values. The minimum delay for each configuration is plotted, and the error bars represent the maximum delay. Lower values indicate better performance. The horizontal dashed lines represent the real-time limit delay at each occurrence rate.
Figure 5: Observed topic rates by Gateway configurations and sensor publishing rates: the use of ROS on a single device (ros2-localhost) and local subnet (ros2-subnet), as well as the use of the Gateway on each option. The best performance for each configuration is plotted, with error bars representing the worst performance. Higher values indicate superior performance.
Figure 6: Observed topic delays by Gateway configurations and sensor publishing rates: the use of ROS on a single device (ros2-localhost) and local subnet (ros2-subnet), as well as the use of the Gateway on each option. A log scale is used to illustrate the delay values. The minimum delay for each configuration is plotted, and the error bars represent the maximum delay. Lower values indicate better performance. The horizontal dashed lines represent the real-time limit delay at each occurrence rate.
Figure A1: Topic task sequence diagram.
Figure A2: Service task sequence diagram.
Figure A3: Action task sequence diagram.
13 pages, 4821 KiB  
Article
Marking-Based Perpendicular Parking Slot Detection Algorithm Using LiDAR Sensors
by Jing Gong, Amod Raut, Marcel Pelzer and Felix Huening
Vehicles 2024, 6(4), 1717-1729; https://doi.org/10.3390/vehicles6040083 - 29 Sep 2024
Viewed by 246
Abstract
The emergence of automotive-grade LiDARs has given rise to potential new methods for developing advanced driver assistance systems (ADAS). However, accurate and reliable parking slot detection (PSD) remains a challenge, especially in the low-light conditions typical of indoor car parks. Existing camera-based approaches struggle with these conditions and require sensor fusion to determine parking slot occupancy. This paper proposes a PSD algorithm that utilizes the intensity of a LiDAR point cloud to detect the markings of perpendicular parking slots. LiDAR-based approaches offer robustness in low-light environments and can directly determine occupancy status using 3D information. The proposed PSD algorithm first segments the ground plane from the LiDAR point cloud and detects the main axis along the driving direction using a random sample consensus (RANSAC) algorithm. The remaining ground point cloud is filtered by a dynamic Otsu's threshold, and the markings of parking slots are detected separately in multiple windows along the driving direction. Hypotheses of parking slots are generated between the markings and cross-checked with the non-ground point cloud to determine the occupancy status. Test results showed that the proposed algorithm is robust in detecting perpendicular parking slots in well-marked car parks with high precision, low width error, and low variance. The algorithm is designed so that future adoption for parallel parking slots and combination with free-space-based detection approaches is possible. This solution addresses the limitations of camera-based systems and enhances PSD accuracy and reliability in challenging lighting conditions. Full article
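
To make the intensity-thresholding and RANSAC steps concrete, the sketch below applies an Otsu threshold to ground-point intensities to isolate bright paint markings and then fits a dominant line (a stand-in for the driving-direction main axis) with a small RANSAC loop. The synthetic data, histogram binning, and RANSAC parameters are illustrative; the paper's dynamic, window-wise variant is more involved.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Classic Otsu threshold on a 1-D array of intensity values."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:i] * centers[:i]).sum() / w0
        mu1 = (hist[i:] * centers[i:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
        if between > best_var:
            best_var, best_t = between, centers[i]
    return best_t

def ransac_line(xy, iters=200, tol=0.05, seed=0):
    """Fit a 2-D line to points (N, 2); returns (point, direction) and inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(xy), dtype=bool)
    best_model = (xy[0], np.array([1.0, 0.0]))
    for _ in range(iters):
        a, b = xy[rng.choice(len(xy), 2, replace=False)]
        d = b - a
        n = np.linalg.norm(d)
        if n < 1e-6:
            continue
        d = d / n
        # Perpendicular distance of every point to the candidate line.
        dist = np.abs((xy[:, 0] - a[0]) * d[1] - (xy[:, 1] - a[1]) * d[0])
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (a, d)
    return best_model, best_inliers

# Synthetic ground points: dull asphalt plus two bright painted lines.
rng = np.random.default_rng(1)
ground = rng.uniform(0, 30, size=(5000, 2))                       # x, y in metres
intensity = rng.normal(20, 5, size=5000)                          # asphalt return
marking = (np.abs(ground[:, 1] - 10.0) < 0.15) | (np.abs(ground[:, 1] - 15.0) < 0.15)
intensity[marking] += 60                                          # paint is brighter

t = otsu_threshold(intensity)
bright = ground[intensity > t]                                    # candidate markings
(model_pt, model_dir), inliers = ransac_line(bright)
print(f"threshold={t:.1f}, marking direction={model_dir}")
```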
Show Figures

Figure 1: Processing pipeline of proposed PSD algorithm. Top: High-level processing pipeline for both side-facing LiDARs. Bottom: Detection pipeline for a single LiDAR.
Figure 2: Input point cloud and ROI.
Figure 3: Preprocessed point cloud.
Figure 4: Intensity-filtered ground point cloud.
Figure 5: Detected parking slots.
Figure 6: Test vehicle JUPITER and side LiDAR.
Figure 7: Test section of the car park.
Figure 8: Shadowing effect due to the parked vehicles.
23 pages, 7080 KiB  
Article
Multitemporal Quantification of the Geomorphodynamics on a Slope within the Cratére Dolomieu—At the Piton de la Fournaise (La Réunion, Indian Ocean) Using Terrestrial LiDAR Data, Terrestrial Photographs, and Webcam Data
by Kerstin Wegner, Virginie Durand, Nicolas Villeneuve, Anne Mangeney, Philippe Kowalski, Aline Peltier, Manuel Stark, Michael Becht and Florian Haas
Geosciences 2024, 14(10), 259; https://doi.org/10.3390/geosciences14100259 - 28 Sep 2024
Viewed by 215
Abstract
In this study, the geomorphological evolution of an inner flank of the Dolomieu at Piton de La Fournaise/La Réunion was investigated with the help of terrestrial laser scanning (TLS) data, terrestrial photogrammetric images, and historical webcam photographs. While the TLS data and the terrestrial images were recorded during three field surveys, the study was also able to use historical images from webcams that had been installed to monitor the volcanic activity inside the crater. Although the webcams were originally intended only for visual monitoring of the area, at certain times they captured image pairs that could be analyzed using structure from motion (SfM) and subsequently processed into digital terrain models (DTMs). With the help of all the data, the geomorphological evolution of selected areas of the crater was investigated at high temporal and spatial resolution. Surface changes were detected and quantified on scree slopes in the upper area of the crater as well as on scree slopes at the transition from the slope to the crater floor. In addition to their quantification, these changes could be assigned to individual geomorphological processes over time. The webcam photographs were a very important additional source of information here, as they allowed the observation period to be extended further into the past. Besides this, the webcam images made it possible to determine the exact dates at which geomorphological processes were active. Full article
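
The surface-change quantification in this kind of study typically reduces to differencing two co-registered digital terrain models (a DTM of difference, DoD) and converting significant positive and negative cells into deposited and eroded volumes. The snippet below is a minimal sketch of that arithmetic, assuming two equally gridded elevation rasters as NumPy arrays and a uniform cell size; the level-of-detection threshold is a placeholder, not a value from the paper.

```python
import numpy as np

def dod_volumes(dtm_old, dtm_new, cell_size_m=0.1, lod_m=0.05):
    """Difference two co-registered DTM grids and report volume change.

    dtm_old, dtm_new: 2-D arrays of elevations (same shape, same grid).
    cell_size_m:      ground resolution of one cell edge in metres.
    lod_m:            level of detection; smaller changes are treated as noise.
    """
    dod = dtm_new - dtm_old                        # elevation change per cell
    significant = np.abs(dod) >= lod_m             # ignore sub-noise changes
    cell_area = cell_size_m ** 2

    deposition = dod[significant & (dod > 0)].sum() * cell_area   # m^3 gained
    erosion = -dod[significant & (dod < 0)].sum() * cell_area     # m^3 lost
    return dod, deposition, erosion

# Tiny synthetic example: a debris cone grows by up to 20 cm in one corner.
old = np.zeros((100, 100))
new = old.copy()
new[70:, 70:] += np.linspace(0.0, 0.2, 30)[None, :]
_, dep, ero = dod_volumes(old, new)
print(f"deposited volume: {dep:.2f} m^3, eroded volume: {ero:.2f} m^3")
```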
(This article belongs to the Section Natural Hazards)
Show Figures

Figure 1: Location of the study area Cratére Dolomieu of the PLF. (Source of the overview base map: ASTER DEM. Source of the overview map of the Cratére Dolomieu is a 1 m DEM based on terrestrial laser scanning data acquired in 2014.)
Figure 2: Riegl VZ4000 laser scanner located on the crater rim and dGNSS measurement of tie points (Riegl reflector) (own photographs captured during fieldwork in 2014).
Figure 3: The entire workflow for processing TLS data, digital terrestrial, and webcam photographs. The particular processing steps are demonstrated for each relevant software (RiSCAN PRO (Version 2.4), Agisoft Metashape Pro (Version 1.5.5), Laserdata SAGA LIS (Version 3.0.7, 3.1.0)).
Figure 4: Investigated slope and stable areas for ICP adjustment. The location of the AoI can be seen in Figure 1 on the overview map of the crater.
Figure 5: Different examples of photographs that were not usable for further SfM processing due to insufficient quality. (A) Investigated slope was either completely or partially in clouds. (B) Camera lens was fogged. (C) Contamination on the camera lens. (D) Light reflections lead to a poor contrast. (E) Existing ground fog does not allow data processing. (F) Strong shadows, especially during summer in the southern hemisphere, led to contrast differences. (G) Existing fog in the crater. (H) Volcanic eruption occurred on 4 January 2010; moving lava prevented use of this image pair.
Figure 6: Mapped areas with visible surface changes within the different time steps between 2010 and 2016 that lie inside the derivable DTM. Both highlighted profile lines (grey, red) for the years 2010 and 2016 are analyzed in Figure 11.
Figure 7: Derived surface changes (digital terrain models of difference: DoDs) for the two rockfall hotspots 1 and 2 between 2010 and 2016 (shaded relief in the background is derived from the 2016 DTM). Also shown are the positive surface changes [cm] and the accumulated volume [m³] of the two areas for the corresponding periods.
Figure 8: Derived surface changes (DoDs) on two selected debris cones between 2010 and 2016 (shaded relief in the background is derived from the 2016 DTM). Also shown are the positive surface changes [cm] and the accumulated volume [m³] of the two areas for the corresponding periods.
Figure 9: Clearly visible linear patterns on the debris zone II.
Figure 10: The white arrows show visually detectable surface changes (DoDs) in rock zone II between 13 June 2011 and 19 June 2011.
Figure 11: (A) The two lines show the slope development as a swath profile of debris zone II between 2010 and 2016. The location of the profile lines can be found in Figure 6. (B) Statistical range of the slope inclination for the years 2010 until 2016, showing a flattening of approximately 1°.
16 pages, 13027 KiB  
Article
A Real-Time Global Re-Localization Framework for a 3D LiDAR-Based Navigation System
by Ziqi Chai, Chao Liu and Zhenhua Xiong
Sensors 2024, 24(19), 6288; https://doi.org/10.3390/s24196288 - 28 Sep 2024
Viewed by 393
Abstract
Place recognition is widely used to re-localize robots in pre-built point cloud maps for navigation. However, current place recognition methods can only recognize previously visited places. Moreover, these methods require that the same types of sensors be used during re-localization, and the matching process is time consuming. In this paper, a template-matching-based global re-localization framework is proposed to address these challenges. The proposed framework includes an offline building stage and an online matching stage. In the offline stage, virtual LiDAR scans are densely resampled in the map and rotation-invariant descriptors are extracted as templates. These templates are hierarchically clustered to build a template library. The map used to collect virtual LiDAR scans can be built either by the robot itself beforehand or by other heterogeneous sensors, so an important feature of the proposed framework is that it can be used in environments that the robot has never visited before. In the online stage, a cascade coarse-to-fine template matching method is proposed for efficient matching, considering both computational efficiency and accuracy. In a simulation with 100 K templates, the proposed framework achieves a 99% success rate and a matching speed of around 11 Hz when the re-localization error threshold is 1.0 m. In validation on The Newer College Dataset with 40 K templates, it achieves a 94.67% success rate and a matching speed of around 7 Hz at the same 1.0 m threshold. All the results show that the proposed framework has high accuracy, excellent efficiency, and the capability to achieve global re-localization in heterogeneous maps. Full article
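
The online stage described above is essentially a cascade lookup: compare the query descriptor against cluster representatives first, then search only inside the closest clusters. The sketch below shows that coarse-to-fine pattern with flattened descriptors and Euclidean distance, assuming the template library is already built; the framework's actual descriptors, similarity measure, and LSH/k-d-tree indexing are richer than this.

```python
import numpy as np

class TemplateLibrary:
    """Coarse-to-fine matching over a library of flattened scan descriptors."""

    def __init__(self, descriptors, poses, num_clusters=50, seed=0):
        rng = np.random.default_rng(seed)
        self.descriptors = descriptors            # (N, D) template descriptors
        self.poses = poses                        # (N, 3) x, y, yaw of each template
        # Coarse level: random representatives and nearest-representative assignment.
        idx = rng.choice(len(descriptors), num_clusters, replace=False)
        self.representatives = descriptors[idx]
        # Squared distances via the expansion trick to avoid a huge broadcast array.
        d2 = (np.sum(descriptors ** 2, axis=1, keepdims=True)
              - 2.0 * descriptors @ self.representatives.T
              + np.sum(self.representatives ** 2, axis=1))
        self.assignment = d2.argmin(axis=1)       # cluster id of every template

    def match(self, query, top_clusters=3):
        """Return the pose of the best-matching template for a query descriptor."""
        # Coarse stage: rank clusters by the distance of their representative.
        coarse = np.linalg.norm(self.representatives - query, axis=1)
        candidate_clusters = coarse.argsort()[:top_clusters]
        # Fine stage: exhaustive search only inside the selected clusters.
        candidates = np.flatnonzero(np.isin(self.assignment, candidate_clusters))
        fine = np.linalg.norm(self.descriptors[candidates] - query, axis=1)
        best = candidates[fine.argmin()]
        return self.poses[best], fine.min()

# Synthetic example: 5000 random templates, query perturbed from template 1234.
rng = np.random.default_rng(1)
descs = rng.normal(size=(5000, 1200))             # e.g. 20 x 60 descriptors, flattened
poses = rng.uniform(-100, 100, size=(5000, 3))
library = TemplateLibrary(descs, poses)
query = descs[1234] + rng.normal(scale=0.05, size=1200)
pose, score = library.match(query)
print("estimated pose:", pose, "distance:", round(float(score), 3))
```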
Show Figures

Figure 1: The proposed global re-localization framework.
Figure 2: Resampling in Gazebo using mesh model. (Left) AGV in Gazebo with mesh model, collecting point cloud data. (Right) Collected point cloud data.
Figure 3: Extracted PCASC global descriptor (20 rows × 60 columns) from point cloud data in Figure 2.
Figure 4: The nearest-neighbor search engine building process.
Figure 5: The online template matching procedure.
Figure 6: Different clustering principles while merging clusters using 10 K samples. Clusters are identified from each other by color, where each dot represents a real sample, and representative templates for each cluster are plotted with black dots.
Figure 7: Accuracy comparison between different numbers of candidates on simulated data.
Figure 8: Efficiency comparison between different numbers of candidates on simulated data.
Figure 9: The NCD dataset used for validation. (a) Top view of the test environment. Each test sample is plotted with a red dot at the location in the environment where it was collected. (b) Distribution of test samples on the X-Y plane.
Figure 10: Accuracy comparison between different numbers of candidates on real data.
Figure 11: Efficiency comparison between different numbers of candidates on real data.
Figure 12: Match result distribution between distance and similarity. (a) Exhaustive match result distribution. (b) LSH-KDT match result distribution.
Figure 13: The change in candidate searching time with the increase in the number of representative templates for different K values.
18 pages, 3141 KiB  
Article
Genetic Algorithm Empowering Unsupervised Learning for Optimizing Building Segmentation from Light Detection and Ranging Point Clouds
by Muhammad Sulaiman, Mina Farmanbar, Ahmed Nabil Belbachir and Chunming Rong
Remote Sens. 2024, 16(19), 3603; https://doi.org/10.3390/rs16193603 - 27 Sep 2024
Viewed by 513
Abstract
This study investigates the application of LiDAR point cloud datasets for building segmentation through a combined approach that integrates unsupervised segmentation with evolutionary optimization. The research evaluates the extent of improvement achievable through genetic algorithm (GA) optimization for LiDAR point cloud segmentation. The unsupervised methodology encompasses preprocessing, adaptive thresholding, morphological operations, contour filtering, and terrain ruggedness analysis. A genetic algorithm was employed to fine-tune the parameters for these techniques. Critical tunable parameters, such as the interpolation method for DSM and DTM generation, scale factor for contrast enhancement, adaptive constant and block size for adaptive thresholding, kernel size for morphological operations, squareness threshold to maintain the shape of predicted objects, and terrain ruggedness index (TRI) were systematically optimized. The study presents the top ten chromosomes with optimal parameter values, demonstrating substantial improvements of 29% in the average intersection over union (IoU) score (0.775) on test datasets. These findings offer valuable insights into LiDAR-based building segmentation, highlighting the potential for increased precision and effectiveness in future applications. Full article
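
To illustrate the optimization loop the abstract describes, the sketch below evolves a small parameter chromosome (block size, adaptive constant, kernel size, squareness threshold, TRI cutoff) against a user-supplied IoU fitness function. The parameter ranges, population size, and the dummy fitness used in the example are placeholders; the study's actual fitness runs the full segmentation pipeline against labeled masks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tunable parameters and search ranges (not the paper's values).
PARAM_RANGES = {
    "block_size": (11, 101),        # adaptive-threshold neighbourhood size
    "adaptive_c": (-10.0, 10.0),    # constant subtracted in adaptive thresholding
    "kernel_size": (3, 15),         # morphological kernel size
    "squareness": (0.3, 0.95),      # contour squareness threshold
    "tri_cutoff": (0.1, 2.0),       # terrain ruggedness index cutoff
}

def random_chromosome():
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def crossover(p1, p2):
    keys = list(PARAM_RANGES)
    cut = rng.integers(1, len(keys))               # single-point crossover
    return {k: (p1 if i < cut else p2)[k] for i, k in enumerate(keys)}

def mutate(chrom, rate=0.2):
    out = dict(chrom)
    for k, (lo, hi) in PARAM_RANGES.items():
        if rng.random() < rate:                    # re-sample a gene at random
            out[k] = rng.uniform(lo, hi)
    return out

def evolve(fitness, pop_size=20, generations=30, elite=4):
    population = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:elite]                   # keep the best chromosomes
        children = [
            mutate(crossover(parents[rng.integers(len(parents))],
                             parents[rng.integers(len(parents))]))
            for _ in range(pop_size - elite)
        ]
        population = parents + children
    return max(population, key=fitness)

# Dummy fitness: pretend IoU peaks at one particular parameter combination.
target = {"block_size": 51, "adaptive_c": 2.0, "kernel_size": 5,
          "squareness": 0.8, "tri_cutoff": 0.6}
def dummy_iou(chrom):
    err = sum(((chrom[k] - target[k]) / (hi - lo)) ** 2
              for k, (lo, hi) in PARAM_RANGES.items())
    return 1.0 / (1.0 + err)

best = evolve(dummy_iou)
print({k: round(v, 2) for k, v in best.items()})
```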
(This article belongs to the Section AI Remote Sensing)
Show Figures

Graphical abstract

Figure 1: Fitness function workflow diagram. The text color of functions (left) and their subsequent parameters (right) are in the same color for identification. Tunable parameters are highlighted with a yellow background and serve as the chromosome in the GA.
Figure 2: TRI explanation: each DSM has a red and a green contour. (a) DSM before applying Equation (2). Blue serves as a 3 × 3 kernel for contour filtering for yellow 1, as shown in example (a). (b) DSM after contour filtering. The red contour's average value, which is the TRI, is greater (not building) than the green contour's (building).
Figure 3: Flow diagram of the genetic algorithm. The red arrow denotes the generation loop. The algorithm terminates if the fitness score is the same in 10 consecutive generations.
Figure 4: Population (green), chromosomes (blue), and genes (red) are indicated with different colors for clarity.
Figure 5: Crossover of two parents. P1 and P2 are two parents, the red straight line determines the cut on the parents, and C1 and C2 are the two offspring produced through the crossover.
Figure 6: Mutation of an individual in which the third gene is randomly selected and altered for diversity.
Figure 7: Dataset 11th Illinois result; first row shows results from the preprocessing step and second row shows results from segmentation.
Figure 8: Genetic algorithm (GA) convergence graph: the Y-axis represents the fitness score (IoU) of the top chromosome, while the X-axis indicates the generation number at which the top fitness score was achieved.