
 
 

LiDAR Sensor Hardware, Algorithm Development and Its Application

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Radar Sensors".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 30642

Special Issue Editors


Prof. Dr. Dong Liu
Guest Editor
Anhui Institute of Optics and Fine Mechanics, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
Interests: lidar system and algorithm development; multi-sensor remote sensing; airborne and satellite lidar remote sensing

Prof. Dr. Zhizhong Kang
Guest Editor
Department of Remote Sensing and Geo-Information Engineering, School of Land Science and Technology, China University of Geosciences in Beijing, Xueyuan Road 29, Haidian District, Beijing 100083, China
Interests: digital photogrammetry and computer vision; processing of indoor, terrestrial and airborne LiDAR data; indoor 3D modeling

Special Issue Information

Dear Colleagues,

Lidar is an acronym for light detection and ranging, a technique that originated in atmospheric monitoring before the invention of the laser. The laser, invented in 1960, brought four unique characteristics (coherence, directionality, monochromaticity, and high intensity) that greatly enhanced the capability of lidar remote sensing in atmospheric physics and chemistry research. Lidar can also measure distance and can be carried on airborne and spaceborne platforms to study surface elevations and to perform bathymetric mapping. Its applications subsequently expanded to vegetation classification, land cover and land use mapping, terrestrial scanning, architectural scanning, and micro-topography. Recently, driven by the rapidly growing demands of autonomous driving, lidar has come to be regarded as the most effective sensor for detecting vehicles, pedestrians, and other relevant entities, and is now manufactured for commercial vehicles. This Special Issue invites contributions on cutting-edge lidar technology developments, including hardware design and production, robust algorithms for complex scenarios, and work that demonstrates the advantages of lidar sensors in various application fields.

Potential topics include, but are not limited to:

  • Lidar system and algorithm development for atmospheric monitoring in environmental and climate studies;
  • Lidar mapping and applications in elevation measurement, ecological and land classification, precision forestry and agriculture, natural disaster and civil facility monitoring, micro-topography, etc.;
  • Lidar for autonomous driving, including algorithms for object detection, identification, and 3D modeling;
  • New technologies and devices for lidar sensors.

Prof. Dr. Dong Liu
Prof. Dr. Zhizhong Kang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • atmosphere monitoring
  • surveying and mapping
  • automatic driving
  • SLAM
  • MEMS
  • sensor calibration
  • multi-sensor fusion
  • data quality control/evaluation
  • optical phased array
  • 3D modeling
  • ecological and land classification
  • precision forestry and agriculture
  • natural disaster monitoring
  • civil facility safety monitoring
  • object detection and identification

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)


Research

11 pages, 4019 KiB  
Article
Original and Low-Cost ADS-B System to Fulfill Air Traffic Safety Obligations during High Power LIDAR Operation
by Frédéric Peyrin, Patrick Fréville, Nadège Montoux and Jean-Luc Baray
Sensors 2023, 23(6), 2899; https://doi.org/10.3390/s23062899 - 7 Mar 2023
Cited by 3 | Viewed by 2044
Abstract
LIDAR is an atmospheric sounding instrument based on the use of high-power lasers. The use of these lasers involves fulfilling obligations with respect to air safety. In this article, we present a low-cost air traffic surveillance solution integrated into an automated operating system for the Rayleigh-Mie-Raman LIDAR of Clermont Ferrand and the statistical elements of its application over more than two years of operation from September 2019 to March 2022. Air traffic surveillance that includes the possibility of shutting off lasers is required by international regulations because LIDAR is equipped with a class four laser that presents potential dangers to aircraft flying overhead. The original system presented in this article is based on software-defined radio. ADS-B transponder frames are analyzed in real-time, and laser emission is stopped during LIDAR operation when an aircraft is detected within a 2 km radius around the LIDAR. The system was accredited in 2019 by the French air traffic authorities. Laser shutdowns due to the detection of aircraft near the Clermont Ferrand LIDAR caused a data loss rate of less than 2% during the period of application.
(This article belongs to the Special Issue LiDAR Sensor Hardware, Algorithm Development and Its Application)
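
The shutdown logic described in this abstract reduces to a distance test on decoded ADS-B positions. The Python sketch below is an illustration only, not the authors' accredited system; the site coordinates and the stop_laser/resume_laser hooks are hypothetical, while the 2 km radius comes from the abstract.

    # Minimal sketch of ADS-B proximity gating (illustration, not the paper's code).
    import math

    LIDAR_LAT, LIDAR_LON = 45.761, 3.111   # assumed site coordinates (Clermont-Ferrand area)
    SHUTOFF_RADIUS_KM = 2.0                # radius taken from the abstract

    def ground_distance_km(lat1, lon1, lat2, lon2):
        """Haversine great-circle ground distance in kilometres."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * 6371.0 * math.asin(math.sqrt(a))

    def gate_laser(adsb_frames, stop_laser, resume_laser):
        """Stop laser emission while any decoded aircraft is inside the radius."""
        inside = any(
            ground_distance_km(LIDAR_LAT, LIDAR_LON, f["lat"], f["lon"]) <= SHUTOFF_RADIUS_KM
            for f in adsb_frames
        )
        (stop_laser if inside else resume_laser)()

    # Example: one aircraft about 1.3 km away triggers a shutdown.
    gate_laser([{"lat": 45.770, "lon": 3.100}],
               lambda: print("laser off"), lambda: print("laser on"))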
Figures: (1) overview of the air-safety system, based on a picture of the outdoor OPGC platform; (2) general view of the outdoor and indoor hardware with SDR dongle and Raspberry Pi; (3) Raspberry Pi screenshot of the software in use, the DUMP1090 server and the SDRAirSafeLid client; (4) Virtual Radar Server screenshots of an international flight and a local medical helicopter, with range circles around the COPLid location; (5) monthly laser-stop and measurement-stop durations; (6) monthly number of laser stops.
23 pages, 5739 KiB  
Article
Extracting Traffic Signage by Combining Point Clouds and Images
by Furao Zhang, Jianan Zhang, Zhihong Xu, Jie Tang, Peiyu Jiang and Ruofei Zhong
Sensors 2023, 23(4), 2262; https://doi.org/10.3390/s23042262 - 17 Feb 2023
Viewed by 2441
Abstract
Recognizing traffic signs is key to achieving safe automatic driving. With the decreasing cost of LiDAR, the accurate extraction of traffic signs from point cloud data has received wide attention. In this study, we propose combining point cloud and image traffic sign extraction: first, we use an improved YoloV3 model to detect traffic signs in panoramic images. The specific improvements are that a convolutional block attention module is added to the algorithm framework, the traditional K-means clustering algorithm is improved, and Focal Loss is introduced as the loss function. The model shows higher accuracy on the TT100K dataset, with a 1.4% improvement over the previous YoloV3. Then, the point cloud of the area where the traffic sign is located is extracted by combining the image detection results. On this basis, the outline of the traffic sign is accurately extracted using reflection intensity, spatial geometry, and other information. Compared with the traditional method, the proposed method effectively reduces the missed detection rate, narrows the point cloud search range, and improves detection accuracy by 10.2%.
(This article belongs to the Special Issue LiDAR Sensor Hardware, Algorithm Development and Its Application)
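
Among the improvements listed above, Focal Loss is a standard published component (Lin et al.) and can be sketched independently of this paper. The NumPy sketch below shows the usual binary form; the gamma and alpha values are the common defaults, not values reported by the authors.

    # Standard binary focal loss (illustrative defaults, not the paper's settings).
    import numpy as np

    def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
        """p: predicted probabilities in (0, 1); y: binary labels {0, 1}."""
        p = np.clip(p, eps, 1 - eps)
        pt = np.where(y == 1, p, 1 - p)          # probability of the true class
        at = np.where(y == 1, alpha, 1 - alpha)  # class-balance weight
        return np.mean(-at * (1 - pt) ** gamma * np.log(pt))

    # Confident, correct predictions contribute almost nothing to the loss,
    # which shifts training emphasis onto hard examples such as small signs.
    print(focal_loss(np.array([0.9, 0.2]), np.array([1, 0])))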
Figures: (1) Yolov3 network structure; (2) CBAM structure and its addition to each residual module of the backbone; (3) IoU vs. GIoU for bounding-box regression, showing that GIoU also reflects alignment between prediction and reference boxes; (4) training loss and average IoU converging over 6000 iterations; (5) test-set and panoramic detection results; (6) the SZT-R1000 vehicle-mounted laser measurement system; (7) panoramic photo of the Beijing Fourth Ring Road and the depth image generated with a distance constraint; (8) coordinate-system rotations used for the image-to-point-cloud conversion; (9) conical projection region of interest cast from the photography center through the prediction box; (10) flow of region-of-interest extraction from image detection results; (11) point-cloud area after positioning and extraction; (12) signage-extraction technical route; (13) point clouds before and after ground removal; (14) growth-clustering segmentation results; (15) intensity filtering followed by RANSAC plane-feature screening; (16) dimension-feature screening of plane-like clusters; (17) comparison of the proposed method with image + RANSAC and image + reflection-intensity extraction.
16 pages, 9123 KiB  
Article
Parameter Optimization and Development of Mini Infrared Lidar for Atmospheric Three-Dimensional Detection
by Zhiqiang Kuang, Dong Liu, Decheng Wu, Zhenzhu Wang, Cheng Li and Qian Deng
Sensors 2023, 23(2), 892; https://doi.org/10.3390/s23020892 - 12 Jan 2023
Cited by 2 | Viewed by 2346
Abstract
In order to conduct more thorough research on the structural characteristics of the atmosphere and the distribution and transport of atmospheric pollution, multi-dimensional remote sensing of the atmosphere is needed. A light-weight, low-volume, low-cost, easy-to-use, and low-maintenance mini Infrared Lidar (mIRLidar) sensor is developed for the first time. A model of the lidar is established, and the key optical parameters of the mIRLidar, including laser wavelength, pulse energy, telescope diameter, field of view (FOV), and filter bandwidth, are optimized through simulation. The volume and weight of the lidar system are effectively reduced by optimizing the structural design, and a temperature control system is designed to ensure the stable operation of the core components. The mIRLidar system comprises a 1064 nm laser (15 μJ pulse energy, 5 kHz repetition frequency), a 100 mm aperture telescope (1.5 mrad FOV), a 0.5 nm bandwidth filter, and an APD; the lidar has a volume of 200 mm × 200 mm × 420 mm and weighs about 13.5 kg. Horizontal, scanning, and navigational atmospheric measurements show that the lidar can effectively detect the three-dimensional distribution and transport of aerosol and atmospheric pollution within a 5 km detection range. It has great potential in the fields of meteorological research and environmental monitoring.
(This article belongs to the Special Issue LiDAR Sensor Hardware, Algorithm Development and Its Application)
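
Parameter studies of this kind typically rest on a single-scattering elastic lidar equation combined with a shot-noise model. The sketch below illustrates such an SNR calculation; it is not the paper's simulation model, and the efficiency, backscatter, extinction, and background values are assumptions, although the 15 μJ, 1064 nm, 100 mm, and 5 kHz defaults follow the abstract.

    # Illustrative photon-counting SNR model for lidar parameter studies.
    import numpy as np

    h, c = 6.626e-34, 3e8  # Planck constant (J s), speed of light (m/s)

    def photon_counts(R, E=15e-6, wl=1064e-9, D=0.10, eta=0.3,
                      beta=1e-6, alpha=1e-4, dR=15.0):
        """Expected signal photons per pulse from the range bin at R (m)."""
        A = np.pi * (D / 2) ** 2            # telescope collecting area
        T2 = np.exp(-2 * alpha * R)         # two-way atmospheric transmission
        return eta * (E / (h * c / wl)) * beta * dR * A / R**2 * T2

    def snr(R, n_pulses=5000 * 60, bg=0.01):
        """Shot-noise-limited SNR after accumulation; bg = background counts/bin."""
        s = photon_counts(R) * n_pulses
        b = bg * n_pulses
        return s / np.sqrt(s + 2 * b)

    R = np.arange(100.0, 5001.0, 15.0)
    print(f"SNR near 5 km after 1 min at 5 kHz: {snr(R[-1]):.0f}")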
Figures: (1) parameter-optimization model; (2) SNR curves for 355, 532, and 1064 nm at daytime and nighttime; (3) maximum detection range vs. laser pulse energy and telescope diameter; (4) SNR curves for FOVs of 1.0, 1.2, 1.5, and 3.0 mrad; (5) SNR curves for filter bandwidths of 0.1, 0.2, 0.5, and 1.0 nm; (6) influence of filter central wavelength and APD dark-count variation on SNR at different temperatures; (7) mIRLidar schematic and internal structure; (8) APD control unit, detector photo, and internal structure; (9) filter temperature control; (10) vertical and horizontal detection-distance tests; (11) system photo and vertical-mode RCS profiles (Hefei, 14 March 2022, UTC+8); (12) scan-mode RCS maps within a 3 km radius at successive times (Zibo, 17 March 2022, UTC+8); (13) navigation-mode extinction maps (Quanzhou, 25 April 2022, UTC+8).
22 pages, 29366 KiB  
Article
CMANet: Cross-Modality Attention Network for Indoor-Scene Semantic Segmentation
by Longze Zhu, Zhizhong Kang, Mei Zhou, Xi Yang, Zhen Wang, Zhen Cao and Chenming Ye
Sensors 2022, 22(21), 8520; https://doi.org/10.3390/s22218520 - 5 Nov 2022
Cited by 15 | Viewed by 2937
Abstract
Indoor-scene semantic segmentation is of great significance to indoor navigation, high-precision map creation, route planning, etc. However, incorporating RGB and HHA images for indoor-scene semantic segmentation is a promising yet challenging task, due to the diversity of textures and structures and the disparity of multi-modality in physical significance. In this paper, we propose a Cross-Modality Attention Network (CMANet) that facilitates the extraction of both RGB and HHA features and enhances cross-modality feature integration. CMANet is constructed under the encoder–decoder architecture. The encoder consists of two parallel branches that successively extract latent modality features from RGB and HHA images, respectively. In particular, a novel self-attention-based Cross-Modality Refine Gate (CMRG) is presented, which bridges the two branches. More importantly, the CMRG achieves cross-modality feature fusion and produces refined aggregated features; it serves as the most crucial part of CMANet. The decoder is a multi-stage up-sampled backbone composed of different residual blocks at each up-sampling stage. Furthermore, bi-directional multi-step propagation and pyramid supervision are applied to assist the learning process. To evaluate the effectiveness and efficiency of the proposed method, extensive experiments are conducted on the NYUDv2 and SUN RGB-D datasets. Experimental results demonstrate that our method outperforms existing ones for indoor semantic-segmentation tasks.
(This article belongs to the Special Issue LiDAR Sensor Hardware, Algorithm Development and Its Application)
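
The CMRG is the paper's novel module and is not reproduced here. As a stand-in, the PyTorch sketch below shows a minimal squeeze-and-excitation-style channel gate fusing two modality feature maps, which conveys the general mechanism of attention-weighted cross-modality fusion under assumed channel counts.

    # Minimal channel-attention fusion of RGB and HHA features (stand-in, not CMRG).
    import torch
    import torch.nn as nn

    class ChannelGateFusion(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(2 * channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid(),
            )

        def forward(self, rgb_feat, hha_feat):
            # Pool both modalities globally, derive per-channel weights,
            # and gate the summed feature maps with them.
            b, ch, _, _ = rgb_feat.shape
            pooled = torch.cat([rgb_feat.mean(dim=(2, 3)),
                                hha_feat.mean(dim=(2, 3))], dim=1)
            w = self.mlp(pooled).view(b, ch, 1, 1)
            return w * (rgb_feat + hha_feat)

    fused = ChannelGateFusion(64)(torch.randn(2, 64, 30, 40), torch.randn(2, 64, 30, 40))
    print(fused.shape)  # torch.Size([2, 64, 30, 40])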
Figures: (1) comparison of RGB images, one-channel depth images, and three-channel HHA images; (2) challenging samples that are complex in RGB alone or in HHA alone; (3) CMANet encoder–decoder framework with ResNet backbone encoder and residual-block decoder; (4) residual units: RCU, CRP, DRU, and the proposed URU; (5) Agent and Context module structures; (6) CBAM and its channel- and spatial-attention sub-modules; (7) Cross-Modality Refine Gate (CMRG) structure; (8) qualitative segmentation results on NYUDv2 across six challenging scenes; (9) visualization of pyramid supervision; (10) visualization of RGB and HHA channel-refined features; (11) visualization of the CMRG output.
18 pages, 8003 KiB  
Article
Weighted Iterative CD-Spline for Mitigating Occlusion Effects on Building Boundary Regularization Using Airborne LiDAR Data
by Renato César dos Santos, Ayman F. Habib and Mauricio Galo
Sensors 2022, 22(17), 6440; https://doi.org/10.3390/s22176440 - 26 Aug 2022
Cited by 1 | Viewed by 1573
Abstract
Building occlusions usually decrease the accuracy of boundary regularization. Thus, it is essential that modeling methods address this problem, aiming to minimize its effects. In this context, we propose a weighted iterative changeable degree spline (WICDS) approach. The idea is to use a weight function for the initial building boundary points, assigning a lower weight to the points in the occlusion region. As a contribution, the proposed method allows the minimization of errors caused by occlusions, resulting in more accurate contour modeling. The experiments are performed using both simulated and real data. In general, the results indicate the potential of the WICDS approach to model building boundaries with occlusions, including curved boundary segments. In terms of Fscore and PoLiS, the proposed approach presents values around 99% and 0.19 m, respectively. Compared with the previous iterative changeable degree spline (ICDS), the WICDS resulted in an improvement of around 6.5% for completeness, 4% for Fscore, and 0.24 m for the PoLiS metric.
(This article belongs to the Special Issue LiDAR Sensor Hardware, Algorithm Development and Its Application)
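
The core idea, down-weighting boundary points inside the occlusion region during curve fitting, can be illustrated with an ordinary weighted smoothing spline. The SciPy sketch below assigns a reduced weight to occluded points; the actual WICDS weight function, changeable-degree iteration, and stopping criterion differ and are not reproduced.

    # Weighted smoothing-spline boundary fit with down-weighted occluded points.
    import numpy as np
    from scipy.interpolate import splprep, splev

    def fit_boundary(xy, occluded, w_occ=0.1, smooth=0.5):
        """xy: (N, 2) ordered boundary points; occluded: boolean mask of length N."""
        xy = np.vstack([xy, xy[:1]])                       # close the contour
        w = np.where(np.append(occluded, occluded[0]), w_occ, 1.0)
        tck, _ = splprep([xy[:, 0], xy[:, 1]], w=w, s=smooth * len(xy), per=1)
        u = np.linspace(0, 1, 200)
        return np.column_stack(splev(u, tck))              # densely sampled contour

    # Toy example: a circular outline with a displaced (pseudo-occluded) arc.
    t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
    pts = np.column_stack([np.cos(t), np.sin(t)])
    pts[5:9] += 0.3                                        # simulate occlusion artefacts
    mask = (np.arange(40) >= 5) & (np.arange(40) < 9)
    print(fit_boundary(pts, mask).shape)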
Figures: (1) building partially covered by a tree and the boundary modeled with ICDS; (2) flowchart of the proposed approach; (3) critical-point determination for a partially occluded roof (alpha-shape boundary points, Douglas–Peucker, angle-based generalization, occlusion-based refinement); (4) building boundary with occlusion and the contour points located in the occlusion region; (5) rectangular building with different occlusion sizes, modeled with ICDS and WICDS; (6) curved building with different occlusion sizes, modeled with ICDS and WICDS; (7) quality metrics for rectangular and curved buildings under both methods; (8) Fscore and PoLiS for partially occluded buildings under different weight values; (9) modeled contours for buildings B1_oc2 and B2_oc2 using different weights; (10) occluded buildings from the Presidente Prudente/Brazil dataset with ICDS and WICDS results; (11) modeled contours for buildings B6 and B7 using different weights; (12) modeled boundary for building B8 with the occlusion region highlighted; (13) 2D and 3D representations of buildings B9–B11 with occlusions caused by antennas; (14) occlusions at building corners caused by nearby trees, for curved and straight-line buildings; (15) Fscore and PoLiS metrics for buildings B3–B11 using ICDS and WICDS.
17 pages, 5725 KiB  
Article
New Denoising Method for Lidar Signal by the WT-VMD Joint Algorithm
by Zhenzhu Wang, Hongbo Ding, Bangxin Wang and Dong Liu
Sensors 2022, 22(16), 5978; https://doi.org/10.3390/s22165978 - 10 Aug 2022
Cited by 10 | Viewed by 2738
Abstract
Light detection and ranging (LIDAR) is an active remote sensing system. The lidar echo signal is non-linear and non-stationary and is often accompanied by various noises. In order to filter out the noise and extract valid signal information, a suitable noise reduction method should be chosen. Commonly used denoising methods include the wavelet transform (WT), the empirical mode decomposition (EMD), the variational mode decomposition (VMD), and their improved algorithms. In this paper, a new denoising method for lidar signals, the WT-VMD joint algorithm based on the sparrow search algorithm (SSA), is selected by comparative experimental analysis. This method proves the most suitable, with the maximum signal-to-noise ratio (SNR), the minimum root-mean-square error (RMSE), and a relatively small smoothness indicator when applied to three kinds (50, 100, and 1000 pulses) of simulated lidar signals. The SNR is increased by 138.5%, 77.8%, and 42.8% and the RMSE is decreased by 81.8%, 72.0%, and 68.8% when the method is applied to the three kinds of cumulative signals without pollution. The SNR is increased by 83.3%, 60.4%, and 24.0% and the RMSE is decreased by 70.8%, 66.0%, and 50.5% when it is applied to the three kinds of cumulative signals with aerosol and clouds. Applied to actual lidar signals, the SSA-based WT-VMD joint algorithm shows an excellent denoising effect and will improve the inversion accuracy of the lidar signal.
(This article belongs to the Special Issue LiDAR Sensor Hardware, Algorithm Development and Its Application)
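
The wavelet stage of such a chain is standard and easy to sketch. The Python example below performs soft-threshold wavelet denoising with PyWavelets and reports an SNR metric; the VMD stage and the SSA-based parameter search that define the paper's full WT-VMD method are omitted, and the wavelet and level choices are assumptions.

    # Wavelet soft-threshold denoising stage only (WT of WT-VMD; parameters assumed).
    import numpy as np
    import pywt

    def wt_denoise(sig, wavelet="sym8", level=4):
        coeffs = pywt.wavedec(sig, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise level from finest scale
        thr = sigma * np.sqrt(2 * np.log(len(sig)))        # universal threshold
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(sig)]

    def snr_db(clean, est):
        return 10 * np.log10(np.sum(clean**2) / np.sum((clean - est) ** 2))

    rng = np.random.default_rng(0)
    r = np.arange(2048.0)
    clean = np.exp(-r / 1000) + 0.5 * np.exp(-((r - 800) ** 2) / (2 * 60**2))
    noisy = clean + rng.normal(0, 0.05, r.size)            # toy profile with a 'layer'
    print(f"SNR: {snr_db(clean, noisy):.1f} dB -> {snr_db(clean, wt_denoise(noisy)):.1f} dB")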
Figures: (1) simulated cumulative pulse signals with noise; (2–4) denoising of 50-, 100-, and 1000-pulse cumulative simulation signals without pollution, comparing WT, EMD, VMD, WT-EMD, and WT-VMD; (5) relative error of the extinction coefficient before and after denoising (no pollution); (6–8) denoising of the same three signals with aerosol and clouds; (9) relative error of the extinction coefficient before and after denoising (with aerosol and clouds); (10) denoising of the range-squared-corrected signal; (11) aerosol extinction coefficient before and after denoising; (12) extinction coefficient from 1 h averaging vs. 5 min and 1 h WT-VMD denoising; (13) extinction coefficient from median filtering (windows 9 and 21) vs. 5 min WT-VMD denoising.
14 pages, 4584 KiB  
Article
Analysis of the Impact of Changes in Echo Signal Parameters on the Uncertainty of Distance Measurements in p-ToF Laser Rangefinders
by Michał Muzal and Marek Zygmunt
Sensors 2022, 22(16), 5973; https://doi.org/10.3390/s22165973 - 10 Aug 2022
Cited by 1 | Viewed by 1505
Abstract
The article presents the results of research on the influence of changes in the parameters of digitally recorded echo signals on the uncertainty of pulsed Time-of-Flight (p-ToF) laser distance measurements. The main objective of the study was to evaluate the distance calculation method developed by the authors. This method is based on the acquisition of the full waveform of the echo pulse signal and the approximation of its shape by a second-degree polynomial (SDPA for short). To determine the pulse transit time and measure the distance, the position of the vertex of this parabola is sought. This position represents the maximum intensity of the incoming echo signal and is related to the round-trip propagation time of the laser pulse. In the presented work, measurement uncertainty was evaluated using simulation tests for various parameters of the echo pulse. All obtained results were used to formulate a general relationship between the measurement uncertainty of the SDPA algorithm and the parameters of the received echo signals. This formula extends the base of knowledge in the domain of laser p-ToF distance measurements. It can be used to estimate the measurement uncertainty of a FW LiDAR at an early design stage, which greatly improves the ability to analyze the expected performance of the device. It can also be implemented directly into the rangefinder's measurement algorithm to estimate the measurement uncertainty based on the emission of a single pulse rather than a series of pulses.
(This article belongs to the Special Issue LiDAR Sensor Hardware, Algorithm Development and Its Application)
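
The SDPA principle as described in the abstract is compact enough to sketch directly: fit a second-degree polynomial to the samples around the echo peak and convert the vertex time into range. The window size below is an assumption, and the paper's treatment of sampling and noise is considerably more detailed.

    # Parabola-vertex (SDPA-style) time-of-flight estimate from a digitized echo.
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def sdpa_distance(samples, fs, half_window=3):
        """samples: digitized echo waveform; fs: sampling rate in Hz."""
        k = int(np.argmax(samples))
        i = np.arange(k - half_window, k + half_window + 1)
        a, b, _ = np.polyfit(i / fs, samples[i], 2)  # y = a t^2 + b t + c
        t_vertex = -b / (2 * a)                      # time of maximum intensity
        return C * t_vertex / 2                      # one-way distance

    # Toy echo: Gaussian pulse centred at 2 us, sampled at 1 GHz -> about 300 m.
    fs = 1e9
    t = np.arange(0, 4e-6, 1 / fs)
    echo = np.exp(-((t - 2e-6) ** 2) / (2 * (50e-9) ** 2))
    print(f"{sdpa_distance(echo, fs):.2f} m")        # ~299.79 m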
Figures: (1) approximation of a sampled signal with a second-degree polynomial (SDPA); (2) example pulse functions and sample sets for even and odd numbers of samples; (3) algorithm of the simulation trials; (4,5) measurement uncertainty vs. signal-to-noise ratio (pulse parameters in Table 1, Blocks 1 and 2); (6,7) measurement uncertainty vs. sampling frequency (Table 2, Blocks 3 and 4); (8,9) measurement uncertainty vs. pulse width (Table 3, Blocks 5 and 6); (10–12) uncertainty predicted by Equation (16) for the signal parameters in Table 4, compared with the simulated results of Figures 4, 7, and 8.
18 pages, 53674 KiB  
Article
ScatterHough: Automatic Lane Detection from Noisy LiDAR Data
by Honghao Zeng, Shihong Jiang, Tianxiang Cui, Zheng Lu, Jiawei Li, Boon-Giin Lee, Junsong Zhu and Xiaoying Yang
Sensors 2022, 22(14), 5424; https://doi.org/10.3390/s22145424 - 20 Jul 2022
Cited by 4 | Viewed by 3441
Abstract
Lane detection plays an essential role in autonomous driving. Using LiDAR data instead of RGB images reduces lane detection to a simple straight-line and curve-fitting problem that works for real-time applications even under poor weather or lighting conditions. Handling scatter-distributed noisy data is a crucial step in reducing lane detection error from LiDAR data. The classic Hough Transform (HT) only allows points on a straight line to vote for the corresponding parameters, which is not suitable for data in scatter form. In this paper, a Scatter Hough algorithm is proposed for better lane detection on scatter data. Two additional operations, ρ neighbor voting and ρ neighbor vote-reduction, are introduced into HT so that points in the same curve vote while also considering their neighbors' voting results. The evaluation of the proposed method shows that it can adaptively fit both straight lines and curves with high accuracy, compared with benchmark and state-of-the-art methods.
(This article belongs to the Special Issue LiDAR Sensor Hardware, Algorithm Development and Its Application)
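
The ρ neighbor voting idea can be illustrated on top of a classic Hough accumulator: each point also deposits reduced-weight votes in a ρ band around its exact cell, so scatter around a line still reinforces a single peak. The band width and weights below are illustrative, and the ρ neighbor vote-reduction iteration is omitted.

    # Hough voting with a rho neighborhood band (illustrative ScatterHough-style sketch).
    import numpy as np

    def scatter_hough(points, n_theta=180, rho_res=0.1, band=3, w_neigh=0.5):
        thetas = np.deg2rad(np.arange(n_theta))
        rho_max = np.linalg.norm(points, axis=1).max()
        n_rho = int(2 * rho_max / rho_res) + 1
        acc = np.zeros((n_theta, n_rho))
        for x, y in points:
            rho = x * np.cos(thetas) + y * np.sin(thetas)
            idx = np.clip(np.round((rho + rho_max) / rho_res).astype(int), 0, n_rho - 1)
            for j in range(n_theta):
                lo, hi = max(idx[j] - band, 0), min(idx[j] + band + 1, n_rho)
                acc[j, lo:hi] += w_neigh             # band votes tolerate scatter
                acc[j, idx[j]] += 1.0 - w_neigh      # extra weight on the exact cell
        j, k = np.unravel_index(acc.argmax(), acc.shape)
        return np.rad2deg(thetas[j]), k * rho_res - rho_max

    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 100)
    pts = np.column_stack([x, 0.5 * x + 1 + rng.normal(0, 0.15, x.size)])
    print(scatter_hough(pts))  # (theta, rho) of the dominant line, roughly (117 deg, 0.9)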
Figures: (1) ρ neighbor voting: original data, classic HT voting, and ScatterHough voting in Hough space; (2) the ρ neighborhood under Cartesian coordinates and under Hough space; (3) ρ neighbor vote-reduction across successive line fits and four Hough-space iterations; (4–7) visual comparison of the proposed method against multiRansac, RANSAC, DSAC, and Poly, alongside RGB references; (8) results for d = 0.1, 0.25, 1, and 2; (9) results for threshold = 10, 30, 60, and 120; (10) results for MaxGap = 5, 10, 15, and 25.
15 pages, 5730 KiB  
Article
Design of Lidar Data Acquisition and Control System in High Repetition Rate and Photon-Counting Mode: Providing Testing for Space-Borne Lidar
by Liangliang Cheng, Chenbo Xie, Ming Zhao, Lu Li, Hao Yang, Zhiyuan Fang, Jianfeng Chen, Dong Liu and Yingjian Wang
Sensors 2022, 22(10), 3706; https://doi.org/10.3390/s22103706 - 12 May 2022
Cited by 5 | Viewed by 4940
Abstract
For ground-based lidars in atmospheric observation, the data acquisition unit and control unit usually work independently and typically require the cooperation of a large-volume, high-power-consumption Industrial Personal Computer (IPC). However, space-borne lidar has high requirements on the stability and integration of the acquisition and control system. In this paper, a new data acquisition and lidar control system (DALCS) was developed based on System-on-Chip Field-Programmable Gate Array (SoC FPGA) technology. It can be used in lidar systems with a high repetition rate and photon-counting mode and has functions such as data storage, laser control, automatic collimation, wireless communication, and fault self-test. DALCS has two working modes: in online mode, the echo data collected by DALCS are transmitted to the computer for real-time display and then stored with the current time as the file name; in offline mode, the data are stored in local non-volatile memory, which can be read remotely, so the system can work autonomously when there is no IPC. The test results showed that in the frequency range of 0–70 MHz, the counting linearity of DALCS reached 0.9999, and the maximum relative error between the DALCS card and the standard signal source was 0.211%. The comparison results showed that the correlation coefficient between DALCS and MCS-PCI was as high as 0.99768. The DALCS was placed in a self-developed lidar sensor system for continuous observation, and the system worked stably under different weather conditions. The range-squared-corrected signal profiles obtained from the observations reflect the spatial and temporal distribution characteristics of aerosols and clouds well. This provides scheme verification and experimental support for the development of space-borne lidar data acquisition and control systems.
(This article belongs to the Special Issue LiDAR Sensor Hardware, Algorithm Development and Its Application)
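
At its core, a photon-counting acquisition of this kind histograms photon arrival times, referenced to each laser trigger, into fixed range bins accumulated over many shots. The sketch below shows that binning step only; the bin width and count rates are illustrative, not DALCS specifications.

    # Accumulating photon-counting events into range bins over many laser shots.
    import numpy as np

    C = 3e8
    BIN_NS = 100  # 100 ns time bin -> c*t/2 = 15 m range bin

    def accumulate(shots_ns, max_range_m=15_000):
        n_bins = int(max_range_m / (C * BIN_NS * 1e-9 / 2))
        hist = np.zeros(n_bins, dtype=np.int64)
        for ts in shots_ns:                            # one timestamp array per shot
            idx = (np.asarray(ts) // BIN_NS).astype(int)
            np.add.at(hist, idx[idx < n_bins], 1)      # counts per range bin
        return hist

    rng = np.random.default_rng(2)
    shots = [rng.uniform(0, 100_000, rng.poisson(20)) for _ in range(1000)]
    profile = accumulate(shots)
    print(profile.sum(), "photons accumulated into", profile.size, "range bins")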
Figures: (1) lidar sensor system schematic and internal structure; (2) block diagram of the DALCS hardware; (3) schematic of echo photon-signal acquisition; (4) overall logical structure of the acquisition system; (5) control module logic; (6) counting module logic; (7) output module logic; (8) main logic signals of the acquisition system; (9) embedded program flow chart and upper-computer main program; (10) laser and PMT output signals, test environment, and count linearity; (11) upper-computer software of DALCS; (12) lidar test platform, DALCS card, and comparison and linear fit of DALCS vs. MCS-PCI; (13) range-corrected signal profiles for the 532 nm parallel and vertical polarization channels on 5 and 8 March 2022.
11 pages, 2572 KiB  
Article
Simultaneous Extraction of Planetary Boundary-Layer Height and Aerosol Optical Properties from Coherent Doppler Wind Lidar
by Yehui Chen, Xiaomei Jin, Ningquan Weng, Wenyue Zhu, Qing Liu and Jie Chen
Sensors 2022, 22(9), 3412; https://doi.org/10.3390/s22093412 - 29 Apr 2022
Cited by 6 | Viewed by 1816
Abstract
Planetary boundary-layer height is an important physical quantity for weather forecasting models and atmospheric environment assessment. A method is proposed for simultaneously extracting the surface-layer height (SLH), mixed-layer height (MLH), and aerosol optical properties, including the aerosol extinction coefficient (AEC) and aerosol optical depth (AOD), from the signal-to-noise ratio (SNR) of the same coherent Doppler wind lidar (CDWL). The method employs the wavelet covariance transform to locate the SLH and MLH using local maximum positions and an automatic dilation-operation algorithm. AEC and AOD are determined by fitting the SNR equation to the measured curve. Furthermore, the method demonstrates the mechanism by which optical properties influence the SLH and MLH: MLH is linearly correlated with AEC and AOD because of increased solar heating. The results were verified with data from an ocean island site in China.
(This article belongs to the Special Issue LiDAR Sensor Hardware, Algorithm Development and Its Application)
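
The wavelet covariance transform with a Haar wavelet is a standard layer-detection tool and can be sketched directly: it measures the drop in the SNR profile across a window whose width is the dilation, and its extrema mark candidate layer tops. The dilation below is fixed and illustrative; the paper selects it automatically via the dilation-operation algorithm.

    # Haar wavelet covariance transform of an SNR profile (fixed, illustrative dilation).
    import numpy as np

    def wct_haar(profile, z, dilation):
        """WCT of profile(z) at one dilation (same units as z)."""
        out = np.zeros_like(profile, dtype=float)
        dz = z[1] - z[0]
        half = int(dilation / (2 * dz))
        for i in range(half, len(z) - half):
            lower = profile[i - half:i].sum()   # Haar h = +1 below the centre
            upper = profile[i:i + half].sum()   # Haar h = -1 above the centre
            out[i] = (lower - upper) * dz / dilation
        return out

    z = np.arange(0.0, 3000.0, 15.0)
    snr = 1.0 / (1 + np.exp((z - 1200) / 80))   # toy profile: sharp drop near 1.2 km
    wct = wct_haar(snr, z, dilation=300.0)
    print(f"Estimated layer top: {z[np.argmax(wct)]:.0f} m")  # near 1200 m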
Figures: (1) SNR image of 180 successive measurements and the single vs. averaged SNR profiles, with the PBL top marking the MLH; (2) Haar wavelet function, WCT of SNR at different dilations, and the WCT minimum vs. dilation; (3) SNR and WCT of SNR vs. altitude, with local extrema marking layer heights; (4) flowchart for determining SLH/MLH and AEC/AOD; (5) SLH, MLH, AEC, and AOD over local time and the linear relationships of MLH with AEC and AOD; (6) slopes of MLH and SLH vs. AEC and AOD over eight successive days; (7) correlation of AEC with PM2.5 and PM10 and statistical associations of AEC and AOD with mean wind speed; (8) AEC vs. optical absorption coefficient, AEC vs. air quality (PM2.5, PM10), and the MLH correlation; (9) RCSNR and its WCT vs. altitude; (10) lidar image contaminated by clouds and the image filtered with a morphological opening operation.
18 pages, 4637 KiB  
Article
Simulation and Analysis of Mie-Scattering Lidar-Measuring Atmospheric Turbulence Profile
by Yuqing Lu, Jiandong Mao, Yingnan Zhang, Hu Zhao, Chunyan Zhou, Xin Gong, Qiang Wang and Yi Zhang
Sensors 2022, 22(6), 2333; https://doi.org/10.3390/s22062333 - 17 Mar 2022
Cited by 3 | Viewed by 2913
Abstract
Based on the residual turbulent scintillation theory, a Mie-scattering lidar can measure the intensity of atmospheric turbulence by detecting the light-intensity scintillation index of the laser return signal. In order to evaluate and optimize the reliability of the Mie-scattering lidar system for detecting atmospheric turbulence, appropriate parameters of the system are selected and optimized using the residual turbulent scintillation theory. The Fourier transform method is then employed to numerically simulate, with phase screens, the transformation of the laser light intensity along a vertical transmission path through atmospheric turbulence. The phase-screen simulation, low-frequency compensation, and scintillation-index calculation methods are each described in detail. The scintillation index is obtained from the phase distribution of the laser beam, and the atmospheric turbulence profile is then inverted through the relationship between the scintillation index and the atmospheric refractive index structure constant. The simulation results show that the refractive index structure constant profile obtained by the iterative method is consistent with the input HV5/7 model below 6500 m, which provides valuable guidance for actual experiments measuring atmospheric turbulence with the Mie lidar.
(This article belongs to the Special Issue LiDAR Sensor Hardware, Algorithm Development and Its Application)
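As a companion to the pipeline the abstract describes, below is a minimal sketch of the FFT phase-screen step. It assumes the standard Hufnagel-Valley 5/7 $C_n^2$ profile, the plane-wave layer Fried parameter $r_0 = (0.423\,k^2 C_n^2 \Delta z)^{-3/5}$, and a von Kármán phase spectrum; the scaling follows the common textbook FFT recipe, not necessarily the authors' exact implementation, and it omits the low-frequency (sub-harmonic) compensation the paper discusses. All function names are illustrative.

```python
import numpy as np

def hv57_cn2(h, w=21.0, A=1.7e-14):
    """Hufnagel-Valley 5/7 Cn^2 profile [m^(-2/3)] at altitude h [m]."""
    return (0.00594 * (w / 27.0) ** 2 * (1e-5 * h) ** 10 * np.exp(-h / 1000.0)
            + 2.7e-16 * np.exp(-h / 1500.0) + A * np.exp(-h / 100.0))

def layer_r0(cn2, dz, wavelength=532e-9):
    """Plane-wave Fried parameter [m] of one layer of thickness dz [m]."""
    k = 2.0 * np.pi / wavelength
    return (0.423 * k ** 2 * cn2 * dz) ** (-3.0 / 5.0)

def fft_phase_screen(N, delta, r0, L0=100.0, l0=0.01, rng=None):
    """One FFT-based von Karman phase screen [rad] on an N x N grid
    with spacing delta [m]; r0 is the layer Fried parameter [m]."""
    rng = np.random.default_rng() if rng is None else rng
    df = 1.0 / (N * delta)                            # frequency spacing [1/m]
    fx = np.arange(-N // 2, N // 2) * df
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.hypot(fxx, fyy)
    fm = 5.92 / l0 / (2.0 * np.pi)                    # inner-scale cutoff
    f0 = 1.0 / L0                                     # outer-scale cutoff
    psd = (0.023 * r0 ** (-5.0 / 3.0) * np.exp(-(f / fm) ** 2)
           / (f ** 2 + f0 ** 2) ** (11.0 / 6.0))      # von Karman phase PSD
    psd[N // 2, N // 2] = 0.0                         # remove the piston term
    cn = ((rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
          * np.sqrt(psd) * df)
    return np.real(np.fft.ifftshift(np.fft.ifft2(np.fft.ifftshift(cn))) * N ** 2)

def rytov_variance(cn2, L, wavelength=532e-9):
    """Weak-fluctuation plane-wave Rytov variance over a uniform path L [m],
    a sanity check for the simulated scintillation index."""
    k = 2.0 * np.pi / wavelength
    return 1.23 * cn2 * k ** (7.0 / 6.0) * L ** (11.0 / 6.0)

# Example: one 75 m layer at 1 km altitude on a 512-point, 2 mm grid.
rng = np.random.default_rng(1)
r0 = layer_r0(hv57_cn2(1000.0), dz=75.0)
screen = fft_phase_screen(512, 2e-3, r0, rng=rng)
print(f"r0 = {r0:.3f} m, screen rms = {screen.std():.2f} rad")
```

Stacking such screens at spacing $\Delta z$ along the vertical path, propagating the beam between them, and comparing the resulting scintillation index against the $C_n^2$-dependent prediction is the basic structure of the inversion the abstract describes.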
Figures:
Figure 1. The structure diagram of the Mie lidar system.
Figure 2. Spatial correlation scale of the light intensity fluctuation, $l_I$.
Figure 3. Three-dimensional (left) and two-dimensional (right) schematic diagrams of the emitted beam intensity distribution of the Mie lidar system at $z = 0$.
Figure 4. Phase screen model of light transmission in atmospheric turbulence.
Figure 5. Schematic diagram of the phase screen grid simulation.
Figure 6. Schematic diagram of sub-harmonic compensation.
Figure 7. Three-dimensional (left) and two-dimensional (right) schematic diagrams of the phase distribution of the numerically simulated high-frequency phase screen in the turbulent atmosphere.
Figure 8. Three-dimensional (left) and two-dimensional (right) schematic diagrams of the numerically simulated phase distribution in the turbulent atmosphere after third-harmonic compensation under the same conditions.
Figure 9. Comparison of the Kolmogorov phase screen structure functions.
Figure 10. Light intensity distribution of the laser beam on the vertical path: (a) $\Delta z = 75$ m, $L = 1000$ m; (b) $\Delta z = 75$ m, $L = 2000$ m; (c) $\Delta z = 200$ m, $L = 1000$ m; (d) $\Delta z = 200$ m, $L = 2000$ m.
Figure 11. Spot drift of the transmitted laser beam: (a) $\Delta z = 75$ m, $L = 2000$ m; (b) $\Delta z = 200$ m, $L = 2000$ m.
Figure 12. Distribution of the scintillation index with transmission distance.
Figure 13. Profile of the atmospheric refractive index structure constant obtained by the iterative inversion algorithm.