Article

Improved-UWB/LiDAR-SLAM Tightly Coupled Positioning System with NLOS Identification Using a LiDAR Point Cloud in GNSS-Denied Environments

School of Geomatics, Liaoning Technical University, Fuxin 123000, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(6), 1380; https://doi.org/10.3390/rs14061380
Submission received: 8 February 2022 / Revised: 5 March 2022 / Accepted: 10 March 2022 / Published: 12 March 2022
(This article belongs to the Special Issue Remote Sensing in Navigation: State-of-the-Art)
List of Figures
Figure 1. Overview of the improved-UWB/LiDAR-SLAM integrated positioning system.
Figure 2. Coordinate systems.
Figure 3. Indoor positioning scenarios. (a) NLOS positioning scenario; and (b) LiDAR scanning scenario.
Figure 4. NLOS identification using a LiDAR point cloud.
Figure 5. The line passes through the triangle.
Figure 6. Experimental platform and automatic tracking total station: (a) front view; (b) left-side view; and (c) right-side view.
Figure 7. Experimental layout.
Figure 8. Time series of UWB raw measurements for different anchors. Blue: anchor 1; black: anchor 2; green: anchor 3; and red: anchor 4.
Figure 9. Time series of UWB ranging errors for different anchors: (a) anchor 1; (b) anchor 2; (c) anchor 3; and (d) anchor 4.
Figure 10. UWB NLOS identification using a LiDAR point cloud in a certain epoch: (a) 3-D view; (b) top view; (c) front view; and (d) left-side view.
Figure 11. Time series of the number of UWB NLOS anchors.
Figure 12. Experimental layout.
Figure 13. Time series of the UWB raw measurements (blue) and LOS measurements (red) for different anchors: (a) anchor 1; (b) anchor 2; (c) anchor 3; and (d) anchor 4.
Figure 14. Time series of the UWB raw ranging errors (blue) and LOS ranging errors (red) for different anchors: (a) anchor 1; (b) anchor 2; (c) anchor 3; and (d) anchor 4.
Figure 15. Time series of the number of UWB NLOS anchors.
Figure 16. Trajectory comparison of the ground truth (black), LS (blue) and NI-LS (red).
Figure 17. Time series of the position errors for LS (blue) and NI-LS (red): (a) X direction; (b) Y direction; and (c) plane direction.
Figure 18. Time series of NLOS identification for different anchors. Blue: anchor 1; black: anchor 2; green: anchor 3; and red: anchor 4.
Figure 19. Trajectory comparison of the ground truth (black), NI-LS (cyan), LeGO-LOAM (magenta), TC-EKF (orange), TC-REKF (green), NI-TC-EKF (blue) and NI-TC-REKF (red). (a–c) are certain sections with special characteristics.
Figure 20. Time series of the position errors for NI-LS (cyan), LeGO-LOAM (magenta), TC-EKF (orange), TC-REKF (green), NI-TC-EKF (blue) and NI-TC-REKF (red): (a) X direction; (b) Y direction; and (c) plane direction.

Abstract

Reliable absolute positioning is indispensable in long-term positioning systems. Although simultaneous localization and mapping based on light detection and ranging (LiDAR-SLAM) is effective in global navigation satellite system (GNSS)-denied environments, it can provide only local positioning results, with error divergence over distance. Ultrawideband (UWB) technology is an effective alternative; however, non-line-of-sight (NLOS) propagation in complex indoor environments severely affects the precision of UWB positioning, and LiDAR-SLAM typically provides more robust results under such conditions. For robust and high-precision positioning, we propose an improved-UWB/LiDAR-SLAM tightly coupled (TC) integrated algorithm. This method is the first to combine a LiDAR point cloud map generated via LiDAR-SLAM with position information from UWB anchors to distinguish between line-of-sight (LOS) and NLOS measurements through obstacle detection and NLOS identification (NI) in real time. Additionally, to alleviate positioning error accumulation in long-term SLAM, an improved-UWB/LiDAR-SLAM TC positioning model is constructed using UWB LOS measurements and LiDAR-SLAM positioning information. Parameter solving using a robust extended Kalman filter (REKF) to suppress the effect of UWB gross errors improves the robustness and positioning performance of the integrated system. Experimental results show that the proposed NI method using the LiDAR point cloud can efficiently and accurately identify UWB NLOS errors to improve the performance of UWB ranging and positioning in real scenarios. The TC integrated method combining NI and REKF achieves better positioning effectiveness and robustness than other comparative methods and satisfactory control of sensor errors with a root-mean-square error of 0.094 m, realizing subdecimeter indoor positioning.

1. Introduction

Obtaining accurate, robust and continuous position information is an important guarantee for location-based services and applications (internet of things, intelligent transportation, etc.), for which global navigation satellite systems (GNSSs) (especially the global positioning system (GPS) and the BeiDou navigation satellite system (BDS)) are common positioning technology solutions in outdoor environments. Precise point positioning (PPP) or precise point positioning-real-time kinematic (PPP-RTK) solutions can provide dm- to cm-level navigation and positioning services when the observation environment is good [1,2,3]. However, in indoor environments that GNSS signals cannot penetrate, such as large warehouses and underground parking lots, satellite signal attenuation and multipath effects degrade GNSS positioning to the point that it cannot meet the demand for high-precision indoor positioning. Therefore, with the growing demand for indoor positioning, how to develop and build a real-time, high-precision and robust indoor positioning system has become a hot issue in current indoor positioning research.
Ultrawideband (UWB) positioning is a wireless positioning technology that has been widely adopted in recent years and applied to indoor positioning research due to its low system complexity, high temporal resolution, strong resistance to multipath effects, and strong signal penetration. UWB positioning accuracy is determined by the pulse width and response frequency of the signal. The UWB pulse width is at only the nanosecond or subnanosecond level [4], with ultrahigh bandwidth, thus ensuring an ultralow emitted signal power spectral density; the ranging accuracy can theoretically reach the centimeter level. However, in a complex indoor environment, obstacles (pedestrians, vehicles, wall columns, etc.) can cause reflection, refraction and penetration of the UWB pulse signal during propagation, resulting in signal attenuation, a time delay and an artificially long propagation path; these factors, in turn, bias the distance estimate, based on signal strength or time measurements, between the anchor and the mobile tag to be located. This bias is called the non-line-of-sight (NLOS) error. Interference from NLOS errors can significantly reduce the positioning accuracy and stability of UWB systems and, in severe cases, even prevent ranging and positioning entirely, which is incompatible with the goal of applying UWB to high-precision indoor positioning. Therefore, effectively identifying and suppressing NLOS errors in NLOS environments to improve the accuracy and stability of the positioning algorithm has become the key to high-precision indoor positioning.
For indoor positioning scenarios in which NLOS errors exist, traditional NLOS identification techniques are broadly classified into three categories [5]: distance estimation-based methods [6,7], channel statistics-based methods [8,9,10,11,12,13,14] and position estimation-based methods [15,16,17,18,19]. Distance estimation-based methods distinguish between line-of-sight (LOS) and NLOS errors by comparing the variance in the distance estimate with a predefined threshold or by using the probability density function (PDF). However, this approach inevitably increases latency because a series of distance measurements must be collected, and it is also limited by the requirement for a prior distribution function and the difficulty of choosing a suitable threshold [10,13]. Channel statistics-based methods usually use hypothesis testing or classification theory to distinguish between LOS and NLOS errors using amplitude statistics (e.g., received signal strength (RSS), maximum amplitude of the received signal, power difference, and power ratio) and temporal statistics (e.g., time-of-arrival (TOA), root-mean-square delay spread, peak-to-lead delay, rise time, mean excess delay, and maximum excess delay) obtained from the channel impulse response (CIR), and they extract channel characteristic parameters, such as the skewness, kurtosis and kernel function, for identification [20]. However, these methods require modeling the distribution of multiple statistics and thus require the creation and frequent updating of large training databases [21], yet the collection of NLOS data is often tedious and labor-intensive. Both distance estimation-based and channel statistics-based methods identify NLOS errors before positioning, while position estimation-based methods identify NLOS errors during positioning. Given redundant valid measurements, NLOS errors can be identified by comparing the position estimates obtained from different subsets of range values. However, when multiple measurements contain NLOS errors or the redundant measurements are insufficient (at least 3 measurements are required for two-dimensional (2-D) positioning), this approach cannot reliably identify the NLOS measurements and is therefore not suitable for complex positioning scenarios with sparse indoor anchor placement. If the user's position is known a priori, it can also be combined with environmental data (attenuation factors and geometry) to identify NLOS errors. Jo et al. [15] used a 2-D map for signal ray tracing to obtain accurate knowledge of wireless signal propagation, identify NLOS signals and estimate the errors caused by NLOS. However, real-time ray tracing is usually not possible because of its high computational complexity and the need to know the physical properties of the objects in the environment in advance, including their dielectric constants and boundary conditions. Suski et al. and Zhu et al. [16,17] improved UWB positioning accuracy by building a database of error fingerprints for indoor static environments. Wang et al. [18,19] used manually drawn 2-D maps matched with line segment information for NLOS identification and adjusted the weights of UWB ranging values according to the obstruction situation, effectively eliminating the effect of NLOS errors.
Although the methods in [16,17,18,19] are accurate, the preliminary measurements required to generate environmental maps can be a daunting task for large buildings; e.g., the authors of [16] distributed 8000 positioning error measurement points in a 12 × 8 m space and collected a total of 4 million measurements, which was very time-consuming and tedious. Such an expensive data collection phase increases labor costs and reduces availability, and if the space changes in any way, including the movement of people or objects, the corresponding positioning error maps or digital maps are no longer valid. Therefore, these methods are impractical because the additional information they require is not available for all environments, and dynamic environments prevent their real-time implementation.
Due to the rapid development of three-dimensional (3-D) sensor technology, environmental maps of large buildings can be quickly and accurately captured by sensors such as light detection and ranging (LiDAR), red green blue-depth (RGB-D) cameras, and stereo cameras [22]. Compared to cameras, simultaneous localization and mapping (SLAM) using LiDAR can directly and accurately provide dense 3-D points describing the structure of the environment, is less affected by light and weather, and has been widely used for indoor mapping of 3-D buildings due to its long detection range, high accuracy and high robustness [23,24]. Typically, LiDAR-SLAM estimates the interframe pose transformation matrix by registering the point clouds of consecutive frames before obtaining the trajectory and building an environment map. Therefore, the key to the quality of environmental map construction is the accuracy of LiDAR 6-degree-of-freedom (6-DOF) pose estimation. Conventional LiDAR-SLAM mostly uses the iterative closest point (ICP) and normal distributions transform (NDT) algorithms [25,26] or their variants for interframe pose estimation. ICP estimates the pose transformation matrix between the point clouds of two frames by iteratively matching the closest points of adjacent frames and minimizing the distance between corresponding feature points. However, such algorithms are sensitive to the initial values, with incorrect initialization leading to convergence to local minima outside the attraction basin of the true solution [27,28], and they suffer from slow convergence. As an alternative to ICP, the NDT algorithm does not match corresponding point features. Instead, it represents the underlying scan as a set of Gaussian distributions that locally model the scanned surface as a PDF and obtains the optimal transformation parameters that maximize the sum of probability densities using optimization methods. Its speed and robustness are greatly improved in comparison to ICP. However, NDT discretizes the original point cloud into cells, and the choice of cell size significantly affects the final map quality [22]. LiDAR odometry and mapping (LOAM) [29], the current state-of-the-art solution for LiDAR-SLAM, ranks highly for rotation and translation accuracy on the KITTI vision benchmark [30]. It extracts line and plane features to correlate consecutive frames with the global map and decomposes the complex SLAM problem into a high-frequency odometry thread and a low-frequency mapping thread; these parallel threads significantly improve the real-time performance of SLAM techniques. LOAM is capable of mapping efficiently with acceptable accuracy in large-scale indoor and outdoor environments, but the lack of closed-loop detection [31] makes error accumulation inevitable. Lightweight and ground-optimized LOAM (LeGO-LOAM) [32] adopts ground segmentation and point cloud clustering to make the extracted feature points more effective while adding an optimized back end to LOAM and conducting incremental smoothing and mapping using a Bayes tree (iSAM2) [33] closed-loop module to suppress long-term drift, improving the computational efficiency and accuracy of LOAM. However, both of these methods suffer from motion degradation and perform poorly in scenarios with less discriminative structure.
Considering the vulnerability of individual sensors in real scenarios and the complementarity of both UWB and LiDAR sensors (UWB performs better in open scenarios, while LiDAR-SLAM is more suitable for complex scenarios where UWB performs poorly), with reference to the GNSS and LiDAR fusion method [34,35,36,37], this paper combines UWB and LiDAR-SLAM and applies the information fusion approach to fully utilize and combine the advantages of both positioning systems to achieve a continuously working indoor positioning system with high accuracy, high pervasiveness, and strong robustness. On one hand, LiDAR-SLAM can improve the reliability and availability of the UWB positioning system in complex scenarios. On the other hand, as a type of recursive navigation scheme, the LiDAR-SLAM navigation error diverges gradually with increasing moving distance [37,38]. The fusion of LiDAR-SLAM with UWB helps suppress the error accumulation of SLAM. Most importantly, UWB, as an absolute positioning scheme, can transform LiDAR-SLAM positioning results in a local coordinate system to the world coordinate system (e.g., world geodetic system 1984 (WGS-84)) for some indoor navigation tasks.
The main contributions of this paper are as follows.
  • In complex environments, such as large buildings or time-varying environments, where UWB signals are heavily affected by NLOS errors, rich geometric features enable LiDAR-SLAM to provide accurate and robust pose estimation and mapping results. Building on the advanced LiDAR-SLAM algorithm LeGO-LOAM, we propose and implement a UWB NLOS identification algorithm based on the LiDAR point cloud. This method combines the position information of the UWB anchors with the environment map generated by LeGO-LOAM and distinguishes between LOS and NLOS measurements in real time by efficiently and accurately performing obstacle detection and NLOS identification along the line-of-sight direction of each anchor to improve the UWB data quality. It has good universality, as it requires no tedious early-stage data collection or training phase, and it copes well with the interference of dynamic obstacles in the environment. Experimental results show that this NLOS identification algorithm is effective without adding a large amount of computation.
  • To suppress the error accumulation of LiDAR-SLAM while simultaneously obtaining the positioning results in the world coordinate system, we propose a novel improved-UWB/LiDAR-SLAM tightly coupled positioning system by using UWB LOS measurements identified by the LiDAR point cloud and the positioning results of LeGO-LOAM as the input to the integrated system. Considering that in addition to NLOS propagation, UWB measurements are affected by signal multipath effects, intensity attenuation and other factors, which are likely to cause large gross errors, we use a robust extended Kalman filter (REKF) for parameter solutions and effectively suppress the influence of abnormal measurements on the filtering results by reducing the weights of outliers. A dynamic positioning experiment demonstrates the accuracy and robustness of the proposed tightly coupled integrated method combining NLOS identification and REKF.
The remainder of this paper is organized as follows. Section 2.1 gives an overview of the complete system pipeline. Section 2.2 designs the temporal synchronization and spatial calibration between sensors to ensure effective sensor fusion. The details of the UWB NLOS identification using the LiDAR point cloud algorithm are described in Section 2.3 and Section 2.4. The improved-UWB/LiDAR-SLAM tightly coupled positioning algorithm is introduced in Section 2.5 and Section 2.6. The detailed experimental setup and discussions are reported and analyzed in Section 3. Finally, Section 4 summarizes the work by drawing several conclusions and provides an outlook for future research.

2. Methodology

In this paper, we use the superscript T to denote the transpose of a vector or matrix, lowercase bold symbols (e.g., $\mathbf{x}$) for vectors and uppercase bold symbols (e.g., $\mathbf{R}$) for matrices. For any vector $\mathbf{x}$, $\|\mathbf{x}\|$ denotes its Euclidean norm.

2.1. System Overview

The framework of the proposed improved-UWB/LiDAR-SLAM integrated positioning system is illustrated in Figure 1. First, the system performs data synchronization of LiDAR and UWB sensors through a robot operating system (ROS) and the calibrated external reference matrix, unifying the time and spatial datum (purple blocks in Figure 1). Second, the original LiDAR point cloud is segmented to distinguish between the ground and nonground points, and the ground and nonground feature points are extracted from these point clouds for feature matching to estimate the 6-DOF relative poses of the system. The point cloud in the current frame is registered to the starting coordinate system in accordance with the transformation matrix to obtain a globally consistent map (green blocks in Figure 1). Then, the UWB NLOS data are identified by combining the UWB anchor information with the constructed point cloud map. The UWB data quality is effectively controlled by eliminating NLOS measurements and retaining LOS measurements (yellow blocks in Figure 1). Finally, the improved-UWB/LiDAR-SLAM tightly coupled model is designed. Multiple UWB LOS data and LiDAR data are fused using REKF to obtain the optimal estimation of the integrated system state. The possibility of LiDAR-SLAM divergence is effectively reduced, and the robustness of the system is improved (orange blocks in Figure 1). The red dashed blocks in Figure 1 represent the contributions of our work.

2.2. Spatiotemporal Synchronization of Sensors

Because UWB and LiDAR belong to two different types of positioning systems, temporal synchronization and spatial calibration between sensors are necessary steps for UWB NLOS identification and subsequent multisensor fusion.
For temporal synchronization, ROS provides a well-established data encapsulation format for storing sensor information and timestamps. Therefore, we use the ROS mechanism to assign timestamps to each sensor against the same time reference and achieve temporal synchronization of the two sensor types by aligning the timestamps of the UWB and LiDAR data, as sketched below.
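As a concrete illustration, the following minimal ROS (Melodic) C++ sketch pairs LiDAR frames with UWB ranges by approximate timestamp matching. The topic names and the use of sensor_msgs/Range for the UWB driver output are our assumptions for illustration, not details taken from the paper.

```cpp
// Hypothetical sketch: approximate-time pairing of LiDAR frames and UWB
// ranges under ROS Melodic. Topic names and the UWB message type are
// illustrative assumptions.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/Range.h>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <boost/bind.hpp>

using SyncPolicy = message_filters::sync_policies::ApproximateTime<
    sensor_msgs::PointCloud2, sensor_msgs::Range>;

void onPair(const sensor_msgs::PointCloud2ConstPtr& scan,
            const sensor_msgs::RangeConstPtr& range) {
  // The paired LiDAR frame and UWB range now share (approximately) one
  // timestamp; NLOS identification and fusion operate on such pairs.
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "uwb_lidar_sync");
  ros::NodeHandle nh;
  message_filters::Subscriber<sensor_msgs::PointCloud2> lidar_sub(nh, "/velodyne_points", 10);
  message_filters::Subscriber<sensor_msgs::Range> uwb_sub(nh, "/uwb/range", 10);
  message_filters::Synchronizer<SyncPolicy> sync(SyncPolicy(20), lidar_sub, uwb_sub);
  sync.registerCallback(boost::bind(&onPair, _1, _2));
  ros::spin();
  return 0;
}
```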
The spatial calibration is shown in Figure 2. The world coordinate system, referred to as the w-frame (i.e., the UWB coordinate system), is established by setting stations on indoor control points with a high-precision total station; the body coordinate system is referred to as the b-frame, and the LiDAR coordinate system is referred to as the l-frame. LeGO-LOAM defines the l-frame of the first frame as the global reference coordinate system, referred to as the g-frame (i.e., the map coordinate system). Considering the number of points in the point cloud map, the initial positions of the UWB anchors and the mobile tag are transformed into the g-frame for subsequent sensor data fusion to improve the conversion efficiency. Therefore, it is necessary to precisely determine the 6-DOF transformation relationship $T_w^g = \{\mathbf{R}, \mathbf{t}\}$ between the w-frame and the g-frame, where $\mathbf{R}$ represents the rotation parameter and $\mathbf{t}$ represents the translation parameter.
The specific steps are as follows.
  • Considering that the indoor ground is mostly horizontal, the transformation relationship $T_w^b$ between the w-frame and b-frame is obtained by observing the front and rear points on the side of the mobile platform with a high-precision total station.
  • Because the LiDAR sensor remains fixed after installation, the transformation relationship between the b-frame and l-frame needs to be calibrated only once. A feature-rich static scenario is selected, and the mobile platform is kept stationary. Multiple pairs of corresponding feature points $(\mathbf{p}^w, \mathbf{p}^l)$ are obtained by observing sharp corner feature points in the scenario from two different positions using LiDAR and a high-precision total station. The point $\mathbf{p}^w$ observed by the total station is transformed into $\mathbf{p}^b$ in accordance with $T_w^b$, and the corresponding feature point pairs $(\mathbf{p}^b, \mathbf{p}^l)$ in the b-frame and l-frame are obtained. Thus, the transformation relationship $T_b^l$ between the b-frame and l-frame is obtained by solving for the four parameters.
  • The transformation relationship $T_w^l$ between the w-frame and l-frame is $T_w^l = T_b^l T_w^b$. Since the g-frame and $l_0$-frame coincide at the initial time, $T_w^g = T_w^{l_0} = T_b^{l_0} T_w^b$. Accordingly, the UWB anchor coordinates in the g-frame and the lever-arm vector $\mathbf{l}^l$ pointing from the LiDAR center to the UWB mobile tag in the l-frame can be obtained. In this paper, because the LiDAR center and the UWB mobile tag are on the same plumb line, $\mathbf{l}^l = (0, 0, d)^T$, where d is the distance between the LiDAR center and the UWB mobile tag. A sketch of this calibration chain follows the list.
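The following Eigen sketch illustrates the calibration chain of the three steps above together with the lever-arm relation of Equation (1). All numeric values are placeholders; in practice $T_w^b$ and $T_b^l$ come from the total-station and corner-feature observations described above.

```cpp
// Sketch of the calibration chain (Eigen). Numeric values are placeholders.
#include <Eigen/Geometry>
#include <vector>

int main() {
  Eigen::Isometry3d T_w_b = Eigen::Isometry3d::Identity();  // w-frame -> b-frame
  Eigen::Isometry3d T_b_l = Eigen::Isometry3d::Identity();  // b-frame -> l-frame

  // The g-frame coincides with the l-frame of the first scan (l0), so
  // T_w^g = T_b^{l0} * T_w^b.
  const Eigen::Isometry3d T_w_g = T_b_l * T_w_b;

  // Transform the UWB anchors surveyed in the w-frame into the g-frame.
  std::vector<Eigen::Vector3d> anchors_w = { {10.0, 2.0, 1.5}, {0.0, 8.0, 1.5} };
  std::vector<Eigen::Vector3d> anchors_g;
  for (const auto& p_w : anchors_w) anchors_g.push_back(T_w_g * p_w);

  // Lever arm: tag and LiDAR center lie on one plumb line, offset by d
  // (assumed 0.20 m here; the sign depends on the mounting).
  const double d = 0.20;
  const Eigen::Vector3d lever_l(0.0, 0.0, d);
  const Eigen::Vector3d r_lidar_g = Eigen::Vector3d::Zero();  // from LiDAR-SLAM
  const Eigen::Vector3d r_tag_g = r_lidar_g + lever_l;        // Equation (1)
  (void)r_tag_g;
  return 0;
}
```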

2.3. Generation of LiDAR Point Cloud Map

Because the indoor ground is mostly flat, consistent with the assumption that the LeGO-LOAM robot is always on the ground, and to ensure the accuracy and computational efficiency of the UWB NLOS identification algorithm, this paper adopts the advanced LiDAR-SLAM algorithm LeGO-LOAM to build a high-precision 3-D environment map. LeGO-LOAM consists of four main modules: segmentation, feature extraction, odometry and mapping.
  • Segmentation: The point cloud obtained at time t is denoted $P_t = \{p_1, p_2, \ldots, p_n\}$, where $p_i$ is a point in the point cloud $P_t$. The point cloud is projected into a range image with a resolution of 1800 × 16. The ground point cloud ($P_p$) and nonground point cloud ($P_e$) are extracted from the LiDAR raw data by judging the vertical dimensional characteristic.
  • Feature extraction: To extract features uniformly in all directions from the ground and segmentation points, the range image is equally divided into several subimages in the horizontal direction. The smoothness of each point is calculated and compared with a smoothness threshold to extract the edge features and planar features for registration.
  • Odometry: The scan-to-scan constraint based on the extracted features is built next. Since the ground remains essentially constant between consecutive frames, the variations in the vertical dimension $[t_z, t_{roll}, t_{pitch}]$ can be estimated based on the planar features. This vertical-dimension estimate is input as the initial value into the second optimization step to reduce the number of iterations, and the variations in the horizontal dimension $[t_x, t_y, t_{yaw}]$ are then calculated, improving the computational efficiency.
  • Mapping: The scan-to-map constraint is constructed by the Levenberg–Marquardt (L-M) method, and the final global map is obtained using a loop detection approach. More details of the LeGO-LOAM algorithm can be found in [32].
Considering the computational efficiency and the time-varying nature of dynamic scenarios, and to minimize the interference caused by the nonrepetition of dynamic objects between adjacent frames (e.g., a vehicle or pedestrian scanned in the previous frame may not appear in the next frame), the method used in this paper registers only the LiDAR point cloud of the current frame to the environment map in accordance with the transformation matrix and updates it iteratively in real time, as sketched below.
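Under our reading of this paragraph, the search cloud used for NLOS identification is rebuilt from the current frame alone, transformed into the g-frame with the SLAM pose. A minimal PCL sketch of that update follows; function and variable names are illustrative assumptions.

```cpp
// Sketch (PCL): rebuild the g-frame search cloud from the current frame only,
// so dynamic objects seen in earlier frames do not linger in the map used for
// NLOS identification.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>
#include <pcl/kdtree/kdtree_flann.h>

void updateSearchCloud(const pcl::PointCloud<pcl::PointXYZ>::Ptr& frame_l,  // current frame, l-frame
                       const Eigen::Matrix4f& T_g_l,                        // SLAM pose, l -> g
                       pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud_g,        // search cloud, g-frame
                       pcl::KdTreeFLANN<pcl::PointXYZ>& tree) {
  pcl::transformPointCloud(*frame_l, *cloud_g, T_g_l);  // register current frame
  tree.setInputCloud(cloud_g);                          // refresh KD-tree (Sec. 2.4)
}
```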

2.4. UWB NLOS Identification Using a LiDAR Point Cloud Map

For positioning in some indoor scenarios, as shown in Figure 3a, occlusion by obstacles (e.g., pedestrians, vehicles, wall pillars, etc.) may cause a node to enter the NLOS status, meaning that the UWB pulse signal has no direct path and must be reflected, refracted and/or penetrate through obstacles before it can be received by the target node. Such regions are called NLOS regions. Compared with the LOS region, the time delay of the signal in the NLOS region leads to a positively biased distance estimate, which seriously affects the positioning accuracy. However, in the LiDAR point cloud map, as shown in Figure 3b, the position, size and reflective surface of each obstacle (static or dynamic) are all known; that is, the LOS region and NLOS region are known.
In the NLOS region, obstacles act as blockages between nodes, which appear in the point cloud map as point cloud clusters between those nodes. Based on this observation, after completing the temporal synchronization and spatial calibration of the two sensors, we design a UWB NLOS identification algorithm based on the LiDAR point cloud. The two types of sensors, UWB and LiDAR, are correlated, and the LiDAR point cloud map is used to identify NLOS by judging whether there is obstacle occlusion between the UWB mobile tag and an anchor, i.e., whether the line-of-sight direction is in the NLOS status.
The inputs to the NLOS identification algorithm are the transformation relationship $T_w^g$ between the w-frame and g-frame, the UWB anchor set $ua_{all} = \{ua_1, ua_2, \ldots, ua_n\}$, the UWB mobile tag coordinate $\mathbf{r}_{tag}^w$ at the initial time, the UWB anchor coordinates $\mathbf{r}_{anchor}^w$, the LiDAR coordinate $\mathbf{r}_{lidar}^g$, the LiDAR point cloud map $M^g$ and a point number threshold $n_{thr}$. The output is the UWB LOS anchor set $ua_{los} = \{ua_1, ua_2, \ldots, ua_s\}$. The main flow of the NLOS identification algorithm is as follows.
  1. At the initial time, the g-frame coincides with the $l_0$-frame. In accordance with the transformation relationship $T_w^g$ between the w-frame and g-frame calculated in Section 2.2, the UWB mobile tag coordinate $\mathbf{r}_{tag}^w$ and anchor coordinates $\mathbf{r}_{anchor}^w$ in the w-frame are transformed into the g-frame (i.e., the $l_0$-frame). The lever-arm vector $\mathbf{l}^l$ between the UWB mobile tag and the LiDAR sensor in the l-frame and the UWB anchor coordinates $\mathbf{r}_{anchor}^g = (x_{anchor}^g, y_{anchor}^g, z_{anchor}^g)$ in the g-frame are obtained.
  2. At time t, the LiDAR coordinate $\mathbf{r}_{lidar}^g$ is used to deduce the UWB mobile tag coordinate $\mathbf{r}_{lidar,tag}^g = (x_{lidar,tag}^g, y_{lidar,tag}^g, z_{lidar,tag}^g)$ in the g-frame in accordance with the lever-arm vector $\mathbf{l}^l$:

$\mathbf{r}_{lidar,tag}^g = \mathbf{r}_{lidar}^g + \mathbf{l}^l$    (1)

  3. K-dimensional trees (KD-trees) and voxel grids are commonly used for map representation and corresponding searches in SLAM systems [36,39]. Considering that the KD-tree representation can find association points through a nearest neighbor search or radius search, we convert the point cloud map $M^g$ generated in Section 2.3 into a KD-tree structure $M_{tree}^g$ for subsequent obstacle detection.
  4. Based on the UWB mobile tag coordinate $\mathbf{r}_{lidar,tag}^g$ and the coordinate $\mathbf{r}_{anchor}^g$ of anchor j, the line-of-sight direction, i.e., the search direction for NLOS identification (e.g., the direction of the red dashed line in Figure 4), is determined. At the same time, the distance $d$, vertical azimuth $\omega$ and horizontal azimuth $\alpha$ between the mobile tag and anchor j are calculated.
    Figure 4. NLOS identification using a LiDAR point cloud.
  5. The first NLOS identification is performed along the line of sight, centered on the UWB mobile tag position.
  6. Considering the general size of indoor objects and the LiDAR vertical angular resolution, the constructed KD-tree $M_{tree}^g$ is used to search the point cloud in the neighborhood within a radius of 1 m. If the number of points in the neighborhood exceeds the given threshold $n_{thr}$ (in this paper, $n_{thr} = 10$), the line-of-sight direction may be blocked by obstacles.
  7. To prevent misjudgment (orange dashed block in Figure 4) and improve the accuracy of NLOS identification, based on the principle of collision detection, we traverse all points of the LiDAR point cloud in the neighborhood, take any three points to construct a triangle, and use Plücker coordinates to determine whether the line of sight intersects the triangle in 3-D space (this test is implemented in the sketch after this list). Plücker coordinates specify directed lines in 3-D space using six-dimensional vectors. Taking any two distinct points $\mathbf{a} = (a_0, a_1, a_2)$ and $\mathbf{b} = (b_0, b_1, b_2)$ to construct the Plücker coordinates $\mathbf{l}$ of line ab, we obtain:

$\mathbf{l} = \mathrm{pl\ddot{u}cker}(\mathbf{a}, \mathbf{b}) = (l_0, l_1, l_2, l_3, l_4, l_5)$    (2)

where:

$l_0 = a_0 b_1 - b_0 a_1, \quad l_1 = a_0 b_2 - b_0 a_2, \quad l_2 = a_0 - b_0, \quad l_3 = a_1 b_2 - b_1 a_2, \quad l_4 = a_2 - b_2, \quad l_5 = b_1 - a_1$    (3)

The side operator is defined as the permuted inner product of Plücker coordinates $\mathbf{l}_1$ and $\mathbf{l}_2$:

$\mathrm{side}(\mathbf{l}_1, \mathbf{l}_2) = l_{1,0} l_{2,4} + l_{1,1} l_{2,5} + l_{1,2} l_{2,3} + l_{1,3} l_{2,2} + l_{1,4} l_{2,0} + l_{1,5} l_{2,1}$    (4)

where $\mathbf{l}_1 = (l_{1,0}, l_{1,1}, l_{1,2}, l_{1,3}, l_{1,4}, l_{1,5})$ and $\mathbf{l}_2 = (l_{2,0}, l_{2,1}, l_{2,2}, l_{2,3}, l_{2,4}, l_{2,5})$.

In our method, the line of sight is defined by the UWB mobile tag coordinate $\mathbf{r}_{lidar,tag}^g$ and anchor coordinate $\mathbf{r}_{anchor}^g$, and the triangle is defined by any three points $\mathbf{p}_0$, $\mathbf{p}_1$ and $\mathbf{p}_2$ in the LiDAR point cloud. Let:

$s_1 = \mathrm{side}(\mathbf{l}_{uwb}, \mathbf{l}_{p1}), \quad s_2 = \mathrm{side}(\mathbf{l}_{uwb}, \mathbf{l}_{p2}), \quad s_3 = \mathrm{side}(\mathbf{l}_{uwb}, \mathbf{l}_{p3})$    (5)

where:

$\mathbf{l}_{uwb} = \mathrm{pl\ddot{u}cker}(\mathbf{r}_{lidar,tag}^g, \mathbf{r}_{anchor}^g), \quad \mathbf{l}_{p1} = \mathrm{pl\ddot{u}cker}(\mathbf{p}_1, \mathbf{p}_0), \quad \mathbf{l}_{p2} = \mathrm{pl\ddot{u}cker}(\mathbf{p}_2, \mathbf{p}_1), \quad \mathbf{l}_{p3} = \mathrm{pl\ddot{u}cker}(\mathbf{p}_0, \mathbf{p}_2)$    (6)

If $s_1, s_2, s_3 > 0$ or $s_1, s_2, s_3 < 0$, then the line and the triangle are not coplanar, and the line passes through the triangle, as shown in Figure 5.
  8. If a triangle intersects the line of sight, the line-of-sight direction is considered to be blocked by an obstacle, i.e., the status between the UWB mobile tag and anchor j is NLOS. The detection is stopped, and anchor j is marked as an NLOS anchor. Otherwise, we consider that there is no obstacle occlusion at the current position and continue the detection.
  9. The search center point is moved to the next point $\mathbf{r}_{search,m}^g = (x_{search,m}^g, y_{search,m}^g, z_{search,m}^g)$ in accordance with Equation (7), taking 2 m as the step size along the line of sight. Steps 6–8 are repeated to continue searching the point cloud for NLOS identification for anchor j.

$x_{search,m}^g = x_{lidar,tag}^g + 2(m-1)\cos\omega\cos\alpha, \quad y_{search,m}^g = y_{lidar,tag}^g + 2(m-1)\cos\omega\sin\alpha, \quad z_{search,m}^g = z_{lidar,tag}^g + 2(m-1)\sin\omega$    (7)

  10. When $m > (d+3)/2$, the NLOS identification process for anchor j ends, and anchor j is marked as an LOS anchor.
  11. Steps 4–10 are repeated to complete NLOS identification for all UWB anchors and distinguish between the LOS anchor set $ua_{los} = \{ua_1, ua_2, \ldots, ua_s\}$ and the NLOS anchor set $ua_{nlos} = \{ua_1, ua_2, \ldots, ua_r\}$. The LOS anchor measurements are used as input to the UWB module in the integrated positioning system.
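The sketch below assembles steps 4–10 in C++ with PCL and Eigen, including the Plücker-coordinate collision test of Equations (2)–(6). The 1 m search radius, the threshold n_thr = 10 and the 2 m step follow the paper; grouping the neighborhood points into consecutive triples is a simplification of "take any three points" and is our assumption. Using a normalized direction vector in place of the azimuth pair (ω, α) is algebraically equivalent to Equation (7).

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <Eigen/Core>
#include <array>
#include <vector>

using Plucker = std::array<double, 6>;

// Plücker coordinates of the directed line a -> b, Equations (2)-(3).
Plucker pluecker(const Eigen::Vector3d& a, const Eigen::Vector3d& b) {
  return { a[0]*b[1] - b[0]*a[1],   // l0
           a[0]*b[2] - b[0]*a[2],   // l1
           a[0] - b[0],             // l2
           a[1]*b[2] - b[1]*a[2],   // l3
           a[2] - b[2],             // l4
           b[1] - a[1] };           // l5
}

// Permuted inner product of two Plücker lines, Equation (4).
double side(const Plucker& u, const Plucker& v) {
  return u[0]*v[4] + u[1]*v[5] + u[2]*v[3] + u[3]*v[2] + u[4]*v[0] + u[5]*v[1];
}

// Equations (5)-(6): the sight line crosses triangle (p0, p1, p2) iff the
// three side operators share one strict sign.
bool lineCrossesTriangle(const Plucker& l_uwb, const Eigen::Vector3d& p0,
                         const Eigen::Vector3d& p1, const Eigen::Vector3d& p2) {
  const double s1 = side(l_uwb, pluecker(p1, p0));
  const double s2 = side(l_uwb, pluecker(p2, p1));
  const double s3 = side(l_uwb, pluecker(p0, p2));
  return (s1 > 0 && s2 > 0 && s3 > 0) || (s1 < 0 && s2 < 0 && s3 < 0);
}

// Steps 4-10 for one anchor: march along the sight line from the tag, do a
// 1 m radius search at each step, and collision-test dense neighborhoods.
bool anchorIsNlos(const Eigen::Vector3d& tag, const Eigen::Vector3d& anchor,
                  const pcl::KdTreeFLANN<pcl::PointXYZ>& tree,
                  const pcl::PointCloud<pcl::PointXYZ>& map, int n_thr = 10) {
  const Eigen::Vector3d dir = (anchor - tag).normalized();
  const double d = (anchor - tag).norm();
  const Plucker l_uwb = pluecker(tag, anchor);
  for (int m = 1; m <= (d + 3.0) / 2.0; ++m) {            // stop rule, step 10
    const Eigen::Vector3d c = tag + 2.0 * (m - 1) * dir;  // Equation (7)
    const pcl::PointXYZ centre(c.x(), c.y(), c.z());
    std::vector<int> idx;
    std::vector<float> sq;
    if (tree.radiusSearch(centre, 1.0, idx, sq) < n_thr) continue;  // step 6
    for (std::size_t i = 0; i + 2 < idx.size(); i += 3) {           // step 7
      auto pt = [&](std::size_t k) { const auto& p = map[idx[k]];
                                     return Eigen::Vector3d(p.x, p.y, p.z); };
      if (lineCrossesTriangle(l_uwb, pt(i), pt(i + 1), pt(i + 2)))
        return true;                                                // step 8: NLOS
    }
  }
  return false;                                                     // step 10: LOS
}
```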

2.5. Improved-UWB/LiDAR-SLAM Tightly Coupled Integrated Model

In our method, to suppress the error accumulation of LiDAR-SLAM while obtaining positioning results in the world coordinate system, we introduce the measurements of the UWB LOS anchors identified by the NLOS identification algorithm in Section 2.4 as corrections to eliminate the drift error of LiDAR-SLAM and provide more robust positioning results.
Considering that the number of UWB LOS measurements may not be sufficient to maintain stand-alone UWB positioning, we adopt a UWB/LiDAR-SLAM tightly coupled model that takes the UWB LOS measurements and the LiDAR-SLAM positioning information without loop closure as the measurement inputs of the Kalman filter. This model is not restricted by the number of UWB measurements and integrates UWB and LiDAR at the measurement level. We refer to it as the improved-UWB/LiDAR-SLAM tightly coupled integrated algorithm (NLOS identification + UWB/LiDAR-SLAM tightly coupled, NI-TC).
In this study, we use a constant velocity model [40] to constrain two consecutive states. The system model in discrete-time form can be expressed as follows:

$\mathbf{x}_k = \mathbf{\Phi}_{k,k-1} \mathbf{x}_{k-1} + \boldsymbol{\omega}_{k-1}$    (8)

where symbols with subscript k indicate the parameter at time k; $\mathbf{x}_k$ denotes the state vector; $\mathbf{\Phi}_{k,k-1}$ denotes the state transition matrix; and $\boldsymbol{\omega}_{k-1}$ denotes the white noise sequence of the system, which is modeled as a Gauss–Markov random process with the ideal zero-mean assumption.
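For concreteness, with the state comprising the position and velocity of the LiDAR in the g-frame (matching the 6 × 1 error state in Equation (14)), the constant velocity model of [40] takes the standard form below. This rendering is ours, not reproduced from the paper, with $\Delta t$ the filter period:

```latex
\mathbf{x}_k =
\begin{bmatrix} \mathbf{r}^{g}_{lidar,k} \\ \mathbf{v}^{g}_{k} \end{bmatrix},
\qquad
\mathbf{\Phi}_{k,k-1} =
\begin{bmatrix}
\mathbf{I}_{3\times 3} & \Delta t\,\mathbf{I}_{3\times 3} \\
\mathbf{0}_{3\times 3} & \mathbf{I}_{3\times 3}
\end{bmatrix}
```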
In the construction of the tightly coupled measurement equation, the superscript "~" represents the disturbed term. The distance between the UWB mobile tag, calculated using the LiDAR-SLAM position, and the anchor is $\tilde{d}_{lidar,tag}$. The distance measured using the two-way time-of-flight (TW-TOF) method of UWB is $\tilde{d}_{tag}$. Assuming that the true distance between the UWB mobile tag and anchor is $d_{true,tag}$, we can obtain:

$\tilde{d}_{lidar,tag} = \|\tilde{\mathbf{r}}_{lidar,tag}^g - \mathbf{r}_{anchor}^g\| = d_{true,tag} + \dfrac{(\mathbf{r}_{true,tag}^g - \mathbf{r}_{anchor}^g)^T}{d_{true,tag}}(\tilde{\mathbf{r}}_{lidar,tag}^g - \mathbf{r}_{true,tag}^g) + o(\|\tilde{\mathbf{r}}_{lidar,tag}^g - \mathbf{r}_{true,tag}^g\|)$    (9)

where $\mathbf{r}_{true,tag}^g$ denotes the true position of the UWB mobile tag and $o(\|\tilde{\mathbf{r}}_{lidar,tag}^g - \mathbf{r}_{true,tag}^g\|)$ is an infinitesimal term.
Assuming that the lever-arm measurement $\mathbf{l}^l$ has no calibration error, the disturbance analysis of Equation (1) is as follows:

$\tilde{\mathbf{r}}_{lidar,tag}^g = \tilde{\mathbf{r}}_{lidar}^g + \mathbf{l}^l = \mathbf{r}_{true,lidar}^g + \delta\mathbf{r}_{lidar}^g + \mathbf{l}^l = \mathbf{r}_{true,tag}^g + \delta\mathbf{r}_{lidar}^g$    (10)

where $\mathbf{r}_{true,lidar}^g$ denotes the true position of the LiDAR center and $\delta\mathbf{r}_{lidar}^g$ denotes the error vector for the LiDAR position.
Substituting Equation (10) into Equation (9) yields the following formula:
$\tilde{d}_{lidar,tag} \approx d_{true,tag} + \dfrac{(\mathbf{r}_{true,tag}^g - \mathbf{r}_{anchor}^g)^T}{d_{true,tag}}\,\delta\mathbf{r}_{lidar}^g$    (11)
The TW-TOF ranging information measured by UWB can be expressed as:
$\tilde{d}_{tag} = d_{true,tag} + \varepsilon_{d,tag}$    (12)

where $\varepsilon_{d,tag}$ denotes the measurement noise.
Combining Equations (11) and (12), the measurement model for the integrated system is as follows:
$\delta\mathbf{z}_k = \mathbf{H}_k\,\delta\mathbf{x}_k + \mathbf{v}_k$    (13)
where:

$\delta z_{k,j} = \tilde{d}_{lidar,tag,j} - \tilde{d}_{tag,j}, \quad \delta\mathbf{x}_k = \begin{bmatrix} \delta\mathbf{r}_{lidar}^g & \delta\mathbf{v}^g \end{bmatrix}^T_{6\times 1}, \quad \mathbf{H}_{k,j} = \begin{bmatrix} \dfrac{(\mathbf{r}_{true,tag}^g - \mathbf{r}_{anchor,j}^g)^T}{d_{true,tag,j}} & \mathbf{0}_{1\times 3} \end{bmatrix}, \quad v_{k,j} = \varepsilon_{d,tag,j}$    (14)

where symbols with subscript j indicate the relevant information of UWB LOS anchor j, $j = 1, 2, \ldots, s$; $\delta\mathbf{z}_k$ denotes the measurement vector; $\mathbf{H}_k$ denotes the design matrix relating the measurements and states; $\delta\mathbf{x}_k$ denotes the error of the state vector; and $\mathbf{v}_k$ denotes the white noise sequence of the measurements, with covariance matrix $\mathbf{R}_k$.
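A minimal Eigen sketch of assembling Equation (14) from the LOS anchors is given below; replacing the true tag position with the SLAM-derived tag position when evaluating the design matrix is a standard linearization choice that we assume here.

```cpp
#include <Eigen/Dense>
#include <vector>

// Stack one innovation row per LOS anchor according to Equation (14).
void buildMeasurementModel(const Eigen::Vector3d& tag_pred,             // r~_{lidar,tag}^g
                           const std::vector<Eigen::Vector3d>& anchors, // LOS anchors r_{anchor,j}^g
                           const std::vector<double>& uwb_ranges,       // d~_{tag,j}
                           Eigen::VectorXd& dz, Eigen::MatrixXd& H) {
  const int s = static_cast<int>(anchors.size());
  dz.resize(s);
  H = Eigen::MatrixXd::Zero(s, 6);                    // error state: [dr; dv]
  for (int j = 0; j < s; ++j) {
    const Eigen::Vector3d diff = tag_pred - anchors[j];
    const double d_pred = diff.norm();                // d~_{lidar,tag,j}
    dz(j) = d_pred - uwb_ranges[j];                   // dz_{k,j}, Eq. (14)
    H.block<1, 3>(j, 0) = diff.transpose() / d_pred;  // unit LOS row; velocity columns stay 0
  }
}
```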

2.6. Innovation-Based Outlier-Resistant Robust EKF Algorithm

The filtering method is a common scheme for multisensor data fusion [41,42,43,44]. The standard extended Kalman filter (EKF) carries out the optimal estimation of the real-time state through prediction, measurement and correction, which minimizes the variance in the state error. To maintain the optimal system state, faulty measurements should be excluded prior to the Kalman update. In Section 2.4, we use the LiDAR point cloud to perform a degree of quality control on the UWB raw data, effectively eliminating the NLOS measurements. However, in addition to NLOS propagation, the UWB measurements are also influenced by signal multipath effects, intensity fading and other factors; especially in demanding environments, the measurements exhibit greater randomness.
The theory of robust estimation provides a feasible solution to this problem. It does not excessively require the unbiasedness and effectiveness of parameter estimation. Reducing the weights of outliers can effectively suppress the influence of abnormal measurements on the filtering results and improve the robustness of the system. The derivation of the REKF is essentially the same as that of the standard EKF, with the main difference being the replacement of the measurement noise covariance matrix $\mathbf{R}_k$ in the standard EKF with an equivalent covariance matrix $\bar{\mathbf{R}}_k$, thereby adjusting the Kalman gain $\mathbf{K}_k$ to control the adverse effect of measurement noise on parameter estimation.
This paper adopts the two-factor variance inflation model [45] to construct the equivalent covariance matrix $\bar{\mathbf{R}}_k$:

$\bar{\mathbf{R}}_k = \begin{bmatrix} \bar{\sigma}_{11}^2 & \cdots & \bar{\sigma}_{1n}^2 \\ \vdots & \ddots & \vdots \\ \bar{\sigma}_{n1}^2 & \cdots & \bar{\sigma}_{nn}^2 \end{bmatrix} = \begin{bmatrix} \gamma_{11}\sigma_{11}^2 & \cdots & \gamma_{1n}\sigma_{1n}^2 \\ \vdots & \ddots & \vdots \\ \gamma_{n1}\sigma_{n1}^2 & \cdots & \gamma_{nn}\sigma_{nn}^2 \end{bmatrix}$    (15)

where the diagonal elements $\sigma_{ii}$ denote the measurement noise between the UWB mobile tag and each UWB anchor; the nondiagonal elements $\sigma_{ij}$ denote the correlation noise between the UWB mobile tag and each UWB anchor, which are taken to be 0 in this paper; $\gamma_{ij} = \sqrt{\gamma_{ii}\gamma_{jj}}$; and $\gamma_{ii}$ and $\gamma_{jj}$ are inflation factors, with $\gamma_{ii}$ expressed as:
$\gamma_{ii} = \begin{cases} 1, & |\tilde{v}_{k,i}| \le k_0 \\ \dfrac{|\tilde{v}_{k,i}|}{k_0} \cdot \dfrac{k_1 - k_0}{k_1 - |\tilde{v}_{k,i}|}, & k_0 < |\tilde{v}_{k,i}| \le k_1 \\ +\infty, & |\tilde{v}_{k,i}| > k_1 \end{cases}$    (16)

where the weight reduction factor $k_0$ and the elimination factor $k_1$ are set thresholds, usually $k_0 = 2.0{\sim}3.0$ and $k_1 = 4.5{\sim}8.5$ [18,41], with the best filtering performance in this paper achieved when $k_0 = 2.0$ and $k_1 = 6.0$; and $\tilde{v}_{k,i}$ is the standardized innovation, which can be expressed as:
$\tilde{v}_{k,i} = \dfrac{(\delta\mathbf{z}_k - \mathbf{H}_k\,\delta\mathbf{x}_{k,k-1})_i}{\sqrt{(\mathbf{H}_k \mathbf{P}_{k,k-1} \mathbf{H}_k^T + \mathbf{R}_k)_{i,i}}}$    (17)

where $\delta\mathbf{x}_{k,k-1}$ and $\mathbf{P}_{k,k-1}$ denote a priori estimates of the state error and its covariance matrix, respectively.
The standardized innovation $\tilde{v}_{k,i}$ effectively reflects measurement anomalies. The construction of the inflation factor shows that when a gross error occurs in the measurements, the standardized innovation exceeds the detection threshold of the system, and the corresponding variance is inflated, reducing or eliminating the influence of abnormal measurements on parameter estimation and ensuring the reliability and positioning accuracy of the system. Otherwise, the measurement is considered normal, and the original variance remains unchanged.
After equivalently replacing the measurement noise covariance matrix, the recursive equations of the REKF are obtained as follows:
$\begin{aligned} \bar{\mathbf{K}}_k &= \mathbf{P}_{k,k-1}\mathbf{H}_k^T(\mathbf{H}_k\mathbf{P}_{k,k-1}\mathbf{H}_k^T + \bar{\mathbf{R}}_k)^{-1} \\ \delta\mathbf{x}_k &= \delta\mathbf{x}_{k,k-1} + \bar{\mathbf{K}}_k(\delta\mathbf{z}_k - \mathbf{H}_k\,\delta\mathbf{x}_{k,k-1}) \\ \mathbf{P}_k &= (\mathbf{I} - \bar{\mathbf{K}}_k\mathbf{H}_k)\mathbf{P}_{k,k-1} \end{aligned}$    (18)

where $\bar{\mathbf{K}}_k$ denotes the equivalent Kalman gain matrix; $\delta\mathbf{x}_k$ and $\mathbf{P}_k$ denote the filter estimate and its covariance matrix, respectively; and $\mathbf{I}$ denotes the identity matrix.
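The following Eigen sketch performs one REKF update according to Equations (15)–(18), inflating only the diagonal of $\mathbf{R}_k$ since the cross terms are zero in this paper; approximating the $+\infty$ branch of Equation (16) with a very large variance is our implementation choice.

```cpp
#include <Eigen/Dense>
#include <cmath>

// One REKF measurement update, Equations (15)-(18). k0 = 2.0 and k1 = 6.0
// as selected in Section 2.6. R is passed by value so the inflated copy
// stays local to this update.
void rekfUpdate(Eigen::VectorXd& dx, Eigen::MatrixXd& P,   // a priori dx_{k,k-1}, P_{k,k-1}
                const Eigen::VectorXd& dz, const Eigen::MatrixXd& H,
                Eigen::MatrixXd R, double k0 = 2.0, double k1 = 6.0) {
  const Eigen::VectorXd innov = dz - H * dx;               // innovation
  const Eigen::MatrixXd S = H * P * H.transpose() + R;
  for (int i = 0; i < innov.size(); ++i) {
    const double v = std::fabs(innov(i)) / std::sqrt(S(i, i));     // Eq. (17)
    if (v > k1)
      R(i, i) = 1e12;                                  // "+inf": reject outlier
    else if (v > k0)
      R(i, i) *= (v / k0) * (k1 - k0) / (k1 - v);      // inflate, Eq. (16)
  }
  const Eigen::MatrixXd K = P * H.transpose() *
      (H * P * H.transpose() + R).inverse();           // equivalent gain, Eq. (18)
  dx += K * innov;
  P = (Eigen::MatrixXd::Identity(P.rows(), P.cols()) - K * H) * P;
}
```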

3. Experiments and Discussions

3.1. System Hardware

We implement the proposed UWB NLOS identification using the LiDAR point cloud and the improved-UWB/LiDAR-SLAM tightly coupled positioning algorithm in C++ under ROS (Melodic). To evaluate the effectiveness of the proposed algorithm, the integrated positioning experimental platform shown in Figure 6 is used, and two groups of experiments (static and dynamic) are conducted in underground parking lots. The datasets are acquired by a Velodyne VLP-16 3-D LiDAR and Time Domain PulsON440 UWB modules, and the ground truth is acquired in real time by a Leica TS50 automatic tracking total station. The VLP-16 scanner has a measurement range of up to 100 m with an accuracy of ±3 cm. Its vertical and horizontal fields of view are ±15° and 360°, respectively. When the sampling rate is set to 10 Hz, the 16-line laser scanner provides a vertical angular resolution of 2° and a horizontal angular resolution of 0.2°. The PulsON440 module uses the TW-TOF method to measure the distance between two units with an accuracy of 5 ± 1 cm under LOS conditions, and its sampling rate is set to 2 Hz. The ranging accuracy of the Leica TS50 is 0.6 mm ± 1 ppm, and its sampling rate is set to 10 Hz. All algorithms are tested on an Intel computer (i7-10875H CPU @ 2.30 GHz, 32 GB RAM and Nvidia GeForce RTX 2060 GPU) running ROS, allowing easy real-time operation.

3.2. Evaluation of the UWB NLOS Identification Algorithm Using Static Data

To test the performance of the UWB NLOS identification algorithm based on the LiDAR point cloud, a static experiment is conducted in an underground parking lot of a shopping mall. The experimental layout is shown in Figure 7. UWB anchor 1 and anchor 4 are in the NLOS region, with the line of sight to the UWB mobile tag blocked by a wall column and a vehicle, respectively. UWB anchor 2 and anchor 3 are in the LOS region, where anchor 2 is placed on the edge of the wall column, very close to the column but not blocked by it, a deliberate trap set to test the effectiveness of the algorithm. The positions of the LiDAR and UWB modules are all precisely measured by the total station.
In the static experiment, the distances between the UWB mobile tag and the anchors are measured using the TW-TOF method, as shown in Figure 8, and the true distances are calculated from the total station measurements using the Euclidean distance formula. The zero values in the purple dashed block in Figure 8 are attributed to the loss of the measurement in the corresponding epoch due to factors such as the actual environment and anchor placement, a normal phenomenon in UWB data acquisition.
To evaluate the NLOS errors caused by indoor obstacles, we calculate the ranging errors for the four UWB anchors after removing the zero values from the ranging values, as shown in Figure 9. Compared with the LOS scenario, the occlusion of obstacles always causes positive NLOS errors, and comparing UWB anchor 1 (Figure 9a) and anchor 4 (Figure 9d) makes clear that different obstacles cause different degrees of NLOS error. Despite the strong penetration of the UWB signal, the wall column causes larger NLOS errors than the vehicle, with a root-mean-square error (RMSE) of 3.348 m. When range values with positive errors are used for positioning, the positioning error inevitably increases. Therefore, to achieve better positioning accuracy, the identification of NLOS becomes particularly important.
Figure 10 shows the LiDAR point cloud map (green points) in a certain epoch of Algorithm 1 and the corresponding parameters for NLOS identification, such as the UWB mobile tag coordinate (black triangle), UWB anchor coordinates (red triangles), line-of-sight directions (blue lines) and search center points (blue points). Figure 10a is a 3-D view, and Figure 10b–d shows the top view, front view and left-side view, respectively. As shown in the different views in Figure 10, our proposed NLOS identification algorithm obtains the LiDAR point clouds (purple and orange points) within a radius of 1 m by performing a KD-tree radius search along the line-of-sight direction. To prevent the misjudgment of treating the entire point cloud in the neighborhood as obstacle points that block the line of sight, we effectively distinguish between true obstacle points (purple points) and false obstacle points (orange points) through further collision detection, thereby distinguishing between NLOS anchors and LOS anchors and improving the accuracy of NLOS identification. As shown in Table 1, according to the distances between the UWB mobile tag and the anchors, the theoretical search counts for the four UWB anchors are 5, 7, 6 and 10. During the actual search, obstacle points are detected on the 4th search of UWB anchor 1, and collision detection confirms that the line of sight is blocked; therefore, UWB anchor 1 is marked as an NLOS anchor, and the detection is stopped. On the 6th search of UWB anchor 2, obstacle points are detected, which collision detection determines to be false obstacle points, i.e., the obstacle is to the side of the line of sight without occlusion. The same situation occurs in the 3rd search of UWB anchor 4, but the result of the 8th search confirms that an obstacle blocks the line of sight, so anchor 4 is also marked as an NLOS anchor. In our experiment, the NLOS identification times for the four UWB anchors are 0.634 ms, 0.404 ms, 0.034 ms and 0.506 ms, meaning that we can easily realize real-time NLOS identification. The number of NLOS anchors after NLOS identification is shown in Figure 11. Only one NLOS anchor (anchor 1) is identified in the first epoch, due to insufficient point cloud map information. For each subsequent epoch, the proposed UWB NLOS identification algorithm accurately and efficiently distinguishes between LOS and NLOS anchors, and no further missed or false detections occur. In the static experiment, the success rate of UWB NLOS identification is up to 99.80%.

3.3. Evaluation of the UWB NLOS Identification Algorithm Using Dynamic Data

To further verify the effectiveness of the UWB NLOS identification algorithm based on the LiDAR point cloud, a dynamic experiment is conducted in an underground parking lot of a hospital. The experimental layout, shown in Figure 12, is obviously challenging for the UWB positioning system. We deploy four UWB anchors in a 40 m × 40 m area and use the integrated positioning experimental platform to obtain an evaluation dataset with changing mobility. The purple lines and arrows in Figure 12 are the driving route and direction, respectively. Pedestrians and vehicles are allowed to pass intermittently throughout the experiment, so the UWB signals are mainly affected by fixed indoor structures (e.g., wall columns), the experimenters, and passing vehicles.
Figure 13 shows UWB raw measurements for each anchor and UWB LOS measurements after NLOS identification. Figure 14 shows the error curve between UWB measurements and the true values after removing the zero values, and the upper subgraphs show a detailed enlargement of the error curve. Table 2 reports the results of the ranging error evaluation, and ‘Max’ represents the maximum error. It is quite obvious that UWB raw measurements contain a large number of outliers, with RMSEs of 2.210 m, 1.831 m, 1.612 m, and 1.075 m and maximum errors of 37.105 m, 30.562 m, 51.658 m and 28.330 m for the four anchor ranging values, respectively. The RMSEs of LOS measurements after NLOS identification are reduced to 0.050 m, 0.098 m, 0.100 m and 0.097 m, which are reduced by 97.74%, 94.65%, 93.80% and 90.98% compared to the raw measurements, respectively, significantly improving the data quality of UWB measurements.
Figure 15 shows the number of UWB NLOS anchors identified by the NLOS identification algorithm in each epoch. The UWB raw measurements and UWB LOS measurements are used as inputs to the UWB positioning system, and the parameters are solved using the least squares (LS) method to obtain the positioning trajectories for LS and for LS with NLOS identification (NLOS identification + LS, NI-LS), as shown in Figure 16. Figure 17a–c shows the position errors of the LS and NI-LS algorithms in the X, Y and plane directions, respectively. Table 3 reports the results of the positioning error evaluation. 'MAE' and 'Std' in Table 3 abbreviate mean absolute error and standard deviation, respectively, and 'Availability' is calculated by dividing the number of positioning results by the total number of measurements.
The quality of the UWB ranging values seriously affects the positioning accuracy. Direct parameter calculation using the raw measurements reaches an RMSE of 2.752 m in the plane direction, with a maximum error of 30.483 m, which deviates seriously from the ground truth and can no longer sustain a high-precision indoor positioning result. After filtering the UWB data using our NLOS identification algorithm, the positioning accuracy is significantly improved. Our NI-LS method (red) outperforms the LS method (blue) on both error evaluation metrics, reducing the RMSEs of the position error in the X, Y and plane directions by 96.45%, 92.58% and 94.77%, respectively, and reducing the maximum errors to 0.590 m, 0.787 m and 0.826 m, respectively, which provides solid evidence for the effectiveness of the proposed method. However, since 2-D LS positioning requires at least three ranging values, NI-LS can produce a solution only when no more than one anchor is identified as NLOS, as shown by the red points in Figure 15, reducing the availability of the NI-LS method to 54.34%. Figure 18 shows the NLOS identification time of the proposed method for the different anchors in each epoch. The average identification times for the four anchors are 0.809 ms, 0.919 ms, 0.940 ms and 1.111 ms, and the maximum identification times are 14.537 ms, 2.004 ms, 5.063 ms and 4.942 ms, respectively. In our experimental scenario, the average runtime of the whole NLOS identification module is 41.099 ms, and the maximum runtime is 160.754 ms, which does not significantly increase the computation time of the system and satisfies the real-time requirements.
From both static and dynamic experiments, it is clear that, due to good information fusion and data association, Algorithm 1 can effectively identify NLOS errors and improve the accuracy and robustness of UWB positioning in complex indoor environments without significantly increasing the amount of computation.

3.4. Evaluation of the Improved-UWB/LiDAR-SLAM Tightly Coupled Positioning Algorithm

To verify the effectiveness of the proposed improved-UWB/LiDAR-SLAM tightly coupled algorithm with the REKF (NLOS identification + UWB/LiDAR-SLAM tightly coupled + REKF, NI-TC-REKF), several positioning methods are compared on the evaluation dataset collected in Section 3.3. Considering that the data availability of the NI-LS method is only 54.34%, we take the LOS measurements identified by the NLOS identification algorithm and the positioning results of LeGO-LOAM as the direct inputs of the integrated system. In this experiment, we compare the trajectory solved by the NI-TC-REKF algorithm with those of several other methods:
  • NI-LS;
  • LeGO-LOAM without loop closure;
  • UWB/LiDAR-SLAM tightly coupled algorithm with the EKF (UWB/LiDAR-SLAM tightly coupled + EKF, TC-EKF);
  • UWB/LiDAR-SLAM tightly coupled algorithm with the REKF (UWB/LiDAR-SLAM tightly coupled + REKF, TC-REKF);
  • Improved-UWB/LiDAR-SLAM tightly coupled algorithm with the EKF (NLOS identification + UWB/LiDAR-SLAM tightly coupled + EKF, NI-TC-EKF).
The ground truth is provided by the Leica TS50 automatic tracking total station. Figure 19 shows the comparison of the trajectories estimated by different comparative methods and the ground truth. Figure 20a–c shows the position errors for different algorithms in the X, Y and plane directions, respectively, and the specific statistical results are shown in Table 4. In the table, ‘Scale’ represents the ability of the algorithm to calculate global coordinates, with the red results indicating the best accuracy, blue results indicating the second-best accuracy and green results indicating the third-best accuracy for each error metric.
The comparison of NI-LS (cyan), LeGO-LOAM (magenta) and TC-EKF (orange) indicates that if the quality of input sensor data is not controlled, abnormal measurements are likely to interfere with the integrated system and have serious negative effects, resulting in poor accuracy of the system that might be even worse than that of a single sensor. Compared with the TC-EKF, the NI-TC-EKF (blue) and TC-REKF (green) algorithms substantially improve the performance of the integrated system with significant improvements in the positioning accuracy by using the UWB LOS data identified by Algorithm 1 as input to the integrated system and by adjusting the measurement noise covariance matrix to reduce the contribution of abnormal measurements to parameter estimation, respectively. The RMSEs of the TC-REKF algorithm in the X, Y and plane directions are reduced by 91.32%, 91.51% and 91.41%, respectively, and its positioning accuracy is close to that of NI-LS and slightly inferior to that of LeGO-LOAM. The RMSEs of the NI-TC-EKF algorithm in the X, Y and plane directions are reduced by 94.05%, 93.80% and 93.90%, respectively, and its positioning performance is better than that of the NI-LS (RMSEs in the X, Y and plane directions are reduced by 8.86%, 39.17% and 28.47%, respectively), LeGO-LOAM (RMSEs in the X, Y and plane directions are reduced by 17.24%, 3.95% and 10.43%, respectively) and TC-REKF (RMSEs in the X, Y and plane directions are reduced by 31.43%, 27.00% and 28.97%, respectively) algorithms, which further illustrates the importance of the data quality of the measurements. However, NI-TC-EKF is still affected by gross measurement errors in some areas. As shown in Figure 19a, due to the occlusion of the wall column, UWB anchor 3 is identified as an NLOS anchor, and the ranging errors for UWB anchors 1, 2 and 4 are 0.497 m, 0.081 m and 0.102 m, respectively. The positioning errors for NI-LS and NI-TC-EKF in the plane direction are 0.826 m and 0.463 m, respectively.
It is quite obvious that the NI-TC-REKF (red) method, which combines NLOS identification with REKF, is more accurate than the other methods for most of the evaluation metrics (as shown in Table 4). Compared with the NI-TC-EKF algorithm, the RMSEs of the NI-TC-REKF algorithm in the X, Y and plane directions are reduced by 8.33%, 8.22% and 8.74%, respectively. The improvement is lower than that of TC-REKF relative to TC-EKF, which is mainly attributed to the strict control of the NLOS identification algorithm on the quality of UWB data. Compared with the single sensor, the RMSEs of the NI-TC-REKF algorithm in the X, Y and plane directions are reduced by 16.46%, 44.17%, and 34.72% (NI-LS) and 24.14%, 11.84%, and 18.26% (LeGO-LOAM), respectively. As shown in Figure 19, the trajectory of NI-TC-REKF in multiple regions is also superior to those of the other methods and more consistent with the ground truth, such as Figure 19b,c. As shown in Figure 20, the error peaks and spike occurrence ratios of NI-TC-REKF are significantly reduced, and the positioning accuracy and robustness are greatly improved.
Table 5 reports the computational load of each NI-TC-REKF module. In our tests, the LiDAR-SLAM module dominates the computational load of the NI-TC-REKF algorithm, with an average runtime of 560.849 ms. The NLOS identification module and the tightly coupled module add only 7.33% and 0.003% of the LiDAR-SLAM runtime, respectively. Within the tightly coupled module, the average runtimes of NI-TC-REKF and NI-TC-EKF are 0.017 ms and 0.016 ms, and the maximum runtimes are 0.124 ms and 0.119 ms, respectively; the positioning accuracy is thus improved effectively at a negligible additional computational cost. In summary, when accuracy, availability and the ability to obtain global coordinates are all considered, the NI-TC-REKF algorithm performs best, fully validating the effectiveness of the proposed tightly coupled method combining NLOS identification and the REKF.
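Most of the NLOS identification cost in Table 5 comes from geometric visibility checks against the LiDAR point cloud. As a hedged sketch of the core test (cf. Figure 5, the line passing through the triangle), the Möller–Trumbore segment–triangle intersection below flags an anchor as NLOS when any triangle built from the point cloud blocks the tag–anchor line. The triangulation step and Algorithm 1's search strategy (the "theoretical/actual search times" of Table 1) are not reproduced here, and all names are illustrative.

```python
import numpy as np

def segment_hits_triangle(orig, dest, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore test: does the segment orig->dest cross triangle (v0, v1, v2)?"""
    d = dest - orig                       # segment direction (not normalized)
    e1, e2 = v1 - v0, v2 - v0             # triangle edge vectors
    p = np.cross(d, e2)
    det = e1.dot(p)
    if abs(det) < eps:                    # segment parallel to triangle plane
        return False
    inv = 1.0 / det
    t0 = orig - v0
    u = t0.dot(p) * inv
    if u < 0.0 or u > 1.0:                # outside first barycentric bound
        return False
    q = np.cross(t0, e1)
    v = d.dot(q) * inv
    if v < 0.0 or u + v > 1.0:            # outside second barycentric bound
        return False
    t = e2.dot(q) * inv
    return 0.0 < t < 1.0                  # hit lies strictly between tag and anchor

def is_nlos(tag, anchor, triangles):
    """Anchor is NLOS if any point-cloud triangle blocks the tag-anchor line."""
    return any(segment_hits_triangle(tag, anchor, *tri) for tri in triangles)
```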

4. Conclusions

The presence of indoor obstacles affects the accuracy of UWB ranging to varying degrees, thereby reducing the accuracy and reliability of the positioning system. To obtain robust and reliable positioning results in complex indoor environments, this work makes the following contributions. First, by exploiting the complementarity of UWB and LiDAR, we propose a method that detects, identifies and eliminates UWB NLOS measurements efficiently and accurately through carefully designed sensor fusion and data association, improving the performance of UWB dynamic positioning in real scenarios. Second, to make full use of both sensors, we propose an improved-UWB/LiDAR-SLAM tightly coupled positioning algorithm that combines the UWB measurements retained after NLOS identification with the positioning results of LeGO-LOAM; to further suppress the effect of UWB gross errors on the integrated system, the REKF is used for position estimation. The effectiveness and performance of the proposed methods are verified and evaluated through a series of experiments. (1) Both the static and dynamic experiments show that the quality of the UWB LOS measurements after NLOS identification is satisfactory and that the proposed algorithm identifies NLOS errors effectively without adding a large amount of computation. The accuracy of the ranging values (the RMSE of the ranging values is reduced by 94.29% on average in the dynamic experiment) and the robustness of the position estimation are substantially improved, and the success rate of NLOS identification in the static experiment reaches 99.80%. (2) The tightly coupled method NI-TC-REKF, which combines NLOS identification and the REKF, outperforms the other methods in positioning performance and robustness, controlling the sensor errors satisfactorily with RMSEs of 0.066 m, 0.067 m and 0.094 m in the X, Y and plane directions, respectively, and corresponding maximum errors of 0.317 m, 0.557 m and 0.566 m.
The proposed NI-TC-REKF algorithm provides a novel solution for high-precision positioning in GNSS-denied environments, such as automated driving, reverse car searching in parking lots and warehouse logistics. However, the UWB ranging values are not fully exploited in this paper: completely excluding NLOS measurements may degrade the anchor geometry, which can backfire and increase the positioning errors. In future research, we aim to judge more carefully when NLOS measurements should be discarded or retained, so as to improve data availability and positioning reliability. In addition, we are considering integrating a low-cost inertial measurement unit (IMU) into the existing platform and using its high-frequency attitude output to improve the quality of the LiDAR mapping, thereby improving the robustness and accuracy of the integrated system for higher-speed, more agile platforms.

Author Contributions

Conceptualization, Z.C. and A.X.; data curation, Z.C., S.W. and J.G.; formal analysis, Z.C., X.S. and C.W.; funding acquisition, A.X.; investigation, Z.C.; methodology, Z.C. and X.S.; software, Z.C., C.W. and Z.S.; validation, Z.C., S.W. and J.G.; writing—original draft preparation, Z.C.; writing—review and editing, Z.C., A.X., X.S., C.W., S.W., J.G. and Z.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China (No. 42074012), in part by the Liaoning Key Research and Development Program (No. 2020JH2/10100044), in part by the Liaoning Revitalization Talents Program (Nos. XLYC2002101 and XLYC2008034), and in part by the Fundamental Research Program of the Liaoning Education Department (No. LJ2020JCL016).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Overview of the improved-UWB/LiDAR-SLAM integrated positioning system.
Figure 2. Coordinate systems.
Figure 3. Indoor positioning scenarios. (a) NLOS positioning scenario; and (b) LiDAR scanning scenario.
Figure 5. The line passes through the triangle.
Figure 6. Experimental platform and automatic tracking total station: (a) front view; (b) left-side view; and (c) right-side view.
Figure 7. Experimental layout.
Figure 8. Time series of UWB raw measurements for different anchors. Blue: anchor 1; black: anchor 2; green: anchor 3; and red: anchor 4.
Figure 9. Time series of UWB ranging errors for different anchors: (a) anchor 1; (b) anchor 2; (c) anchor 3; and (d) anchor 4.
Figure 10. UWB NLOS identification using a LiDAR point cloud in a certain epoch: (a) 3-D view; (b) top view; (c) front view; and (d) left-side view.
Figure 11. Time series of the number of UWB NLOS anchors.
Figure 12. Experimental layout.
Figure 13. Time series of the UWB raw measurements (blue) and LOS measurements (red) for different anchors: (a) anchor 1; (b) anchor 2; (c) anchor 3; and (d) anchor 4.
Figure 14. Time series of the UWB raw ranging errors (blue) and LOS ranging errors (red) for different anchors: (a) anchor 1; (b) anchor 2; (c) anchor 3; and (d) anchor 4.
Figure 15. Time series of the number of UWB NLOS anchors.
Figure 16. Trajectory comparison of the ground truth (black), LS (blue) and NI-LS (red).
Figure 17. Time series of the position errors for LS (blue) and NI-LS (red): (a) X direction; (b) Y direction; and (c) plane direction.
Figure 18. Time series of NLOS identification for different anchors. Blue: anchor 1; black: anchor 2; green: anchor 3; and red: anchor 4.
Figure 19. Trajectory comparison of the ground truth (black), NI-LS (cyan), LeGO-LOAM (magenta), TC-EKF (orange), TC-REKF (green), NI-TC-EKF (blue) and NI-TC-REKF (red). (a–c) are sections with special characteristics.
Figure 20. Time series of the position errors for NI-LS (cyan), LeGO-LOAM (magenta), TC-EKF (orange), TC-REKF (green), NI-TC-EKF (blue) and NI-TC-REKF (red): (a) X direction; (b) Y direction; and (c) plane direction.
Table 1. The corresponding parameters for NLOS identification.

Parameter                   Anchor 1   Anchor 2   Anchor 3   Anchor 4
Distance (m)                7.908      11.438     10.079     17.011
Theoretical search times    5          7          6          10
Actual search times         4          7          6          8
Search time (ms)            0.634      0.404      0.034      0.506
Table 2. Ranging errors of the raw measurements and LOS measurements for different anchors.

                                Anchor 1   Anchor 2   Anchor 3   Anchor 4
Raw measurements    RMSE (m)    2.210      1.831      1.612      1.075
                    MAX (m)     37.105     30.562     51.658     28.330
LOS measurements    RMSE (m)    0.050      0.098      0.100      0.097
                    MAX (m)     0.902      0.629      0.879      0.704
Table 3. Accuracy evaluation for the LS and NI-LS.

                        LS        NI-LS
MAE (m)      X          0.676     0.253
             Y          0.691     0.307
             Plane      0.851     0.351
RMSE (m)     X          2.226     0.079
             Y          1.617     0.120
             Plane      2.752     0.144
Std (m)      X          2.190     0.195
             Y          1.560     0.226
             Plane      2.658     0.240
Max (m)      X          30.006    0.590
             Y          20.685    0.787
             Plane      30.483    0.826
Availability            100%      54.34%
Table 4. Accuracy evaluation for the NI-LS, LeGO-LOAM, TC-EKF, TC-REKF, NI-TC-EKF and NI-TC-REKF algorithms.

                     NI-LS    LeGO-LOAM   TC-EKF   TC-REKF   NI-TC-EKF   NI-TC-REKF
MAE (m)     X        0.253    0.290       0.572    0.253     0.234       0.228
            Y        0.307    0.265       0.550    0.253     0.236       0.230
            Plane    0.351    0.335       0.692    0.318     0.293       0.286
RMSE (m)    X        0.079    0.087       1.210    0.105     0.072       0.066
            Y        0.120    0.076       1.178    0.100     0.073       0.067
            Plane    0.144    0.115       1.688    0.145     0.103       0.094
Std (m)     X        0.195    0.207       1.190    0.206     0.185       0.181
            Y        0.226    0.197       1.165    0.204     0.186       0.182
            Plane    0.240    0.224       1.633    0.240     0.215       0.210
Max (m)     X        0.590    0.372       12.801   1.167     0.394       0.317
            Y        0.787    0.462       19.765   0.968     0.613       0.557
            Plane    0.826    0.482       21.965   1.169     0.621       0.566
Scale                Global   Local       Global   Global    Global      Global
Table 5. Runtime of each NI-TC-REKF module.

              LiDAR-SLAM   NI       Tightly Coupled   Total
Average (ms)  560.849      41.099   0.017             602.657
Max (ms)      1072.221     60.754   0.124             1106.812