Article

Survey of Point Cloud Registration Methods and New Statistical Approach

by Jaroslav Marek 1,*,† and Pavel Chmelař 2,†
1 Department of Mathematics and Physics, Faculty of Electrical Engineering and Informatics, University of Pardubice, Studentská 95, 532 10 Pardubice, Czech Republic
2 Department of Electrical Engineering, Faculty of Electrical Engineering and Informatics, University of Pardubice, Studentská 95, 532 10 Pardubice, Czech Republic
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2023, 11(16), 3564; https://doi.org/10.3390/math11163564
Submission received: 30 June 2023 / Revised: 4 August 2023 / Accepted: 11 August 2023 / Published: 17 August 2023

Abstract: The use of a 3D range scanning device for autonomous object description or unknown environment mapping leads to the necessity of improving computer methods based on identical point pairs from different point clouds (the so-called registration problem). The registration problem and the three-dimensional transformation of coordinates still require further research. The paper attempts to guide the reader through the vast field of existing registration methods so that readers can choose the appropriate approach for their particular problem. Furthermore, the article contains a regression method that enables the estimation of the covariance matrix of the transformation parameters and the calculation of the uncertainty of the estimated points. This makes it possible to extend existing registration methods with uncertainty estimates and to improve knowledge about the performed registration. The paper’s primary purpose is to present a survey of known methods and basic estimation theory concepts for the point cloud registration problem. The focus will be on the guiding principles of estimation theory: the ICP algorithm; the Normal Distribution Transform; feature-based registration; iterative dual correspondences; the probabilistic iterative correspondence method; point-based registration; quadratic patches; likelihood-field matching; conditional random fields; branch-and-bound registration; and PointReg. The secondary purpose of this article is to show an innovative statistical model for this transformation problem. The new theory requires known covariance matrices of identical point coordinates. An unknown rotation matrix and shift vector are estimated using a nonlinear regression model with nonlinear constraints. The paper ends with a relevant numerical example.

1. Introduction

In the registration problem, data describing a certain object in different coordinate systems are available. This arises quite naturally if we measure points on the object from different positions. We thus obtain so-called point clouds in the individual scans. When solving the registration problem, we search for matching (identical) points in different scans and estimate the transformation parameters. The criterion for estimating the unknown parameters is often the minimization of the alignment error. Pioneering methods for solving the registration problem were developed in the 1980s in the field of 2D image matching. This important problem of computer graphics crystallized especially during the processing of medical images. Research gradually moved towards the application of 3D point cloud registration. Applications in object reconstruction, non-contact inspection, medical and surgery support, robot navigation, and autonomous vehicle navigation gradually appeared. An overview of the methods used in 2D registration is published in the book [1]. The paper [2] contains a literature survey on the use of the registration problem in robotics and robot navigation. The state of the art of the registration problem, with an emphasis on radar data, is dealt with in the book [3].
Nowadays, laser scanning finds widespread use. A comprehensive review of terrestrial laser scanner (TLS) point cloud registration methods is given in ref. [4]. The book [5] characterizes TLS as follows: “TLS systems enable documentation of the whole measured object with all constructional elements. The high scanning speed of the current scanners enables a significant reduction in the time necessary for measurement and, alternatively, an increasing amount of information about the measured object. More data (several million points) enable the creation of complex models or documentation of objects in the form of a detailed point cloud”. A relatively large potential for the use of such models has arisen. As such, laser scanning finds application in various fields: industry, robotics, process control, quality control, architecture, civil engineering, archeology, geosciences, medical sciences, and dentistry. Autonomous laser scanning systems also offer the prospect of saving human lives. They are deployed in dangerous environments such as unexplored areas, abandoned places, or mines. Such systems offer the possibility of obtaining detailed maps of an unknown surrounding environment without human intervention. In these systems, solving registration problems is crucial. Cf. [5].
Our measurement was made using an optical rangefinder, which uses an optical filter to spread the green laser beam into a vertical line. See Figure 1. At each detected point of this line, it is possible to determine a 3D point in the scanned environment. Figure 2 demonstrates laser scanning of an indoor room. The optical rangefinder’s measurement head rotates 360 degrees around the Z axis to scan the whole surrounding area. The resulting point clouds from two positions are presented in Figure 3. Using a camera, a colored point cloud can be created.
In this paper, we will mainly discuss methods applicable to 3D registration. In doing so, we will focus on the registration of building objects. Firstly, we present an overview of the most commonly used methods. In addition, we will propose a statistical algorithm that can serve as an appendix to standard algorithms. In the model, we will assume that the transformation is formed only by a shift and a rotation in 3D space. Such a transformation is sometimes referred to as a rigid transformation. In this new algorithm, we will consider measurement errors in the individual scans. This regression model will allow the synchronous estimation of the unknown transformation parameters from all scans. The prerequisite for using this approach is the knowledge of identical points. These points can be obtained through standard registration methods. In special geodetic problems, points on the object can be given by placing reflective targets, which are then focused in the individual scans. Here, it is not necessary to look for identical points, as they are determined by the reflection points.
Let us return to the description of an example of a typical 3D registration problem. Measured coordinates of points are given in the point clouds, which can be obtained from different scans from different positions. Figure 3 gives a point cloud registration example. The registration problem’s aim is to create a compact point cloud with all possible details.
The structure of the article is as follows. In the introductory chapter, the framework of existing knowledge was outlined, the assumptions and goals of the proposed statistical model were described, and the realization of a specific laser measurement was presented. The second section seeks to examine and discuss the main currents of thought in the theoretical and practical solution of registration problems without attempting to present the details of all the methods. In the third chapter, we will state a new statistical method, which exploits a regression model with constraints. In the fourth chapter, a concluding discussion is made.
A fundamental shortcoming of commonly used methods is the fact that the covariance matrices of the measured points are often not used or are replaced by identical matrices. Further, the estimate of the variance matrix of the estimated coordinates is not known. We find this estimate using an indirect measurement regression model with a system of nonlinear constraints. Separately, this model is applicable in the case of geodetic measurements, when identical points are marked with reflective targets. After gaining pairs of identical points from commonly used methods, the new approach will find application in the process of object modeling.

1.1. Optical Rangefinder

The principles of the 3D technique for object description are based on the triangulation method. The idea of measuring the distance and then determining the point cloud is mentioned in the article [6]. A rotating tripod carried the entire measuring device; see Figure 1. A Basler camera with an output of 2590 × 1942 pixels was used to capture the image in high quality. The 200 mW output power of the laser diode used made good recognition possible after vertical spreading. An optical filter with a 90-degree angle scattered the laser spot. A powerful and fast stepping motor enabled the measuring head to rotate 360 degrees. The captured colored images are processed by image processing methods to get a precise laser line determination in the individual scanned images. For each detected point on the line, the 3D position in the surrounding scanning area is calculated. Image information is also used in the point cloud creation process to get a colored point cloud; see ref. [6].

1.2. 3D Point Cloud

The left side of Figure 2 shows an example of the implementation of one frame measurement. Laser line segmentation was performed using a Gaussian Mixture Model (GMM) of the green laser color. For more details, see the article [6]. Using laser mask analysis, 10 different distances were measured. The total number N of measured points in one image was 80. The mobile device is located at the coordinates P = (P_X, P_Y, P_Z). The point P is the origin of one automatic 360-degree scan. Through measurement data processing, the 3D point cloud is created. An example of a 3D point cloud is in Figure 3. The coordinate system used is Cartesian, and each point in this point cloud is described by a vector p = (p_X, p_Y, p_Z), where p_X, p_Y, and p_Z are the coordinates along the appropriate axes.
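The computation of a 3D point from one range measurement can be sketched as a standard spherical-to-Cartesian conversion. This is a minimal illustration only; the actual device geometry (camera–laser triangulation baseline, axis conventions) described in ref. [6] is more involved, and the angle conventions below are assumptions:

```python
import math

def polar_to_cartesian(r, azimuth, elevation, origin=(0.0, 0.0, 0.0)):
    """Convert one range measurement (distance r, head azimuth around the
    Z axis, beam elevation along the vertical laser line, both in radians)
    to a Cartesian point relative to the scanner position `origin`."""
    px = origin[0] + r * math.cos(elevation) * math.cos(azimuth)
    py = origin[1] + r * math.cos(elevation) * math.sin(azimuth)
    pz = origin[2] + r * math.sin(elevation)
    return (px, py, pz)

# A point 2 m away, straight ahead, at zero elevation:
p = polar_to_cartesian(2.0, 0.0, 0.0)
```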
The black dot marks the mobile platform position P_1. This point cloud covers the measurement from one position, and some important points are missing in the top corner. The intention of the whole scanning process is to create a compact map of the scanned space. For this purpose, it is necessary to take measurements from several positions in the scanned space. To change the measurement position, the measurement device can, for example, be placed manually in a new position or mounted on a mobile device that provides precise information about its position in space. We place the mobile platform manually in the scanned environment and measure the differences in position with a precise laser rangefinder.

2. A Survey of Methods

In the following subsections, we will present the ideas of the most commonly used methods developed to solve the registration problem.

2.1. ICP Algorithm

ICP is the oldest registration algorithm; cf. [7,8]. If the user’s measurement or the device’s localization is accurate, the registration can be determined directly from the separate measurements. Figure 3 illustrates the connection into one group. However, imprecise measurement sensors and erroneous self-localization have to be assumed during registration. The most frequently used method is called Iterative Closest Points (ICP); see refs. [9,10].
Let us have two independently acquired sets of 3D points, the measurements from positions P_1 and P_2. We want to estimate the transformation (R, t), formed by a rotation matrix R and a shift vector t, which minimizes the cost function
E(R, t) = \sum_{i=1}^{N_m} \sum_{j=1}^{N_d} w_{i,j} \, \| \hat{m}_i - (R \hat{d}_j + t) \|^2 .
The parameter w_{i,j} takes the value 1 when the i-th point of \hat{M} determines the same 3D position as the j-th point of \hat{D}. If it specifies another point, then w_{i,j} is 0. Two things need to be determined: the corresponding points and the transformation (R, t) that minimizes E(R, t) with the use of the corresponding points. The ICP algorithm iteratively computes the point correspondences. In each step, the algorithm selects the closest points as correspondences and computes the transformation (R, t) minimizing E(R, t).
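The iteration described above can be sketched as follows, with the closed-form (R, t) step computed by the standard SVD-based (Kabsch/Umeyama) solution. The brute-force nearest-neighbor search and the synthetic grid data are illustrative assumptions:

```python
import numpy as np

def best_rigid_transform(M, D):
    """Closed-form least-squares (R, t) aligning D onto M via SVD
    (the Kabsch/Umeyama solution); M and D are (n, 3) arrays of
    already-paired points."""
    mu_m, mu_d = M.mean(axis=0), D.mean(axis=0)
    H = (D - mu_d).T @ (M - mu_m)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T            # sign fix guards against reflections
    t = mu_m - R @ mu_d
    return R, t

def icp(M, D, iters=20):
    """Minimal ICP loop: alternate nearest-neighbor correspondences
    (brute force here; a k-d tree would be used in practice) with the
    closed-form transform estimate."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        Dt = D @ R.T + t
        # index of the closest model point for each transformed data point
        idx = np.argmin(((Dt[:, None, :] - M[None, :, :]) ** 2).sum(-1), axis=1)
        R, t = best_rigid_transform(M[idx], D)
    return R, t

# Synthetic check on a 3x3x3 grid: recover a known small rigid motion.
D = np.array([[x, y, z] for x in (-1.0, 0.0, 1.0)
                        for y in (-1.0, 0.0, 1.0)
                        for z in (-1.0, 0.0, 1.0)])
ang = 0.05
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
t_true = np.array([0.10, -0.05, 0.08])
R_est, t_est = icp(D @ R_true.T + t_true, D)
```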

2.1.1. ICP Algorithm Results

The ICP algorithm application result is shown in Figure 4, and the final RMS value between both point clouds is 0.167. The point cloud contains points from the second measurement that fill the empty area in the first measurement (top corner). The new point cloud contains more measurement points and gives more details about the scanned room. In Figure 4, on the left side are both registered point clouds from Figure 3. On the right side, there is a colored version.

2.1.2. ICP Algorithm Problem

Using the ICP method, there are two main problems. The ICP algorithm is point-based and does not take into account the local surface shape around each point. The ICP algorithm is also time-consuming in searching for the nearest neighbor. Even if an efficient search data structure such as a k-d tree is used, the estimation time for a large number of points is long. If the two scans have a large distance offset and are rotated about all axes by several tens of degrees, the registration result contains errors.
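The nearest-neighbor bottleneck mentioned above is commonly mitigated with a k-d tree, built once per reference cloud and queried per iteration. A sketch using SciPy’s cKDTree; the example clouds are assumed data:

```python
import numpy as np
from scipy.spatial import cKDTree

# Assumed example data: a model cloud M and a data cloud Dt already
# transformed by the current (R, t) estimate.
rng = np.random.default_rng(1)
M = rng.uniform(0.0, 10.0, (1000, 3))
Dt = M[:200] + 0.01               # data points near a subset of the model

tree = cKDTree(M)                 # build once per reference cloud
dist, idx = tree.query(Dt, k=1)   # O(log n) per query on average
```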

2.2. Normal Distribution Transform (NDT)

The method was proposed in the publication [11]. The basic pillar of this algorithm is the capture of the data by a statistical model. The method does not focus on the simple use of individual measurement points but instead constructs a model of mixtures of normal distributions and examines the probabilities of a surface point occurring in a certain position; see ref. [12]. This approach provides a piece-wise smooth representation of the point cloud with continuous first- and second-order derivatives. This representation allows standard numerical optimization methods to be applied to the registration problem. The NDT algorithm consists of several steps: firstly, the whole point cloud is divided into small subdomains (cells), and for each cell, statistical characteristics such as the mean and the covariance matrix are computed. We assume that the coordinates of the reference scan surface points have a D-dimensional normal distribution. The likelihood function is
p(x) = \frac{1}{(2\pi)^{D/2}\sqrt{|\Sigma|}} \exp\left( -\frac{1}{2} (x - \mu)^\top \Sigma^{-1} (x - \mu) \right),
where μ is the vector of mean values and Σ is the covariance matrix of the reference points on the scanned surface in the cell in which x is located. The constant c_0 denotes the expression (2\pi)^{D/2}\sqrt{|\Sigma|}. The estimates of the mean and covariance matrix are
\bar{\mu} = \frac{1}{m} \sum_{k=1}^{m} y_k ,
\bar{\Sigma} = \frac{1}{m-1} \sum_{k=1}^{m} (y_k - \bar{\mu})(y_k - \bar{\mu})^\top ,
where y_k, k = 1, …, m, are the coordinates of the reference scan points from a cell. According to the eigenvalues of the covariance matrix, it is possible to estimate the shape of the cell. This helps in the point cloud registration process. In the NDT method, we look for the position of the actual scan that maximizes the probability that the points of the actual scan are located on the surface of the reference scan. The searched Helmert transformation parameters (i.e., translation and rotation) provide a pose estimate for the given scan; for their encoding, we will use the vector variable p. In the current scan, we have the point cloud X = \{x_1, \ldots, x_n\}.
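The per-cell statistics above can be sketched as follows; the cubic cell decomposition and the minimum cell occupancy of 3 points are assumed implementation choices:

```python
import numpy as np

def ndt_cell_stats(points, cell_size=1.0):
    """Group reference-scan points into cubic cells and compute the
    sample mean and covariance per cell (cells with fewer than 3 points
    are skipped, as their covariance estimate is degenerate)."""
    cells = {}
    for p in points:
        key = tuple(np.floor(p / cell_size).astype(int))
        cells.setdefault(key, []).append(p)
    stats = {}
    for key, pts in cells.items():
        if len(pts) < 3:
            continue
        Y = np.asarray(pts)
        mu = Y.mean(axis=0)
        Sigma = (Y - mu).T @ (Y - mu) / (len(pts) - 1)
        stats[key] = (mu, Sigma)
    return stats

# Example: four points clustered inside the unit cell [0, 1)^3
pts = np.array([[0.1, 0.1, 0.1], [0.2, 0.3, 0.1],
                [0.4, 0.2, 0.3], [0.3, 0.4, 0.2]])
stats = ndt_cell_stats(pts)
```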
The displacement of the point x in space by the position p is described by the spatial transformation function T ( p , x ) . We consider the best estimate of the true position p to be the one that maximizes the likelihood
L = \prod_{k=1}^{n} p(T(p, x_k)) .
For 3D NDT registration, Euler angles are useful because of their natural advantages. With this representation, the derivatives needed by the numerical optimization procedure are easier to calculate. If we use the Euler angle representation, there are six transformation parameters to estimate: three unknown translation parameters and three unknown rotation parameters. The rotation matrix can be expressed by the Cardan angles, i.e., the yaw, pitch, and roll angles; ref. [13]. The pose can be encoded using the six-dimensional vector p_6 = [t_x, t_y, t_z, \phi_x, \phi_y, \phi_z] of translations and Euler angles. Then the 3D transformation is given by the formula:
T_E(p_6, x) = R x + t = R_x R_y R_z \, x + t, \qquad
R = \begin{pmatrix} c_y c_z & -c_y s_z & s_y \\ c_x s_z + s_x s_y c_z & c_x c_z - s_x s_y s_z & -s_x c_y \\ s_x s_z - c_x s_y c_z & s_x c_z + c_x s_y s_z & c_x c_y \end{pmatrix}, \qquad
t = (t_x, t_y, t_z)^\top ,
where c i = cos ϕ i and s i = sin ϕ i .
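Composing R = R_x R_y R_z from the three Euler angles can be sketched as below; the axis conventions are the common ones and may differ from the cited implementation:

```python
import numpy as np

def rotation_xyz(phi_x, phi_y, phi_z):
    """Compose R = R_x R_y R_z from the three Euler angles (radians)."""
    cx, sx = np.cos(phi_x), np.sin(phi_x)
    cy, sy = np.cos(phi_y), np.sin(phi_y)
    cz, sz = np.cos(phi_z), np.sin(phi_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def transform(p6, x):
    """Apply T_E(p6, x) = R x + t for p6 = [tx, ty, tz, phi_x, phi_y, phi_z]."""
    t = np.asarray(p6[:3], dtype=float)
    R = rotation_xyz(*p6[3:])
    return R @ np.asarray(x, dtype=float) + t

R = rotation_xyz(0.3, -0.2, 0.5)
```

Any R built this way is a proper rotation (orthogonal with determinant 1), which the test below checks.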

2.3. Feature Based Registration

Feature-based matching extracts distinguishing features from range images and uses the corresponding features to calculate the scan alignment. For most robots, the detection of closed loops is realized from camera data [14]. The most common feature detectors are: SURF, SIFT, GLOH, shape context, PCA, moments, cross-correlation, and steerable filters [9]. A major aspect of feature detectors is their invariance to individual transformation parameters: scale, shift, rotation, and illumination changes. A comprehensive survey of detectors and descriptors is given in [15]. These methods use the Gaussian function for image spectral analysis
G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right),
where x and y are pixel coordinates in an image, and σ is the Gaussian scale parameter. Filtering n times at increasing scales provides scale invariance. Differences between the input images convolved at adjacent scales give the difference of Gaussians (DoG), which forms the image features. For a significant increase in computation speed, the SURF method uses a Gaussian function approximation. A feature-based navigation system is presented in ref. [16]. The autonomous robot is guided along a given path. The navigation is based on exteroceptive sensors, and a simple, effective mathematical model is suitable.
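The DoG construction can be sketched with SciPy’s Gaussian filter; the base scale, scale factor, and number of levels below are illustrative choices, not the exact SIFT parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(image, sigma0=1.0, k=2 ** 0.5, levels=4):
    """Difference of Gaussians: blur the image at successively larger
    scales sigma0 * k**i and subtract adjacent levels; extrema of the
    result are candidate scale-invariant features."""
    blurred = [gaussian_filter(image.astype(float), sigma0 * k ** i)
               for i in range(levels)]
    return [blurred[i + 1] - blurred[i] for i in range(levels - 1)]

# A bright spot on a dark background responds strongly in the DoG:
img = np.zeros((32, 32))
img[16, 16] = 1.0
dogs = dog_pyramid(img)
```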

2.4. IDC

The iterative dual correspondence (IDC) algorithm [17] can be considered an extension of ICP. Its main mission is to accelerate the convergence of the rotation matrix when estimating the position in scan registration. In this method, two rules are introduced to determine the correspondences. The first is similar to ICP: in each iteration, a rotation and translation pair τ_1 is determined using the closest points. The second criterion, called ’the matching range-point rule’, deals with the selection of a new set of corresponding points. In this criterion, corresponding points are identified within a permissible angular interval, which in 2D space has the form [φ − t_φ, φ + t_φ], where t_φ limits how far the algorithm should seek. The matching range-point metric is determined as follows: corresponding(x) = \arg\min_{x'} (|r − ρ|), where x = [φ, r] and x' = [ϕ, ρ], and φ − t_φ ≤ ϕ ≤ φ + t_φ.
The corresponding point is located within the specified angular limits. In 3D space, the interval is a cuboid-shaped “window” whose projections onto the planes are rectangles. A three-dimensional point x is given by polar coordinates [ϕ, θ, r], where ϕ and θ are the latitude and longitude angles. This window is bounded from [ϕ − t_ϕ, θ − t_θ] to [ϕ + t_ϕ, θ + t_θ]. A comparison of the quality of the solutions to the registration problem led to the conclusion that the IDC algorithm is more robust than ICP. However, with a large bias in the initial estimate of the position compared with the actual value, greater inaccuracies in the solution occurred more often with IDC than with ICP. The IDC method is much more sensitive to a biased initial solution. Cf. [17].
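The 2D matching range-point rule can be sketched as follows; the window half-width t_phi and the example points are assumed values:

```python
import numpy as np

def matching_range_point(query, reference, t_phi=0.1):
    """IDC matching range-point rule in 2D polar coordinates: among the
    reference points whose bearing lies within +/- t_phi of the query
    bearing, return the one whose range is closest to the query range.
    `query` is (phi, r); `reference` is an (n, 2) array of (phi, r)."""
    phi, r = query
    ref = np.asarray(reference, dtype=float)
    mask = np.abs(ref[:, 0] - phi) <= t_phi
    if not mask.any():
        return None                      # no point inside the angular window
    candidates = ref[mask]
    return candidates[np.argmin(np.abs(candidates[:, 1] - r))]

# Two reference points fall inside the window; the one with range 3.1
# is closest to the query range 3.0.
ref = np.array([[0.00, 2.0], [0.05, 3.1], [0.30, 3.0]])
match = matching_range_point((0.02, 3.0), ref, t_phi=0.1)
```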

2.5. pIC

Another approach is a registration algorithm called the probabilistic iterative correspondence method (pIC), cf. [18]. In this algorithm, scan noise data are combined with an initial estimate of the position. The coordinates of the scan points and the initial estimate of the position are assumed to be random variables with zero-mean Gaussian noise. For the best matching result, the covariance matrix is based on the sensor positions and the robot odometry. In the beginning, points from both scans are selected to create a subset composed of statistically compatible points. The construction of the pairing fitness function is based on the minimization of the Mahalanobis distance d_m(x, y) of points x and y,
d_m^2(x, y) = (x - y)^\top \Sigma^{-1} (x - y),
where Σ is a known estimate of the covariance matrix, including sensor properties and system conditions. If the Mahalanobis distance does not exceed a specified threshold, the points are considered compatible. The set of points satisfying this constraint forms a subset A. The minimum argument of the fitness function, i.e., the point y_i ∈ A that best corresponds to the point x, is determined by integrating over each admissible location x and each admissible sensor coordinate, considering their Gaussian estimates.
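The compatibility test can be sketched as a chi-squared gate on the squared Mahalanobis distance. The threshold 7.81 (the 95% chi-squared quantile with 3 degrees of freedom) and the isotropic Sigma are assumed choices, not values from ref. [18]:

```python
import numpy as np

def compatible(x, y, Sigma, threshold=7.81):
    """Statistical compatibility test for a candidate pair (x, y):
    accept when the squared Mahalanobis distance under the combined
    covariance Sigma stays below the chi-squared threshold."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    d2 = d @ np.linalg.inv(Sigma) @ d
    return d2 <= threshold

Sigma = 0.01 * np.eye(3)   # assumed isotropic sensor noise (std 0.1 m)
ok = compatible([1.0, 2.0, 3.0], [1.05, 2.0, 2.95], Sigma)
bad = compatible([1.0, 2.0, 3.0], [2.0, 2.0, 3.0], Sigma)
```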
The results of the pIC algorithm were compared with those of IDC and ICP. pIC estimates were more robust with respect to a biased initial solution.
The pIC method also shows faster convergence, by about 25%. Cf. [18].

2.6. Point-Based Probabilistic Registration

In this method, the measured coordinates of the discrete points of the reference scan are replaced by probabilistic functions, cf. [19]. The basis for estimating the probability of a scan point from the current cloud is to trace a ray rising from the estimated position, along the direction associated with each measurement from the current scan, to the nearest surface in the reference scan. Triangulation of the reference coordinate system scan is then performed to create an approximation of the surface. The estimated range of this measurement is determined by the length of the beam.
The likelihood function of a scan point is determined from a mixture of a Gaussian centered at the estimated range, with variance determined by known scanner characteristics, and a uniform distribution that also reflects the scanner uncertainties. The probability of the distance r obtained by measurement (the distance to the cloud point x) depends on the theoretical distance (the mean of the Gaussian). This value is calculated through the mixture model using the Euclidean metric between the scan points x and the triangulated reference scan surface. The calculation uses the expected metric d_e(x, Y, p) pursuant to the direction of the beam x, the reference cloud Y, and the position p. The probability that the position of the current cloud is p is given by the product of probabilities \prod_{x \in X} p(x \mid d_e(x, Y, p)).
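A minimal sketch of such a range likelihood, mixing a Gaussian around the expected range with a uniform outlier component; the mixture weight, sigma, and maximum range are assumed scanner characteristics, not values from ref. [19]:

```python
import numpy as np

def range_likelihood(r, r_expected, sigma=0.05, r_max=30.0, w_hit=0.9):
    """Likelihood of a measured range r given the expected range to the
    triangulated reference surface: a Gaussian around r_expected mixed
    with a uniform component over [0, r_max] for gross errors."""
    gauss = np.exp(-0.5 * ((r - r_expected) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    uniform = 1.0 / r_max
    return w_hit * gauss + (1.0 - w_hit) * uniform

good = range_likelihood(5.0, 5.01)   # measurement close to expectation
far = range_likelihood(5.0, 8.0)     # gross error: only the uniform term remains
```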
The algorithm tries to optimize the likelihood function. The presented results by the authors include a pair of building scans in different coordinate systems, and it is discussed that the method gives results with less uncertainty in comparison with the ICP on this data set. Cf. [19].

2.7. Gaussian Fields

The next registration method developed bears the name Gaussian fields, cf. [20]. This approach is similar to the Normal Distribution Transform (NDT). A Gaussian mixture model (GMM) is the main tool here for measuring the scan’s statistical characteristics, the spatial distance between the points, and the surface similarity around the points. The comparison of points is realized in a multidimensional space whose dimension is given by the spatial dimension of the coordinate system increased by the number of attributes. The proximity and similarity of points x and y from different clouds is
F(x, y) = \exp\left( -\frac{\|x - y\|^2}{d_g^2} - (S(x) - S(y))^\top D^{-1} (S(x) - S(y)) \right),
where S(x) denotes the 3D-moment shape characteristics of the surface passing through x. The function F(x, y) is a Gaussian centered at y. The parameter d_g controls the decay rate of the spatial term, and the diagonal matrix D determines the penalty given by differences in the attributes. The fitness function is \sum_{i,j} F(x_i, y_j). Cf. [21].
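The criterion F(x, y) can be evaluated as follows; the two-component descriptor S and the identity penalty matrix D used in the example are illustrative assumptions:

```python
import numpy as np

def gaussian_field(x, y, Sx, Sy, d_g=1.0, D_inv=None):
    """Gaussian fields point affinity: combines the spatial distance
    between points x and y with the difference of their local surface
    descriptors Sx and Sy, penalized by D^{-1}."""
    if D_inv is None:
        D_inv = np.eye(len(Sx))          # no attribute weighting by default
    spatial = np.dot(x - y, x - y) / d_g ** 2
    ds = np.asarray(Sx, dtype=float) - np.asarray(Sy, dtype=float)
    return np.exp(-spatial - ds @ D_inv @ ds)

x = np.array([0.0, 0.0, 0.0])
f_near = gaussian_field(x, np.array([0.1, 0.0, 0.0]), [1.0, 2.0], [1.0, 2.0])
f_far = gaussian_field(x, np.array([3.0, 0.0, 0.0]), [1.0, 2.0], [1.0, 2.0])
```

Close points with identical descriptors get an affinity near 1; distant points are suppressed exponentially.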

2.8. Quadratic Patches

In this method, the reference scanned surface is expressed implicitly, with the approximant constructed using the squared distance from the surface. Minimizing the distance metric given by the sum of squared distances defines the criterion for connecting points from the current scan. The local shape of the surface is fitted by approximants using quadratic terms, i.e., up to the second-order derivative. The optimum of this surface representation can be found by Newton’s optimization method. Cf. [22].
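A least-squares fit of a local quadratic patch can be sketched as follows; fitting a height field z(x, y) is a simplification of the squared-distance approximants used in ref. [22]:

```python
import numpy as np

def fit_quadratic_patch(pts):
    """Least-squares fit of a local quadratic height field
    z = a + b x + c y + d x^2 + e x y + f y^2 to neighborhood points;
    `pts` is an (n, 3) array, and the six coefficients are returned."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Points sampled exactly from z = 1 + 0.5 x^2 (a parabolic cylinder):
g = np.linspace(-1.0, 1.0, 5)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel(), 1.0 + 0.5 * X.ravel() ** 2])
c = fit_quadratic_patch(pts)
```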

2.9. Likelihood-Field Matching

This variant of registration works on principles similar to NDT. The method is often used for 2D sonar scan registration. The large uncertainty of the angular resolution of sonars, compared with lidars, complicates the successful identification of registration points. In the method, a sequence of scans is first accumulated using robot odometry to obtain multi-point scans. Such a preprocessing procedure can be applied to the input data of all scan registration algorithms; thus, a new group of methods was created with the prefix s: sNDT, sICP. The method, named LF/SoG (short for likelihood field introduced as a sum of Gaussians), is very comparable to NDT in that the points in one scan are fitted to a normal distribution according to the points from the next scan. In this method, an identity matrix is used instead of the covariance matrix estimates. Instead of a discrete explicit grid, the entire reference scan is passed through a circular sliding window, replacing the values of the internal points in the window with their centers of gravity. A Gaussian approximant is centered at each point of the resampled cloud, with the mean given by the point pose and the covariance matrix set to the identity matrix. In the registration process, not only the nearest points are used for the calculation of the optimization criterion but also points at a defined distance from the Gaussian function. Cf. [23].

2.10. CRF Matching

Conditional Random Fields (CRF) is a probabilistic approach used in the construction of relational information models. Unlike hidden Markov models or Markov random fields, the use of CRFs does not require independence of the observations. Originally, CRFs were implemented in the field of computational linguistics, where they were used to label sentences according to parts of speech. Conditional random fields have also been implemented to register 2D scans. In the algorithm, the CRF introduces a hidden node for each point in the current scan. In turn, each hidden node is assigned to its corresponding point in the reference scan. The hidden node is also linked to a data node that contains certain features defined for each scan point. On selected training data, the model learns and searches for suitable parameters. Cf. [24].

2.11. Branch-and-Bound Registration

A branch-and-bound strategy is another registration algorithm, presented in [25]. First, the translational part of the position space is discretized at several resolutions, with levels ordered from the coarsest to the finest. The sequence of poses is analyzed at a certain level of the hierarchy. At the second-lowest hierarchical level (branching), a score value is determined, on the basis of which sub-nodes are eliminated (bounding). This application is often used in forestry. The application is suitable for highly unstructured environments and is particularly interesting in 2D analysis. The branch-and-bound procedure is used on the translation part of the transformation. This method is used in applications aimed at modeling the orientation of a robot based on various sensors. The initial value of the rotation matrix plays an important role here. In a space with six degrees of freedom (position in space and the yaw, pitch, and roll orientation angles), the optimization is complicated. Cf. [25].

2.12. Registration Using Local Geometric Features

Compared with the point-based algorithms described earlier, registration can also be implemented using more descriptive local geometric elements [26]. An ideal local property descriptor is invariant to rigid motion so that the corresponding surface can be found regardless of the initial scan positions. The correspondence problem is solved when sufficiently prominent features are found. Furthermore, a global adaptation of the surface is possible, i.e., the realization of registration without an initial estimate of the position. The most commonly used surface description method is ’spin images’. The spin image is constructed at an oriented point, which is a surface point with a specified normal vector. Another alternative is given by ’surface signatures’. These are similar to spin images but use surface curvature instead of point density. Another alternative surface description is the ’splash’. Splashes are determined around oriented points and involve capturing the point cloud using triangulation methods. All these description techniques are used to express the geometric features of both scans. The main aim is to calculate the estimates of the rotation and translation parameters between the scans in such a way that we obtain their minimal mutual distance. Cf. [26].

2.13. PointReg

Another registration approach for a wide range of three-dimensional scans is presented in ref. [27]. This paper deals with the pitfalls of 3D range scans. The rangefinder translates coordinates to a chosen coordinate system, and each scan origin is translated by the X, Y, Z coordinates. The data from an RTK GPS receiver placed directly above the scanner position is available for measurement. The PointReg algorithm evaluates the rotation combinations about the Z-axes of two scans to estimate the rotation pair with the lowest RMS error. The selected points from the first scan are compared with triangles formed by the nearest three points in the second scan.
This algorithm’s initial stage needs interaction by the user to ensure that the registration is determined only at points located on the object. Arbitrary moving objects in the scans are eliminated to ensure low inconsistencies between the acquired points. The presented results showed that the algorithm needed only a few iterations, and for wide-range scans, the PointReg algorithm outperforms the standard ICP.

2.14. The Automatic Mapping of Parametric Objects (Buildings) from Indoor Point Clouds

The recent paper [28] focuses on sophisticated data processing, modeling, and data visualization after the registration process. The point cloud registration process is done by high-quality rangefinder software. This research aims at a volumetric and parametric building model, which additionally incorporates contextual information such as global wall connectivity. Unlike other methods working with the surface of the monitored object, this reconstruction problem works with the representation of objects/buildings using parametrically described and interconnected volume elements. Walls between adjoining rooms are reconstructed automatically from the opposite vertical wall surfaces extracted in the input clouds, while ensuring globally consistent connectivity of all elements. Together with a precise mapping of the wall thickness, the result is a complex volumetric approximation of the object’s wall elements. The described algorithm deals with many challenges, including removing outliers, automatic wall width estimation, calculating individual room areas, window and door detection, and exit path determination for the scanned building.
The resultant model is easy to edit in software, and building changes can be simulated, for example, for a planned future reconstruction. The presented approach was successfully tested on several real building measurements.

2.15. Basic Registration Methods Comparison

In Table 1 and Table 2, the basic methods from Section 2.1, Section 2.2, Section 2.3, Section 2.4, Section 2.5, Section 2.6, Section 2.7, Section 2.8, Section 2.9, Section 2.10, Section 2.11, Section 2.12, Section 2.13 and Section 2.14 are summarized and compared. The individual algorithms are compared in terms of whether they are based on individual points or on some space model, the handling of the covariance matrix, important requirements, processing speed, computational complexity, precision and advantages, the necessity of human intervention, suitable applications, and the year of origin.
Due to the errors of various scanning sensors, different registration methods in Table 1 and Table 2 can fit a concrete application. Generally, if the measurement data are precise, point-based algorithms can perform better, while methods that use some space model can be more suitable for noisy input data. The concrete application area can also be important in algorithm selection because some algorithms are specific to particular applications. Despite the number of algorithms mentioned in Section 2, the main standards in 3D range scanning remain the ICP and NDT and their derivatives, which are used and implemented in point cloud processing libraries. The method from Section 2.14 is not included in the comparison because it is a complex pipeline for handling building models, and it is not possible to compare it with the others.
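Since ICP is named above as the de facto standard, a minimal point-to-point ICP sketch can illustrate the basic loop that library implementations refine. This is our simplified illustration (brute-force correspondences, SVD-based Kabsch solution for the rigid transform), not any particular library's code.

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rigid transform (R, t) with R @ P[i] + t ≈ Q[i]
    for already-paired point rows P and Q (Kabsch/SVD solution)."""
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)
    U, _, Vt = np.linalg.svd(H)
    # guard against reflections
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, q0 - R @ p0

def icp(source, target, iters=20):
    """Minimal point-to-point ICP: pair every source point with its
    nearest target point (brute force), solve for R, t, and repeat.
    Returns the accumulated transform mapping source onto target."""
    src = source.copy()
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        pairs = target[d.argmin(axis=1)]
        R, t = kabsch(src, pairs)
        src = src @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

The O(N²) correspondence step shown here is exactly what makes plain ICP slow for large datasets, as noted in Table 1; real implementations replace it with a k-d tree.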

3. The Optimal Statistical Transformation Model Taking into Account Errors in Coordinate Systems

This method can be added as an appendix to any registration algorithm presented in Section 2. The registration algorithm will provide data on the found identical points, from which the transformation parameters and their covariance matrices are subsequently estimated in the regression model.
The statistical model consisting of two scans was studied in ref. [29] under the assumption that stochastic coordinates are given only in the second coordinate system and the points of the first scan are error-free.
A more complex problem with 3D transformation arises when measurement errors are respected in both coordinate systems. Further, the following notation is used.
Y i … 3 ( n i + n i + 1 ) -dimensional random vector of realizations obtained from the data in the i-th position of the laser sensor, i = 1 , 2 , 3 , 4 (in the following text, four device positions will be considered); n i … number of points P in the i-th laser sensor position (for i = 4 , the index i + 1 is replaced by 1).
The i-th device position must be chosen so that the i-th and ( i + 1 ) -th sides of the measured object are covered by the measurement. The position serving as the basis for the coordinate transformation into a chosen system of coordinates may be selected arbitrarily.
If the chosen coordinate system is equivalent to the first cloud system then the expected value of vector Y 1 is
E ( Y 1 ) = β 1 β 2 , β 1 = β 1 , 1 , … , β 1 , n 1 , β 2 = β 2 , 1 , … , β 2 , n 2 ,
where β 1 , j are three-dimensional coordinates of the j-th point P on the first side of the object and β 2 , j are three-dimensional coordinates of the j-th point on the other side of the object. If i 1 , then
E ( Y i ) = α i α i + 1 ,
where, for i = 4 , the index i + 1 is replaced by the number 1.
The 3 n i -dimensional vector α i includes the coordinates of the n i points on the i-th object side in the i-th device coordinate system. Similarly, α i + 1 has length 3 n i + 1 , as this vector includes the coordinates of the n i + 1 points on the ( i + 1 ) -th object side in the i-th device coordinate system. We can write
α i = 1 n i γ i + I n i , n i T i β i ,
α i + 1 = 1 n i + 1 γ i + I n i + 1 , n i + 1 T i β i + 1 ,
where β i , β i + 1 are the vectors of point coordinates on the i-th and ( i + 1 ) -th object sides, respectively. The 3D vector γ i and the orthogonal matrix T i are unknown and must be appropriately estimated.
Therefore
E ( Y i ) = 1 n i γ i 1 n i + 1 γ i + I n i + n i + 1 , n i + n i + 1 T i β i β i + 1 .
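The Kronecker-product form above is simply a compact stacking of the per-point transformation α i , j = γ i + T i β i , j . A small numerical check, with made-up values for T , γ , and β (all ours, for illustration), confirms the equivalence:

```python
import numpy as np

# Hypothetical values: n = 2 points, a 45° rotation about the Z-axis,
# and a shift vector gamma.
n = 2
a = np.radians(45.0)
T = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
gamma = np.array([10.0, -5.0, 2.0])
beta = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # stacked (x, y, z) of both points

# Stacked form used in the model: alpha = (1_n ⊗ gamma) + (I_n ⊗ T) beta
alpha = np.kron(np.ones(n), gamma) + np.kron(np.eye(n), T) @ beta

# Per-point form: alpha_j = gamma + T beta_j
per_point = np.concatenate([gamma + T @ beta[3 * j:3 * j + 3] for j in range(n)])
assert np.allclose(alpha, per_point)
```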
Further
T i = T i , 11 , T i , 12 , T i , 13 T i , 21 , T i , 22 , T i , 23 T i , 31 , T i , 32 , T i , 33 , T i , j = T i , 1 j T i , 2 j T i , 3 j , i , j = 1 , 2 , 3 .
For simplicity, the situation will be discussed further using the model with four scans.

3.1. Model with Four Scans

The model with four scans is described as follows
E Y 1 Y 2 Y 3 Y 4 β 1 β 2 α 2 ( 2 ) α 3 ( 2 ) α 3 ( 3 ) α 4 ( 3 ) α 4 ( 4 ) α 1 ( 4 ) , Σ ,
Σ = Σ 1 , 0 , 0 , 0 0 , Σ 2 , 0 , 0 0 , 0 , Σ 3 , 0 0 , 0 , 0 , Σ 4
and constraints (for s = 2 , 3 , 4 ) are
T s , 1 T s , 1 = 1 , T s , 1 T s , 2 = 0 , T s , 1 T s , 3 = 0 , T s , 2 T s , 2 = 1 , T s , 2 T s , 3 = 0 , T s , 3 T s , 3 = 1 ,
and
α 2 ( 2 ) 1 n 2 γ 2 I n 2 , n 2 T 2 β 2 = 0 , α 3 ( 2 ) 1 n 3 γ 2 I n 3 , n 3 T 2 β 3 = 0 , α 3 ( 3 ) 1 n 3 γ 3 I n 3 , n 3 T 3 β 3 = 0 , α 4 ( 3 ) 1 n 4 γ 3 I n 4 , n 4 T 3 β 4 = 0 , α 4 ( 4 ) 1 n 4 γ 4 I n 4 , n 4 T 4 β 4 = 0 , α 1 ( 4 ) 1 n 1 γ 4 I n 1 , n 1 T 4 β 1 = 0 ,
( α i + 1 ( i ) comprises the measured coordinates on the ( i + 1 ) -th side of the object in the i-th device coordinate system).
Unknown parameters are the following vectors
β 1 , β 2 , β 3 , β 4 , α 2 ( 2 ) , α 3 ( 2 ) , α 3 ( 3 ) , α 4 ( 3 ) , α 4 ( 4 ) , α 1 ( 4 ) , γ 2 , γ 3 , γ 4 , T i , 1 , T i , 2 , T i , 3 , i = 2 , 3 , 4 .
The number of unknown parameters in the model is 3 ( n 1 + n 2 + n 3 + n 4 ) + 6 ( n 3 + n 4 ) + 3 ( n 1 + n 2 ) + 9 + 27 , and the number of constraints is 3 n 1 + 6 ( n 3 + n 4 ) + 3 n 2 + 18 . The number of measurements plus constraints minus parameters (the degrees of freedom) is 3 ( n 1 + n 2 + n 3 + n 4 ) − 18 .
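This bookkeeping can be checked numerically. Using the point counts that appear later in the numerical study ( n 1 = 4 , n 2 = 3 , n 3 = 5 , n 4 = 6 ):

```python
# Point counts as in the numerical study below.
n1, n2, n3, n4 = 4, 3, 5, 6
n = n1 + n2 + n3 + n4

# Each scan i observes the points of sides i and i+1 (3 coordinates each).
measurements = 3 * (n1 + n2) + 3 * (n2 + n3) + 3 * (n3 + n4) + 3 * (n4 + n1)
parameters = 3 * (n1 + n2 + n3 + n4) + 6 * (n3 + n4) + 3 * (n1 + n2) + 9 + 27
constraints = 3 * n1 + 6 * (n3 + n4) + 3 * n2 + 18

dof = measurements + constraints - parameters
assert measurements == 6 * n
assert dof == 3 * n - 18   # 36 degrees of freedom for this layout
```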

3.2. Linearization of a Model with 4 Scans

Now the linearized model will be described.
The following notation and terminology will be used:
Θ 1 = β 1 , β 2 , ( α 2 ( 2 ) ) , ( α 3 ( 2 ) ) , ( α 3 ( 3 ) ) , ( α 4 ( 3 ) ) , ( α 4 ( 4 ) ) , ( α 1 ( 4 ) ) ,
Θ 2 = β 3 , β 4 , γ 2 , γ 3 , γ 4 , [ vec ( T 2 ) ] , [ vec ( T 3 ) ] , [ vec ( T 4 ) ] ,
Y = Y 1 , Y 2 , Y 3 , Y 4 ,
Σ = Σ 1 , Σ 2 , Σ 3 , Σ 4 .
Let Θ 1 ( 0 ) , Θ 2 ( 0 ) be approximate values of the parameters Θ 1 , Θ 2 .
The linear version of the constraints is b + B 1 δ Θ 1 + B 2 δ Θ 2 = 0 , where δ Θ 1 = Θ 1 − Θ 1 ( 0 ) , δ Θ 2 = Θ 2 − Θ 2 ( 0 ) , and the matrices B 1 and B 2 are given in Table A1 in Appendix A.

3.3. The Estimates of Unknown Parameters in a Model with 4 Scans

The indirect measurement model with type II constraints is
Y ∼ ( X Θ 1 , Σ ) , b + B 1 Θ 1 + B 2 Θ 2 = 0 .
If rank ( X n , k ) = k 1 < n , rank ( B 1 , ( q , k 1 ) , B 2 , ( q , k 2 ) ) = q < k 1 + k 2 , rank ( B 2 ) = k 2 < q , and Σ is positive definite, then the model is regular.
The best linear unbiased estimator of the vector Θ 1 , Θ 2 is given by the formula
Θ ^ ^ 1 Θ ^ ^ 2 = − ( X ′ Σ − 1 X ) − 1 B 1 ′ Q 1 , 1 Q 2 , 1 b + I − ( X ′ Σ − 1 X ) − 1 B 1 ′ Q 1 , 1 B 1 − Q 2 , 1 B 1 Θ ^ 1 ,
where Θ ^ 1 = ( X ′ Σ − 1 X ) − 1 X ′ Σ − 1 Y (the estimate that does not respect the constraints on the parameters Θ 1 , Θ 2 ); the covariance matrix is
var Θ ^ ^ 1 Θ ^ ^ 2 = var ( Θ ^ ^ 1 ) , cov ( Θ ^ ^ 1 , Θ ^ ^ 2 ) cov ( Θ ^ ^ 2 , Θ ^ ^ 1 ) , var ( Θ ^ ^ 2 ) ,
where
var ( Θ ^ ^ 1 ) = ( X ′ Σ − 1 X ) − 1 − ( X ′ Σ − 1 X ) − 1 B 1 ′ Q 1 , 1 B 1 ( X ′ Σ − 1 X ) − 1 ,
cov ( Θ ^ ^ 1 , Θ ^ ^ 2 ) = − ( X ′ Σ − 1 X ) − 1 B 1 ′ Q 1 , 2 ,
var ( Θ ^ ^ 2 ) = − Q 2 , 2 ,
and
Q 1 , 1 , Q 1 , 2 Q 2 , 1 , Q 2 , 2 = B 1 ( X ′ Σ − 1 X ) − 1 B 1 ′ , B 2 B 2 ′ , 0 − 1 .
The proof is given in ref. [30].
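The same constrained least-squares problem can be solved numerically through a bordered (KKT) system instead of the closed-form blocks. The following is our illustrative sketch on a toy problem, not the paper's implementation; all variable names are ours.

```python
import numpy as np

def constrained_gls(Y, X, Sigma, b, B1, B2):
    """Minimise (Y - X t1)' Sigma^{-1} (Y - X t1)
    subject to b + B1 t1 + B2 t2 = 0, via a bordered (KKT) system;
    numerically equivalent to the closed-form estimator above."""
    Si = np.linalg.inv(Sigma)
    k1, k2, q = X.shape[1], B2.shape[1], B1.shape[0]
    K = np.zeros((k1 + k2 + q, k1 + k2 + q))
    K[:k1, :k1] = X.T @ Si @ X
    K[:k1, k1 + k2:] = B1.T          # stationarity w.r.t. t1
    K[k1:k1 + k2, k1 + k2:] = B2.T   # stationarity w.r.t. t2
    K[k1 + k2:, :k1] = B1            # the constraints themselves
    K[k1 + k2:, k1:k1 + k2] = B2
    rhs = np.concatenate([X.T @ Si @ Y, np.zeros(k2), -b])
    sol = np.linalg.solve(K, rhs)
    return sol[:k1], sol[k1:k1 + k2]

# Toy example: three directly observed parameters, two constraints
# forcing the first two to equal a single second-stage parameter.
Y = np.array([1.0, 3.0, 7.0])
X, Sigma = np.eye(3), np.eye(3)
B1 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
B2 = np.array([[-1.0], [-1.0]])
b = np.zeros(2)
t1, t2 = constrained_gls(Y, X, Sigma, b, B1, B2)
print(t1, t2)   # t1[0], t1[1] and t2 collapse to the mean 2.0; t1[2] stays 7.0
```

In the toy example, the first two observations must agree with the second-stage parameter, so the constrained estimate averages them, exactly what the closed-form correction to Θ ^ 1 would produce.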

3.4. Numerical Study

In the following example, the transformation parameters will be calculated for a simulated problem. We will consider a model of a building with four scans (coordinate systems) of measurement; see Figure 5.
Let Y 1 = Y 1 I Y 1 I I , Y 2 = Y 2 I I Y 2 I I I , Y 3 = Y 3 I I I Y 3 I V , Y 4 = Y 4 I V Y 4 I .
For example, Y 1 I includes the measured coordinates on the I-side of an object that are given in the first stage of measurement. Vector Y 1 I I encompasses the measured coordinates on the II-side of an object that are given in the second stage of measurement.
Figure 5. Layout of measurement with 4 scans.
In Table 3, we present the measured coordinates:
We assume the coordinate accuracy of measured points in all scans to be approximately the same and to be given by a standard deviation of 10 mm.
For the above-mentioned algorithm, estimates of the unknown parameters of translation and rotation will be calculated. During linearization, matrices obtained from the partial derivatives of the model with respect to the individual parameters are used. The calculated matrices are given in Appendix A, Table A1.
Estimates computed by Formula (24) are
T ^ 2 = 0.55919 − 0.82904 0 0.82904 0.55919 0 0 0 1 , T ^ 3 = 0.91355 − 0.40674 0 0.40674 0.91355 0 0 0 1 , T ^ 4 = 0.46947 − 0.88295 0 0.88295 0.46947 0 0 0 1 , γ ^ 2 = ( 99.3732 , 10.8637 , 0.0000 ) , γ ^ 3 = ( 89.6260 , 84.0564 , 9.9873 ) , γ ^ 4 = ( 2.5764 , 95.9541 , 15.0178 ) .
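A quick consistency check (ours) on the estimated rotations: the upper-left 2 × 2 block of each T ^ s should consist of the cosine and sine of a single rotation angle about the Z-axis, so the two leading entries must satisfy c² + s² ≈ 1, and the angle itself is recovered with atan2:

```python
import numpy as np

# Leading cosine/sine entries of the estimated rotations (values from the text).
blocks = {"T2": (0.55919, 0.82904),
          "T3": (0.91355, 0.40674),
          "T4": (0.46947, 0.88295)}

for name, (c, s) in blocks.items():
    # Columns of a rotation about Z must have unit length ...
    assert abs(c * c + s * s - 1.0) < 1e-4
    # ... and the rotation angle follows from atan2.
    print(name, round(np.degrees(np.arctan2(s, c)), 2))
# → T2 56.0, T3 24.0, T4 62.0
```

The recovered angles are the round values 56°, 24°, and 62°, consistent with a simulated layout.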
Furthermore, the resulting estimates of the coordinates in the system of scan 1 and the absolute point positioning errors obtained from (25) are shown in Table 4.

4. Concluding Remarks

The paper attempts to present a state-of-the-art overview of solutions to the registration problem. Some registration problems require highly complex laser systems and algorithms, and many previously presented methods lack variability estimates. Our statistical model can serve as a complement to any registration algorithm, since it provides an estimate of the covariance matrix based on the knowledge of identical points. The method requires knowledge of the covariance matrices of the identical point coordinates. The solution includes estimation of the translation parameters and the rotation matrices between the several coordinate systems. The unknown transformation parameters and their covariance matrix were estimated using a nonlinear regression model with nonlinear constraints. Separately, this approach is applicable in the case of geodetic measurements, when identical points are marked with reflective targets.

Author Contributions

Conceptualization, J.M. and P.C.; methodology, J.M. and P.C.; validation, J.M. and P.C.; formal analysis, P.C.; investigation, P.C. and J.M.; writing-original draft preparation, P.C. and J.M.; supervision, P.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Internal Grant Agency of the University of Pardubice. Vouchers from the review activity were also used to finance the contribution.

Informed Consent Statement

Not applicable.

Data Availability Statement

Some data are presented in tables in the article. Additional data sets are available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Matrices B 1 and B 2 .
B 1 i = 1 3 n 2 = Θ 1 α 2 ( 2 ) 1 n 2 γ 2 I n 2 T 2 β 2 Θ 1 = Θ 1 ( 0 ) , Θ 2 = Θ 2 ( 0 ) = 0 3 n 2 , 3 n 1 , ( I n 2 T 2 ( 0 ) ) , I 3 n 2 , 0 3 n 2 , 3 n 3 , 0 3 n 2 , 3 n 3 , 0 3 n 2 , 3 n 4 , 0 3 n 2 , 3 n 4 , 0 3 n 2 , 3 n 1 ,
B 1 i = 3 n 2 + 1 3 n 2 + 3 n 3 = Θ 1 α 3 ( 2 ) 1 n 3 γ 2 I n 3 T 2 β 3 Θ 1 = Θ 1 ( 0 ) , Θ 2 = Θ 2 ( 0 ) = 0 3 n 3 , 3 n 1 , 0 3 n 3 , 3 n 2 , 0 3 n 3 , 3 n 2 , I 3 n 3 , 0 3 n 3 , 3 n 3 , 0 3 n 3 , 3 n 4 , 0 3 n 3 , 3 n 4 , 0 3 n 3 , 3 n 1 ,
B 1 i = 3 n 2 + 3 n 3 + 1 3 n 2 + 3 n 3 + 3 n 3 = Θ 1 α 3 ( 3 ) 1 n 3 γ 3 I n 3 T 3 β 3 Θ 1 = Θ 1 ( 0 ) , Θ 2 = Θ 2 ( 0 ) = 0 3 n 3 , 3 n 1 , 0 3 n 3 , 3 n 2 , 0 3 n 3 , 3 n 2 , 0 3 n 3 , 3 n 3 , I 3 n 3 , 0 3 n 3 , 3 n 4 , 0 3 n 3 , 3 n 4 , 0 3 n 3 , 3 n 1 ,
B 1 i = 3 n 2 + 6 n 3 + 1 3 n 2 + 6 n 3 + 3 n 4 = Θ 1 α 4 ( 3 ) 1 n 4 γ 3 I n 4 T 3 β 4 Θ 1 = Θ 1 ( 0 ) , Θ 2 = Θ 2 ( 0 ) = 0 3 n 4 , 3 n 1 , 0 3 n 4 , 3 n 2 , 0 3 n 4 , 3 n 2 , 0 3 n 4 , 3 n 3 , 0 3 n 4 , 3 n 3 , I 3 n 4 , 0 3 n 4 , 3 n 4 , 0 3 n 4 , 3 n 1 ,
B 1 i = 3 n 2 + 6 n 3 + 3 n 4 + 1 3 n 2 + 6 n 3 + 6 n 4 = Θ 1 α 4 ( 4 ) 1 n 4 γ 4 I n 4 T 4 β 4 Θ 1 = Θ 1 ( 0 ) , Θ 2 = Θ 2 ( 0 ) = 0 3 n 4 , 3 n 1 , 0 3 n 4 , 3 n 2 , 0 3 n 4 , 3 n 2 , 0 3 n 4 , 3 n 3 , 0 3 n 4 , 3 n 3 , 0 3 n 4 , 3 n 4 , I 3 n 4 , 0 3 n 4 , 3 n 1 ,
B 1 i = 3 n 2 + 6 n 3 + 6 n 4 + 1 3 n 2 + 6 n 3 + 6 n 4 + 3 n 1 = Θ 1 α 1 ( 4 ) 1 n 1 γ 4 I n 1 T 4 β 1 Θ 1 = Θ 1 ( 0 ) , Θ 2 = Θ 2 ( 0 ) = I n 1 T 4 , 0 3 n 1 , 3 n 2 , 0 3 n 1 , 3 n 2 , 0 3 n 1 , 3 n 3 , 0 3 n 1 , 3 n 3 , 0 3 n 1 , 3 n 4 , 0 3 n 1 , 3 n 4 , I 3 n 1 ,
B 1 i = 3 n 1 + 3 n 2 + 6 n 3 + 6 n 4 + 1 3 ( n 1 + n 2 ) + 6 ( n 3 + n 4 ) + 18 = 0 18 , 6 ( n 1 + n 2 + n 3 + n 4 ) .
B 2 i = 1 3 n 2 = Θ 2 α 2 ( 2 ) 1 n 2 γ 2 I n 2 T 2 β 2 Θ 1 = Θ 1 ( 0 ) , Θ 2 = Θ 2 ( 0 ) = 0 3 n 2 , 3 n 3 , 0 3 n 2 , 3 n 4 , 1 n 2 I 3 , 0 3 n 2 , 3 , 0 3 n 2 , 3 , β 2 I n 3 , 0 3 n 2 , 9 , 0 3 n 2 , 9 ,
B 2 i = 3 n 2 + 1 3 n 2 + 3 n 3 = Θ 2 α 3 ( 2 ) 1 n 3 γ 2 I n 3 T 2 β 3 Θ 1 = Θ 1 ( 0 ) , Θ 2 = Θ 2 ( 0 ) = ( I n 3 T 2 ) , 0 3 n 3 , 3 n 4 , 0 3 n 3 , 3 , 1 n 3 I 3 , 0 3 n 3 , 3 , β 3 I n 3 , 0 3 n 3 , 9 , 0 3 n 3 , 9 ,
B 2 i = 3 n 2 + 3 n 3 + 1 3 n 2 + 6 n 3 = Θ 2 α 3 ( 3 ) 1 n 3 γ 3 I n 3 T 3 β 3 Θ 1 = Θ 1 ( 0 ) , Θ 2 = Θ 2 ( 0 ) = ( I n 3 T 3 ) , 0 3 n 3 , 3 n 4 , 0 3 n 3 , 3 , 1 n 3 I 3 , 0 3 n 3 , 3 , 0 3 n 3 , 9 , β 3 I n 3 , 0 3 n 3 , 9 ,
B 2 i = 3 n 2 + 6 n 3 + 1 3 n 2 + 6 n 3 + 3 n 4 = Θ 2 α 4 ( 3 ) 1 n 4 γ 3 I n 4 T 3 β 4 Θ 1 = Θ 1 ( 0 ) , Θ 2 = Θ 2 ( 0 ) = 0 3 n 4 , 3 n 3 , ( I n 4 T 3 ) , 0 3 n 4 , 3 , 1 n 4 I 3 , 0 3 n 4 , 3 , 0 3 n 4 , 9 , β 4 I n 3 , 0 3 n 4 , 9 ,
B 2 i = 3 n 2 + 6 n 3 + 3 n 4 + 1 3 n 2 + 6 n 3 + 6 n 4 = Θ 2 α 4 ( 4 ) 1 n 4 γ 4 I n 4 T 4 β 4 Θ 1 = Θ 1 ( 0 ) , Θ 2 = Θ 2 ( 0 ) = 0 3 n 4 , 3 n 3 , ( I n 4 T 4 ) , 0 3 n 4 , 3 , 0 3 n 4 , 3 , 1 n 4 I 3 , 0 3 n 4 , 9 , 0 3 n 4 , 9 , β 4 I n 3 ,
B 2 i = 3 n 2 + 6 ( n 3 + n 4 ) + 1 3 n 2 + 6 ( n 3 + n 4 ) + 3 n 1 = Θ 2 α 1 ( 4 ) 1 n 1 γ 4 I n 1 T 4 β 1 Θ 1 = Θ 1 ( 0 ) , Θ 2 = Θ 2 ( 0 ) = 0 3 n 1 , 3 n 3 , 0 3 n 1 , 3 n 4 , 0 3 n 1 , 3 , 0 3 n 1 , 3 , 1 n 1 I 3 , 0 3 n 1 , 9 , 0 3 n 1 , 9 , β 1 I 3   .

References

  1. Goshtasby, A.A. Image Registration; Advances in Pattern Recognition; Springer: London, UK, 2012. [Google Scholar] [CrossRef]
  2. Pomerleau, F.; Colas, F.; Siegwart, R. A Review of Point Cloud Registration Algorithms for Mobile Robotics. Found. Trends Robot. 2015, 4, 1–104. [Google Scholar] [CrossRef]
  3. Cheng, L.; Chen, S.; Liu, X.; Xu, H.; Wu, Y.; Li, M.; Chen, Y. Registration of laser scanning point clouds: A review. Sensors 2018, 18, 1641. [Google Scholar] [CrossRef]
  4. Dong, Z.; Liang, F.; Yang, B.; Xu, Y.; Zang, Y.; Li, J.; Wang, Y.; Dai, W.; Fan, H.; Hyypp, J.; et al. Registration of large-scale terrestrial laser scanner point clouds: A review and benchmark. ISPRS J. Photogramm. Remote Sens. 2020, 163, 327–342. [Google Scholar] [CrossRef]
  5. Kopacik, A.; Erdelyi, J.; Kyrinovic, P. Engineering Surveys for Industry, Chapter: Terrestrial Laser Scanning Systems; Springer: Berlin/Heidelberg, Germany, 2020; ISBN 9783030483081. [Google Scholar] [CrossRef]
  6. Chmelar, P.; Rejfek, L.; Nguyen, T.N.; Ha, D. Advanced Methods for Point Cloud Processing and Simplification. Appl. Sci. 2020, 10, 3340. [Google Scholar] [CrossRef]
  7. Chen, Y.; Gerard, M. Object modelling by registration of multiple range images. Image Vision Comput. 1991, 10, 145–155. [Google Scholar] [CrossRef]
  8. Besl, P.J.; McKay, N.D. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  9. Nüchter, A. 3D Robotic Mapping: The Simultaneous Localization and Mapping Problem with Six Degrees of Freedom; Springer: Berlin/Heidelberg, Germany, 2009; ISBN 978-354089883-2. ISSN 16107438. [Google Scholar] [CrossRef]
  10. Nüchter, A.; Elseberg, J.; Schneider, P.; Paulus, D. Study of parameterizations for the rigid body transformations of the scan registration problem. Comput. Vis. Image Underst. 2010, 114, 963–980. [Google Scholar] [CrossRef]
  11. Biber, P.; Straßer, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No. 03CH37453), Las Vegas, NV, USA, 27–31 October 2003; Volume 3, pp. 2743–2748. [Google Scholar]
  12. Magnusson, M.; Lilienthal, A.; Duckett, T. Scan registration for autonomous mining vehicles using 3D-NDT. J. Field Robot. 2007, 24, 803–827. [Google Scholar] [CrossRef]
  13. Magnusson, M. The Three-Dimensional Normal-Distributions Transform an Efficient Representation for Registration, Surface Analysis, and Loop Detection. In Studies in Technology; Örebro University: Örebro, Sweden, 2013. [Google Scholar]
  14. Beran, L.; Chmelar, P.; Rejfek, L. Navigation of Robotics Platform Using Advanced Image Processing Navigation Methods. In Proceedings of the VIPIMAGE (V. Eccomas Thematic Conference on Computational Vision and Medical Image Processing), Tenerife, Spain, 19–21 October 2015; ISBN 978-113802926-2. [Google Scholar]
  15. Beran, L.; Chmelar, P.; Rejfek, L. Navigation of Robotics Platform using Monocular Visual Odometry. In Proceedings of the Radioelektronika 25th International Conference, Pardubice, Czech Republic, 21–22 April 2015. [Google Scholar]
  16. Krajnik, T. Large-Scale Mobile Robot Navigation and Map Building. Ph.D. Thesis, Czech Technical University in Prague, Faculty of Electrical Engineering, Department of Cybernetics, Prague, Czech Republic, 2011. [Google Scholar]
  17. Lu, F.; Milios, E. Robot pose estimation in unknown environments by matching 2D range scans. J. Intell. Robot. Syst. 1997, 18, 249–275. [Google Scholar] [CrossRef]
  18. Montesano, L.; Minguez, J.; Montano, L. Probabilistic scan matching for motion estimation in unstructured environments. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2005. [Google Scholar] [CrossRef]
  19. Hähnel, D.; Burgard, W. Probabilistic Matching for 3D Scan Registration. In Proceedings of the VDI-Conference Robotik 2002 (Robotik), Forum am Schlosspark, Germany, 19–20 June 2002; Available online: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=4f75beeeb9958a948492490f69c25f37165a3908 (accessed on 10 August 2023).
  20. Boughorbel, F.; Mercimek, M.; Koschan, A.; Abidi, M. A new method for the registration of three-dimensional point-sets: The Gaussian Fields framework. Image Vis. Comput. 2010, 28, 124–137. [Google Scholar] [CrossRef]
  21. Boughorbel, F.; Koschan, A.; Abidi, B.; Abidi, M. Gaussian fields: A new criterion for 3D rigid registration. Pattern Recognit. 2004, 37, 1567–1571. [Google Scholar] [CrossRef]
  22. Mitra, N.J.; Gelfand, N.; Pottmann, H.; Guibas, L. Registration of point cloud data from a geometric optimization perspective. In Proceedings of the Symposium on Geometry Processing, Nice, France, 8–10 July 2004; pp. 22–31. [Google Scholar] [CrossRef]
  23. Burguera, A.; González, Y.; Oliver, G. The likelihood field approach to sonar scan matching. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2008; pp. 2977–2982. [Google Scholar] [CrossRef]
  24. Lafferty, J.; McCallum, A.; Pereira, F. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML), San Francisco, CA, USA, 28 June–1 July 2001; ISBN 1-55860-778-1. [Google Scholar]
  25. Forsman, P.; Halme, A. Feature based registration of range images for mapping of natural outdoor environments. In Proceedings of the International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT), Washington, DC, USA, 14–16 June 2004. [Google Scholar] [CrossRef]
  26. Johnson, A.E. Spin Images: A Representation for 3-D Surface Matching. Ph.D. Thesis, Carnegie Mellon University, Pittsburgh, PA, USA, 1997. [Google Scholar]
  27. Olsen, M.; Johnstone, E.; Kuester, F.; Driscoll, N.; Ashford, S. New Automated Point-Cloud Alignment for Ground-Based Light Detection and Ranging Data of Long Coastal Sections. J. Surv. Eng. 2011, 137, 14–25. [Google Scholar] [CrossRef]
  28. Ochmann, S.; Vock, R.; Wessel, R.; Klein, R. Automatic reconstruction of parametric building models from indoor point clouds. Comput. Graph. 2016, 54, 94–103. [Google Scholar] [CrossRef]
  29. Marek, J.; Rak, J.; Jetensky, P. Statistical solution of 3D transformation problem. In Computer Science Research Notes, CSRN 2503; Zapadoceska Univerzita v Plzni: Plzen, Czech Republic, 2015; pp. 85–90. ISBN 978-808694367-1. [Google Scholar]
  30. Kubacek, L. Statistical Theory of Geodetic Networks; VUGTK: Zdiby, Czech Republic, 2013. [Google Scholar]
Figure 1. Measuring device.
Figure 2. One measurement frame from the device in Figure 1.
Figure 3. 3D point clouds from the 1st and 2nd positions.
Figure 4. ICP algorithm result. Left: the registered point clouds from Figure 3, distinguished by different colors. Right: the colored version.
Table 1. Basic methods comparison part 1.
Algorithm | Based on (points/model) | Covariance matrix | Important requirements | Processing speed | Computational complexity
3D ICP | points | is estimated | - | slow for large datasets | O(N²)
3D NDT | 3D normal distributions | is estimated | - | faster than ICP | O(N)
Feature based | search-space features | depends on application | varied environment | fast, depends on selected feature | depends on selected descriptor
IDC | points | is estimated | - | faster than ICP | similar to ICP but not specified
pIC | points | is estimated | - | slow for large datasets | similar to ICP but not specified
Point-based probabilistic | probability functions | is estimated | - | faster than ICP | similar to ICP but not specified
Gaussian fields | Gaussian mixture | is estimated | - | faster than ICP | O(N)
Quadratic patches | quadratic approximation | is estimated | - | faster than ICP | not specified
Likelihood-fields | likelihood fields | uses identity matrix | - | similar to NDT | not specified
CRF matching | conditional random fields | uses log-likelihood function | makes sense only in 2D | slow for large datasets | O(N) in current scan, O(N²) in reference scan
Branch and bound | rotation-symmetric features | is necessary, obtained from sensors | makes sense only in 2D | not specified | not specified
Local geometric features | local geometric features | is estimated | - | depends on used algorithm | depends on used algorithm
PointReg | points | is necessary | only consistent objects; moving ones must be eliminated | slow due to user interaction | not specified
Table 2. Basic methods comparison part 2.
Algorithm | Precision | Advantages | Necessity of human intervention | Suitable application | Origin
3D ICP | good, depends on input data | precise with suitable data | initial parameters setup | precise 3D point clouds | 1992
3D NDT | good with suitable settings | space modeling, use of less accurate data | initial parameters setup | various input data | 2007
Feature based | good, depends on environment | works with camera images (real time) | initial parameters setup | robotic navigation | 1998, 2006
IDC | more robust, less precise than ICP | robustness to a bigger initial pose | initial parameters setup | similar to ICP | 1994
pIC | better accuracy, robustness, and convergence than ICP and IDC | incorporates sensor quality and noise | initial parameters setup | similar to ICP | 2005
Point-based probabilistic | better than ICP | treats scans as probability functions | initial parameters setup | large 3D scans | 2002
Gaussian fields | good with suitable settings | similar to 3D NDT | initial parameters setup | various input data | 2004
Quadratic patches | good with suitable settings | similar to 3D NDT | initial parameters setup | various input data | 2004
Likelihood-fields | similar to NDT | possibility to extend scans to get more points | initial parameters setup | various input data | 2008
CRF matching | good with suitable settings | use of different user features, robust to initial pose error | initial parameters setup | only for 2D scans | 2001
Branch and bound | not specified, depends on the set of common symmetric features | usable in highly unstructured environments | initial parameters setup | natural outdoor environment | 2004
Local geometric features | good, depending on input data | matching of local surfaces | initial parameters setup | point clouds with varied objects | 1997
PointReg | low RMS with suitable settings and data handling | good with good data handling | manual | pitfalls of 3D range scans | 2011
Table 3. Measurements.
Y 1 I Y 1 II Y 2 II Y 2 III Y 3 III Y 3 IV Y 4 IV Y 4 I
629.1793629.0871521.6030533.7651−457.5236−446.6342208.9794223.0744
84.158384.0498−464.5526−469.4856−258.9134−265.6701702.7168691.0423
15.19764.09804.098016.620826.620817.899722.899730.1976
626.6654638.7188532.2563536.3855−451.5327−445.1702209.2158218.5299
87.968690.5740−468.8736−462.9402−262.6306−263.4124700.0364690.6115
3.50587.84267.84267.743717.743712.000117.000118.5058
622.4489640.0699533.7651536.7300−450.7452−444.5112209.3221210.9074
94.359791.4826−469.4856−462.0798−263.1193−262.3962698.8299689.8890
4.06665.59985.59983.326713.326714.645219.645219.0666
622.0117 536.5612−451.1310−443.3154209.5151210.1170
95.0224 −462.5013−262.8799−260.5522696.6406689.8140
3.0727 16.287926.287917.357022.357018.0727
538.5282−446.6342−445.0099209.2416
−457.5883−265.6701−263.1652699.7429
17.635527.635520.708125.7081
−439.5869210.1170
−254.8022689.8140
14.719819.7198
n 1 = 4 n 2 = 3 n 3 = 5 n 4 = 6 n 1 = 4
Table 4. Estimates of coordinates.
β ^ 1 s β ^ 1 β ^ 2 s β ^ 2 β ^ 3 s β ^ 3 β ^ 4 s β ^ 4
629.22360.0134629.27270.0151639.99650.0254632.70310.1205
84.08420.016284.17650.010891.45900.0235102.19260.1057
14.89460.01804.25840.014916.72080.02327.99210.1125
626.70460.0121638.75390.0133636.10580.0753630.66150.0852
87.91790.010790.41750.015497.51760.0541100.71790.0816
3.46010.01557.85240.01517.57730.04292.00020.0344
622.32380.0142640.06700.0180635.61140.0907629.47780.0233
94.32760.013991.47410.018997.85610.068799.89180.0422
4.19080.01885.60400.02013.26770.07894.63970.0179
621.91690.0190 635.88590.0595627.76320.0912
95.02360.0224 97.92920.061498.78440.0894
2.96610.0256 16.26010.02097.44810.0796
632.83280.1003630.43260.0488
102.11810.0982100.36220.0198
17.67770.114310.76760.0158
622.04440.0129
94.89020.0164
4.75480.0246
