CN107818598B - Three-dimensional point cloud map fusion method based on visual correction
- Publication number
- CN107818598B (application CN201710989642.2A)
- Authority
- CN
- China
- Prior art keywords
- point
- point cloud
- feature
- cloud map
- points
- Prior art date: 2017-10-20
- Legal status: Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention discloses a three-dimensional point cloud map fusion method based on visual correction, comprising the following steps: 1) preprocess the two point cloud maps to be fused; 2) extract 3D-SIFT keypoints from both three-dimensional point cloud maps; 3) extract IPFH features at the 3D-SIFT keypoints; 4) find feature matches by computing Euclidean distances between feature points; 5) compute the transformation matrix and rotate the point cloud map; 6) fuse the two point cloud maps with the ICP algorithm. The invention extends the SIFT feature to three-dimensional point clouds and extracts 3D-SIFT keypoints, which makes the features robust to viewpoint rotation and transformation. By extracting IPFH features, it overcomes the erroneous weight coefficients of the original FPFH feature; and because the feature aggregates the geometric properties of neighboring points to characterize a three-dimensional point, it greatly improves the stability of the algorithm. With this processing, the invention can fuse two three-dimensional point cloud maps whose viewpoints differ greatly.
Description
Technical Field
The invention relates to the field of computer vision, and in particular to three-dimensional point cloud map fusion. It provides a method based on visual correction that can be used to solve the problem of point cloud map fusion failing when the viewpoint difference between maps is too large.
Background Art
With the rapid development of stereo cameras, 3D point cloud data has been widely used in computer vision, for example in robotic map building, navigation, object recognition, and tracking. When a robot operates in a vast outdoor environment or a complex indoor environment, SLAM-based mapping can support autonomous navigation, but the complexity and extent of the environment cause various problems during map building, such as the process taking so long that navigation fails. In such cases, multiple robots need to build maps jointly and then stitch their individual maps into a complete map; sharing the map in this way improves the efficiency of autonomous navigation.
Point cloud map fusion means fusing point cloud maps collected from different viewpoints and positions into one map through a rigid-body transformation. Besl, P.J. & McKay, N.D. (1992), "A method for registration of 3-D shapes", IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2), 239-256, discloses the Iterative Closest Point (ICP) algorithm: it finds nearest corresponding points, estimates a transformation matrix, and minimizes an objective function; after many iterations it computes the transformation that fuses the two point clouds. The algorithm yields a good fusion result only when an accurate initial value is provided.
To obtain an accurate initial value, many researchers compute it by finding feature matches. Klasing et al. estimated normal vectors with optimization- and averaging-based methods such as PlanePCA, PlaneSVD, and QuadSVD. However, point cloud normals carry little data information, and a change in the viewpoint of the point cloud map changes the computed normals, which ultimately degrades the fusion result. Gelfand proposed extracting point cloud volume features, but that method cannot accommodate viewpoint changes between point cloud maps. Zou computed the initial value from the Gaussian curvature of the point cloud, but as the viewpoint changes, the Gaussian curvature of the same object changes, which leads to wrong feature matches and thus to fusion failure. Rusu et al. proposed extracting FPFH features from point clouds, but that method suffers from weight coefficient overflow and feature matching errors. RANSAC, another fusion approach first proposed by Fischler et al., uses random sample consensus to fit a mathematical model; it is fairly stable, but its worst-case complexity reaches O(n³), so when the number of points is huge its running time is unacceptable.
Summary of the Invention
The purpose of the present invention is to provide a three-dimensional point cloud map fusion method based on visual correction with high fusion accuracy and high efficiency, so as to solve the problems raised in the background art above.
To achieve the above purpose, the present invention provides the following technical solution:
A three-dimensional point cloud map fusion method based on visual correction comprises the following steps:
1) Preprocess the three-dimensional point cloud maps: first input two point cloud map files in .pcd format to obtain the two maps to be fused, namely the target point cloud map point_tgt and the point cloud map to be fused point_src; then preprocess both maps.
2) For the two point cloud maps of step 1), extract the 3D-SIFT keypoints of the target point cloud map point_tgt and of the point cloud map to be fused point_src, respectively.
3) Extract IPFH features at the 3D-SIFT keypoints of the two maps from step 2), obtaining the feature set P of the map to be fused point_src and the feature set Q of the target map point_tgt, where P contains m feature points and Q contains n feature points.
4) Compute the Euclidean distances between the feature points in P and Q, and find the feature matches by minimum distance.
5) Compute the rotation matrix from the feature matches and use it to rotate the point cloud map point_src, changing that map's viewpoint and reducing the viewpoint difference between the two maps.
6) Use the ICP fine-fusion algorithm to fuse the two three-dimensional point cloud maps whose viewpoint difference has been reduced.
As a further aspect of the invention: in step 1), the preprocessing comprises denoising and sampling.
As a still further aspect of the invention: in step 2), the specific steps of 3D-SIFT keypoint extraction are:
I. Detect extreme points in the point cloud scale space: downsample the point cloud map to different degrees to build a pyramid model, and build a Gaussian scale space for each group of point cloud maps, using the following functions:
Scale space: L(x, y, z, σ) = G(x, y, z, kσ) * P(x, y, z);
Difference-of-Gaussian function: D(x, y, z, σ) = (G(x, y, z, kσ) − G(x, y, z, σ)) * P(x, y, z);
where the parameter k refers to the number of groups in the pyramid and σ is the scale.
Construct the difference-of-Gaussian space: D(x, y, z, k_i·σ) = L(x, y, z, k_(i+1)·σ) − L(x, y, z, k_i·σ);
After the DoG space is built, extreme points are found by comparison: a point whose value is greater than, or less than, that of all points in its neighborhood is taken as an extreme point.
II. Determine the keypoint orientation: compute the azimuth and elevation of every point in the neighborhood of the extreme point, using the following formulas:
d = √((x − x0)² + (y − y0)² + (z − z0)²);  θ = arctan((y − y0) / (x − x0));  φ = arcsin((z − z0) / d);
where d is the distance between the extreme point and the neighborhood center point, θ is the azimuth, φ is the elevation, and (x0, y0, z0) is the center point of the neighborhood containing the extreme point.
Count the directions of the azimuth θ and the elevation φ of every point in the neighborhood in two histograms: θ ranges over [0°, 360°] and is counted in an 8-bin histogram with each bin covering 45 degrees, while φ ranges over [−90°, 90°] and is counted in a 4-bin histogram with each bin covering 45 degrees.
After counting, the peaks of the two histograms are taken as the azimuth and elevation of the extreme point. After this computation, each keypoint carries x, y, z coordinates, scale, and orientation, i.e. (x, y, z, σ, θ, φ).
III. Rotate the coordinate system at a keypoint p to the keypoint's principal direction. Since the principal direction has already been computed, the rotation is
p_i′ = R(θ, φ) · p_i;
where R(θ, φ) is the rotation matrix determined by the azimuth θ and elevation φ of the principal direction, p_i is a point before rotation, and p_i′ is the point after rotation.
Recompute the parameters d, θ, φ mentioned above from the rotated points p_i′, and then compute the geometric relation between the rotated keypoint p′ and each neighborhood point p_i′, namely the angle δ between the vector p′p_i′ and the normal vector n at the keypoint p before rotation:
δ = arccos((p′p_i′ · n) / (|p′p_i′| · |n|));
where the numerator is the dot product of the two vectors and the denominator is the product of their lengths. A keypoint together with all the neighbor points in its neighborhood thus generates triples of data (θ, φ, δ).
The three angles of these triples are partitioned into bins to generate the feature descriptor.
As a still further aspect of the invention: in step 3), the specific steps of extracting IPFH features in the region around a 3D-SIFT keypoint are:
I. At a 3D-SIFT keypoint p(x, y, z, σ, θ, φ), take the point as the center and determine its k-neighborhood; connect the points of the neighborhood pairwise to build point pairs, and compute the normal vector for each point pair.
II. Build a local u-v-w coordinate frame for each point pair (p_s, p_t), with the three axes defined as
u = n_s;  v = u × (p_t − p_s) / |p_t − p_s|;  w = u × v;
From the local frame and the normal vectors, compute four feature values, called the SPFH:
f1 = v · n_t;  f2 = d = |p_t − p_s|;  f3 = u · (p_t − p_s) / d;  f4 = arctan(w · n_t, u · n_t);
III. In the two point cloud maps, reselect the k-neighborhood and synthesize the IPFH of each three-dimensional point from the SPFHs of its neighbors, according to
IPFH(p) = SPFH(p) + (1/k) · Σ_(i=1..k) (ω0 / ωi) · SPFH(p_i);
where ω0 is the average of the distances between the query point p and its neighborhood points, and ωi is the distance between the query point p and a neighboring point p_i in the neighborhood.
IV. Count the values of f1, f3, f4 for every point; each value is divided into 11 intervals, so the IPFH is described by 33-dimensional data.
As a still further aspect of the invention: in step 4), IPFH features are matched to determine correspondences, the criterion being the distance between feature vectors, with the following matching strategy: take the IPFH of a point A in the point cloud map point_src and find the two points B and C in the point cloud map point_tgt whose feature vectors are nearest to that of A; B is selected as the match when the ratio test
|F_A − F_B| / |F_A − F_C| < τ
is satisfied, where F_X denotes the IPFH vector of point X, B is the nearest and C the second-nearest point, and τ is a fixed threshold.
As a still further aspect of the invention: in step 5), after the feature matches are found, the transformation matrix is computed by singular value decomposition, and the point cloud map point_src is rotated according to the transformation matrix, which reduces the viewpoint difference between the two point cloud maps and completes the viewpoint correction.
Compared with the prior art, the beneficial effects of the present invention are:
1. The invention uses 3D-SIFT to extract keypoints, computes FPFH features at the keypoints, and finally computes an initial coarse estimate. This provides an initial transformation matrix for the traditional point cloud registration method, which reduces the number of iterations of the registration algorithm, improves its registration success rate and accuracy, and lowers the computational cost.
2. The invention extends the SIFT feature to three-dimensional point clouds and extracts 3D-SIFT keypoints, which makes the features robust to viewpoint rotation and transformation. By extracting IPFH features, it overcomes the erroneous weight coefficients of the original FPFH feature; and because the feature aggregates the geometric properties of neighboring points to characterize a three-dimensional point, it greatly improves the stability of the algorithm. With this processing, the invention can fuse two three-dimensional point cloud maps whose viewpoints differ greatly.
Brief Description of the Drawings
FIG. 1 is a flowchart of the three-dimensional point cloud map fusion method based on visual correction.
FIG. 2 illustrates keypoint extraction with 3D-SIFT in the method.
FIG. 3 illustrates feature matching with FPFH in the method.
FIG. 4 shows point cloud fusion results of the method.
Detailed Description of the Embodiments
The technical solution of the present invention is described in further detail below with reference to specific embodiments.
Referring to FIG. 1, a three-dimensional point cloud map fusion method based on visual correction comprises the following steps:
1) Preprocess the three-dimensional point cloud maps: first input two point cloud map files in .pcd format to obtain the two maps to be fused, namely the target point cloud map point_tgt and the point cloud map to be fused point_src; then preprocess both maps, the preprocessing comprising denoising and sampling.
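The patent does not fix a particular toolchain for this step. As an illustration, a minimal preprocessing sketch in Python, assuming the Open3D library as a stand-in for a PCL pipeline; the file names, voxel size, and outlier-filter parameters are illustrative choices, not values from the patent:

```python
# Preprocessing sketch: load two .pcd maps, then downsample and denoise.
# Assumes Open3D; all parameters here are illustrative.
import open3d as o3d

def load_and_preprocess(path, voxel_size=0.05):
    pcd = o3d.io.read_point_cloud(path)      # read a .pcd point cloud map
    pcd = pcd.voxel_down_sample(voxel_size)  # sampling step
    # denoising step: drop points whose mean neighbor distance is an outlier
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return pcd

point_src = load_and_preprocess("map_src.pcd")
point_tgt = load_and_preprocess("map_tgt.pcd")
```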
2) For the two point cloud maps of step 1), extract the 3D-SIFT keypoints of the target point cloud map point_tgt and of the point cloud map to be fused point_src, respectively.
The specific steps of 3D-SIFT keypoint extraction are:
I. Detect extreme points in the point cloud scale space: downsample the point cloud map to different degrees to build a pyramid model, and build a Gaussian scale space for each group of point cloud maps, using the following functions:
Scale space: L(x, y, z, σ) = G(x, y, z, kσ) * P(x, y, z);
Difference-of-Gaussian function: D(x, y, z, σ) = (G(x, y, z, kσ) − G(x, y, z, σ)) * P(x, y, z);
where the parameter k refers to the number of groups in the pyramid and σ is the scale.
Construct the difference-of-Gaussian space: D(x, y, z, k_i·σ) = L(x, y, z, k_(i+1)·σ) − L(x, y, z, k_i·σ);
After the DoG space is built, extreme points are found by comparison: a point whose value is greater than, or less than, that of all points in its neighborhood is taken as an extreme point.
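To make the extremum test concrete, here is a sketch of DoG extremum detection over point neighborhoods. The Gaussian-weighted neighbor density stands in for the smoothed per-point signal, and the radius and scale values are illustrative assumptions; the patent only specifies the comparison rule itself:

```python
# DoG extremum sketch: a point is a keypoint at scale s when its DoG value
# exceeds (or falls below) every neighbor value at scales s-1, s, s+1.
import numpy as np
from scipy.spatial import cKDTree

def dog_extrema(points, sigmas, radius=0.1):
    tree = cKDTree(points)
    neigh = tree.query_ball_point(points, r=radius)

    def smooth(sig):  # stand-in for L(x, y, z, sig): weighted neighbor density
        out = np.empty(len(points))
        for i, idx in enumerate(neigh):
            d2 = np.sum((points[idx] - points[i]) ** 2, axis=1)
            out[i] = np.exp(-d2 / (2.0 * sig * sig)).sum()
        return out

    L = np.stack([smooth(s) for s in sigmas])  # scale space, one row per scale
    D = L[1:] - L[:-1]                         # difference-of-Gaussian layers
    keypoints = []
    for s in range(1, len(D) - 1):             # interior DoG layers only
        for i, idx in enumerate(neigh):
            others = [j for j in idx if j != i]
            ring = np.concatenate([D[s - 1, idx], D[s, others], D[s + 1, idx]])
            if D[s, i] > ring.max() or D[s, i] < ring.min():
                keypoints.append((i, s))
    return keypoints
```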
II. Determine the keypoint orientation: compute the azimuth and elevation of every point in the neighborhood of the extreme point, using the following formulas:
d = √((x − x0)² + (y − y0)² + (z − z0)²);  θ = arctan((y − y0) / (x − x0));  φ = arcsin((z − z0) / d);
where d is the distance between the extreme point and the neighborhood center point, θ is the azimuth, φ is the elevation, and (x0, y0, z0) is the center point of the neighborhood containing the extreme point.
Count the directions of the azimuth θ and the elevation φ of every point in the neighborhood in two histograms: θ ranges over [0°, 360°] and is counted in an 8-bin histogram with each bin covering 45 degrees, while φ ranges over [−90°, 90°] and is counted in a 4-bin histogram with each bin covering 45 degrees.
After counting, the peaks of the two histograms are taken as the azimuth and elevation of the extreme point. After this computation, each keypoint carries x, y, z coordinates, scale, and orientation, i.e. (x, y, z, σ, θ, φ).
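A sketch of this orientation step, using the spherical-angle formulas reconstructed above and the 8-bin/4-bin histograms; taking the center of the peak bin as the orientation is our reading, since the patent only says the histogram peaks are used:

```python
# Orientation sketch: spherical angles of each neighbor about the
# neighborhood center, then 8-bin (theta) and 4-bin (phi) histograms.
import numpy as np

def principal_direction(neighbors, center):
    rel = neighbors - center
    d = np.linalg.norm(rel, axis=1)
    # arctan2 gives the full [0, 360) azimuth range
    theta = np.degrees(np.arctan2(rel[:, 1], rel[:, 0])) % 360.0
    phi = np.degrees(np.arcsin(np.clip(rel[:, 2] / np.maximum(d, 1e-12), -1.0, 1.0)))
    h_theta, _ = np.histogram(theta, bins=8, range=(0.0, 360.0))  # 45 deg bins
    h_phi, _ = np.histogram(phi, bins=4, range=(-90.0, 90.0))     # 45 deg bins
    theta_peak = (np.argmax(h_theta) + 0.5) * 45.0   # center of the peak bin
    phi_peak = -90.0 + (np.argmax(h_phi) + 0.5) * 45.0
    return theta_peak, phi_peak
```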
III. Rotate the coordinate system at a keypoint p to the keypoint's principal direction. Since the principal direction has already been computed, the rotation is
p_i′ = R(θ, φ) · p_i;
where R(θ, φ) is the rotation matrix determined by the azimuth θ and elevation φ of the principal direction, p_i is a point before rotation, and p_i′ is the point after rotation.
Recompute the parameters d, θ, φ mentioned above from the rotated points p_i′, and then compute the geometric relation between the rotated keypoint p′ and each neighborhood point p_i′, namely the angle δ between the vector p′p_i′ and the normal vector n at the keypoint p before rotation:
δ = arccos((p′p_i′ · n) / (|p′p_i′| · |n|));
where the numerator is the dot product of the two vectors and the denominator is the product of their lengths. A keypoint together with all the neighbor points in its neighborhood thus generates triples of data (θ, φ, δ).
The three angles of these triples are partitioned into bins to generate the feature descriptor.
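The patent does not state how finely the three angles are partitioned; a sketch of the descriptor generation under an assumed 8 × 4 × 4 binning:

```python
# Descriptor sketch: joint histogram over the (theta, phi, delta) triples.
# The 8 x 4 x 4 partition is an assumption; the patent leaves it open.
import numpy as np

def sift3d_descriptor(theta, phi, delta, bins=(8, 4, 4)):
    """Angles in degrees: theta in [0,360), phi in [-90,90], delta in [0,180]."""
    hist, _ = np.histogramdd(
        np.column_stack([theta, phi, delta]), bins=bins,
        range=[(0.0, 360.0), (-90.0, 90.0), (0.0, 180.0)])
    desc = hist.ravel()
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc  # normalize so distances are comparable
```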
3) Extract IPFH features at the 3D-SIFT keypoints of the two point cloud maps from step 2), obtaining the feature set P of the map to be fused point_src and the feature set Q of the target map point_tgt, where P contains m feature points and Q contains n feature points.
The specific steps of extracting IPFH features in the region around a 3D-SIFT keypoint are:
I. At a 3D-SIFT keypoint p(x, y, z, σ, θ, φ), take the point as the center and determine its k-neighborhood; connect the points of the neighborhood pairwise to build point pairs, and compute the normal vector for each point pair.
II. Build a local u-v-w coordinate frame for each point pair (p_s, p_t), with the three axes defined as
u = n_s;  v = u × (p_t − p_s) / |p_t − p_s|;  w = u × v;
From the local frame and the normal vectors, compute four feature values, called the SPFH:
f1 = v · n_t;  f2 = d = |p_t − p_s|;  f3 = u · (p_t − p_s) / d;  f4 = arctan(w · n_t, u · n_t);
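A sketch of the SPFH point-pair features in the local u-v-w frame, following the standard PFH/FPFH geometry that the reconstruction above assumes; f2 = d is computed here, although per step IV below only f1, f3, f4 enter the histogram:

```python
# SPFH sketch: local u-v-w frame for a point pair (p_s, n_s), (p_t, n_t)
# and the four feature values. Assumes p_s != p_t and diff not parallel to u.
import numpy as np

def spfh_pair(p_s, n_s, p_t, n_t):
    diff = p_t - p_s
    d = np.linalg.norm(diff)
    u = n_s / np.linalg.norm(n_s)
    v = np.cross(u, diff / d)
    v /= np.linalg.norm(v)
    w = np.cross(u, v)
    f1 = np.dot(v, n_t)                              # angle feature
    f2 = d                                           # pair distance
    f3 = np.dot(u, diff) / d                         # projection feature
    f4 = np.arctan2(np.dot(w, n_t), np.dot(u, n_t))  # angle feature
    return f1, f2, f3, f4
```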
III. In the two point cloud maps, reselect the k-neighborhood and synthesize the IPFH of each three-dimensional point from the SPFHs of its neighbors, according to
IPFH(p) = SPFH(p) + (1/k) · Σ_(i=1..k) (ω0 / ωi) · SPFH(p_i);
where ω0 is the average of the distances between the query point p and its neighborhood points, and ωi is the distance between the query point p and a neighboring point p_i in the neighborhood.
IV. Count the values of f1, f3, f4 for every point; each value is divided into 11 intervals, so the IPFH is described by 33-dimensional data.
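A sketch of the IPFH synthesis under the ω0/ωi reading above: the mean neighbor distance ω0 makes the weights dimensionless, so a very small ωi no longer blows the weight up the way 1/ωi does in the original FPFH:

```python
# IPFH sketch: combine a query point's SPFH with its neighbors' SPFHs,
# weighted by w0 / w_i (our reading of the patent's weighting).
import numpy as np

def ipfh(spfh_query, spfh_neighbors, dists):
    """spfh_query: (33,); spfh_neighbors: (k, 33); dists: (k,) distances w_i."""
    w0 = dists.mean()                        # average neighbor distance
    weights = w0 / np.maximum(dists, 1e-12)  # w0 / w_i, guarded against zero
    return spfh_query + (weights[:, None] * spfh_neighbors).sum(axis=0) / len(dists)
```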
4) Compute the Euclidean distances between the feature points in P and Q, and find the feature matches by minimum distance.
IPFH features are matched to determine correspondences, the criterion being the distance between feature vectors, with the following strategy: take the IPFH of a point A in the point cloud map point_src and find the two points B and C in the point cloud map point_tgt whose feature vectors are nearest to that of A; B is selected as the match when the ratio test
|F_A − F_B| / |F_A − F_C| < τ
is satisfied, where F_X denotes the IPFH vector of point X, B is the nearest and C the second-nearest point, and τ is a fixed threshold.
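A sketch of this matching rule as a nearest-neighbor ratio test over the IPFH vectors; the 0.8 threshold is an illustrative assumption, since the patent does not state a value:

```python
# Matching sketch: accept the nearest target descriptor B only when it is
# clearly closer than the second-nearest C.
import numpy as np
from scipy.spatial import cKDTree

def match_features(P, Q, ratio=0.8):
    """P: (m, 33) source IPFH vectors; Q: (n, 33) target IPFH vectors."""
    tree = cKDTree(Q)
    dists, idx = tree.query(P, k=2)           # B (nearest) and C (second)
    keep = dists[:, 0] < ratio * dists[:, 1]  # the ratio test
    return [(i, idx[i, 0]) for i in np.flatnonzero(keep)]
```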
5) Compute the rotation matrix from the feature matches and use it to rotate the point cloud map point_src, changing that map's viewpoint and reducing the viewpoint difference between the two maps.
After the feature matches are found, the transformation matrix is computed by singular value decomposition, and the point cloud map point_src is rotated according to it, which reduces the viewpoint difference between the two point cloud maps and completes the viewpoint correction.
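A sketch of the SVD step as the standard Kabsch/Umeyama construction: the rigid transform that best maps the matched source keypoints onto their target counterparts, which we take to be the patent's "singular value decomposition" computation:

```python
# SVD sketch: least-squares rigid transform from matched keypoint pairs.
import numpy as np

def rigid_transform_svd(src, tgt):
    """src, tgt: (k, 3) matched keypoint coordinates; returns R (3x3), t (3,)."""
    cs, ct = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - cs).T @ (tgt - ct)        # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ct - R @ cs
    return R, t
```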
6) Use the ICP fine-fusion algorithm to fuse the two three-dimensional point cloud maps whose viewpoint difference has been reduced.
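A sketch of this final refinement: a bare point-to-point ICP loop seeded with the coarse (R, t) from the correction stage, reusing rigid_transform_svd from the sketch above; the iteration cap and convergence tolerance are illustrative assumptions:

```python
# ICP sketch: alternate nearest-neighbor correspondences and SVD re-fits,
# starting from the viewpoint-corrected initial transform.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, tgt, R, t, iters=50, tol=1e-6):
    tree = cKDTree(tgt)
    prev_err = np.inf
    for _ in range(iters):
        moved = src @ R.T + t
        dists, idx = tree.query(moved)             # closest-point matches
        R, t = rigid_transform_svd(src, tgt[idx])  # re-estimate the transform
        err = dists.mean()
        if abs(prev_err - err) < tol:              # stop when error stabilizes
            break
        prev_err = err
    return R, t

# fused map: target points plus the transformed source points
# fused = np.vstack([tgt, src @ R.T + t])
```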
FIG. 2 shows keypoints extracted with 3D-SIFT in the visual-correction-based three-dimensional point cloud map fusion method. The SIFT feature is a local image feature that is invariant to rotation, scaling, and brightness changes, and retains a degree of stability under viewpoint changes, affine transformations, and noise.
FIG. 3 illustrates the further extraction of FPFH features at the keypoints extracted by 3D-SIFT, and the subsequent feature matching. FPFH is an improvement on the PFH feature: a point feature representation that describes the local geometric information around a sample point in the form of a statistical histogram. The Euclidean distances between feature points of the source and target point clouds are computed, feature matches are found by minimum distance, and the rotation matrix is computed from the matches.
FIG. 4 shows test and experimental results of the method. The experiments used three data sets: the first two are indoor point clouds collected with an Xtion Pro stereo sensor, and the third is an outdoor urban point cloud collected with a 3D laser scanner, provided by PCL. The first and second columns of the figure show the source and target point clouds, respectively; the third and fourth columns show the fused point clouds, and in the fourth column different colors indicate the result of fusing the two point clouds.
The invention uses 3D-SIFT to extract keypoints, computes FPFH features at the keypoints, and finally computes an initial coarse estimate. This provides an initial transformation matrix for the traditional point cloud registration method, which reduces the number of iterations of the registration algorithm, improves its registration success rate and accuracy, and lowers the computational cost.
The invention extends the SIFT feature to three-dimensional point clouds and extracts 3D-SIFT keypoints, which makes the features robust to viewpoint rotation and transformation. By extracting IPFH features, it overcomes the erroneous weight coefficients of the original FPFH feature; and because the feature aggregates the geometric properties of neighboring points to characterize a three-dimensional point, it greatly improves the stability of the algorithm. With this processing, the invention can fuse two three-dimensional point cloud maps whose viewpoints differ greatly.
The preferred embodiments of the present invention have been described in detail above, but the present invention is not limited to these embodiments; those of ordinary skill in the art can make various changes without departing from the spirit of the present invention.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710989642.2A CN107818598B (en) | 2017-10-20 | 2017-10-20 | Three-dimensional point cloud map fusion method based on visual correction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710989642.2A CN107818598B (en) | 2017-10-20 | 2017-10-20 | Three-dimensional point cloud map fusion method based on visual correction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107818598A CN107818598A (en) | 2018-03-20 |
CN107818598B true CN107818598B (en) | 2020-12-25 |
Family
ID=61606939
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710989642.2A Active CN107818598B (en) | 2017-10-20 | 2017-10-20 | Three-dimensional point cloud map fusion method based on visual correction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107818598B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11830207B2 (en) * | 2020-05-18 | 2023-11-28 | Beijing Baidu Netcom Science Technology Co., Ltd. | Method, apparatus, electronic device and readable storage medium for point cloud data processing |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110211167A (en) * | 2019-06-14 | 2019-09-06 | 北京百度网讯科技有限公司 | Method and apparatus for handling point cloud data |
CN110930495A (en) * | 2019-11-22 | 2020-03-27 | 哈尔滨工业大学(深圳) | Multi-unmanned aerial vehicle cooperation-based ICP point cloud map fusion method, system, device and storage medium |
CN111310818B (en) * | 2020-02-10 | 2021-05-18 | 贝壳找房(北京)科技有限公司 | Feature descriptor determining method and device and computer-readable storage medium |
CN111429574B (en) * | 2020-03-06 | 2022-07-15 | 上海交通大学 | Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion |
CN112414417B (en) * | 2020-11-17 | 2021-11-26 | 智邮开源通信研究院(北京)有限公司 | Automatic driving map generation method and device, electronic equipment and readable storage medium |
CN112902905A (en) * | 2021-01-20 | 2021-06-04 | 西安电子科技大学 | High-definition 3D scanning-based ground object spectrum testing method and system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105118059A (en) * | 2015-08-19 | 2015-12-02 | 哈尔滨工程大学 | Multi-scale coordinate axis angle feature point cloud fast registration method |
CN106296693A (en) * | 2016-08-12 | 2017-01-04 | 浙江工业大学 | Based on 3D point cloud FPFH feature real-time three-dimensional space-location method |
CN106780459A (en) * | 2016-12-12 | 2017-05-31 | 华中科技大学 | A kind of three dimensional point cloud autoegistration method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10360469B2 (en) * | 2015-01-15 | 2019-07-23 | Samsung Electronics Co., Ltd. | Registration method and apparatus for 3D image data |
- 2017-10-20: application CN201710989642.2A filed in China; granted as patent CN107818598B (status: Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105118059A (en) * | 2015-08-19 | 2015-12-02 | 哈尔滨工程大学 | Multi-scale coordinate axis angle feature point cloud fast registration method |
CN106296693A (en) * | 2016-08-12 | 2017-01-04 | 浙江工业大学 | Based on 3D point cloud FPFH feature real-time three-dimensional space-location method |
CN106780459A (en) * | 2016-12-12 | 2017-05-31 | 华中科技大学 | A kind of three dimensional point cloud autoegistration method |
Non-Patent Citations (1)
Title |
---|
Aligning Point Cloud Views using Persistent Feature Histograms; Radu Bogdan Rusu et al.; 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems; 2008-10-14; pp. 3384-3391 *
Also Published As
Publication number | Publication date |
---|---|
CN107818598A (en) | 2018-03-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107818598B (en) | Three-dimensional point cloud map fusion method based on visual correction | |
WO2023184968A1 (en) | Structured scene visual slam method based on point line surface features | |
CN113269094B (en) | Laser SLAM system and method based on feature extraction algorithm and key frame | |
CN112017220B (en) | A Point Cloud Accurate Registration Method Based on Robust Constrained Least Squares Algorithm | |
CN100559398C (en) | Automatic deepness image registration method | |
CN106296693A (en) | Based on 3D point cloud FPFH feature real-time three-dimensional space-location method | |
CN106023298B (en) | Point cloud Rigid Registration method based on local Poisson curve reestablishing | |
CN109493372B (en) | Rapid global optimization registration method for product point cloud data with large data volume and few features | |
CN114972459B (en) | A point cloud registration method based on low-dimensional point cloud local feature descriptors | |
CN107590827A (en) | A kind of indoor mobile robot vision SLAM methods based on Kinect | |
CN111986219B (en) | Matching method of three-dimensional point cloud and free-form surface model | |
CN109960402A (en) | A virtual-real registration method based on fusion of point cloud and visual features | |
CN107392964A (en) | The indoor SLAM methods combined based on indoor characteristic point and structure lines | |
CN113628263A (en) | Point cloud registration method based on local curvature and neighbor characteristics thereof | |
CN113706381A (en) | Three-dimensional point cloud data splicing method and device | |
CN117197333A (en) | Space target reconstruction and pose estimation method and system based on multi-view vision | |
CN108053445A (en) | The RGB-D camera motion methods of estimation of Fusion Features | |
CN117292064A (en) | Three-dimensional object modeling method and system based on structured light scanning data | |
CN117132630A (en) | A point cloud registration method based on second-order spatial compatibility measure | |
CN117036653A (en) | Point cloud segmentation method and system based on super voxel clustering | |
CN113850293A (en) | Localization method based on joint optimization of multi-source data and direction priors | |
CN117934336A (en) | Indoor structure point cloud contour line optimization method based on lattice tower theory | |
Wang et al. | Multi-robot raster map fusion without initial relative position | |
CN114463396B (en) | Point cloud registration method utilizing plane shape and topological graph voting | |
CN118501829A (en) | Non-target camera-radar calibration method based on line segments |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||