
CN108010123A - A kind of three-dimensional point cloud acquisition methods for retaining topology information - Google Patents

A three-dimensional point cloud acquisition method that retains topology information

Info

Publication number
CN108010123A
CN108010123A (application CN201711178471.1A; granted as CN108010123B)
Authority
CN
China
Prior art keywords
image
point cloud
dimensional
matching
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711178471.1A
Other languages
Chinese (zh)
Other versions
CN108010123B (en)
Inventor
张小国
王小虎
郭恩惠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN201711178471.1A
Publication of CN108010123A
Application granted
Publication of CN108010123B
Legal status: Expired - Fee Related (Current)
Anticipated expiration

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/08: Projecting images onto non-planar surfaces, e.g. geodetic screens

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a three-dimensional point cloud acquisition method that preserves topological information. First, images are captured with a camera in a surround or low-altitude aerial configuration, and the images are preprocessed by grayscale conversion, Gaussian denoising, and photo alignment. Next, feature points that preserve topological information are extracted and matched. Finally, the three-dimensional point cloud is solved and the two-dimensional topological relationships are mapped into three-dimensional space; the resulting point cloud data can be used to build three-dimensional models. Compared with conventional sequence-image-based point cloud acquisition methods, the present invention yields a uniformly distributed point cloud that carries its own three-dimensional topological information, significantly improving the accuracy of the resulting three-dimensional models.

Description

A 3D Point Cloud Acquisition Method Preserving Topological Information

Technical Field

The invention relates to image processing in the field of computer vision and to three-dimensional point cloud reconstruction from sequence images, and in particular to a three-dimensional point cloud acquisition method that preserves topological information.

Background

In computer vision, 3D reconstruction is the process of recovering three-dimensional information from single-view or multi-view images. Because a single view carries incomplete information, single-view reconstruction must rely on prior knowledge. Multi-view reconstruction (analogous to human binocular vision) is comparatively straightforward: the camera is first calibrated to obtain its intrinsic parameters, the camera motion parameters are then solved from matched feature point pairs, and combining the two yields the relationship between the camera's image coordinate system and the world coordinate system, so that three-dimensional information can finally be reconstructed from multiple two-dimensional images.

3D point cloud acquisition is the key technology, and the main difficulty, of multi-view 3D reconstruction; the quality of the point cloud determines the accuracy of the subsequently constructed 3D model. Existing pipelines consist of image preprocessing, feature point extraction and matching, and point cloud solving, of which feature extraction and matching is the most resource-intensive stage and the focus of most optimization research. Existing feature extraction algorithms such as SIFT, SURF, and ORB cope well with changes in image scale, rotation, illumination, and deformation. However, the feature points they extract are redundant, unevenly distributed, and carry no two-dimensional topological information, which compromises the success and accuracy of the subsequent 3D model.

Summary of the Invention

Purpose of the invention: in view of the limitations of the prior art described above, the purpose of the present invention is to provide a 3D point cloud acquisition method that preserves contour texture topology, thereby overcoming the defects of redundant, unevenly distributed point cloud data lacking contour texture topology, so that a 3D model with topological constraints can subsequently be built with improved accuracy.

Technical solution: a 3D point cloud acquisition method that preserves topological information, whose main workflow is as follows. First, images are captured with a camera in a surround or low-altitude aerial configuration and preprocessed by grayscale conversion, Gaussian denoising, and photo alignment. Next, feature points that preserve topological information are extracted and matched. Then the 3D point cloud is solved and the 2D topological relationships are mapped into 3D space; the resulting point cloud data can be used to build a 3D model. In the sequence-image-based point cloud acquisition process, the method provided by the present invention comprises the following steps:

1. Camera calibration: obtain the camera's intrinsic parameters and save them as a matrix.

2. Acquire image data of the target area by surround or low-altitude aerial photography.

3. Preprocess the acquired images by grayscale conversion, Gaussian denoising, and photo alignment; the photo alignment steps are as follows:

3.1. Extract feature points with the FAST operator and compute their descriptors.

3.2. Match feature points efficiently with FLANN, using bidirectional matching to reduce mismatches.

3.3. Filter remaining mismatches with RANSAC.

3.4. Solve the fundamental matrix from the feature point pairs obtained in step 3.3 using the eight-point method, and combine it with the camera calibration matrix from step 1 to obtain the essential matrix of the image pair.

3.5. Using the essential matrix obtained in step 3.4, estimate the matching image for every image:

3.5a. Decompose the essential matrix of an image pair into a rotation and a translation to obtain the relative pose between the two cameras; by extension, the relative pose of any pair of cameras can be obtained.

3.5b. Select the first image and, from its relative pose with respect to the other images, choose the image pair with the smallest rotation and translation as its matching image.

3.5c. Take the matching image of the first image as the second image to be matched and repeat 3.5a and 3.5b, and so on, until the matching image of every image is determined.

3.6. Assume the camera matrix of the first image is fixed and canonical; use the relative poses obtained in step 3.5a to derive the camera matrix of the other image in each matching pair, and thereby obtain the camera matrices of all images.
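The bidirectional matching of steps 3.2 and 3.3 can be sketched as follows. This is a minimal illustration in Python/NumPy using exact nearest neighbours in place of FLANN's approximate index; the function name and the toy descriptors are ours, not the patent's.

```python
import numpy as np

def mutual_matches(desc1, desc2):
    """Keep only matches where a is b's nearest neighbour AND b is a's.

    desc1, desc2: (N, D) and (M, D) float descriptor arrays.
    Returns a list of (i, j) index pairs into desc1/desc2.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    d = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(axis=2)
    fwd = d.argmin(axis=1)  # best match in desc2 for each row of desc1
    bwd = d.argmin(axis=0)  # best match in desc1 for each row of desc2
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

# Toy example: descriptors 0 and 1 correspond across the two sets;
# descriptor 2 of the first set has no counterpart and is rejected.
a = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
b = np.array([[0.1, 0.0], [1.1, 1.0]])
print(mutual_matches(a, b))
```

The cross-check discards one-sided matches cheaply, before the geometric RANSAC filter of step 3.3 removes the remaining outliers.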

4. Feature point extraction preserving contour texture topology:

4.1. Extract all contour texture features of the target objects in each image with the Canny edge detector; no hierarchy is established among the detected contours. The contour data of each image is stored in a two-dimensional container, with each contour saved in point data format.

4.2. Simplify each contour's point data in each image with the Douglas-Peucker algorithm:

4.2a. Keep the simplified contour points as the feature points to be matched, tagging each point with the image and contour number it belongs to; the result for each image is stored in its own two-dimensional container.

4.2b. Keep the contour points from before simplification as the feature point library used for matching.

5. Feature point matching and mismatch filtering:

5.1. Describe the feature points to be matched with the SIFT operator.

5.2. Feature point matching:

5.2a. Select the first image and determine its matching image from the image pairs obtained in step 3.5.

5.2b. Within a matching image pair, narrow the search range between the feature points to be matched on one image and the feature point library of the other image using the epipolar constraint determined by the essential matrix obtained in step 3.4.

5.2c. Perform the matching efficiently with FLANN.

5.3. Filter mismatches with the RANSAC algorithm.

6. Using the matched feature point pairs obtained in step 5, solve the 3D point cloud by triangulation:

6.1. Approximate one 3D point from two 2D points in a matching image pair, i.e. solve the 3D coordinates of the spatial point from the constraints between the camera matrices obtained in step 3.6 and the matched point pairs obtained in step 5.3.

6.2. Loop step 6.1 over all matched point pairs to complete the triangulation and obtain the 3D point cloud reconstructed from the two images, which serves as the initial structure for the sequence-image reconstruction.

6.3. Add the remaining images to this initial structure one by one: find, among the remaining images, the image matching the second image and use it as the third reconstructed image, then repeat step 6.2 until the 3D point cloud of the whole sequence is obtained.

7. Map the 2D contour texture topology carried by the feature points from step 4 onto the 3D point cloud, turning the unorganized point cloud into a classifiable point cloud with contour texture topology:

7.1. During the triangulation of step 6.1, whenever a 3D point is solved from two matched 2D points, retain the image and contour information of the two feature points in the resulting 3D point; this completes the mapping from 2D topology to 3D topology.

7.2. Each point in the 3D point cloud obtained in step 7.1 records which two images it comes from and its contour number on each image, so the cloud can be classified accordingly:

7.2a. First-level classification of the point cloud by image number.

7.2b. Second-level classification of each first-level class by the contour number on the corresponding image.

Beneficial effects: compared with conventional sequence-image point cloud acquisition, the method supports both surround and strip-wise shooting, ordered or unordered; the extracted feature points are evenly distributed, and matching efficiency is improved. Most importantly, the method preserves 2D topological information and maps it onto the 3D point cloud, which can then be used to build a topology-constrained 3D model with improved accuracy.

Brief Description of the Drawings

Fig. 1 is a schematic flowchart of the method of the present invention;

Fig. 2 is the 3D point cloud of a small house reconstructed by a conventional point cloud acquisition method;

Fig. 3 is the 3D point cloud of the small house reconstructed by the topology-preserving point cloud acquisition method of the present invention;

Fig. 4(a) shows a mesh built from the point cloud without constraints;

Fig. 4(b) illustrates the contour texture constraint information;

Fig. 4(c) shows a mesh built from the point cloud with contour texture constraints.

Detailed Description of Embodiments

Fig. 1 shows the main workflow of the topology-preserving 3D point cloud acquisition method of the present invention. The point cloud data obtained with this method can be used to build a topology-constrained 3D model with improved accuracy. Taking the acquisition of 3D point cloud data of a house as an example, the steps are described in detail below with reference to Fig. 1:

1. Camera calibration: obtain the camera's intrinsic parameters and save them as the matrix K.

2. Acquire image data of the target area by surround photography.

3. Preprocess the acquired images by grayscale conversion, Gaussian denoising, and photo alignment; the photo alignment steps are as follows:

3.1. Extract feature points with the FAST operator (a threshold of 20 and a cap of 1000 extracted points are recommended) and compute their descriptors.

3.2. Match feature points efficiently with FLANN, using bidirectional matching to reduce mismatches.

3.3. Filter remaining mismatches with RANSAC.

3.4. Solve the fundamental matrix F from the feature point pairs obtained in step 3.3 using the eight-point method, and combine it with the camera calibration matrix K from step 1 to obtain the essential matrix E of the image pair.
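The eight-point solution of step 3.4 can be sketched numerically. The NumPy implementation below (with Hartley normalization, an implementation detail the patent does not spell out) recovers F from synthetic correspondences generated by two known cameras with K = I, so that F coincides with E:

```python
import numpy as np

def eight_point_F(x1, x2):
    """Fundamental matrix from >= 8 correspondences (x2^T F x1 = 0)."""
    def normalize(pts):
        # Hartley normalization: centroid at origin, mean distance sqrt(2).
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        h = np.column_stack([pts, np.ones(len(pts))])
        return (T @ h.T).T, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence gives one row of the linear system A f = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)  # smallest singular vector
    U, S, Vt = np.linalg.svd(F)                # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                          # undo normalization
    return F / np.linalg.norm(F)

# Synthetic check: project random 3D points with two known cameras.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-1, 1, (20, 2)), rng.uniform(4, 6, 20)])
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
x1 = X[:, :2] / X[:, 2:]          # camera 1 at the origin, K = I
Xc2 = X @ R.T + t
x2 = Xc2[:, :2] / Xc2[:, 2:]
F = eight_point_F(x1, x2)
res = [abs(np.array([u2, v2, 1.0]) @ F @ np.array([u1, v1, 1.0]))
       for (u1, v1), (u2, v2) in zip(x1, x2)]
print(max(res))
```

With exact synthetic correspondences the epipolar residuals x2ᵀFx1 vanish to numerical precision; with real matches, RANSAC (step 3.3) supplies the inlier set this solver runs on.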

3.5. Using the essential matrix E obtained in step 3.4, estimate the matching image for every image:

3.5a. Decompose E into a rotation R and a translation t to obtain the relative pose between the two cameras; by extension, the relative pose of any pair of cameras can be obtained.

3.5b. Select the first image and, from its relative pose with respect to the other images, choose the image pair with the smallest rotation and translation as its matching image.

3.5c. Take the matching image of the first image as the second image to be matched and repeat 3.5a and 3.5b, and so on, until the matching image of every image is determined.

3.6. Assume the camera matrix P0 of the first image is fixed and canonical; use the relative poses obtained in step 3.5a to derive the camera matrix P1 of the other image in each matching pair, and thereby obtain the camera matrices of all images.
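The decomposition used in step 3.5a can be illustrated directly. The standard SVD-based decomposition of an essential matrix yields four candidate (R, t) poses; the sketch below (our own minimal version, not the patent's code) builds E from a known pose and confirms that one candidate reproduces it:

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def decompose_E(E):
    """Return the four candidate (R, t) poses hidden in an essential matrix."""
    U, _, Vt = np.linalg.svd(E)
    # Keep the recovered rotations proper (det = +1).
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]  # translation direction, up to sign and scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# Build E = [t]_x R from a known pose; one candidate must reproduce +-E.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 0.0, 0.0])
E = skew(t_true) @ R_true
cands = decompose_E(E)
errs = [min(np.abs(skew(t) @ R - E).max(), np.abs(skew(t) @ R + E).max())
        for R, t in cands]
print(min(errs))
```

In practice the correct pose among the four is the one that places triangulated points in front of both cameras (the cheirality check), a selection the patent leaves implicit.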

4. Feature point extraction preserving contour texture topology:

4.1. Extract all contour texture features of the target objects in each image with the Canny edge detector; no hierarchy is established among the detected contours. The contour data of each image is stored in a two-dimensional container, with each contour saved in point data format.

4.2. Simplify each contour's point data in each image with the Douglas-Peucker algorithm (a threshold of 5 is recommended):

4.2a. Keep the simplified contour points as the feature points to be matched, tagging each point with the image and contour number it belongs to; the result for each image is stored in its own two-dimensional container.

4.2b. Keep the contour points from before simplification as the feature point library used for matching.
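Step 4.2's Douglas-Peucker simplification can be sketched in pure Python. The recursion and toy contour below are illustrative only; the patent recommends a threshold of 5 (in pixel units), while the toy example uses smaller numbers:

```python
import math

def douglas_peucker(points, tol):
    """Recursively simplify a polyline, keeping points that deviate more
    than tol from the chord between the current endpoints."""
    if len(points) <= 2:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    chord = math.hypot(dx, dy) or 1.0
    # Perpendicular distance of every interior point to the chord.
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / chord
             for x, y in points[1:-1]]
    k = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[k - 1] <= tol:
        return [points[0], points[-1]]  # all interior points are close: drop
    left = douglas_peucker(points[:k + 1], tol)
    right = douglas_peucker(points[k:], tol)
    return left[:-1] + right            # splice without duplicating the pivot

# A flat contour with one sharp corner: the corner survives simplification.
contour = [(0, 0), (1, 0.1), (2, -0.1), (3, 6), (4, 0.1), (5, 0)]
print(douglas_peucker(contour, tol=1.0))
```

This is exactly the property step 4.2a relies on: the surviving points are the geometrically salient vertices of each contour, evenly covering its shape rather than clustering in textured regions.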

5. Feature point matching and mismatch filtering:

5.1. Describe the feature points to be matched with the SIFT operator.

5.2. Feature point matching:

5.2a. Select the first image and determine its matching image from the image pairs obtained in step 3.5.

5.2b. Within a matching image pair, narrow the search range between the feature points to be matched on one image and the feature point library of the other image using the epipolar constraint determined by the essential matrix obtained in step 3.4.

5.2c. Perform the matching efficiently with FLANN.

5.3. Filter mismatches with the RANSAC algorithm.
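The epipolar narrowing of step 5.2b reduces, per feature point, to keeping only library points near the epipolar line l = F·x1 in the second image. A minimal sketch, with assumed pixel coordinates and a hand-built F corresponding to a pure x-translation stereo pair (our toy geometry, not the patent's data):

```python
import numpy as np

def epipolar_candidates(x1, pts2, F, max_dist=2.0):
    """Indices of points in pts2 within max_dist pixels of the epipolar
    line that point x1 induces in the second image."""
    l = F @ np.array([x1[0], x1[1], 1.0])   # line (a, b, c): ax + by + c = 0
    norm = np.hypot(l[0], l[1])
    d = np.abs(pts2 @ l[:2] + l[2]) / norm  # point-to-line distances
    return np.nonzero(d <= max_dist)[0]

# For F = [t]_x with t along x (pure horizontal translation, K = I scaled to
# pixels), the epipolar line of (u, v) is simply the horizontal line y = v.
F = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
pts2 = np.array([[10.0, 5.0], [20.0, 5.5], [30.0, 40.0]])
print(epipolar_candidates((7.0, 5.0), pts2, F))
```

Only the points near y = 5 survive, so the FLANN search of step 5.2c runs against a far smaller candidate set.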

6. Using the matched feature point pairs obtained in step 5, solve the 3D point cloud by triangulation:

6.1. Approximate one 3D point from two 2D points in a matching image pair, i.e. solve the 3D coordinates of the spatial point from the constraints between the camera matrices P obtained in step 3.6 and the matched point pairs obtained in step 5.3.

6.2. Loop step 6.1 over all matched point pairs to complete the triangulation and obtain the 3D point cloud reconstructed from the two images, which serves as the initial structure for the sequence-image reconstruction.

6.3. Add the remaining images to this initial structure one by one: find, among the remaining images, the image matching the second image and use it as the third reconstructed image, then repeat step 6.2 until the 3D point cloud of the whole sequence is obtained.
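The two-view solution of step 6.1 is commonly computed with the linear (DLT) construction below. This NumPy sketch, with K = I cameras of our own choosing, checks itself against a known 3D point:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera matrices; x1, x2: matched 2D points (u, v).
    Each image point contributes two rows of the homogeneous system A X = 0.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]  # null vector of A
    return X[:3] / X[3]          # dehomogenize

# Check against a known point with two simple cameras.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])        # P0 = [I | 0]
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # baseline in x
X_true = np.array([0.3, -0.2, 5.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))
```

Step 6.2 is then just this routine applied in a loop over all matched pairs of an image pair.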

7. Map the 2D contour texture topology carried by the feature points from step 4 onto the 3D point cloud, turning the unorganized point cloud into a classifiable point cloud with contour texture topology:

7.1. During the triangulation of step 6.1, whenever a 3D point is solved from two matched 2D points in two images, retain the image and contour information of the two feature points in the resulting 3D point; this completes the mapping from 2D topology to 3D topology.

7.2. Each point in the 3D point cloud obtained in step 7.1 records which two images it comes from and its contour number on each image, so the cloud can be classified accordingly:

7.2a. First-level classification of the point cloud by image number.

7.2b. Second-level classification of each first-level class by the contour number on the corresponding image.
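The two-level classification of steps 7.2a and 7.2b amounts to grouping the tagged points first by source-image pair and then by contour numbers. The record layout below is purely illustrative; the patent does not prescribe a data structure:

```python
from collections import defaultdict

# A reconstructed point carries its 3D coordinates plus the provenance that
# step 7.1 preserves: the two source images and the contour number on each.
# The field names here are our own, hypothetical choices.
points = [
    {"xyz": (0.1, 0.2, 5.0), "images": (0, 1), "contours": (3, 7)},
    {"xyz": (0.2, 0.2, 5.1), "images": (0, 1), "contours": (3, 7)},
    {"xyz": (1.0, 0.9, 4.8), "images": (0, 1), "contours": (4, 2)},
    {"xyz": (2.0, 1.5, 6.0), "images": (1, 2), "contours": (5, 1)},
]

# 7.2a: first-level classification by image pair;
# 7.2b: second-level classification by contour numbers within each pair.
classes = defaultdict(lambda: defaultdict(list))
for p in points:
    classes[p["images"]][p["contours"]].append(p["xyz"])

for imgs, by_contour in classes.items():
    for contours, pts in by_contour.items():
        print(imgs, contours, len(pts))
```

Each second-level group then corresponds to one physical contour (e.g. a door frame edge), which is what makes the constrained meshing of Fig. 4(c) possible.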

Comparing Fig. 2 with Fig. 3 shows that, relative to the conventional method, the point cloud obtained by the present invention is more evenly distributed and richer in detail around features such as door frames. Fig. 4(a) shows a mesh built without constraints, Fig. 4(b) the contour texture constraint information, and Fig. 4(c) a mesh built with contour texture constraints, demonstrating the advantage of the topology-preserving point cloud when building the model's surface mesh: the model is closer to the real scene and its accuracy is improved.

Claims (7)

1. a kind of three-dimensional point cloud acquisition methods for retaining topology information, it is characterised in that comprise the following steps:
Step 1, camera calibration, obtain camera internal parameter, preserve with a matrix type;
Step 2, the view data for obtaining by way of taking photo by plane around formula or low latitude target area;
Step 3, the view data to acquisition pre-process;
Step 4, the feature point extraction for retaining profile texture topological relation;
Step 5, Feature Points Matching and filtering error hiding;
Step 6, using the matching characteristic point pair obtained in step 5, three-dimensional point cloud is resolved according to triangulation method;
Step 4, is obtained the two-dimensional silhouette texture topology information included in characteristic point and is mapped to three-dimensional point cloud by step 7, will be without group The three-dimensional point cloud knitted is changed into three-dimensional point cloud classifiable, with profile texture topology information.
2. the three-dimensional point cloud acquisition methods according to claim 1 for retaining topology information, it is characterised in that:The step 3 In data prediction include:Image gray processing, Gauss denoising and photo alignment.
3. the three-dimensional point cloud acquisition methods according to claim 2 for retaining topology information, it is characterised in that the photo pair Comprise the following steps together:
Step 3.1, using FAST operator extraction characteristic points, and calculate description son;
Step 3.2, realize the matching of efficient feature point and bi-directional matching technology reduction error hiding using FLANN;
Step 3.3, utilize RANSAC filtering error hidings;
Step 3.4, using the characteristic point obtained in step 3.3 to resolving basis matrixes using 8 methods, with reference to existing in step 1 Camera calibration matrix obtain image pair essential matrix;
Step 3.5, the essential matrix using the image pair obtained in step 3.4, estimate the matching image of all images:
3.5a, by the way that the essential matrix of image pair is decomposed into rotation and translation two parts, obtain the position between two cameras Transformation relation, and so on can obtain the evolution relation of any pair of camera;
3.5b, choose first image, according to the image and other image relative position transformation relations, is revolved between selection image pair Turn and translate the matching image as the image of scale minimum;
3.5c, using the matching image of first image as second image to be matched and perform 3.5a and 3.5b, and so on Determine the matching image of all images;
Step 3.6, assume that the camera matrix of first image is fixed and is standard type, using the camera obtained in step 3.5a it Between position transformation relation obtain the camera matrix of matching another image of image pair, and obtain with this camera square of all images Battle array.
4. the three-dimensional point cloud acquisition methods according to claim 1 for retaining topology information, it is characterised in that the step 4 Include the following steps:
Step 4.1, utilize all profile textural characteristics of Target scalar in the every piece image of Canny Boundary extracting algorithms extraction, inspection The profile texture of survey does not establish hierarchical relationship, and the profile data texturing per piece image is all stored in two-dimensional container, wherein often Bar profile data texturing saves as point data form;
Step 4.2, using Douglas-Pu Ke algorithms simplify each profile texture point data of every piece image:
4.2a, retain the profile Texture Points after simplifying, and as characteristic point to be matched, and marks the image and profile volume belonging to it Number, the handling result per piece image is individually stored in a two-dimensional container;
4.2b, retain the profile Texture Points before simplifying, the characteristic point storehouse as Feature Points Matching.
5. the three-dimensional point cloud acquisition methods according to claim 1 for retaining topology information, it is characterised in that the step 5 Include the following steps:
Step 5.1, utilize to be matched characteristic point progress feature description of the SIFT operators to every piece image;
Step 5.2, Feature Points Matching:
5.2a, choose first image, according to the image pair to be matched obtained in step 3.5, determines the matching image of the image;
5.2b, the characteristic point to be matched in matching image pair a, image are with matching the step of the characteristic point Cooley on image " epipolar-line constraint " relation that essential matrix determines is obtained in rapid 3.4 and reduces search range;
5.2c, using FLANN realize efficient matchings;
Step 5.3, utilize RANSAC algorithms filtering error hiding.
6. the three-dimensional point cloud acquisition methods according to claim 3 for retaining topology information, it is characterised in that the step 6 Specifically include:
Step 6.1, at one match image pair by two two-dimensional points come an approximate three-dimensional point, i.e., using in step 3.6 Restriction relation between the matching characteristic point pair obtained in the camera matrix and step 5.3 of acquisition, resolves the three-dimensional seat of spatial point Mark;
Step 6.2, by a circulation perform matching double points step 6.1 operation to realize complete triangulation method, obtains two The three-dimensional point cloud of width image reconstruction, in this, as the initial configuration of three-dimensional reconstruction of sequence image;
Step 6.3, add remaining image in this initial configuration one by one, i.e., is found in remaining image and second figure As image of the matched image as the 3rd reconstruction, step 6.2 is repeated, you can obtain the three-dimensional point cloud of sequence image.
7. The three-dimensional point cloud acquisition method for retaining topology information according to claim 6, characterised in that step 7 specifically includes:
7.1, during the triangulation solution of step 6.1, when one three-dimensional point coordinate is solved from two matched two-dimensional points in the two images, retain the image and contour information of the two feature points in the computed three-dimensional point, thereby completing the mapping from two-dimensional topology information to three-dimensional topology information;
7.2, in the three-dimensional point cloud obtained in step 7.1, each three-dimensional point records which two images it comes from and its specific contour number on each image, so the point cloud can be classified accordingly:
7.2a, perform first-level classification of the point cloud according to the image number of each three-dimensional point;
7.2b, perform second-level classification of the first-level-classified point cloud according to the contour number of each three-dimensional point on its image.
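The two-level classification of 7.2a/7.2b can be sketched as a nested grouping: first by source image number, then by contour number. The record layout `(xyz, image_id, contour_id)` is an assumed simplification of the topology tags carried by each reconstructed point:

```python
from collections import defaultdict

def classify(points):
    """First-level grouping by source image number, second-level grouping
    by the contour number the feature carried on that image.
    Each point record: (xyz, image_id, contour_id)."""
    groups = defaultdict(lambda: defaultdict(list))
    for xyz, image_id, contour_id in points:
        groups[image_id][contour_id].append(xyz)
    return groups

cloud = [((0.1, 0.2, 1.0), 0, 3),
         ((0.4, 0.1, 1.2), 0, 3),
         ((2.0, 1.0, 0.5), 1, 7)]
g = classify(cloud)
print(sorted(g.keys()))   # -> [0, 1]
print(len(g[0][3]))       # -> 2
```

Points sharing an image and contour number then form coherent subsets, which is what makes the retained topology usable by downstream meshing or segmentation.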
CN201711178471.1A 2017-11-23 2017-11-23 Three-dimensional point cloud obtaining method capable of retaining topology information Expired - Fee Related CN108010123B (en)


Publications (2)

Publication Number Publication Date
CN108010123A (en) 2018-05-08
CN108010123B (en) 2021-02-09

Family

ID=62053322


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734654A (en) * 2018-05-28 2018-11-02 深圳市易成自动驾驶技术有限公司 Mapping and localization method, system, and computer-readable storage medium
CN108765574A (en) * 2018-06-19 2018-11-06 北京智明星通科技股份有限公司 3D scene simulation method and system and computer-readable storage medium
CN109472802A (en) * 2018-11-26 2019-03-15 东南大学 A Surface Mesh Model Construction Method Based on Edge Feature Self-Constraint
CN109598783A (en) * 2018-11-20 2019-04-09 西南石油大学 Room 3D modeling method and furniture 3D preview system
CN109816771A (en) * 2018-11-30 2019-05-28 西北大学 A method for automatic reorganization of cultural relic fragments combining feature point topology and geometric constraints
CN109951342A (en) * 2019-04-02 2019-06-28 上海交通大学 3D Matrix Topological Representation of Spatial Information Network and Implementation Method of Routing Traversal Optimization
CN110443785A (en) * 2019-07-18 2019-11-12 太原师范学院 Feature extraction method for three-dimensional point clouds under persistent homology
CN111325854A (en) * 2018-12-17 2020-06-23 三菱重工业株式会社 Shape model correction device, shape model correction method, and storage medium
WO2021160071A1 (en) * 2020-02-11 2021-08-19 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Feature spatial distribution management for simultaneous localization and mapping
CN114494595A (en) * 2022-01-19 2022-05-13 上海识装信息科技有限公司 Biological model construction method and three-dimensional size measurement method
CN118154460A (en) * 2024-05-11 2024-06-07 成都大学 Processing method of three-dimensional point cloud data of asphalt pavement

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070183653A1 (en) * 2006-01-31 2007-08-09 Gerard Medioni 3D Face Reconstruction from 2D Images
CN102074015A (en) * 2011-02-24 2011-05-25 哈尔滨工业大学 Two-dimensional image sequence based three-dimensional reconstruction method of target
CN104952075A (en) * 2015-06-16 2015-09-30 浙江大学 Laser scanning three-dimensional model-oriented multi-image automatic texture mapping method
CN106651942A (en) * 2016-09-29 2017-05-10 苏州中科广视文化科技有限公司 Three-dimensional rotation and motion detecting and rotation axis positioning method based on feature points




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210209