CN108010123A - Three-dimensional point cloud acquisition method retaining topology information - Google Patents
- Publication number
- CN108010123A CN108010123A CN201711178471.1A CN201711178471A CN108010123A CN 108010123 A CN108010123 A CN 108010123A CN 201711178471 A CN201711178471 A CN 201711178471A CN 108010123 A CN108010123 A CN 108010123A
- Authority
- CN
- China
- Prior art keywords
- image
- point cloud
- dimensional
- matching
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G — PHYSICS
- G06 — COMPUTING; CALCULATING OR COUNTING
- G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T3/00 — Geometric image transformations in the plane of the image
- G06T3/08 — Projecting images onto non-planar surfaces, e.g. geodetic screens
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a three-dimensional point cloud acquisition method that retains topological information. First, images are captured with a camera by circling the target or by low-altitude aerial photography, and the images undergo preprocessing such as grayscale conversion, Gaussian denoising, and photo alignment. Next, feature points that retain topological information are extracted and matched. Finally, the three-dimensional point cloud is solved and the two-dimensional topological relationships are mapped into three-dimensional space; the resulting point cloud data can be used to build a three-dimensional model. Compared with current sequence-image-based 3D point cloud acquisition methods, the invention yields a uniformly distributed point cloud that carries its own 3D topological information, which significantly improves the accuracy of the constructed 3D model.
Description
Technical Field
The invention relates to image processing in the field of computer vision and to three-dimensional point cloud reconstruction from sequence images, and in particular to a three-dimensional point cloud acquisition method that preserves topological information.
Background Art
In computer vision, three-dimensional reconstruction is the process of recovering 3D information from single-view or multi-view images. Because a single view carries incomplete information, reconstruction from it must rely on prior knowledge. Multi-view reconstruction (analogous to human binocular vision) is comparatively straightforward: the camera is first calibrated to obtain its intrinsic parameters, the camera's motion parameters are then solved from matched feature-point pairs, and combining the two yields the relationship between the camera's image coordinate system and the world coordinate system, from which 3D information is finally reconstructed from multiple 2D images.
Three-dimensional point cloud acquisition is the key technique, and the main difficulty, of multi-view 3D reconstruction; the quality of the point cloud determines the accuracy of the 3D model subsequently built from it. Existing acquisition pipelines comprise image preprocessing, feature point extraction and matching, and point cloud solving, of which feature extraction and matching is the most resource-intensive part and the focus of ongoing optimization research. Existing feature extraction algorithms such as SIFT, SURF, and ORB cope well with changes in image scale and rotation, illumination changes, and image deformation. However, the feature points they extract are redundant, unevenly distributed, and carry no two-dimensional topological information, which compromises the success and accuracy of the subsequent 3D model construction.
Summary of the Invention
Purpose of the invention: in view of the limitations of the prior art described above, the purpose of the present invention is to provide a three-dimensional point cloud acquisition method that retains contour-texture topological information, remedying the redundancy, uneven distribution, and absence of contour-texture topology in existing point cloud data, so that topology-constrained 3D models of higher accuracy can subsequently be built.
Technical solution: a three-dimensional point cloud acquisition method that retains topological information. The main flow is as follows: first, images are captured with a camera by circling the target or by low-altitude aerial photography, and the images undergo preprocessing such as grayscale conversion, Gaussian denoising, and photo alignment; next, feature points that retain topological information are extracted and matched; finally, the three-dimensional point cloud is solved and the two-dimensional topological relationships are mapped into three-dimensional space, and the resulting point cloud data can be used to build a three-dimensional model. For point cloud acquisition from sequence images, the method provided by the invention comprises the following steps:
1. Camera calibration: obtain the camera's intrinsic parameters and store them as a matrix.
2. Capture image data of the target area by circling photography or low-altitude aerial photography.
3. Preprocess the acquired images with grayscale conversion, Gaussian denoising, and photo alignment; photo alignment proceeds as follows:
3.1. Extract feature points with the FAST operator and compute their descriptors;
3.2. Match feature points efficiently with FLANN, using bidirectional matching to reduce mismatches;
3.3. Filter mismatches with RANSAC;
3.4. Solve the fundamental matrix from the feature-point pairs of step 3.3 with the eight-point algorithm, and combine it with the camera calibration matrix from step 1 to obtain the essential matrix of the image pair;
3.5. Using the essential matrix from step 3.4, estimate the matching image of every image:
3.5a. Decompose the essential matrix of an image pair into a rotation and a translation to obtain the relative pose between the two cameras; by analogy, the relative pose of any pair of cameras can be obtained;
3.5b. Take the first image and, from its relative poses with respect to the other images, select as its matching image the one with the smallest rotation and translation;
3.5c. Take the matching image of the first image as the second image to be matched and repeat 3.5a and 3.5b, and so on, until the matching images of all images are determined;
3.6. Assume the camera matrix of the first image is fixed and canonical; use the relative poses from step 3.5a to derive the camera matrix of the other image in each matching pair, and thereby obtain the camera matrices of all images.
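Steps 3.4–3.5a can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the patent's implementation: it builds an essential matrix E = [t]×R from a known synthetic relative pose and recovers the four candidate (R, t) decompositions via SVD. In practice E would come from the eight-point fundamental matrix F and the calibration K (E = KᵀFK).

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def decompose_essential(E):
    """Return the four candidate (R, t) pairs of an essential matrix."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:   # enforce proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# Synthetic relative pose: small rotation about z plus a translation.
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta), np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, 0.2, 0.0])
t_true /= np.linalg.norm(t_true)   # E fixes t only up to scale
E = skew(t_true) @ R_true
candidates = decompose_essential(E)
```

In step 3.5a the correct candidate among the four is the one that places triangulated points in front of both cameras (the cheirality check, not shown here).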
4. Feature point extraction preserving the contour-texture topological relationships:
4.1. Extract all contour-texture features of the target objects in each image with the Canny edge detector; no hierarchy is established among the detected contours, the contour data of each image is stored in a two-dimensional container, and each contour is saved as a sequence of points;
4.2. Simplify each contour of each image with the Douglas-Peucker algorithm:
4.2a. Keep the simplified contour points as the feature points to be matched, labeling each point with the image and contour it belongs to; the result for each image is stored in its own two-dimensional container;
4.2b. Keep the contour points from before the simplification as the feature-point library used for matching.
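Step 4.2 can be illustrated with a self-contained Douglas-Peucker pass. This is a sketch only: the patent does not prescribe an implementation (a library routine such as OpenCV's `approxPolyDP` would normally be used), and the tolerance value and the labeling fields here are illustrative assumptions.

```python
import numpy as np

def douglas_peucker(points, tol):
    """Recursively simplify a polyline: keep a point only if it deviates
    from the chord between the endpoints by more than tol."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    d = points - start
    if norm == 0:
        dists = np.linalg.norm(d, axis=1)
    else:
        # Perpendicular distance of every point to the chord.
        dists = np.abs(chord[0] * d[:, 1] - chord[1] * d[:, 0]) / norm
    i = int(np.argmax(dists))
    if dists[i] <= tol:
        return np.array([start, end])     # nothing sticks out: endpoints only
    left = douglas_peucker(points[:i + 1], tol)
    right = douglas_peucker(points[i:], tol)
    return np.vstack([left[:-1], right])  # drop the duplicated split point

# One contour of one image; label each kept point with (image_id, contour_id).
contour = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 0.1), (5, 0)]
simplified = douglas_peucker(contour, tol=1.0)
labeled = [{"xy": tuple(p), "image_id": 0, "contour_id": 0} for p in simplified]
```

The labeled points correspond to step 4.2a's "feature points to be matched"; the raw `contour` plays the role of step 4.2b's feature-point library.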
5. Feature point matching and mismatch filtering:
5.1. Describe the feature points to be matched with the SIFT descriptor;
5.2. Feature point matching:
5.2a. Take the first image and determine its matching image from the image pairs obtained in step 3.5;
5.2b. Within a matching image pair, narrow the search range between the feature points to be matched on one image and the feature-point library of the other image using the epipolar constraint determined by the essential matrix obtained in step 3.4;
5.2c. Perform efficient matching with FLANN;
5.3. Filter mismatches with the RANSAC algorithm.
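The descriptor matching of step 5.2c can be sketched with a brute-force nearest-neighbour search plus Lowe's ratio test as a first mismatch filter. FLANN computes the same nearest neighbours approximately but much faster; the descriptors below are synthetic stand-ins for the SIFT descriptors of step 5.1, and the ratio value is an illustrative choice.

```python
import numpy as np

def match_ratio_test(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test: accept a match
    only if the best neighbour is clearly better than the second best."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(0)
desc_b = rng.normal(size=(10, 128))   # feature-point library of the matching image
# Three noisy copies of library descriptors: the points we expect to match.
desc_a = desc_b[[2, 5, 7]] + 0.01 * rng.normal(size=(3, 128))
matches = match_ratio_test(desc_a, desc_b)
```

The surviving matches would then still pass through the RANSAC filter of step 5.3, which removes pairs inconsistent with a single epipolar geometry.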
6. Solve the three-dimensional point cloud by triangulation from the matched feature-point pairs obtained in step 5:
6.1. Approximate one 3D point from two 2D points of a matching image pair, i.e., solve the spatial point's 3D coordinates from the constraints between the camera matrices obtained in step 3.6 and the matched feature-point pairs of step 5.3;
6.2. Apply step 6.1 to all matched point pairs in a loop to complete the triangulation and obtain the 3D point cloud reconstructed from the two images, which serves as the initial structure of the sequence-image reconstruction;
6.3. Add the remaining images to this initial structure one by one: among the remaining images, find the image matching the second image and take it as the third image to reconstruct, then repeat step 6.2 until the 3D point cloud of the whole image sequence is obtained.
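Step 6.1 (two 2D points, one 3D point) can be sketched with the standard linear (DLT) triangulation. The camera matrices below are synthetic stand-ins for the P matrices of step 3.6, used only to verify the solver on a noise-free point.

```python
import numpy as np

def triangulate(P0, P1, x0, x1):
    """Linear (DLT) triangulation: each observation x = P X contributes
    two rows to A; the 3D point is the null vector of A."""
    A = np.vstack([
        x0[0] * P0[2] - P0[0],
        x0[1] * P0[2] - P0[1],
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Synthetic rig: canonical first camera, second camera offset along x.
P0 = np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, -0.2, 4.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_est = triangulate(P0, P1, project(P0, X_true), project(P1, X_true))
```

With noisy matches the null vector minimizes the algebraic error; a nonlinear refinement would follow in a production pipeline.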
7. Map the two-dimensional contour-texture topological information carried by the feature points of step 4 onto the three-dimensional point cloud, turning an unorganized point cloud into a classifiable point cloud with contour-texture topological information:
7.1. During the triangulation of step 6.1, when the coordinates of a 3D point are solved from two matched 2D points of the two images, the image and contour labels of the two feature points are carried over into the solved 3D point, completing the mapping from 2D to 3D topological information;
7.2. In the point cloud obtained in step 7.1, every 3D point records which two images it comes from and its contour number on those images, so the cloud can be classified accordingly:
7.2a. First-level classification of the resulting point cloud by the 3D point's image number;
7.2b. Second-level classification of the first-level classes by the 3D point's contour number on the corresponding image.
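Step 7.1's bookkeeping amounts to carrying each 2D point's provenance through the solver. A minimal sketch, with illustrative field names (not from the patent) and a stub standing in for the triangulation of step 6.1:

```python
from dataclasses import dataclass

@dataclass
class LabeledPoint2D:
    x: float
    y: float
    image_id: int
    contour_id: int

@dataclass
class LabeledPoint3D:
    xyz: tuple
    sources: tuple   # ((image_id, contour_id), (image_id, contour_id))

def solve_labeled(p, q, solve_xyz):
    """Triangulate a matched pair and keep both points' labels,
    completing the 2D -> 3D topology mapping of step 7.1."""
    return LabeledPoint3D(
        xyz=solve_xyz((p.x, p.y), (q.x, q.y)),
        sources=((p.image_id, p.contour_id), (q.image_id, q.contour_id)),
    )

# Stub solver standing in for the real triangulation.
stub = lambda a, b: ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2, 1.0)
pt = solve_labeled(LabeledPoint2D(10, 20, 0, 3),
                   LabeledPoint2D(12, 19, 1, 7), stub)
```

Because `sources` survives the solve, the two-level classification of step 7.2 needs no extra lookups.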
Beneficial effects: compared with traditional sequence-image-based 3D point cloud acquisition, the method supports circling and strip-wise, ordered and unordered photography; the extracted feature points are evenly distributed, and matching efficiency is improved. Most importantly, the method retains the two-dimensional topological information and maps it onto the three-dimensional point cloud, which can then be used to build topology-constrained 3D models of higher accuracy.
Brief Description of the Drawings
Fig. 1 is a flow chart of the method of the invention;
Fig. 2 shows the 3D point cloud of a small house reconstructed by a conventional point cloud acquisition method;
Fig. 3 shows the 3D point cloud of the same house reconstructed by the topology-preserving acquisition method of the invention;
Fig. 4(a) shows mesh construction from an unconstrained point cloud;
Fig. 4(b) illustrates the contour-texture constraint information;
Fig. 4(c) shows mesh construction from the point cloud with contour-texture constraints.
Detailed Description of Embodiments
Fig. 1 shows the main flow of the topology-preserving 3D point cloud acquisition method of the invention. The point cloud data obtained with this method can be used to subsequently build topology-constrained 3D models of higher accuracy. Taking the acquisition of the 3D point cloud of a house as an example, the steps are described in detail with reference to Fig. 1:
1. Camera calibration: obtain the camera's intrinsic parameters and store them as a matrix K.
2. Capture image data of the target area by circling photography.
3. Preprocess the acquired images with grayscale conversion, Gaussian denoising, and photo alignment; photo alignment proceeds as follows:
3.1. Extract feature points with the FAST operator (a detection threshold of 20 and a cap of 1000 extracted points are recommended) and compute their descriptors;
3.2. Match feature points efficiently with FLANN, using bidirectional matching to reduce mismatches;
3.3. Filter mismatches with RANSAC;
3.4. Solve the fundamental matrix F from the feature-point pairs of step 3.3 with the eight-point algorithm, and combine it with the calibration matrix K from step 1 to obtain the essential matrix E of the image pair;
3.5. Using the essential matrix E from step 3.4, estimate the matching image of every image:
3.5a. Decompose E into a rotation R and a translation t to obtain the relative pose between the two cameras; by analogy, the relative pose of any pair of cameras can be obtained;
3.5b. Take the first image and, from its relative poses with respect to the other images, select as its matching image the one with the smallest rotation and translation;
3.5c. Take the matching image of the first image as the second image to be matched and repeat 3.5a and 3.5b, and so on, until the matching images of all images are determined;
3.6. Assume the camera matrix P0 of the first image is fixed and canonical; use the relative poses from step 3.5a to derive the camera matrix P1 of the other image in each matching pair, and thereby obtain the camera matrices of all images.
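The bidirectional matching of step 3.2 keeps a match only if it is mutual: point a's nearest neighbour must be b, and b's nearest neighbour must be a. A sketch with toy descriptors (a brute-force search stands in for FLANN; the FAST threshold of 20 and the 1000-point cap mentioned above concern detection and are not shown here):

```python
import numpy as np

def cross_check(desc_a, desc_b):
    """Keep (i, j) only when j is i's nearest neighbour in B
    and i is j's nearest neighbour in A (mutual best match)."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    best_ab = d.argmin(axis=1)   # for each descriptor in A, nearest in B
    best_ba = d.argmin(axis=0)   # for each descriptor in B, nearest in A
    return [(i, int(j)) for i, j in enumerate(best_ab) if best_ba[j] == i]

rng = np.random.default_rng(1)
desc_b = rng.normal(size=(8, 32))
# Two near-copies of library descriptors plus one unrelated descriptor.
desc_a = np.vstack([desc_b[3] + 0.01, desc_b[6] - 0.01, rng.normal(size=32)])
matches = cross_check(desc_a, desc_b)
```

The mutual-best criterion discards the asymmetric matches that one-directional nearest-neighbour search would accept, which is why the patent pairs it with FLANN before the RANSAC filter of step 3.3.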
4. Feature point extraction preserving the contour-texture topological relationships:
4.1. Extract all contour-texture features of the target objects in each image with the Canny edge detector; no hierarchy is established among the detected contours, the contour data of each image is stored in a two-dimensional container, and each contour is saved as a sequence of points;
4.2. Simplify each contour of each image with the Douglas-Peucker algorithm (a threshold of 5 is recommended):
4.2a. Keep the simplified contour points as the feature points to be matched, labeling each point with the image and contour it belongs to; the result for each image is stored in its own two-dimensional container;
4.2b. Keep the contour points from before the simplification as the feature-point library used for matching.
5. Feature point matching and mismatch filtering:
5.1. Describe the feature points to be matched with the SIFT descriptor;
5.2. Feature point matching:
5.2a. Take the first image and determine its matching image from the image pairs obtained in step 3.5;
5.2b. Within a matching image pair, narrow the search range between the feature points to be matched on one image and the feature-point library of the other image using the epipolar constraint determined by the essential matrix obtained in step 3.4;
5.2c. Perform efficient matching with FLANN;
5.3. Filter mismatches with the RANSAC algorithm.
6. Solve the three-dimensional point cloud by triangulation from the matched feature-point pairs obtained in step 5:
6.1. Approximate one 3D point from two 2D points of a matching image pair, i.e., solve the spatial point's 3D coordinates from the constraints between the camera matrices P obtained in step 3.6 and the matched feature-point pairs of step 5.3;
6.2. Apply step 6.1 to all matched point pairs in a loop to complete the triangulation and obtain the 3D point cloud reconstructed from the two images, which serves as the initial structure of the sequence-image reconstruction;
6.3. Add the remaining images to this initial structure one by one: among the remaining images, find the image matching the second image and take it as the third image to reconstruct, then repeat step 6.2 until the 3D point cloud of the whole image sequence is obtained.
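The incremental loop of steps 6.2–6.3 can be outlined as follows. The chain-of-pairs bookkeeping is an illustrative assumption: `match_of` stands for the matching-image relation from step 3.5, and `triangulate_pair` is a stand-in for the per-pair triangulation of steps 6.1–6.2.

```python
def reconstruct_sequence(images, match_of, triangulate_pair):
    """Build the cloud from the first matching pair (the initial
    structure), then add the remaining images one pair at a time."""
    cloud = []
    current = images[0]
    used = {current}
    while True:
        nxt = match_of.get(current)
        if nxt is None or nxt in used:
            break
        cloud.extend(triangulate_pair(current, nxt))  # step 6.2 on this pair
        used.add(nxt)
        current = nxt   # step 6.3: the matched image becomes the next base
    return cloud

# Toy data: three images in a chain, two fake points per reconstructed pair.
match_of = {"img0": "img1", "img1": "img2"}
fake = lambda a, b: [(a, b, k) for k in range(2)]
cloud = reconstruct_sequence(["img0", "img1", "img2"], match_of, fake)
```

A production pipeline would additionally merge points observed in more than one pair (feature tracks) and run bundle adjustment after each added image; neither refinement is claimed by the patent text above.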
7. Map the two-dimensional contour-texture topological information carried by the feature points of step 4 onto the three-dimensional point cloud, turning an unorganized point cloud into a classifiable point cloud with contour-texture topological information:
7.1. During the triangulation of step 6.1, when the coordinates of a 3D point are solved from two matched 2D points of the two images, the image and contour labels of the two feature points are carried over into the solved 3D point, completing the mapping from 2D to 3D topological information;
7.2. In the point cloud obtained in step 7.1, every 3D point records which two images it comes from and its contour number on those images, so the cloud can be classified accordingly:
7.2a. First-level classification of the resulting point cloud by the 3D point's image number;
7.2b. Second-level classification of the first-level classes by the 3D point's contour number on the corresponding image.
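The two-level classification of steps 7.2a–7.2b is a plain grouping of labeled points: first by image number, then by contour number. A sketch with illustrative record fields (each point here carries one source label for brevity, whereas the patent's points record two):

```python
from collections import defaultdict

def classify(points):
    """First-level grouping by image id, second-level by contour id."""
    classes = defaultdict(lambda: defaultdict(list))
    for p in points:
        image_id, contour_id = p["source"]
        classes[image_id][contour_id].append(p["xyz"])
    return classes

points = [
    {"xyz": (0, 0, 1), "source": (0, 0)},
    {"xyz": (1, 0, 1), "source": (0, 0)},
    {"xyz": (0, 1, 2), "source": (0, 1)},
    {"xyz": (2, 2, 2), "source": (1, 0)},
]
classes = classify(points)
```

The resulting structure (`classes[image][contour]`) is exactly what a topology-constrained meshing step can consume: every second-level class is the 3D trace of one image contour.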
The comparison of Fig. 2 and Fig. 3 shows that, relative to the conventional method, the point cloud obtained with the invention is more evenly distributed and richer in detail at features such as door frames. Fig. 4(a) shows mesh construction from an unconstrained point cloud, Fig. 4(b) the contour-texture constraint information, and Fig. 4(c) mesh construction with contour-texture constraints, demonstrating the advantage of the topology-preserving point cloud when building the model's surface mesh: the model is closer to the real scene, and its accuracy is improved.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711178471.1A CN108010123B (en) | 2017-11-23 | 2017-11-23 | Three-dimensional point cloud obtaining method capable of retaining topology information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711178471.1A CN108010123B (en) | 2017-11-23 | 2017-11-23 | Three-dimensional point cloud obtaining method capable of retaining topology information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108010123A true CN108010123A (en) | 2018-05-08 |
CN108010123B CN108010123B (en) | 2021-02-09 |
Family
ID=62053322
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711178471.1A Expired - Fee Related CN108010123B (en) | 2017-11-23 | 2017-11-23 | Three-dimensional point cloud obtaining method capable of retaining topology information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108010123B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108734654A (en) * | 2018-05-28 | 2018-11-02 | 深圳市易成自动驾驶技术有限公司 | It draws and localization method, system and computer readable storage medium |
CN108765574A (en) * | 2018-06-19 | 2018-11-06 | 北京智明星通科技股份有限公司 | 3D scenes intend true method and system and computer readable storage medium |
CN109472802A (en) * | 2018-11-26 | 2019-03-15 | 东南大学 | A Surface Mesh Model Construction Method Based on Edge Feature Self-Constraint |
CN109598783A (en) * | 2018-11-20 | 2019-04-09 | 西南石油大学 | A kind of room 3D modeling method and furniture 3D prebrowsing system |
CN109816771A (en) * | 2018-11-30 | 2019-05-28 | 西北大学 | A method for automatic reorganization of cultural relic fragments combining feature point topology and geometric constraints |
CN109951342A (en) * | 2019-04-02 | 2019-06-28 | 上海交通大学 | 3D Matrix Topological Representation of Spatial Information Network and Implementation Method of Routing Traversal Optimization |
CN110443785A (en) * | 2019-07-18 | 2019-11-12 | 太原师范学院 | The feature extracting method of three-dimensional point cloud under a kind of lasting people having the same aspiration and interest |
CN111325854A (en) * | 2018-12-17 | 2020-06-23 | 三菱重工业株式会社 | Shape model correction device, shape model correction method, and storage medium |
WO2021160071A1 (en) * | 2020-02-11 | 2021-08-19 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Feature spatial distribution management for simultaneous localization and mapping |
CN114494595A (en) * | 2022-01-19 | 2022-05-13 | 上海识装信息科技有限公司 | Biological model construction method and three-dimensional size measurement method |
CN118154460A (en) * | 2024-05-11 | 2024-06-07 | 成都大学 | Processing method of three-dimensional point cloud data of asphalt pavement |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070183653A1 (en) * | 2006-01-31 | 2007-08-09 | Gerard Medioni | 3D Face Reconstruction from 2D Images |
CN102074015A (en) * | 2011-02-24 | 2011-05-25 | 哈尔滨工业大学 | Two-dimensional image sequence based three-dimensional reconstruction method of target |
CN104952075A (en) * | 2015-06-16 | 2015-09-30 | 浙江大学 | Laser scanning three-dimensional model-oriented multi-image automatic texture mapping method |
CN106651942A (en) * | 2016-09-29 | 2017-05-10 | 苏州中科广视文化科技有限公司 | Three-dimensional rotation and motion detecting and rotation axis positioning method based on feature points |
- 2017-11-23: Application CN201711178471.1A filed in China; granted as CN108010123B (status: not active, Expired - Fee Related)
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108734654A (en) * | 2018-05-28 | 2018-11-02 | 深圳市易成自动驾驶技术有限公司 | It draws and localization method, system and computer readable storage medium |
CN108765574A (en) * | 2018-06-19 | 2018-11-06 | 北京智明星通科技股份有限公司 | 3D scenes intend true method and system and computer readable storage medium |
CN109598783A (en) * | 2018-11-20 | 2019-04-09 | 西南石油大学 | A kind of room 3D modeling method and furniture 3D prebrowsing system |
CN109472802A (en) * | 2018-11-26 | 2019-03-15 | 东南大学 | A Surface Mesh Model Construction Method Based on Edge Feature Self-Constraint |
CN109472802B (en) * | 2018-11-26 | 2021-10-19 | 东南大学 | A Surface Mesh Model Construction Method Based on Edge Feature Self-Constraint |
CN109816771B (en) * | 2018-11-30 | 2022-11-22 | 西北大学 | Cultural relic fragment automatic recombination method combining feature point topology and geometric constraint |
CN109816771A (en) * | 2018-11-30 | 2019-05-28 | 西北大学 | A method for automatic reorganization of cultural relic fragments combining feature point topology and geometric constraints |
CN111325854A (en) * | 2018-12-17 | 2020-06-23 | 三菱重工业株式会社 | Shape model correction device, shape model correction method, and storage medium |
CN111325854B (en) * | 2018-12-17 | 2023-10-24 | 三菱重工业株式会社 | Shape model correction device, shape model correction method, and storage medium |
CN109951342A (en) * | 2019-04-02 | 2019-06-28 | 上海交通大学 | 3D Matrix Topological Representation of Spatial Information Network and Implementation Method of Routing Traversal Optimization |
CN109951342B (en) * | 2019-04-02 | 2021-05-11 | 上海交通大学 | 3D Matrix Topological Representation of Spatial Information Network and Implementation Method of Routing Traversal Optimization |
CN110443785A (en) * | 2019-07-18 | 2019-11-12 | 太原师范学院 | The feature extracting method of three-dimensional point cloud under a kind of lasting people having the same aspiration and interest |
WO2021160071A1 (en) * | 2020-02-11 | 2021-08-19 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Feature spatial distribution management for simultaneous localization and mapping |
CN114494595A (en) * | 2022-01-19 | 2022-05-13 | 上海识装信息科技有限公司 | Biological model construction method and three-dimensional size measurement method |
CN118154460A (en) * | 2024-05-11 | 2024-06-07 | 成都大学 | Processing method of three-dimensional point cloud data of asphalt pavement |
Also Published As
Publication number | Publication date |
---|---|
CN108010123B (en) | 2021-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108010123A (en) | A kind of three-dimensional point cloud acquisition methods for retaining topology information | |
CN110443836B (en) | A method and device for automatic registration of point cloud data based on plane features | |
CN109410321B (en) | Three-dimensional reconstruction method based on convolutional neural network | |
CN103106688B (en) | Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering | |
CN108038905B (en) | A kind of Object reconstruction method based on super-pixel | |
US10217293B2 (en) | Depth camera-based human-body model acquisition method and network virtual fitting system | |
WO2022178952A1 (en) | Target pose estimation method and system based on attention mechanism and hough voting | |
Dall'Asta et al. | A comparison of semiglobal and local dense matching algorithms for surface reconstruction | |
CN104392426B (en) | Adaptive marker-free automatic stitching method for three-dimensional point clouds |
CN104850850B (en) | Binocular stereo vision image feature extraction method combining shape and color |
CN110084243B (en) | File identification and positioning method based on two-dimensional code and monocular camera | |
CN102222357B (en) | Foot Shape 3D Surface Reconstruction Method Based on Image Segmentation and Mesh Subdivision | |
CN104463108A (en) | Monocular real-time target recognition and pose measurement method | |
CN101908231A (en) | Method and system for processing 3D point cloud reconstruction of scenes containing principal planes | |
CN104376596A (en) | Method for modeling and registering three-dimensional scene structures on basis of single image | |
CN108229416A (en) | Robot SLAM methods based on semantic segmentation technology | |
CN111998862B (en) | BNN-based dense binocular SLAM method | |
CN108280858A (en) | Linear global camera motion parameter estimation method in multi-view reconstruction |
CN102446356A (en) | Parallel self-adaptive matching method for obtaining remote sensing images with uniformly distributed matching points | |
CN117115359A (en) | Multi-view power grid three-dimensional space data reconstruction method based on depth map fusion | |
Hirner et al. | FC-DCNN: A densely connected neural network for stereo estimation | |
CN101661623B (en) | Three-dimensional tracking method of deformable body based on linear programming | |
Jisen | A study on target recognition algorithm based on 3D point cloud and feature fusion | |
CN110135474A (en) | Oblique aerial image matching method and system based on deep learning |
US20220198707A1 (en) | Method and apparatus with object pose estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20210209