CN106023230B - A dense matching method suitable for deformed images - Google Patents
- Publication number: CN106023230B (application CN201610390400.7A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- image
- right image
- new
- matching
- Prior art date: 2016-06-02
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Abstract
A dense matching method suitable for deformed images, belonging to the technical field of image matching. The method comprises: step 1, manually extracting pairs of matching feature points in the left image and the right image respectively; step 2, correcting the relative deformation of the original right image with a quadratic polynomial to obtain a corrected right image; step 3, performing dense matching between the left image and the corrected right image; and step 4, determining the pixel coordinates, in the original right image, of the pixel matching each pixel of the left image. The invention applies polynomial correction together with a coordinate-correspondence preservation mechanism to the dense matching of deformed images. It can be used for image matching under a variety of conditions, obtains good matching results even when the images are strongly deformed, and offers a new approach to the dense matching of multi-source images. A relative-disparity-constrained search area in the corrected image narrows the search range for matching points, improves matching efficiency, and increases the reliability of the matching results.
Description
Technical Field
The invention belongs to the technical field of image matching, and in particular relates to a dense matching method suitable for deformed images.
Background Art
For matching between deformed image pairs, domestic scholars have carried out considerable research on feature matching. In 2010, Yang Heng et al. proposed a new local invariant feature detection and description algorithm: Harris corners are first extracted at every scale level of the image; the extrema of the three-dimensional scale space are then searched within a fixed-size window centered on each Harris corner to obtain the position and characteristic scale of each local feature point; finally a dominant orientation is computed for every feature point, and gradient distance and orientation histograms describe the local features to complete the feature matching. In 2012, Chen Mengting et al. proposed a high-resolution remote sensing image matching algorithm based on Harris corners and SIFT descriptors: Harris corners are extracted, the resulting points are given SIFT descriptions, and matching is completed with the nearest-neighbor to second-nearest-neighbor ratio test. Also in 2012, Zhao Xi'an et al. proposed an automatic stereo image matching method with scale and rotation invariance: a three-scale feature point operator built on directional wavelet transforms performs two-scale matching to guarantee scale invariance, a 64-dimensional descriptor vector is then constructed for each feature point to resolve the rotation invariance of image matching, and the feature matching is finally completed. In 2013, Ye Yuanxin et al. proposed a multi-source remote sensing image matching method combining SIFT with edge information: feature points are detected in the difference-of-Gaussian scale space, reliable edge information is extracted with phase congruency, the feature points are described by combining an improved SIFT with shape context, and the Euclidean distance serves as the similarity measure to obtain corresponding points and complete the feature matching.
In 2014, Zhang Zhengpeng et al. proposed a vehicle-mounted panoramic sequence image matching method based on optical flow feature clustering: following the non-parametric mean-shift clustering idea, the positions of SIFT multi-scale feature matches and their optical flow vectors build the spatial domain and range domain of the image feature space; the corresponding salient optical flow features in the feature space serve as the clustering condition to match the panoramic sequence images, and epipolar geometry constraints eliminate mismatches. In 2015, Xu Qiuhui et al. proposed an image matching method with improved DCCD and SIFT descriptors: the improved DCCD rapidly detects key points on the image, their dominant orientations are determined to generate feature points, SIFT descriptors describe the feature points, and once the feature matching results are obtained, the BBF algorithm and the random sample consensus algorithm (RANSAC) perform coarse matching and eliminate mismatched feature points. Also in 2015, Zhang Zhengpeng et al. proposed a vehicle-mounted panoramic sequence image matching method with adaptive motion structure features: an adaptive bandwidth matrix is first defined from the local spatial structure of the sample points in the spatial domain and the optical flow domain; the distance-weighted combination of local optical flow feature vectors describes the relaxation-diffusion process of motion-similarity structure features in the optical flow domain; the adaptive multivariate kernel density function is then formulated, together with the solution of the mean-shift vector, the termination condition, and the seed point selection method; finally the SIFT description features are fused with the motion structure features into a unified panoramic image matching framework. In 2015, Xiao Xiongwu et al. proposed a fast matching method for oblique images with affine invariance: an initial affine matrix is computed by estimating the camera-axis orientation parameters of the image, a rectified image is obtained through an inverse affine transformation, SIFT matching is performed on the rectified image, and multiple constraints improve the matching accuracy and reliability. In 2015, Yan Li et al. proposed a tile-rectified spherical panoramic image matching method: the equirectangular panoramic image is first divided into several sub-image blocks by latitude and longitude, each sub-block is reprojected to obtain a distortion-corrected image on which SIFT features are described and extracted, and merging all the results yields the feature set of the whole panoramic image, enabling feature matching among multiple panoramas. In 2015, Zhao Baowei et al. proposed a disparity-constrained improved Hough transform multispectral image matching method: using a pyramid matching strategy, the top pyramid level is matched with the scale-invariant feature transform operator to provide the initial disparity constraints, the left and right images of the other pyramid levels are matched with an improved Hough transform method, the matching results of the level above supply disparity constraints for the current level during matching, and the feature matching is finally completed.
Although the studies above all achieved good matching results, feature matching alone can only support low-precision three-dimensional surface reconstruction: it yields the frame of the three-dimensional surface but cannot meet higher precision requirements, which still have to be satisfied by further dense matching. General-purpose dense matching methods usually adopt a matching measure as the criterion for whether two pixels match, such as the sum of absolute differences, the sum of squared differences, the truncated absolute difference, normalized cross-correlation, or zero-mean normalized cross-correlation. These methods are sensitive to noise and can only densely match ideal images: for strongly deformed images, the pixels that should be matching points are themselves deformed when the matching measure is computed over the matching window, so the measure is biased and the final matching result is degraded.
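As an illustration only (not part of the patent text), the window-based measures named above can be sketched as follows with NumPy; `left_win` and `right_win` stand for equal-sized grayscale windows around two candidate pixels:

```python
import numpy as np

def sad(left_win, right_win):
    """Sum of absolute differences; lower means more similar."""
    d = left_win.astype(np.float64) - right_win.astype(np.float64)
    return np.abs(d).sum()

def ssd(left_win, right_win):
    """Sum of squared differences; lower means more similar."""
    d = left_win.astype(np.float64) - right_win.astype(np.float64)
    return (d ** 2).sum()

def zncc(left_win, right_win, eps=1e-12):
    """Zero-mean normalized cross-correlation; 1.0 is a perfect match."""
    l = left_win.astype(np.float64); l -= l.mean()
    r = right_win.astype(np.float64); r -= r.mean()
    return float((l * r).sum() / (np.sqrt((l ** 2).sum() * (r ** 2).sum()) + eps))
```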
In 2009, Chen Wang et al. proposed a dense matching algorithm based on region-boundary constraints and graph-cut optimization. The method builds an energy function from the constraints between region boundaries and boundary pixels, so that a global optimum can be found while the final reconstructed surface stays smooth, and it obtained good dense matching results on the classic images of the computer vision field; when the images are scaled, however, the window-based region matching in its implementation becomes biased. In 2010, Ge Liang et al. proposed an improved dense matching algorithm for stereo pairs, which first finds uniformly textured regions with region-growing techniques and then matches each whole region as a matching primitive to obtain a dense disparity map for those regions; experiments on the international standard test images demonstrated its feasibility and accuracy. The method concentrates on uniformly textured regions, but when the image deforms, those regions deform as well and the efficiency of the algorithm drops. In 2012, Li Haibin et al. proposed a three-dimensional scene reconstruction method based on dense matching of candidate points: grid nodes representing depth information are set up in space and their distribution along the depth direction is planned to improve the timeliness of matching, while a trinocular instead of a binocular vision system raises the matching reliability through a second decision. The method mainly targets classic binocular stereo images; when the exterior orientation matrix is unknown and the images are relatively rotated, epipolar rectification cannot be performed and the subsequent dense matching cannot proceed. In 2013, Hu Chunhai et al. proposed a multi-baseline dense matching method combining disparity growing with tensors: the SIFT operator performs preliminary feature matching, the feature matches serve as root points for disparity growing, and three views constrain the matching to raise the dense matching accuracy. Since three views are required as the constraint, the proposed conditions cannot be satisfied when only two images are available, which affects the reliability of the final matching result. In 2014, Zhang Jielin et al. proposed a hierarchical matching method: feature points are detected with the heat kernel signature function, the feature matching results are optimized with a local fusion strategy and farthest-point sampling, heat kernel signature descriptors are constructed on the feature point set, entropy ranks the feature points by saliency for the initial layer of matching, and local matching over the layered neighborhoods of the feature points finally achieves coarse-to-fine dense matching; in regions of repeated texture, however, the method confuses the topological relations.
Summary of the Invention
In view of the deficiencies of the prior art described above, the present invention provides a dense matching method suitable for deformed images.
The technical solution of the present invention is as follows:
A dense matching method suitable for deformed images comprises the following steps:
Step 1: manually extract pairs of matching feature points in the left image and the right image respectively. The left image and the right image are a pair of relatively deformed original images to be matched; there are more than 5 matching feature point pairs; and the matching feature points formed by these pairs cover the overlapping area of the left image and the original right image.
Step 2: correct the relative deformation of the original right image with a quadratic polynomial to obtain the corrected right image.
Step 2-1: convert the pixel coordinates of the matching feature points in the left image, and the pixel coordinates of all pixels in the original right image, into image-plane rectangular coordinates.
Step 2-2: substitute the image-plane rectangular coordinates of the matching feature points into the quadratic polynomial of formula (1) and solve for its coefficients, where (X', Y') are the image-plane rectangular coordinates of a matching feature point in the original right image, (x', y') are the image-plane rectangular coordinates of the matching feature point in the left image, and a0, a1, a2, a3, a4, a5, b0, b1, b2, b3, b4, b5 are the quadratic polynomial coefficients.
Step 2-3: perform the following operation on every pixel of the original right image to obtain the corrected right image. First substitute the image-plane rectangular coordinates (X, Y) corresponding to the pixel with pixel coordinates (I, J) in the original right image into the quadratic polynomial of formula (2) to obtain the corrected image-plane rectangular coordinates (Xnew, Ynew) of that pixel, then convert (Xnew, Ynew) into the pixel coordinates (Inew, Jnew). Here (X, Y) are the image-plane rectangular coordinates of an original right-image pixel, (Xnew, Ynew) are the image-plane rectangular coordinates of the pixel in the corrected right image, and a0, a1, a2, a3, a4, a5, b0, b1, b2, b3, b4, b5 are the polynomial coefficients solved in step 2-2.
Step 2-4: according to the correspondence between the pixel coordinates (I, J) of the pixels in the original right image and the pixel coordinates (Inew, Jnew) of the pixels in the corrected right image, divide the pixels of the corrected right image into classes, and determine the gray value at each corrected pixel position (Inew, Jnew) according to its class. Specifically, examine the correspondence between (I, J) and (Inew, Jnew). If a pixel coordinate (I, J) of the original right image corresponds one-to-one with a pixel coordinate (Inew, Jnew) of the corrected right image, record that corrected pixel as a class-a pixel, record the correspondence between (Inew, Jnew) and (I, J), and assign the gray value at position (I, J) of the original right image to position (Inew, Jnew) of the corrected right image. If several pixel coordinates (I, J) of the original right image correspond to the same pixel coordinate (Inew, Jnew) of the corrected right image, record that corrected pixel as a class-b pixel, record the correspondence between (Inew, Jnew) and the several coordinates (I, J), and assign the average of the gray values at all corresponding positions (I, J) of the original right image to position (Inew, Jnew). If no pixel of the original right image corresponds to the pixel coordinate (Inew, Jnew) of a corrected right-image pixel, record that pixel as a class-c pixel and compute the gray value at position (Inew, Jnew) by bilinear interpolation.
Step 3: perform dense matching between the left image and the corrected right image.
Step 4: according to the class of the matching pixel, in the corrected right image, of each pixel in the left image, determine the pixel coordinates of that pixel's matching pixel in the original right image. Specifically, perform the following operation on each pixel of the left image in turn. If the matching pixel in the corrected right image is a class-a pixel, replace its pixel coordinates with the pixel coordinates in the original right image according to the coordinate correspondence saved for class-a pixels in step 2-4. If the matching pixel in the corrected right image is a class-b pixel, generate a feature descriptor for the left-image pixel with the SIFT operator, generate feature descriptors with the SIFT operator for the several pixels of the original right image corresponding to that matching pixel, perform SIFT feature matching, and so determine the pixel coordinates of the matching pixel in the original right image. If the matching pixel in the corrected right image is a class-c pixel, convert its pixel coordinates into image-plane rectangular coordinates, solve for its image-plane rectangular coordinates in the original right image through the inverse transformation of the quadratic polynomial of formula (2), then generate feature descriptors with the SIFT operator for the left-image pixel and for the corresponding pixels of the original right image, perform SIFT feature matching, and determine the pixel coordinates of the matching pixel in the original right image.
Beneficial effects: for the problem of matching images that are deformed relative to each other, the dense matching method for deformed images proposed by the present invention has the following advantages:
1. Compared with existing methods for the deformed image matching problem, the present invention obtains dense matching results, providing the conditions for fine three-dimensional surface reconstruction and overcoming the previous limitation that deformed images could only be matched at the feature level.
2. Compared with existing dense matching methods, the present invention adapts to image matching in more situations and obtains better matching results, especially when the images are strongly deformed.
3. Polynomial correction and a coordinate-correspondence preservation mechanism are applied to the dense matching of deformed images for the first time, providing a new approach to the dense matching of multi-source images.
4. Polynomials correct the deformation between images, and a disparity-constrained search area in the corrected right image is used to search for matching points, which narrows the search range, shortens the matching process, and improves matching efficiency.
5. The polynomial-corrected position serves as a constraint on the matching point search, which increases the reliability of the matching process, in particular the reliability of the matching results in regions of repeated texture.
Description of the Drawings
Fig. 1 is a flow chart of the dense matching method for deformed images according to an embodiment of the present invention;
Fig. 2(a) is a schematic diagram of the left image of the image pair to be matched according to an embodiment of the present invention;
Fig. 2(b) is a schematic diagram of the right image of the image pair to be matched according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the corrected right image according to an embodiment of the present invention;
Fig. 4(a) is a schematic diagram of the 3×3 matching template according to an embodiment of the present invention;
Fig. 4(b) is a schematic diagram of the 3×3 search area according to an embodiment of the present invention.
Detailed Description of the Embodiments
An embodiment of the present invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the dense matching method for deformed images of this embodiment comprises the following steps:
Step 1: manually extract 9 pairs of matching feature points in the left image and the right image respectively; the left image and the right image are a pair of relatively deformed original images to be matched. The matching feature points formed by these pairs cover the overlapping area of the left image and the right image, as shown in Fig. 2, where Fig. 2(a) is the left image and Fig. 2(b) is the right image; the black frame in Fig. 2(b) encloses the overlap between the right image and the left image, and the white circles mark the positions of the matching feature points.
Step 2: correct the relative deformation of the original right image with a quadratic polynomial to obtain the corrected right image.
Step 2-1: convert the pixel coordinates of the matching feature points in the left image and the pixel coordinates of all pixels in the original right image into image-plane rectangular coordinates.
Let the pixel coordinates of a pixel in the left or right image be (I, J), where the origin of the pixel coordinate system lies at the upper-left corner of the image, the X axis is positive to the right, and the Y axis is positive downward. Combining the pixel size of the image and the coordinates of the principal point, the pixel coordinates are converted into the image-plane rectangular coordinates (X, Y), where the origin of the image-plane rectangular coordinate system lies at the image center, the X axis is positive to the right, and the Y axis is positive upward. In the conversion formula, W is the width of the image, H is the height of the image, P is the pixel size, x0 is the horizontal coordinate of the principal point, and y0 is the vertical coordinate of the principal point.
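The conversion formula itself appears in the patent only as an equation image that is not reproduced here, so the following is a hedged sketch of the usual form implied by the definitions above: the origin is moved to the image center, the vertical axis is flipped, coordinates are scaled by the pixel size P and offset by the principal point (x0, y0), with (I, J) read as (row, column) to match the (3+q, 3+p) convention of step 3-2.

```python
def pixel_to_plane(I, J, W, H, P, x0=0.0, y0=0.0):
    # assumed form: center the origin, flip the vertical axis,
    # scale by the pixel size P, shift by the principal point
    X = (J - W / 2.0) * P - x0
    Y = (H / 2.0 - I) * P - y0
    return X, Y

def plane_to_pixel(X, Y, W, H, P, x0=0.0, y0=0.0):
    # exact inverse of pixel_to_plane
    J = (X + x0) / P + W / 2.0
    I = H / 2.0 - (Y + y0) / P
    return I, J
```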
Step 2-2: substitute the image-plane rectangular coordinates of the 9 pairs of matching feature points into the quadratic polynomial of formula (2) and solve for its coefficients, where (X', Y') are the image-plane rectangular coordinates of a matching feature point in the original right image, (x', y') are the image-plane rectangular coordinates of the matching feature point in the left image, and a0, a1, a2, a3, a4, a5, b0, b1, b2, b3, b4, b5 are the quadratic polynomial coefficients. In this embodiment the quadratic polynomial coefficients are solved as follows:
Step 2-2-1: since the two sides of formula (2) cannot be exactly equal in the actual solution process, rewrite formula (2) in the form of an error equation, where vx' and vy' are the solution errors of the quadratic polynomial.
Step 2-2-2: rewrite the error equation in the matrix-vector form v = AZ - l, where A is the design matrix assembled from the feature point coordinates, Z is the vector of the twelve polynomial coefficients, and l is the observation vector. Substituting the image-plane rectangular coordinates of the 9 pairs of matching feature points, the polynomial coefficients are computed by the least-squares method: Z = (A^T A)^(-1) A^T l.
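The layouts of formula (2) and of the matrix A appear in the patent only as equation images; the sketch below therefore assumes the standard second-order form x' = a0 + a1·X' + a2·Y' + a3·X'² + a4·X'·Y' + a5·Y'² (and the analogous form with b0...b5 for y'), which matches the twelve coefficients defined above, and solves the least-squares problem with NumPy:

```python
import numpy as np

def design_row(X, Y):
    # one row of A for the assumed second-order polynomial
    return [1.0, X, Y, X * X, X * Y, Y * Y]

def solve_polynomial(right_pts, left_pts):
    """right_pts, left_pts: (N, 2) arrays of image-plane coordinates of the
    matching feature points (N >= 6; N = 9 in this embodiment)."""
    A = np.array([design_row(X, Y) for X, Y in right_pts])
    a, *_ = np.linalg.lstsq(A, left_pts[:, 0], rcond=None)  # a0..a5
    b, *_ = np.linalg.lstsq(A, left_pts[:, 1], rcond=None)  # b0..b5
    return a, b
```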
Step 2-3: perform the following operation on every pixel of the original right image to obtain the corrected right image shown in Fig. 3. First substitute the image-plane rectangular coordinates (X, Y) corresponding to the pixel with pixel coordinates (I, J) in the original right image into the quadratic polynomial of formula (2) to obtain the corrected image-plane rectangular coordinates (Xnew, Ynew) of that pixel, then convert (Xnew, Ynew) into the pixel coordinates (Inew, Jnew), where (X, Y) are the image-plane rectangular coordinates of an original right-image pixel, (Xnew, Ynew) are the corrected image-plane rectangular coordinates of the pixel whose image-plane coordinates are (X, Y) in the original right image, and a0, a1, a2, a3, a4, a5, b0, b1, b2, b3, b4, b5 are the polynomial coefficients solved in step 2-2.
The rounding of the initial pixel coordinates consists in adding 0.5 to their horizontal and vertical coordinates respectively and rounding down.
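Putting the two previous sketches together, step 2-3 can be illustrated as below, under the same assumptions about the polynomial form and the coordinate conversion; the final line implements the stated rounding rule (add 0.5, round down):

```python
import math

def apply_polynomial(X, Y, a, b):
    Xn = a[0] + a[1]*X + a[2]*Y + a[3]*X*X + a[4]*X*Y + a[5]*Y*Y
    Yn = b[0] + b[1]*X + b[2]*Y + b[3]*X*X + b[4]*X*Y + b[5]*Y*Y
    return Xn, Yn

def correct_pixel(I, J, a, b, W, H, P):
    X, Y = pixel_to_plane(I, J, W, H, P)           # sketch from step 2-1
    Xn, Yn = apply_polynomial(X, Y, a, b)
    In, Jn = plane_to_pixel(Xn, Yn, W, H, P)
    return math.floor(In + 0.5), math.floor(Jn + 0.5)  # add 0.5, round down
```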
Step 2-4: for the corrected right image shown in Fig. 3, the rounding operation leaves three possible correspondences between the original pixel coordinates (I, J) and the corrected pixel coordinates (Inew, Jnew). First divide the pixels of the corrected right image into classes according to the correspondence between the pixel coordinates (I, J) of the pixels in the original right image and the pixel coordinates (Inew, Jnew) of the pixels in the corrected right image, then determine the gray value at each corrected pixel position (Inew, Jnew) according to its class. Specifically: if a pixel coordinate (I, J) of the original right image corresponds one-to-one with a pixel coordinate (Inew, Jnew) of the corrected right image, record that corrected pixel as a class-a pixel, record the correspondence between (Inew, Jnew) and (I, J), and assign the gray value at position (I, J) of the original right image to position (Inew, Jnew) of the corrected right image; if several pixel coordinates (I, J) of the original right image correspond to the same pixel coordinate (Inew, Jnew) of the corrected right image, record that corrected pixel as a class-b pixel, record the correspondence between (Inew, Jnew) and the several coordinates (I, J), and assign the average of the gray values at all corresponding positions (I, J) of the original right image to position (Inew, Jnew); if no pixel coordinate (I, J) of the original right image corresponds to the pixel coordinate (Inew, Jnew) of a corrected right-image pixel, record that pixel as a class-c pixel and compute the gray value at position (Inew, Jnew) by bilinear interpolation.
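A sketch of this bookkeeping, reusing `correct_pixel` from above; the corrected image is assumed here to have the same dimensions as the original, and the class-c fill is left as a placeholder since the patent does not spell out which neighbors feed the bilinear interpolation:

```python
from collections import defaultdict
import numpy as np

def build_corrected(gray, a, b, P):
    H, W = gray.shape
    sources = defaultdict(list)                # (In, Jn) -> list of (I, J)
    for I in range(H):
        for J in range(W):
            sources[correct_pixel(I, J, a, b, W, H, P)].append((I, J))
    corrected = np.zeros((H, W), dtype=np.float64)
    cls = np.full((H, W), 'c')                 # class c until proven otherwise
    for (In, Jn), pts in sources.items():
        if 0 <= In < H and 0 <= Jn < W:
            cls[In, Jn] = 'a' if len(pts) == 1 else 'b'
            # class a: copy the single source gray value; class b: average them
            corrected[In, Jn] = np.mean([gray[I, J] for I, J in pts])
    # pixels still marked 'c' have no source pixel and would now be filled
    # by bilinear interpolation, e.g. from the surrounding filled pixels
    return corrected, cls, sources
```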
Step 3: perform dense matching between the left image and the corrected right image.
Step 3-1: in the left image, select a region of size (odd number greater than 1) × (odd number greater than 1) as the matching template and take the pixel at the center of the region as the pixel to be matched. In this embodiment, the pixel with pixel coordinates (3, 3) in the left image is selected as the pixel to be matched and a 3×3 region is selected as the matching template, the region inside the dark gray solid frame in Fig. 4(a).
Step 3-2: first manually select one pair of matching points from the left image and the corrected right image and compute the left-right disparity p and the up-down disparity q between the two images; then, centered on the pixel at position (3+q, 3+p) in the corrected right image, select a region of size (odd number greater than 1) × (odd number greater than 1) as the search area, i.e. the region covering the pixel coordinates (2+q, 2+p), (2+q, 3+p), (2+q, 4+p), (3+q, 2+p), (3+q, 3+p), (3+q, 4+p), (4+q, 2+p), (4+q, 3+p), (4+q, 4+p). In this embodiment p = 566 and q = 370, so the pixel at position (373, 569) is taken as the center of the search area and a 3×3 region is selected as the search area, the region inside the dashed frame in Fig. 4(b).
Step 3-3: take each pixel of the search area in the corrected right image in turn as a center pixel, and form from each center pixel and its surrounding pixels a target region of the same size as the matching template. For example, the pixel at position (2+q, 2+p) of the search area inside the dashed frame of Fig. 4(b), together with its 8 surrounding pixels, forms a target region of the same size as the 3×3 matching template of Fig. 4(a), the region inside the gray solid frame of Fig. 4(b). Compute with formula (6) the correlation coefficient between the pixels of each target region and the pixels of the matching template, and select the pixel with the largest correlation coefficient as the matching pixel, in the corrected right image, of the pixel to be matched.
The correlation coefficient is computed by formula (6), in which ρ is the correlation coefficient; (c, r) is the difference between the pixel coordinates of the center pixel and those of the pixel to be matched; gi,j is the gray value at pixel coordinates (i, j) when the upper-left corner of the left-image matching template is taken as the origin of the pixel coordinate system; g′i,j is the gray value at pixel coordinates (i, j) when the upper-left corner of the target region of the corrected right image is taken as the origin; m and n are the height and width of the matching template; gi+r,j+c and g′i+r,j+c are the corresponding gray values at pixel coordinates (i+r, j+c). Height and width are in pixels; a 3×3 matching region, for example, is 3 pixels high and 3 pixels wide.
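A sketch of steps 3-1 to 3-3 follows; `zncc` is the measure sketched in the background section and stands in for formula (6), whose exact normalization is given in the patent only as an equation image not reproduced here. With k = 1 and search = 1 this reproduces the 3×3 template and 3×3 search area of the embodiment; p and q are the left-right and up-down disparities:

```python
import numpy as np

def window(img, i, j, k):
    return img[i - k:i + k + 1, j - k:j + k + 1]

def match_pixel(left, right_corr, i, j, p, q, k=1, search=1):
    tmpl = window(left, i, j, k)               # matching template, Fig. 4(a)
    best_rho, best_ij = -2.0, None
    for di in range(-search, search + 1):      # search area, Fig. 4(b)
        for dj in range(-search, search + 1):
            ci, cj = i + q + di, j + p + dj    # disparity-shifted center
            if ci < k or cj < k:
                continue                       # window would leave the image
            cand = window(right_corr, ci, cj, k)
            if cand.shape != tmpl.shape:
                continue                       # window fell off the far edge
            rho = zncc(tmpl, cand)
            if rho > best_rho:
                best_rho, best_ij = rho, (ci, cj)
    return best_ij, best_rho
```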
Step 4: according to the class of the matching pixel, in the corrected right image, of each pixel in the left image, determine the pixel coordinates of that pixel's matching pixel in the original right image. Specifically, perform the following operation on each pixel of the left image in turn. If the matching pixel in the corrected right image is a class-a pixel, replace its pixel coordinates with the pixel coordinates in the original right image according to the coordinate correspondence saved for class-a pixels in step 2-4. If the matching pixel in the corrected right image is a class-b pixel, generate a feature descriptor for the left-image pixel with the SIFT operator, generate feature descriptors with the SIFT operator for the several pixels of the original right image corresponding to that matching pixel, perform SIFT feature matching, and so determine the pixel coordinates of the matching pixel in the original right image. If the matching pixel in the corrected right image is a class-c pixel, convert its pixel coordinates into image-plane rectangular coordinates, solve for its image-plane rectangular coordinates in the original right image through the inverse transformation of the quadratic polynomial of formula (5), then generate feature descriptors with the SIFT operator for the left-image pixel and for the corresponding pixels of the original right image, perform SIFT feature matching, and determine the pixel coordinates of the matching pixel in the original right image. The corresponding pixels of the original right image are the 4 pixels around the un-rounded pixel coordinates obtained by converting the image-plane rectangular coordinates in the original right image into pixel coordinates, taken by rounding the horizontal and vertical coordinates of those un-rounded coordinates up or down respectively.
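The dispatch on the pixel class can be sketched as below, reusing the helpers from the earlier sketches. The class-a branch is a lookup in the saved correspondences; class b is settled by SIFT matching among the recorded source pixels, delegated here to a hypothetical sift_pick helper (e.g. built on OpenCV's SIFT); class c numerically inverts the assumed quadratic polynomial with a few Newton iterations before SIFT matching against the four pixels around the un-rounded original coordinates:

```python
import numpy as np

def invert_polynomial(Xn, Yn, a, b, iters=20, tol=1e-9):
    """Newton's method for apply_polynomial(X, Y) = (Xn, Yn)."""
    X, Y = Xn, Yn                                  # corrected coords as seed
    for _ in range(iters):
        fx, fy = apply_polynomial(X, Y, a, b)
        rx, ry = fx - Xn, fy - Yn                  # residual
        J = np.array([[a[1] + 2*a[3]*X + a[4]*Y, a[2] + a[4]*X + 2*a[5]*Y],
                      [b[1] + 2*b[3]*X + b[4]*Y, b[2] + b[4]*X + 2*b[5]*Y]])
        dX, dY = np.linalg.solve(J, [-rx, -ry])
        X, Y = X + dX, Y + dY
        if abs(dX) + abs(dY) < tol:
            break
    return X, Y

def locate_in_original(In, Jn, cls, sources, a, b, W, H, P):
    if cls[In, Jn] == 'a':
        return sources[(In, Jn)][0]                # saved correspondence
    if cls[In, Jn] == 'b':
        return sift_pick(sources[(In, Jn)])        # hypothetical SIFT helper
    Xn, Yn = pixel_to_plane(In, Jn, W, H, P)
    X, Y = invert_polynomial(Xn, Yn, a, b)
    I, J = plane_to_pixel(X, Y, W, H, P)           # un-rounded coordinates
    candidates = [(fi, fj)
                  for fi in (int(np.floor(I)), int(np.ceil(I)))
                  for fj in (int(np.floor(J)), int(np.ceil(J)))]
    return sift_pick(candidates)                   # four surrounding pixels
```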
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610390400.7A CN106023230B (en) | 2016-06-02 | 2016-06-02 | A dense matching method suitable for deformed images |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610390400.7A CN106023230B (en) | 2016-06-02 | 2016-06-02 | A dense matching method suitable for deformed images |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106023230A CN106023230A (en) | 2016-10-12 |
CN106023230B true CN106023230B (en) | 2018-07-24 |
Family
ID=57090727
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610390400.7A Active CN106023230B (en) | 2016-06-02 | 2016-06-02 | A dense matching method suitable for deformed images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106023230B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10453204B2 (en) * | 2016-12-06 | 2019-10-22 | Adobe Inc. | Image alignment for burst mode images |
CN107590502B (en) * | 2017-09-18 | 2020-05-22 | 西安交通大学 | A Fast Matching Method for Dense Points in the Whole Field |
CN108021886B (en) * | 2017-12-04 | 2021-09-14 | 西南交通大学 | Method for matching local significant feature points of repetitive texture image of unmanned aerial vehicle |
CN108364013B (en) * | 2018-03-15 | 2021-10-29 | 苏州大学 | Image keypoint feature descriptor extraction method and system based on neighborhood Gaussian differential distribution |
CN108961322B (en) * | 2018-05-18 | 2021-08-10 | 辽宁工程技术大学 | Mismatching elimination method suitable for landing sequence images |
CN108986150B (en) * | 2018-07-17 | 2020-05-22 | 南昌航空大学 | A method and system for image optical flow estimation based on non-rigid dense matching |
CN112070005B (en) * | 2020-09-08 | 2024-09-06 | 深圳市华汉伟业科技有限公司 | Three-dimensional primitive data extraction method and device and storage medium |
CN113034556B (en) * | 2021-03-19 | 2024-04-16 | 南京天巡遥感技术研究院有限公司 | Frequency domain correlation semi-dense remote sensing image matching method |
CN117036488B (en) * | 2023-10-07 | 2024-01-02 | 长春理工大学 | Binocular vision positioning method based on geometric constraint |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101702056A (en) * | 2009-11-25 | 2010-05-05 | 安徽华东光电技术研究所 | Stereo image displaying method based on stereo image pairs |
CN103136750A (en) * | 2013-01-30 | 2013-06-05 | 广西工学院 | Stereo matching optimization method of binocular visual system |
- 2016-06-02 CN CN201610390400.7A patent/CN106023230B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101702056A (en) * | 2009-11-25 | 2010-05-05 | 安徽华东光电技术研究所 | Stereo image displaying method based on stereo image pairs |
CN103136750A (en) * | 2013-01-30 | 2013-06-05 | 广西工学院 | Stereo matching optimization method of binocular visual system |
Non-Patent Citations (4)
Title |
---|
An Optimized Method for Terrain Reconstruction Based on Descent Images; Xu Xinchao et al.; Journal of Engineering and Technological Sciences; Feb. 2016; vol. 48, no. 1; pp. 31-48 *
Image matching using the SIFT operator and image interpolation; Bu Fanyan et al.; Computer Engineering and Applications; 2011; vol. 47, no. 16; pp. 156-158, 162 *
Fast polynomial-based geometric correction of remote sensing images; Cao Lingling et al.; Computer Development & Applications; 2011; vol. 24, no. 1; pp. 5-7, 34 *
Research on change detection technology for earthquake-damage remote sensing images; Dou Aixia; China Master's Theses Full-text Database, Basic Sciences; Dec. 15, 2003; no. 4, 2003; sections 3.1-3.4, Figs. 3.2-3.5 *
Also Published As
Publication number | Publication date |
---|---|
CN106023230A (en) | 2016-10-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106023230B (en) | A kind of dense matching method of suitable deformation pattern | |
CN106709947B (en) | Three-dimensional human body rapid modeling system based on RGBD camera | |
CN107833181B (en) | Three-dimensional panoramic image generation method based on zoom stereo vision | |
CN106683173B (en) | A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching | |
CN103106688B (en) | Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering | |
CN110264416A (en) | Sparse point cloud segmentation method and device | |
CN108335350A (en) | The three-dimensional rebuilding method of binocular stereo vision | |
CN115205489A (en) | Three-dimensional reconstruction method, system and device in large scene | |
CN107767456A (en) | A kind of object dimensional method for reconstructing based on RGB D cameras | |
CN104574347A (en) | On-orbit satellite image geometric positioning accuracy evaluation method on basis of multi-source remote sensing data | |
CN113538569B (en) | Weak texture object pose estimation method and system | |
CN107220996B (en) | A matching method of UAV linear array and area array images based on the consistent triangular structure | |
CN117115359B (en) | Multi-view power grid three-dimensional space data reconstruction method based on depth map fusion | |
CN111325828A (en) | Three-dimensional face acquisition method and device based on three-eye camera | |
CN102446356A (en) | Parallel self-adaptive matching method for obtaining remote sensing images with uniformly distributed matching points | |
CN112132876A (en) | Initial pose estimation method in 2D-3D image registration | |
CN113034678A (en) | Three-dimensional rapid modeling method for dam face of extra-high arch dam based on group intelligence | |
CN115601406A (en) | Local stereo matching method based on fusion cost calculation and weighted guide filtering | |
CN117115336A (en) | Point cloud reconstruction method based on remote sensing stereoscopic image | |
CN117522803A (en) | Precise positioning method of bridge components based on binocular vision and target detection | |
CN110942102B (en) | Probability relaxation epipolar matching method and system | |
CN110969650B (en) | Intensity image and texture sequence registration method based on central projection | |
CN114612412A (en) | Processing method of three-dimensional point cloud data, application of processing method, electronic device and storage medium | |
CN118918279A (en) | Binocular SLAM three-dimensional reconstruction and positioning based on self-supervision nerve radiation field | |
Jisen | A study on target recognition algorithm based on 3D point cloud and feature fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |