
CN110211043B - A Registration Method Based on Grid Optimization for Panoramic Image Stitching - Google Patents

A Registration Method Based on Grid Optimization for Panoramic Image Stitching

Info

Publication number
CN110211043B
CN110211043B
Authority
CN
China
Prior art keywords
image
matching
grid
points
feature
Prior art date
Legal status
Active
Application number
CN201910391076.4A
Other languages
Chinese (zh)
Other versions
CN110211043A (en)
Inventor
范益波
周思远
杨吉喆
孟子皓
池俊
曾晓洋
Current Assignee
Fudan University
Original Assignee
Fudan University
Priority date
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN201910391076.4A priority Critical patent/CN110211043B/en
Publication of CN110211043A publication Critical patent/CN110211043A/en
Application granted granted Critical
Publication of CN110211043B publication Critical patent/CN110211043B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of digital images and specifically relates to a grid-optimization-based registration method suitable for free-viewpoint image stitching. In recent years, panoramic images have received great attention because of their application prospects in fields such as virtual reality and medical imaging. Image registration mainly acquires the position parameters of each image in the panorama. Traditional image registration has to perform a series of operations such as feature point extraction, matching, homography estimation, and camera parameter correction. The invention replaces the classical registration technique with a grid-optimization-based registration method, which not only brings a clear improvement in speed but is also suitable for stitching free-viewpoint images with large parallax. In the method, ORB fast feature extraction is used to obtain the feature points of the images, a coarse-to-fine matching strategy is introduced, and finally three grid-optimization-based constraint terms are introduced to obtain the optimal registration parameters between images by minimizing an error function.

Description

A Registration Method Based on Grid Optimization for Panoramic Image Stitching

Technical Field

The invention belongs to the technical field of digital image processing and in particular relates to a grid-optimization-based image registration method for panoramic image stitching.

Background Art

With the rapid development of information technology and the improvement of living standards, the demand for high-quality panoramic images keeps growing. Panoramic images have promising applications in virtual reality, panoramic live broadcasting, medical imaging, and driver assistance. Functionally, people can obtain more information and a better visual experience from a panoramic image.

To obtain a wide-angle image, the traditional approach is to use a wide-angle lens such as a fisheye lens, which can capture nearly 180 degrees of the scene in the horizontal plane. This approach has at least three shortcomings: first, it introduces distortion visible to the naked eye; second, the resolution drops because the field of view is too large; third, wide-angle lenses are expensive.

Against this background and demand, image stitching technology emerged. It aims to take a group of small-field-of-view images with overlapping regions and, through a series of processing steps, produce a single panorama that contains the entire scene with a wide field of view and ultra-high resolution. Since each lens only captures part of the scene, using high-definition lenses yields higher detail resolution, and ordinary lenses avoid the distortion introduced by a wide angle.

Traditional image stitching can be divided into the following steps: image acquisition, SIFT feature extraction, feature matching, homography estimation, camera parameter correction, and image blending. A single homography can only align one plane; if parallax or stretching exists in the input images, the mosaic produced by the traditional method shows severe ghosting and skewing, and stitching may even fail (the result is completely distorted).
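For comparison, the following is a minimal sketch of this classical single-homography pipeline, assuming an OpenCV build with SIFT available; it is an illustration only, and the ghosting described above appears precisely because one matrix H can align only one scene plane.

```python
import cv2
import numpy as np

def classic_stitch(img_left, img_right):
    # classic pipeline: SIFT features, ratio-tested matches, one global homography
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_left, None)
    k2, d2 = sift.detectAndCompute(img_right, None)
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d2, d1, k=2)          # right -> left
    good = [m[0] for m in knn if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    src = np.float32([k2[m.queryIdx].pt for m in good])             # points in the right image
    dst = np.float32([k1[m.trainIdx].pt for m in good])             # matching points in the left image
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)            # one homography for the whole image
    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (w * 2, h))          # warp right into the left frame
    canvas[:h, :w] = img_left                                       # overlay; ghosting appears off the plane
    return canvas
```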

Summary of the Invention

The purpose of the present invention is to overcome the shortcomings of the current technology and provide a grid-optimization-based image registration method for panoramic image stitching.

The invention is suitable for free-viewpoint image registration and is highly stable under scale, brightness, and rotation changes. Therefore, it can be applied not only to panoramic surveillance with fixed cameras but also to UAV surveys in which the camera moves and rotates.

For acquiring the image sequence, the invention uses three methods. First, a fixed camera rotates to capture the surrounding panorama, which yields the most regular image sequence. Second, a hand-held camera captures the sequence; an overlap of about 20% between adjacent images is sufficient, and the resulting sequence contains rotation, jitter, and translation. Third, a UAV captures images at high altitude with a fixed camera position but adjustable shooting direction, so the images contain scale and rotation changes and have high resolution.

The grid-optimization-based image registration method for panoramic image stitching provided by the invention comprises the following specific steps:

(1) Apply ORB [1] for fast feature extraction. Use the FAST [2] algorithm for feature detection, then generate descriptors for the feature points based on the improved BRIEF [3] descriptor; the descriptors contain the scale, position, and orientation information of each feature point.
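A minimal sketch of step (1), assuming OpenCV; the keypoints returned by ORB carry the position, scale, and orientation mentioned above, and each descriptor is a 256-bit binary string stored as 32 bytes. The feature count is an assumed setting.

```python
import cv2

orb = cv2.ORB_create(nfeatures=4000)                 # FAST detector + rotated BRIEF descriptors
img = cv2.imread("frame_0.jpg", cv2.IMREAD_GRAYSCALE)
keypoints, descriptors = orb.detectAndCompute(img, None)
# kp.pt, kp.size and kp.angle give position, scale and orientation;
# descriptors has shape (N, 32): 32 uint8 bytes = 256 bits per feature point
print(len(keypoints), descriptors.shape)
```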

(2) Use a high-dimensional tree (K-D tree) and the best-bin-first (BBF) algorithm for coarse matching of feature points, then apply the ratio test of formula (1) to the obtained matches: for the current feature point p, the ratio of the Hamming distance to its nearest neighbouring feature point p_best-closed to the Hamming distance to its second-nearest neighbouring feature point p_second-closed must be smaller than a threshold (ratio), generally taken as 0.65. Then apply the cross test of formula (2): traverse the feature points of image I and find their matches in image J, denoted M_I->J; then traverse the feature points of image J and find the corresponding points in image I, denoted M_J->I. The cross test accepts a pair as a correct match only when the two directions agree.

d_H(p, p_best-closed) / d_H(p, p_second-closed) < ratio    (1)

M_I&J = M_I->J ∩ M_J->I    (2)
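The two tests of step (2) can be sketched as follows under stated assumptions: a brute-force Hamming matcher stands in for the K-D tree / BBF search, and the 0.65 threshold is the value given above.

```python
import cv2

def match_pair(desc_i, desc_j, ratio=0.65):
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)        # Hamming distance for binary ORB descriptors

    def one_way(da, db):
        out = {}
        for pair in matcher.knnMatch(da, db, k=2):   # nearest and second-nearest neighbour
            if len(pair) < 2:
                continue
            best, second = pair
            if best.distance < ratio * second.distance:   # ratio test, formula (1)
                out[best.queryIdx] = best.trainIdx
        return out

    m_ij = one_way(desc_i, desc_j)                   # M_I->J
    m_ji = one_way(desc_j, desc_i)                   # M_J->I
    # cross test, formula (2): keep a pair only when both directions agree
    return [(qi, ti) for qi, ti in m_ij.items() if m_ji.get(ti) == qi]
```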

(3) Then perform multi-layer RANSAC [4] screening on the finely matched point pairs, selecting inlier sets of feature point pairs on multiple planes of the image, so that the final inlier set accounts for more than 80% of all matched pairs and the matching information is preserved to the greatest extent.
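One possible reading of the multi-layer RANSAC of step (3) is sketched below, assuming OpenCV: RANSAC is run repeatedly, each pass fitting a homography to the matches left over from the previous passes, so that inliers lying on several scene planes are all retained. The loop bound and the reprojection threshold are assumptions.

```python
import cv2
import numpy as np

def multilayer_ransac(pts_i, pts_j, target=0.8, max_layers=5, thresh=3.0):
    pts_i, pts_j = np.float32(pts_i), np.float32(pts_j)
    remaining = np.arange(len(pts_i))
    kept = []
    for _ in range(max_layers):
        if len(remaining) < 4 or len(kept) >= target * len(pts_i):
            break                                    # stop once ~80% of pairs are retained
        H, mask = cv2.findHomography(pts_i[remaining], pts_j[remaining],
                                     cv2.RANSAC, thresh)
        if H is None:
            break
        mask = mask.ravel().astype(bool)
        kept.extend(remaining[mask].tolist())        # inliers lying on this plane
        remaining = remaining[~mask]                 # refit RANSAC on the leftovers
    return kept                                      # indices of the retained matches
```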

(4) Map the feature matching points to more regularly and uniformly distributed vertex matching points through MDLT (Moving Direct Linear Transformation). Divide the image into a dense grid; each grid cell corresponds to one homography (projective transformation), as shown in formula (3), where X denotes the initial homogeneous coordinates (x, y, 1) of a matching point in the cell and X′ denotes the transformed coordinates (x′, y′, 1); both are three-dimensional, so the matrix H is 3×3.

X′ = H·X    (3)

Let H be:

H = | h1 h2 h3 |
    | h4 h5 h6 |
    | h7 h8 h9 |

Expanding formula (3) gives:

x′ = (h1·x + h2·y + h3) / (h7·x + h8·y + h9),  y′ = (h4·x + h5·y + h6) / (h7·x + h8·y + h9)    (5)

Arranged in matrix-multiplication form, this becomes:

a_i · h = 0    (6)

where a_i denotes the 2×9 matrix formed by the i-th pair of matching points, and h is H written as a column vector (dimension 9×1):

a_i = | x  y  1  0  0  0  -x′x  -x′y  -x′ |
      | 0  0  0  x  y  1  -y′x  -y′y  -y′ |    (7)

h = [h1 h2 h3 h4 h5 h6 h7 h8 h9]^T

Taking all M pairs of matching points into account, as in formula (8), the transformation h is solved by minimizing the squared error e_k, where A is the transformation matrix corresponding to the M matching pairs, of dimension 2M×9, and W_k is the diagonal weight matrix of the matching pairs, of dimension 2M×2M. Each element w_i^k is computed by formula (4); w_i^k is the influence weight of the i-th matching pair on the k-th grid cell and is determined by the distance between that matching point and the cell centre. In formula (4), x_k denotes the centre coordinates of the k-th cell, x_i the coordinates of the i-th matching point, and μ an adjustment parameter.

e_k = argmin ||W_k·A·h||²,  s.t. ||h|| = 1    (8)

w_i^k = exp( -||x_k - x_i||² / μ² )    (4)

Then the computed homography is used to find the matching points of the grid vertices in the other image. The vertex matching points are distributed uniformly and regularly; using them as the matching points in the grid optimization effectively reduces the amount of computation.
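A sketch of the per-cell weighted DLT of step (4), assuming NumPy and the Gaussian weight form given for formula (4); h is taken as the right singular vector of W_k·A associated with the smallest singular value, which minimizes formula (8) under ||h|| = 1.

```python
import numpy as np

def dlt_rows(p, q):
    # two rows of A for one correspondence p = (x, y) -> q = (x', y'), as in formula (7)
    x, y = p
    xp, yp = q
    return np.array([[x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp],
                     [0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp]], dtype=np.float64)

def mdlt_homography(src, dst, cell_center, mu=50.0):
    A = np.vstack([dlt_rows(p, q) for p, q in zip(src, dst)])      # 2M x 9
    d = np.linalg.norm(np.asarray(src) - np.asarray(cell_center), axis=1)
    w = np.exp(-(d ** 2) / (mu ** 2))                              # assumed Gaussian form of formula (4)
    W = np.repeat(w, 2)                                            # same weight for both rows of a pair
    _, _, Vt = np.linalg.svd(W[:, None] * A)                       # weighted DLT, formula (8)
    return Vt[-1].reshape(3, 3)                                    # h with ||h|| = 1, reshaped to H
```

Applying this per grid cell, with the cell centre as cell_center, gives the location-dependent homographies used to transfer the grid vertices to the other image.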

(5) Introduce three constraint terms based on grid optimization: a coordinate alignment constraint term for overlapping regions, a local similarity constraint term for non-overlapping regions, and a global similarity constraint term for global structural consistency. First, several flags are set: the matching relations between images are stored in a set T; the set of matching point pairs obtained after images i and j are mapped in step (4) is denoted M_ij; and, since each image has been divided into a grid, V_i and E_i denote the vertex set and edge set of the grid of image i.

The coordinate alignment constraint term is given by formula (10). Its role is to keep the coordinates of corresponding matching points as consistent as possible after grid optimization, reducing the alignment error in the overlapping regions of adjacent images. Here m(p) returns the matching point of feature point p in the other image, and Ψ(p) expresses the position of feature point p as a linear combination of the coordinates of the 4 vertices of its grid cell.

E_a = Σ_(i,j)∈T Σ_p∈M_ij || Ψ(p) - Ψ(m(p)) ||²    (10)
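The operator Ψ(p) of formula (10) can be illustrated as follows (an assumption-level sketch with axis-aligned rectangular cells): the point is written as a bilinear combination of the four vertices of its cell, so the alignment residual is linear in the unknown vertex positions.

```python
import numpy as np

def bilinear_weights(p, v00, v10, v01, v11):
    # v00..v11 are the cell corners: top-left, top-right, bottom-left, bottom-right
    u = (p[0] - v00[0]) / (v10[0] - v00[0])
    v = (p[1] - v00[1]) / (v01[1] - v00[1])
    return np.array([(1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v])

# After optimization, the same weights applied to the new vertex positions give the warped
# location of p, so Psi(p) - Psi(m(p)) in formula (10) stays linear in the vertex unknowns.
```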

The local similarity constraint term is given by formula (11); its main purpose is to ensure that the length and direction of the same vector edge do not change too much before and after grid optimization. Since the projective transformation is mainly suitable for the overlapping regions, a similarity transformation S_e is introduced for the non-overlapping regions; S_e denotes the similarity transformation of edge e and is computed as in formula (12), where c(e) and s(e) are linear combinations of the vertex variables and mainly model rotation and scaling [5]. v_j and v_k denote the positions of the two vertices of the same edge in the original image, v′_j and v′_k denote the vertex positions after grid optimization, and E_i denotes the edge set of the grid of image i.

E_l = Σ_i Σ_(j,k)∈E_i || (v′_k - v′_j) - S_e·(v_k - v_j) ||²    (11)

S_e = |  c(e)  s(e) |
      | -s(e)  c(e) |    (12)

The global similarity constraint term is given by formula (13); it aims to improve the overall structural consistency of the image sequence. Here w_e is the weight of each edge, changing gradually from the overlapping region to the non-overlapping region: the farther an edge is from the overlapping region, the larger the weight, as defined by formula (14). s_i is defined as the scale of image i, obtained by estimating the camera parameters of image i through bundle adjustment; θ_i is defined as the rotation of image i with respect to the reference image, and the invention takes the average of the angles between the line features obtained by LSD [6] line detection as the rotation angle between the two images. The parameters c(e) and s(e) were already introduced in formula (12).

E_g = Σ_i Σ_e∈E_i w_e² · [ (c(e) - s_i·cosθ_i)² + (s(e) - s_i·sinθ_i)² ]    (13)

Formula (14) (equation image not reproduced) defines the edge weight w_e from the distance d(q_k, M_i) between the cells sharing edge e and the overlapping region, the grid size R_i × C_i, and the adjustment parameters η and λ.

Here η and λ are adjustment parameters determined experimentally; in the embodiment, η is 6 and λ is 20. q̄(e) is the set of grid cells sharing edge e (1 or 2 cells, depending on whether e is a boundary edge), M_i denotes the union of all grid cells in the overlapping region of image i, d(q_k, M_i) is a function computing the distance from a cell q_k in q̄(e) to the overlapping region, and R_i and C_i denote the number of rows and columns of the grid of image i.

Formula (15) combines the three constraint terms. Minimizing it yields the coordinate of every pixel in the panorama after grid optimization, and the image registration is complete.

E = E_a + γ·E_l + E_g    (15)

In the formula, γ is the adjustment coefficient of the local similarity constraint term; in the embodiment it is 0.54.
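Because each constraint term is a sum of squares that is linear in the unknown vertex coordinates, formula (15) can be minimized as one sparse linear least-squares problem. The sketch below assumes SciPy and that the rows of each term have been assembled elsewhere (the helper inputs are hypothetical).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def solve_mesh(rows_align, rows_local, rows_global,
               rhs_align, rhs_local, rhs_global, gamma=0.54):
    # rows_* are sparse matrices whose rows encode one squared residual each;
    # scaling the local term by sqrt(gamma) realizes the gamma weight of formula (15)
    A = sp.vstack([rows_align, np.sqrt(gamma) * rows_local, rows_global]).tocsr()
    b = np.concatenate([rhs_align, np.sqrt(gamma) * rhs_local, rhs_global])
    v = lsqr(A, b)[0]                    # optimized vertex coordinates, stacked (x0, y0, x1, y1, ...)
    return v.reshape(-1, 2)
```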

These are the basic steps of the invention, as shown in Figure 1.

The invention can effectively eliminate ghosting and skew in the result, improve registration accuracy, and reduce the time consumed by registration.

Brief Description of the Drawings

Figure 1 is the architecture flowchart of the invention.

Figure 2 shows the registration result and stitched panorama for an image sequence captured by a hand-held camera.

Figure 3 shows the registration result and stitched panorama for a high-resolution image sequence captured by a UAV at high altitude.

Detailed Description of the Embodiments

The method of the invention is further described below through an example, with reference to the accompanying drawings.

For a test image sequence A, image registration is performed using the method of the invention as follows:

1. Use the ORB algorithm to perform fast feature extraction on each image in A and obtain the feature points of each image;

2. Introduce a K-D tree into the coarse matching of feature points. Since the descriptor of each feature point is a 256-bit binary string, it can be treated as 32-dimensional data with 8 bits per dimension and used to build the K-D tree; a BBF search then finds the point nearest to each feature point, giving a pair of matching points (a sketch of this 32-dimensional layout is given after this list);

3. Apply the ratio test and the cross test to the obtained matching pairs, remove the wrong matches, and keep the remainder as finely matched point pairs;

4. Perform multi-layer RANSAC screening on the finely matched point pairs, selecting inlier sets of feature point pairs on multiple planes of the image so that the final inlier set accounts for more than 80% of all matched pairs and the matching information is preserved to the greatest extent;

5. Map the feature matching points to more regularly and uniformly distributed vertex matching points through MDLT (Moving Direct Linear Transformation). Divide the image into a dense grid, each cell corresponding to one homography, then use the obtained homography to find the matching points of the grid vertices in the other image; the vertex matching points are evenly and regularly distributed, and using them as the matching points in the grid optimization effectively reduces the amount of computation;

6. Introduce the three constraint terms based on grid optimization: the coordinate alignment term for overlapping regions, the local similarity term for non-overlapping regions, and the global similarity term for global structural consistency; obtain the coordinates of the pixels in the panorama by minimizing the error function.
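Referring back to step 2 above, the following sketch (assuming scikit-learn) shows how the 32-byte ORB descriptors can be fed to a K-D tree as 32-dimensional vectors; the Euclidean metric used here is only a stand-in for the Hamming-distance BBF search of the invention.

```python
import numpy as np
from sklearn.neighbors import KDTree

def coarse_match(desc_i, desc_j):
    # each ORB descriptor is 32 uint8 bytes = 256 bits, treated as a 32-dimensional vector
    tree = KDTree(desc_j.astype(np.float32))
    dist, idx = tree.query(desc_i.astype(np.float32), k=2)
    # idx[:, 0] is the nearest neighbour of each descriptor of image I in image J;
    # dist feeds the ratio test of formula (1) in the fine-matching stage
    return idx, dist
```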

In summary, the image registration of test image sequence A is complete; the registration results and the final stitched panoramas are shown in Figures 2 and 3.

References:

[1] ORB (Oriented FAST and Rotated BRIEF) feature extraction: E. Rublee, V. Rabaud, K. Konolige, et al. ORB: An efficient alternative to SIFT or SURF [C]. International Conference on Computer Vision, 2011: 2564-2571.

[2] FAST corner detection: E. Rosten, T. Drummond. Machine learning for high-speed corner detection [C]. European Conference on Computer Vision, 2006: 430-443.

[3] BRIEF descriptor: M. Calonder, V. Lepetit, C. Strecha, et al. BRIEF: Binary Robust Independent Elementary Features [C]. European Conference on Computer Vision, 2010: 778-792.

[4] RANSAC (random sample consensus): M. A. Fischler, R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automatic cartography [J]. Communications of the ACM, 1981, 24(6): 381-395.

[5] Details of the similarity transformation: T. Igarashi, Y. Igarashi. Implementing as-rigid-as-possible shape manipulation and surface flattening [J]. Journal of Graphics, GPU & Game Tools, 2009.

[6] LSD (Line Segment Detector), a line feature detection algorithm: R. G. von Gioi, J. Jakubowicz, J. M. Morel, et al. LSD: a line segment detector [J]. Image Processing On Line, 2012, 2: 35-55.

Claims (1)

1. A grid-optimization-based image registration method for panoramic image stitching, characterized by comprising the following specific steps:

(1) Perform rapid feature extraction using ORB: perform feature detection using the FAST algorithm, then generate descriptors of the feature points based on the improved BRIEF descriptor, the descriptors comprising the scale, position, and orientation information of the feature points;

(2) Perform coarse matching of feature points using a K-D tree and the best-bin-first algorithm, then apply the ratio test of formula (1) to the obtained matches, where p is the current feature point and the test requires that the ratio of the Hamming distance to its nearest neighbouring feature point p_best-closed to the Hamming distance to its second-nearest neighbouring feature point p_second-closed be smaller than a threshold (ratio); apply the cross test of formula (2): traverse the feature points of image I and find their matches in image J, denoted M_I->J, then traverse the feature points of image J and find the corresponding points in image I, denoted M_J->I; the cross test accepts a pair as a correct match only when the two directions correspond to each other;

d_H(p, p_best-closed) / d_H(p, p_second-closed) < ratio    (1)

M_I&J = M_I->J ∩ M_J->I    (2)

where M_I&J denotes the correct matches between image I and image J;

(3) Then perform multi-layer RANSAC screening on the finely matched point pairs, selecting inlier sets of feature point pairs on multiple planes of the image, so that the final inlier set accounts for more than 80% of all matched pairs and the matching information is preserved to the greatest extent;

(4) Map the feature matching points to more regularly and uniformly distributed vertex matching points through moving direct linear transformation; divide the image into a dense grid, each cell corresponding to one homography, as shown in formula (3), where X denotes the initial homogeneous coordinates (x, y, 1) of a matching point in the cell and X′ denotes the transformed coordinates (x′, y′, 1), both three-dimensional, so the matrix H is 3×3;

X′ = H·X    (3)

Let H be:

H = | h1 h2 h3 |
    | h4 h5 h6 |
    | h7 h8 h9 |

Expanding formula (3) gives:

x′ = (h1·x + h2·y + h3) / (h7·x + h8·y + h9),  y′ = (h4·x + h5·y + h6) / (h7·x + h8·y + h9)    (5)

Arranged in matrix-multiplication form, this becomes:

a_i · h = 0    (6)

where a_i denotes the 2×9 matrix formed by the i-th pair of matching points, and h is H written as a column vector of dimension 9×1:

a_i = | x  y  1  0  0  0  -x′x  -x′y  -x′ |
      | 0  0  0  x  y  1  -y′x  -y′y  -y′ |    (7)

h = [h1 h2 h3 h4 h5 h6 h7 h8 h9]^T

Taking all M pairs of matching points into account, as in formula (8), solve the transformation h by minimizing the squared error e_k, where A is the transformation matrix corresponding to the M matching pairs, of dimension 2M×9, and W_k is the diagonal weight matrix of the matching pairs, of dimension 2M×2M; each element w_i^k is computed by formula (4); w_i^k is the influence weight of the i-th matching pair on the k-th grid cell and is determined by the distance between that matching point and the cell centre; in formula (4), x_k denotes the centre coordinates of the k-th cell, x_i the coordinates of the i-th matching point, and μ is an adjustment parameter;

e_k = argmin ||W_k·A·h||²,  s.t. ||h|| = 1    (8)

w_i^k = exp( -||x_k - x_i||² / μ² )    (4)

then use the computed homography to find the matching points of the grid vertices in the other image; the vertex matching points are uniformly distributed and serve as the matching points in the grid optimization;

(5) Introduce three constraint terms based on grid optimization: a coordinate alignment constraint term for overlapping regions, a local similarity constraint term for non-overlapping regions, and a global similarity constraint term for global structural consistency; first set several flags: store the matching relations between images in a set T, denote by M_ij the set of matching point pairs obtained by mapping image i and image j in step (4), and, since the image has been divided into a grid, let V_i and E_i be the vertex set and edge set of the grid of image i;

the coordinate alignment constraint term is given by formula (10); it keeps the coordinates of the grid-optimized matching points as consistent as possible and reduces the alignment error in the overlapping regions of adjacent images; m(p) returns the matching point of feature point p in the other image, and Ψ(p) expresses the position of feature point p as a linear combination of the coordinates of 4 grid vertices;

E_a = Σ_(i,j)∈T Σ_p∈M_ij || Ψ(p) - Ψ(m(p)) ||²    (10)

the local similarity constraint term is given by formula (11); it ensures that the length and direction of the same vector edge do not change too much before and after grid optimization; since the projective transformation is mainly suitable for the overlapping region, a similarity transformation S_e is introduced for the non-overlapping region, S_e denoting the similarity transformation of edge e, computed as in formula (12), where c(e) and s(e) are linear combinations of the vertex variables, mainly modelling rotation and scaling; v_j and v_k denote the positions of the two vertices of the same edge in the original image, v′_j and v′_k denote the vertex positions after grid optimization, and E_i denotes the edge set of the grid;

E_l = Σ_i Σ_(j,k)∈E_i || (v′_k - v′_j) - S_e·(v_k - v_j) ||²    (11)

S_e = |  c(e)  s(e) |
      | -s(e)  c(e) |    (12)

the global similarity constraint term is given by formula (13) and aims to improve the structural consistency of the whole image sequence; w_e is the weight of each edge, changing gradually from the overlapping region to the non-overlapping region, the farther from the overlapping region the larger the weight, as defined by formula (14); s_i is defined as the scale of image i, obtained by estimating the camera parameters of image i through bundle adjustment; θ_i is defined as the rotation of image i with respect to the reference image, taken as the average of the angles between the line features detected by LSD feature line detection;

E_g = Σ_i Σ_e∈E_i w_e² · [ (c(e) - s_i·cosθ_i)² + (s(e) - s_i·sinθ_i)² ]    (13)

formula (14) (equation image not reproduced) defines the edge weight w_e from the distance d(q_k, M_i) between the cells sharing edge e and the overlapping region, the grid size R_i × C_i, and the adjustment parameters η and λ;

where η and λ are adjustment parameters, q̄(e) is the set of grid cells sharing edge e, M_i denotes the union of all grid cells in the overlapping region of image i, d(q_k, M_i) is a function computing the distance from a cell q_k in q̄(e) to the overlapping region, and R_i and C_i denote the number of rows and columns of the grid of image i;

combine the three constraint terms, as in formula (15); minimizing this formula yields the coordinate value of each pixel in the panorama after grid optimization, and the image registration is complete;

E = E_a + γ·E_l + E_g    (15)

wherein γ is the adjustment coefficient of the local similarity constraint term.
CN201910391076.4A 2019-05-11 2019-05-11 A Registration Method Based on Grid Optimization for Panoramic Image Stitching Active CN110211043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910391076.4A CN110211043B (en) 2019-05-11 2019-05-11 A Registration Method Based on Grid Optimization for Panoramic Image Stitching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910391076.4A CN110211043B (en) 2019-05-11 2019-05-11 A Registration Method Based on Grid Optimization for Panoramic Image Stitching

Publications (2)

Publication Number Publication Date
CN110211043A CN110211043A (en) 2019-09-06
CN110211043B true CN110211043B (en) 2023-06-27

Family

ID=67785790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910391076.4A Active CN110211043B (en) 2019-05-11 2019-05-11 A Registration Method Based on Grid Optimization for Panoramic Image Stitching

Country Status (1)

Country Link
CN (1) CN110211043B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781903B (en) * 2019-10-12 2022-04-01 中国地质大学(武汉) Unmanned aerial vehicle image splicing method based on grid optimization and global similarity constraint
IL271518B2 (en) * 2019-12-17 2023-04-01 Elta Systems Ltd Radiometric correction in image mosaicing
CN111160466B (en) * 2019-12-30 2022-02-22 深圳纹通科技有限公司 Feature matching algorithm based on histogram statistics
CN111242848B (en) * 2020-01-14 2022-03-04 武汉大学 Method and system for stitching of binocular camera images based on regional feature registration
CN111369495B (en) * 2020-02-17 2024-02-02 珀乐(北京)信息科技有限公司 Panoramic image change detection method based on video
CN111507904B (en) * 2020-04-22 2023-06-02 华中科技大学 Image splicing method and device for microscopic printing pattern
CN111640065B (en) * 2020-05-29 2023-06-23 深圳拙河科技有限公司 Image stitching method and imaging device based on camera array
CN111899164B (en) * 2020-06-01 2022-11-15 东南大学 An Image Stitching Method for Multi-focal Scenes
CN111968035B (en) * 2020-08-05 2023-06-20 成都圭目机器人有限公司 Image relative rotation angle calculation method based on loss function
CN112437253B (en) * 2020-10-22 2022-12-27 中航航空电子有限公司 Video splicing method, device, system, computer equipment and storage medium
CN112270755B (en) * 2020-11-16 2024-04-05 Oppo广东移动通信有限公司 Three-dimensional scene construction method and device, storage medium and electronic equipment
CN112435163B (en) * 2020-11-18 2022-10-18 大连理工大学 Unmanned aerial vehicle aerial image splicing method based on linear feature protection and grid optimization
CN113112531B (en) * 2021-04-02 2024-05-07 广州图匠数据科技有限公司 Image matching method and device
CN113052765B (en) * 2021-04-23 2021-10-08 中国电子科技集团公司第二十八研究所 Panoramic image splicing method based on optimal grid density model
CN113450255A (en) * 2021-06-04 2021-09-28 西安超越申泰信息科技有限公司 Aerial image splicing method and device
CN115019076A (en) * 2022-06-27 2022-09-06 杭州萤石软件有限公司 A kind of space line position calculation method, mobile robot and electronic equipment
CN117221466B (en) * 2023-11-09 2024-01-23 北京智汇云舟科技有限公司 Video stitching method and system based on grid transformation
CN118822904B (en) * 2024-09-19 2025-01-21 天津象小素科技有限公司 An adaptive composition method and system based on feature fusion


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016086754A1 (en) * 2014-12-03 2016-06-09 中国矿业大学 Large-scale scene video image stitching method
CN106485740A (en) * 2016-10-12 2017-03-08 武汉大学 A kind of combination point of safes and the multidate SAR image registration method of characteristic point
CN107067370A (en) * 2017-04-12 2017-08-18 长沙全度影像科技有限公司 A kind of image split-joint method based on distortion of the mesh
CN108470324A (en) * 2018-03-21 2018-08-31 深圳市未来媒体技术研究院 A kind of binocular stereo image joining method of robust
CN109389555A (en) * 2018-09-14 2019-02-26 复旦大学 A kind of Panorama Mosaic method and device
CN109658370A (en) * 2018-11-29 2019-04-19 天津大学 Image split-joint method based on mixing transformation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on free-viewpoint image registration and stitching methods; Liu Jian; Master's thesis, University of Electronic Science and Technology of China, Information Science and Technology series (No. 09); 23-47 *

Also Published As

Publication number Publication date
CN110211043A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110211043B (en) A Registration Method Based on Grid Optimization for Panoramic Image Stitching
CN107918927B (en) A fast image stitching method with matching strategy fusion and low error
CN104732482B (en) A kind of multi-resolution image joining method based on control point
US10257501B2 (en) Efficient canvas view generation from intermediate views
CN109064404A (en) It is a kind of based on polyphaser calibration panorama mosaic method, panoramic mosaic system
CN107665483B (en) Calibration-free convenient monocular head fisheye image distortion correction method
CN112767542A (en) Three-dimensional reconstruction method of multi-view camera, VR camera and panoramic camera
CN104599258B (en) A kind of image split-joint method based on anisotropic character descriptor
CN107767339B (en) Binocular stereo image splicing method
CN115205489A (en) Three-dimensional reconstruction method, system and device in large scene
CN106910208A (en) A kind of scene image joining method that there is moving target
CN107660336A (en) For the image obtained from video camera, possess the image processing apparatus and its method of automatic compensation function
CN104408689A (en) Holographic-image-based streetscape image fragment optimization method
CN105303615A (en) Combination method of two-dimensional stitching and three-dimensional surface reconstruction of image
CN112085659A (en) Panorama splicing and fusing method and system based on dome camera and storage medium
CN112801870B (en) An image stitching method based on grid optimization, stitching system and readable storage medium
CN112862683B (en) A Neighborhood Image Stitching Method Based on Elastic Registration and Grid Optimization
CN110111250A (en) A kind of automatic panorama unmanned plane image split-joint method and device of robust
CN111866523B (en) Panoramic video synthesis method and device, electronic equipment and computer storage medium
Fu et al. Image stitching techniques applied to plane or 3-D models: a review
CN112258581B (en) On-site calibration method for panoramic camera with multiple fish glasses heads
CN109767381A (en) A shape-optimized rectangular panoramic image construction method based on feature selection
CN116309844A (en) Three-dimensional measurement method based on single aviation picture of unmanned aerial vehicle
CN108269234A (en) A kind of lens of panoramic camera Attitude estimation method and panorama camera
Bergmann et al. Gravity alignment for single panorama depth inference

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant