CN110211043B - Registration method based on grid optimization for panoramic image stitching - Google Patents
- Publication number
- CN110211043B (application CN201910391076.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- matching
- points
- grid
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 22
- 238000005457 optimization Methods 0.000 title claims abstract description 21
- 239000011159 matrix material Substances 0.000 claims abstract description 23
- 238000000605 extraction Methods 0.000 claims abstract description 7
- 230000009466 transformation Effects 0.000 claims description 18
- 238000012360 testing method Methods 0.000 claims description 10
- 238000001514 detection method Methods 0.000 claims description 5
- 238000012216 screening Methods 0.000 claims description 5
- 238000013507 mapping Methods 0.000 claims description 4
- 238000004364 calculation method Methods 0.000 claims description 3
- 238000009827 uniform distribution Methods 0.000 claims description 2
- 238000006243 chemical reaction Methods 0.000 claims 1
- 238000005516 engineering process Methods 0.000 abstract description 4
- 238000012937 correction Methods 0.000 abstract description 2
- 238000002059 diagnostic imaging Methods 0.000 abstract description 2
- 230000006872 improvement Effects 0.000 abstract description 2
- 238000012545 processing Methods 0.000 description 3
- 238000007796 conventional method Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 230000006870 function Effects 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 238000000844 transformation Methods 0.000 description 2
- 230000000007 visual effect Effects 0.000 description 2
- 230000008859 change Effects 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 238000010191 image analysis Methods 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to the technical field of digital images, and specifically relates to a grid-optimization-based registration method suitable for free-viewpoint image stitching. In recent years, panoramic images have attracted great attention owing to their application prospects in fields such as virtual reality and medical imaging. Image registration mainly acquires the position parameters of each image within the panorama. Traditional image registration must execute a series of operations such as feature point extraction, matching, homography matrix solving, and camera parameter correction. The invention replaces the classical registration pipeline with a registration method based on grid optimization, which not only brings a clear improvement in speed but is also suitable for stitching large-parallax images captured from free viewpoints. In the method, ORB fast feature extraction acquires the feature points of the images, a coarse-to-fine matching strategy is introduced, and finally three grid-optimization-based constraint terms are introduced to obtain the optimal registration parameters between images by minimizing an error function.
Description
Technical Field
The invention belongs to the technical field of digital image processing, and particularly relates to an image registration method based on grid optimization for panoramic image stitching.
Background
With the rapid development of information technology and rising living standards, the demand for high-quality panoramic images keeps growing; panoramic images have good application prospects in virtual reality, panoramic live broadcasting, medical imaging, and driving assistance. Functionally, people can obtain more information from a panoramic image and enjoy a better visual experience.
To acquire an image with a wide angle of view, the conventional approach is to use a wide-angle lens such as a fisheye lens, which can capture almost the entire scene across 180 degrees on the horizontal plane. This approach, however, has at least three disadvantages: it introduces distortion visible to the naked eye, resolution drops because the field of view is too large, and wide-angle lenses are costly to manufacture.
Against this background and demand, image stitching technology has been developed. Its goal is to obtain, through a series of processing steps on a group of small-view-angle image sequences with overlapping areas, a panoramic image with a large viewing angle and ultra-high resolution that contains all of the scene information. Since each lens is responsible for shooting only part of the scene, adopting high-definition lenses yields higher detail resolution, while the use of ordinary lenses also avoids the distortion introduced by a wide angle.
The traditional image stitching pipeline mainly comprises the following steps: image acquisition, SIFT feature extraction, feature matching, homography matrix solving, camera parameter correction, and image fusion. A single homography matrix can align only one plane; if the original images contain parallax and stretching, the stitched result produced by the conventional method exhibits severe ghosting and tilting, and stitching may even fail outright (the result being completely distorted).
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an image registration method based on grid optimization for panoramic image stitching.
The invention is suitable for image registration of free view angles, and has excellent stability for scale transformation, brightness transformation and rotation transformation of images. Therefore, the invention not only can be applied to panoramic monitoring with fixed lenses, but also is applicable to unmanned aerial vehicle surveying with moving and rotating lenses and the like.
For the acquisition of image sequences, the present invention uses three methods: first, a fixed camera rotates while shooting to capture the surrounding panorama, yielding the most regular image sequence; second, a handheld camera shoots an image sequence with adjacent overlap of about 20%, and the acquired sequence may contain rotation, shake, translation, and other transformations; third, an unmanned aerial vehicle acquires images at high altitude with a fixed camera position but adjustable shooting direction, and the acquired images exhibit scale and rotation transformations and higher resolution.
The invention provides an image registration method based on grid optimization for panoramic image stitching, which comprises the following specific steps:
(1) Fast feature extraction is performed using ORB [1]. Feature detection is performed with the FAST [2] algorithm, and descriptors of the feature points are then generated based on the improved BRIEF [3] descriptor; these contain the scale, position, and direction information of the feature points.
(2) Coarse matching of feature points is performed using a K-D tree (a high-dimensional tree) together with the best node first (BBF) algorithm. The obtained matching points are then subjected to the ratio test of equation (1): for the current feature point p, the ratio of the Hamming distance from p to its nearest-neighbor feature point p_best to that from p to its second-nearest feature point p_second must be less than a threshold (ratio), which is typically taken to be 0.65. Equation (2) is then applied as a cross test: the feature points in image I are traversed and their matching points in image J are found, denoted M_{I->J}; the feature points in image J are then traversed and their corresponding points in image I are found, denoted M_{J->I}. The cross test considers a pair a correct match only when the two points correspond to each other.

d(p, p_best) / d(p, p_second) < ratio  (1)

M_{I&J} = M_{I->J} ∩ M_{J->I}  (2)
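The ratio test of equation (1) and the cross test of equation (2) can be sketched in plain NumPy as follows; the descriptors, the one-bit noise, and the helper names (`hamming`, `match_ratio`, `cross_test`) are illustrative assumptions, not part of the patent:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors stored as uint8."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_ratio(desc_a, desc_b, ratio=0.65):
    """Ratio test, equation (1): keep a query only if the distance to its
    nearest neighbour is less than ratio * distance to its second nearest."""
    out = {}
    for qi, d in enumerate(desc_a):
        dists = sorted((hamming(d, t), ti) for ti, t in enumerate(desc_b))
        (d1, t1), (d2, _) = dists[0], dists[1]
        if d2 > 0 and d1 / d2 < ratio:
            out[qi] = t1
    return out

def cross_test(m_ij, m_ji):
    """Cross test, equation (2): keep only mutually consistent pairs."""
    return {(i, j) for i, j in m_ij.items() if m_ji.get(j) == i}

# Illustrative data: image J descriptors, and image I descriptors that are
# identical up to one flipped bit (simulated noise).
rng = np.random.default_rng(42)
desc_j = rng.integers(0, 256, size=(20, 32), dtype=np.uint8)
desc_i = desc_j.copy()
desc_i[:, 0] ^= 1

good = cross_test(match_ratio(desc_i, desc_j), match_ratio(desc_j, desc_i))
print(len(good))  # all 20 pairs survive both tests
```

A production matcher would use a K-D tree with BBF (or OpenCV's `BFMatcher` with Hamming norm) rather than this brute-force loop; the filtering logic is the same.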
(3) Multi-layer RANSAC [4] screening is then applied to the finely matched point pairs, screening inlier sets of feature point pairs on several planes of the image, so that the final inlier sets account for more than 80% of all matched pairs and the matching information is preserved to the greatest extent.
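A minimal sketch of the multi-layer screening idea, under the assumption that each pass fits one planar homography by basic RANSAC and removes its inliers until at least 80% of the matches are kept (the patent does not specify these internals, so this is illustrative):

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: stack the a_i rows of equation (6) and take
    the smallest right singular vector as h."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    return np.linalg.svd(np.asarray(rows, float))[2][-1].reshape(3, 3)

def project(h, pts):
    p = np.c_[pts, np.ones(len(pts))] @ h.T
    return p[:, :2] / p[:, 2:3]

def ransac(src, dst, thresh=2.0, iters=300, seed=0):
    """Plain RANSAC for a single homography: best 4-point hypothesis wins."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        err = np.linalg.norm(project(fit_homography(src[idx], dst[idx]), src) - dst, axis=1)
        inl = err < thresh
        if inl.sum() > best.sum():
            best = inl
    return best

def multilayer_ransac(src, dst, min_fraction=0.8):
    """Multi-layer screening: fit one plane per pass, remove its inliers,
    repeat until at least min_fraction of all matches are retained."""
    keep = np.zeros(len(src), bool)
    remaining = np.arange(len(src))
    while keep.sum() < min_fraction * len(src) and len(remaining) >= 4:
        inl = ransac(src[remaining], dst[remaining])
        if not inl.any():
            break
        keep[remaining[inl]] = True
        remaining = remaining[~inl]
    return keep

# Illustrative two-plane scene: each half of the matches follows its own
# (here, purely translational) homography.
rng = np.random.default_rng(7)
src = rng.uniform(0, 100, (40, 2))
dst = src.copy()
dst[:20] += [5.0, 0.0]
dst[20:] += [0.0, 9.0]
keep = multilayer_ransac(src, dst)
print(int(keep.sum()), "of", len(src), "matches retained")
```

A single-pass RANSAC would keep only one of the two planes (half the matches); the multi-layer loop recovers both, which is what lets the inlier count exceed the 80% target.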
(4) The feature matching points are mapped by MDLT (moving direct linear transformation) into vertex matching points with a more regular and uniform distribution. The image is divided into a dense grid, and each grid cell corresponds to one homography (projective transformation), as in equation (3), where x = (x, y, 1)^T denotes the initial homogeneous coordinates of a matching point in the grid and x' = (x', y', 1)^T the transformed coordinates; both are three-dimensional, so the matrix H has dimension 3 × 3:

x' = H x  (3)

Let H be:

H = [ h_1 h_2 h_3 ; h_4 h_5 h_6 ; h_7 h_8 h_9 ]

Then equation (3) expands to:

x' = (h_1 x + h_2 y + h_3) / (h_7 x + h_8 y + h_9),  y' = (h_4 x + h_5 y + h_6) / (h_7 x + h_8 y + h_9)  (5)

The data are arranged into matrix-multiplication format, namely:

a_i h = 0  (6)

where a_i denotes the 2 × 9 matrix formed by the i-th pair of matching points, and h is the column-vector format of H (dimension 9 × 1):

h = [h_1 h_2 h_3 h_4 h_5 h_6 h_7 h_8 h_9]^T  (7)

Taking all M pairs of matching points into account, as in equation (8), the transformation matrix h is solved by minimizing the squared error e_k. Here A is the 2M × 9 matrix stacking the a_i of all M matching pairs; W_k is the weight matrix of the k-th grid cell, with dimension 2M × 2M, whose elements w_i^k are calculated by equation (4); w_i^k is the influence weight of the i-th pair of matching points on the k-th grid cell and is determined by the distance from the matching points to the grid-cell center. In equation (4), x_k denotes the center coordinates of the k-th grid cell, x_i the coordinates of the i-th pair of matching points, and μ is an adjustment parameter:

w_i^k = exp( -||x_k - x_i||^2 / μ^2 )  (4)

e_k = argmin_h ||W_k A h||^2   s.t. ||h|| = 1  (8)
Then, the computed homography matrices are applied to find the matching points of the grid vertices in the other image. Because these vertex matching points are uniformly and regularly distributed, using them as the matching points in the grid optimization effectively reduces the amount of computation.
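The per-cell weighted solve of equation (8) reduces to taking the smallest right singular vector of W_k A; the Gaussian weight form and the parameter values below are assumptions for illustration:

```python
import numpy as np

def mdlt_cell_homography(src, dst, center, mu=100.0):
    """Equation (8) for one grid cell: h = argmin ||W_k A h||^2, ||h|| = 1,
    solved as the smallest right singular vector of W_k A. The Gaussian
    weight w_i = exp(-||center - x_i||^2 / mu^2) is an assumed concrete
    form of the distance-based weighting of equation (4)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, float)                       # 2M x 9
    w = np.exp(-np.sum((src - center) ** 2, axis=1) / mu ** 2)
    Wk = np.repeat(w, 2)                              # same weight for both rows of a pair
    h = np.linalg.svd(Wk[:, None] * A)[2][-1]
    H = h.reshape(3, 3)
    return H / H[2, 2]

# Illustrative matches: a pure translation by (10, 5), which the weighted
# DLT should recover exactly for any cell center.
rng = np.random.default_rng(1)
src = rng.uniform(0, 200, (30, 2))
dst = src + np.array([10.0, 5.0])
H = mdlt_cell_homography(src, dst, center=np.array([100.0, 100.0]))
p = H @ np.array([100.0, 100.0, 1.0])
print(p[:2] / p[2])  # the cell centre maps to ~(110, 105)
```

Solving this once per grid cell, with W_k re-weighted around each cell center, is what makes the transformation locally adaptive rather than a single global homography.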
(5) Three constraint terms based on grid optimization are introduced: a coordinate alignment constraint term for the overlapping regions, a local similarity constraint term for the non-overlapping regions, and a global similarity constraint term for global structural consistency. First, several notations are set: the matching relations between images are stored in a set T; the set of matching point pairs obtained by mapping between image i and image j in step (4) is denoted M_ij; and, since each image is divided into a grid, V_i and E_i denote the vertex set and edge set of the mesh in image i.
The coordinate alignment constraint term is shown in equation (10); it ensures that the coordinates of the grid-optimized matching points coincide as closely as possible, reducing the alignment error in the overlapping areas of adjacent images. Here m(p) returns the matching point of feature point p in the other image, and ψ(p) expresses the position of feature point p as a linear combination of the coordinates of the 4 vertices of its grid cell:

E_A = Σ_{(i,j)∈T} Σ_{p∈M_ij} || ψ(p) − ψ(m(p)) ||^2  (10)
The local similarity constraint term is shown in equation (11); its main purpose is to ensure that the length and direction of each mesh edge do not change greatly between before and after grid optimization. Since the projective matrix is mainly suitable for the overlapping area, a similarity transformation matrix S_jk is introduced in the non-overlapping area to represent the transformation of edge e_jk; it is calculated as in equation (12), where c(e_jk) and s(e_jk) are linear combinations of the vertex variables, mainly modeling rotation and scale transformations [5]; v_j and v_k denote the positions of the two endpoints of an edge in the original image, v'_j and v'_k the vertex positions after grid optimization, and E_i the edge set of the mesh:

E_L = Σ_i Σ_{(j,k)∈E_i} || (v'_k − v'_j) − S_jk (v_k − v_j) ||^2  (11)

S_jk = [ c(e_jk)  −s(e_jk) ; s(e_jk)  c(e_jk) ]  (12)
The global similarity constraint term is shown in equation (13) and is intended to promote overall structural consistency across the image sequence. Here w(e_jk) is the weight of each edge; it changes gradually from the overlapping region to the non-overlapping region, growing with the distance from the overlapping region, and is defined in equation (14). s_i is defined as the scale measure of image i, obtained by estimating the camera parameters of image i by bundle adjustment; θ_i is defined as the rotation of image i relative to the reference image, taken as the mean of the angles between the line features detected by the LSD [6] line detector in the two images; the terms c(e_jk) and s(e_jk) were introduced in equation (12):

E_G = Σ_i Σ_{e_jk∈E_i} w(e_jk) [ (c(e_jk) − s_i cos θ_i)^2 + (s(e_jk) − s_i sin θ_i)^2 ]  (13)

w(e_jk) = (1/|Φ_jk|) Σ_{q_k∈Φ_jk} ( λ · d(q_k, M_i) / sqrt(R_i^2 + C_i^2) + η )  (14)

where η and λ are tuning parameters determined experimentally (in the embodiment, η is 6 and λ is 20); Φ_jk is the set of grid cells sharing edge e_jk (1 or 2 cells, depending on whether it is a boundary edge); M_i is the union of all grid cells in the overlapping area of image i; d(q_k, M_i) is a function computing the distance from grid cell q_k in the set Φ_jk to the overlapping region; and R_i and C_i are the numbers of rows and columns of the grid of image i.
The three constraint terms are combined as in equation (15); minimizing this expression yields the grid-optimized coordinate values of every pixel point in the panorama, completing the image registration:

E = E_A + γ E_L + E_G  (15)

where γ is the adjustment coefficient of the local similarity constraint term; 0.54 is used in the embodiment.
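Because every term in equation (15) is quadratic in the mesh-vertex coordinates, the minimization is a linear least-squares problem. The toy below sketches this on a single cell, using only the alignment term (10) plus a crude stand-in regularizer in place of the similarity terms (11)-(13); all names and values are illustrative assumptions:

```python
import numpy as np

def bilinear_weights(p, x0, y0, cw, ch):
    """psi(p): express point p inside a mesh cell as a linear combination
    of the cell's 4 vertex coordinates, as used by alignment term (10)."""
    u, v = (p[0] - x0) / cw, (p[1] - y0) / ch
    return np.array([(1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v])

# Toy single-cell mesh; the unknowns are the optimized vertex positions,
# flattened as [x1, y1, x2, y2, x3, y3, x4, y4].
V0 = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 40.0], [40.0, 40.0]])
p, q = np.array([30.0, 10.0]), np.array([33.0, 12.0])  # psi(p) should reach q
wts = bilinear_weights(p, 0.0, 0.0, 40.0, 40.0)

gamma = 0.54  # local-similarity coefficient quoted in the embodiment
rows, rhs = [], []
# Alignment term (10): one least-squares row per coordinate of psi(p) = q.
for c in range(2):
    r = np.zeros(8)
    r[c::2] = wts
    rows.append(r)
    rhs.append(q[c])
# Crude stand-in regularizer: keep each vertex near its original position
# (NOT the actual similarity terms; those couple neighbouring vertices).
for k in range(8):
    r = np.zeros(8)
    r[k] = np.sqrt(gamma)
    rows.append(r)
    rhs.append(np.sqrt(gamma) * V0.flat[k])

V_opt = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0].reshape(4, 2)
print(wts @ V_opt)  # pulled from p = (30, 10) toward q = (33, 12)
```

The full method stacks rows of this kind for every matching point, every mesh edge (local term), and every weighted edge (global term), then solves one sparse least-squares system for all vertex coordinates at once.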
The above are the basic steps of the present invention, as shown in Fig. 1.
The invention can effectively eliminate ghost and inclination in the result graph, improve the registration precision and reduce the registration time consumption.
Drawings
FIG. 1 is a flow chart of an architecture of the present invention.
Fig. 2 is a registration result diagram and a stitching panorama of a sequence of images captured by a handheld camera.
Fig. 3 is a registration result diagram and a stitching panorama of a high-resolution image sequence shot by the unmanned aerial vehicle.
Detailed Description
The process according to the invention is further described below by way of example with reference to the accompanying drawings.
For a set of test image sequences a, the method of the invention is used for image registration, and the specific process is as follows:
1. carrying out rapid feature extraction on each image in the A by using an ORB algorithm to obtain feature points of each image;
2. a K-D tree is introduced for coarse matching of the feature points: the descriptor of each feature point is a 256-bit binary string, which can be treated as 32-dimensional data with 8 bits per dimension for constructing the K-D tree; BBF search then finds the point nearest to each feature point, forming a pair of matching points;
3. the ratio test and the cross test are applied to the obtained matching points, removing erroneous matches; the remaining matching points form the fine matching point pairs;
4. multi-layer RANSAC screening is carried out on the fine matching point pairs, and inlier sets of feature point pairs are screened on several planes of the image, so that the final inlier sets account for more than 80% of the total number of matched pairs and the matching information is preserved to the maximum extent;
5. the feature matching points are mapped by MDLT (moving direct linear transformation) into vertex matching points with a more regular and uniform distribution: the image is divided into a dense grid, each grid cell corresponds to one homography (projective transformation), the homographies are then applied to find the matching points of the grid vertices in the other image, and these uniformly and regularly distributed vertex matching points are used as the matching points in the grid optimization, effectively reducing the amount of computation;
6. three constraint terms based on grid optimization are introduced: a coordinate alignment constraint term for the overlapping area, a local similarity constraint term for the non-overlapping area, and a global similarity constraint term for global structural consistency; the coordinate positions of the pixel points on the panorama are then obtained by minimizing the error function.
In summary, the image registration of the test image sequence a is completed, and the obtained registration result and the final stitched panorama are shown in fig. 2 and 3.
References:
【1】 ORB (Oriented FAST and Rotated BRIEF) feature extraction algorithm: E. Rublee, V. Rabaud, K. Konolige, et al. ORB: An efficient alternative to SIFT or SURF [C]. International Conference on Computer Vision, 2011: 2564-2571.
【2】 FAST corner detection algorithm: E. Rosten, T. Drummond. Machine learning for high-speed corner detection [C]. European Conference on Computer Vision, 2006: 430-443.
【3】 BRIEF binary descriptor: M. Calonder, V. Lepetit, C. Strecha, et al. BRIEF: Binary Robust Independent Elementary Features [C]. European Conference on Computer Vision, 2010: 778-792.
【4】 Random sample consensus (RANSAC) algorithm: M.A. Fischler, R.C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography [J]. Communications of the ACM, 1981, 24(6): 381-395.
【5】 Details of the similarity transformation: T. Igarashi, Y. Igarashi. Implementing as-rigid-as-possible shape manipulation and surface flattening [J]. Journal of Graphics, GPU & Game Tools, 2009.
【6】 LSD (Line Segment Detector), a line feature detection algorithm: R.G. von Gioi, J. Jakubowicz, J.M. Morel, et al. LSD: a line segment detector [J]. Image Processing On Line, 2012, 2: 35-55.
Claims (1)
1. The grid optimization-based image registration method for panoramic image stitching is characterized by comprising the following specific steps of:
(1) Performing rapid feature extraction by using ORB; performing feature detection by using a FAST algorithm, and then generating related descriptors of feature points based on the improved BRIEF descriptor, wherein the related descriptors comprise scale information, position and direction information of the feature points;
(2) Coarse matching of the feature points is performed using a K-D tree with the best node first (BBF) algorithm; the obtained matching points are then subjected to the ratio test of equation (1), where p is the current feature point: the ratio of the Hamming distance from p to its nearest-neighbor feature point p_best to that from p to its second-nearest feature point p_second must be less than a threshold ratio,

d(p, p_best) / d(p, p_second) < ratio  (1)

the cross test of equation (2) is then applied: the feature points of image I are traversed and their matching points in image J are found, denoted M_{I->J}; the feature points of image J are then traversed and their corresponding points in image I are found, denoted M_{J->I}; the cross test accepts a pair as a correct match only when the two points correspond to each other,

M_{I&J} = M_{I->J} ∩ M_{J->I}  (2)

where M_{I&J} denotes the set of correct matches between image I and image J;
(3) Then, carrying out multilayer RANSAC screening on the matched point pairs after the fine matching, and screening the inner point sets of the characteristic point pairs in a plurality of planes of the image, so that the number of the final inner point sets accounts for more than 80% of the total matched pairs, and reserving the matching information to the maximum extent;
(4) The feature matching points are mapped by moving direct linear transformation into vertex matching points with a more regular and uniform distribution; the image is divided into a dense grid, and each grid cell corresponds to one homography (projective transformation), as shown in equation (3), where x = (x, y, 1)^T denotes the initial homogeneous coordinates of a matching point in the grid and x' = (x', y', 1)^T the transformed coordinates, both three-dimensional, so the matrix H has dimension 3 × 3:

x' = H x  (3)

let H be:

H = [ h_1 h_2 h_3 ; h_4 h_5 h_6 ; h_7 h_8 h_9 ]

then equation (3) expands to:

x' = (h_1 x + h_2 y + h_3) / (h_7 x + h_8 y + h_9),  y' = (h_4 x + h_5 y + h_6) / (h_7 x + h_8 y + h_9)  (5)

the data are arranged into matrix-multiplication format, namely:

a_i h = 0  (6)

where a_i denotes the 2 × 9 matrix formed by the i-th pair of matching points, and h is the column-vector format of H, with dimension 9 × 1:

h = [h_1 h_2 h_3 h_4 h_5 h_6 h_7 h_8 h_9]^T  (7)

taking all M pairs of matching points into account, as shown in equation (8), the transformation matrix h is solved by minimizing the squared error e_k, where A is the 2M × 9 matrix stacking the a_i of the M matching pairs; W_k is the weight matrix of the k-th grid cell, with dimension 2M × 2M, whose elements w_i^k are calculated by equation (4); w_i^k is the influence weight of the i-th pair of matching points on the k-th grid cell, determined by the distance from the matching points to the grid-cell center; in equation (4), x_k denotes the center coordinates of the k-th grid cell, x_i the coordinates of the i-th pair of matching points, and μ is an adjustment parameter:

w_i^k = exp( -||x_k - x_i||^2 / μ^2 )  (4)

e_k = argmin_h ||W_k A h||^2   s.t. ||h|| = 1  (8)
then, the matching points of the grid vertices in the other image are found using the computed homography matrices, and these uniformly distributed vertex matching points serve as the matching points in the grid optimization;
(5) Three constraint terms based on grid optimization are introduced: a coordinate alignment constraint term for the overlapping regions, a local similarity constraint term for the non-overlapping regions, and a global similarity constraint term for global structural consistency; first, several notations are set: the matching relations between images are stored in a set T; the set of matching point pairs obtained by mapping between image i and image j in step (4) is denoted M_ij; and, since each image is divided into a grid, V_i and E_i denote the vertex set and edge set of the mesh in image i;
the coordinate alignment constraint term is shown in equation (10), ensuring that the coordinates of the grid-optimized matching points coincide as closely as possible and reducing the alignment error of the overlapping areas of adjacent images; m(p) returns the matching point of feature point p in the other image, and ψ(p) expresses the position of feature point p as a linear combination of the coordinates of the 4 vertices of its grid cell:

E_A = Σ_{(i,j)∈T} Σ_{p∈M_ij} || ψ(p) − ψ(m(p)) ||^2  (10)
the local similarity constraint term is shown in equation (11), ensuring that the length and direction of each mesh edge do not change greatly between before and after grid optimization; since the projective matrix is mainly suitable for the overlapping area, a similarity transformation matrix S_jk is introduced in the non-overlapping area to represent the transformation of edge e_jk, calculated as in equation (12), where c(e_jk) and s(e_jk) are linear combinations of the vertex variables, mainly modeling rotation and scale transformations; v_j and v_k denote the positions of the two endpoints of an edge in the original image, v'_j and v'_k the vertex positions after grid optimization, and E_i the edge set of the mesh:

E_L = Σ_i Σ_{(j,k)∈E_i} || (v'_k − v'_j) − S_jk (v_k − v_j) ||^2  (11)

S_jk = [ c(e_jk)  −s(e_jk) ; s(e_jk)  c(e_jk) ]  (12)
the global similarity constraint term is shown in equation (13) and aims to improve the structural consistency of the whole image sequence; w(e_jk) is the weight of each edge, changing gradually from the overlapping area to the non-overlapping area and growing with the distance from the overlapping area, as defined in equation (14); s_i is defined as the scale measure of image i, obtained by estimating the camera parameters of image i by bundle adjustment; θ_i is defined as the rotation of image i relative to the reference image, taken as the mean of the angles between the line features detected by LSD line detection in the two images:

E_G = Σ_i Σ_{e_jk∈E_i} w(e_jk) [ (c(e_jk) − s_i cos θ_i)^2 + (s(e_jk) − s_i sin θ_i)^2 ]  (13)

w(e_jk) = (1/|Φ_jk|) Σ_{q_k∈Φ_jk} ( λ · d(q_k, M_i) / sqrt(R_i^2 + C_i^2) + η )  (14)

where η and λ are adjustment parameters; Φ_jk is the set of grid cells sharing edge e_jk (1 or 2 cells, depending on whether it is a boundary edge); M_i is the union of all grid cells in the overlapping area of image i; d(q_k, M_i) is a function computing the distance from grid cell q_k in the set Φ_jk to the overlapping region; and R_i and C_i are the numbers of rows and columns of the grid of image i;
the three constraint terms are combined as in equation (15); minimizing this expression yields the grid-optimized coordinate values of every pixel point in the panorama, completing the image registration:

E = E_A + γ E_L + E_G  (15)

where γ is the adjustment coefficient of the local similarity constraint term.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910391076.4A CN110211043B (en) | 2019-05-11 | 2019-05-11 | Registration method based on grid optimization for panoramic image stitching |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910391076.4A CN110211043B (en) | 2019-05-11 | 2019-05-11 | Registration method based on grid optimization for panoramic image stitching |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110211043A CN110211043A (en) | 2019-09-06 |
CN110211043B true CN110211043B (en) | 2023-06-27 |
Family
ID=67785790
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910391076.4A Active CN110211043B (en) | 2019-05-11 | 2019-05-11 | Registration method based on grid optimization for panoramic image stitching |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110211043B (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110781903B (en) * | 2019-10-12 | 2022-04-01 | 中国地质大学(武汉) | Unmanned aerial vehicle image splicing method based on grid optimization and global similarity constraint |
IL271518B2 (en) * | 2019-12-17 | 2023-04-01 | Elta Systems Ltd | Radiometric correction in image mosaicing |
CN111160466B (en) * | 2019-12-30 | 2022-02-22 | 深圳纹通科技有限公司 | Feature matching algorithm based on histogram statistics |
CN111242848B (en) * | 2020-01-14 | 2022-03-04 | 武汉大学 | Binocular camera image suture line splicing method and system based on regional feature registration |
CN111369495B (en) * | 2020-02-17 | 2024-02-02 | 珀乐(北京)信息科技有限公司 | Panoramic image change detection method based on video |
CN111507904B (en) * | 2020-04-22 | 2023-06-02 | 华中科技大学 | Image stitching method and device for microscopic printing patterns |
CN111640065B (en) * | 2020-05-29 | 2023-06-23 | 深圳拙河科技有限公司 | Image stitching method and imaging device based on camera array |
CN111899164B (en) * | 2020-06-01 | 2022-11-15 | 东南大学 | Image splicing method for multi-focal-segment scene |
CN111968035B (en) * | 2020-08-05 | 2023-06-20 | 成都圭目机器人有限公司 | Image relative rotation angle calculation method based on loss function |
CN112437253B (en) * | 2020-10-22 | 2022-12-27 | 中航航空电子有限公司 | Video splicing method, device, system, computer equipment and storage medium |
CN112270755B (en) * | 2020-11-16 | 2024-04-05 | Oppo广东移动通信有限公司 | Three-dimensional scene construction method and device, storage medium and electronic equipment |
CN112435163B (en) * | 2020-11-18 | 2022-10-18 | 大连理工大学 | Unmanned aerial vehicle aerial image splicing method based on linear feature protection and grid optimization |
CN113112531B (en) * | 2021-04-02 | 2024-05-07 | 广州图匠数据科技有限公司 | Image matching method and device |
CN113052765B (en) * | 2021-04-23 | 2021-10-08 | 中国电子科技集团公司第二十八研究所 | Panoramic image splicing method based on optimal grid density model |
CN113450255A (en) * | 2021-06-04 | 2021-09-28 | 西安超越申泰信息科技有限公司 | Aerial image splicing method and device |
CN117221466B (en) * | 2023-11-09 | 2024-01-23 | 北京智汇云舟科技有限公司 | Video stitching method and system based on grid transformation |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016086754A1 (en) * | 2014-12-03 | 2016-06-09 | 中国矿业大学 | Large-scale scene video image stitching method |
CN106485740A (en) * | 2016-10-12 | 2017-03-08 | 武汉大学 | A kind of combination point of safes and the multidate SAR image registration method of characteristic point |
CN107067370A (en) * | 2017-04-12 | 2017-08-18 | 长沙全度影像科技有限公司 | A kind of image split-joint method based on distortion of the mesh |
CN108470324A (en) * | 2018-03-21 | 2018-08-31 | 深圳市未来媒体技术研究院 | A kind of binocular stereo image joining method of robust |
CN109389555A (en) * | 2018-09-14 | 2019-02-26 | 复旦大学 | A kind of Panorama Mosaic method and device |
CN109658370A (en) * | 2018-11-29 | 2019-04-19 | 天津大学 | Image split-joint method based on mixing transformation |
-
2019
- 2019-05-11 CN CN201910391076.4A patent/CN110211043B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016086754A1 (en) * | 2014-12-03 | 2016-06-09 | 中国矿业大学 | Large-scale scene video image stitching method |
CN106485740A (en) * | 2016-10-12 | 2017-03-08 | 武汉大学 | A kind of combination point of safes and the multidate SAR image registration method of characteristic point |
CN107067370A (en) * | 2017-04-12 | 2017-08-18 | 长沙全度影像科技有限公司 | A kind of image split-joint method based on distortion of the mesh |
CN108470324A (en) * | 2018-03-21 | 2018-08-31 | 深圳市未来媒体技术研究院 | A kind of binocular stereo image joining method of robust |
CN109389555A (en) * | 2018-09-14 | 2019-02-26 | 复旦大学 | A kind of Panorama Mosaic method and device |
CN109658370A (en) * | 2018-11-29 | 2019-04-19 | 天津大学 | Image split-joint method based on mixing transformation |
Non-Patent Citations (1)
Title |
---|
Research on free-viewpoint image registration and stitching methods; Liu Jian; Master's thesis, University of Electronic Science and Technology of China, Information Science & Technology series (No. 09); 23-47 *
Also Published As
Publication number | Publication date |
---|---|
CN110211043A (en) | 2019-09-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110211043B (en) | Registration method based on grid optimization for panoramic image stitching | |
CN111052176B (en) | Seamless image stitching | |
CN110390640B (en) | Template-based Poisson fusion image splicing method, system, equipment and medium | |
US9959653B2 (en) | Mosaic oblique images and methods of making and using same | |
CN107665483B (en) | Calibration-free convenient monocular head fisheye image distortion correction method | |
CN104778656B (en) | Fisheye image correcting method based on spherical perspective projection | |
CN109118544B (en) | Synthetic aperture imaging method based on perspective transformation | |
CN111815517B (en) | Self-adaptive panoramic stitching method based on snapshot pictures of dome camera | |
CN106447602A (en) | Image mosaic method and device | |
CN116740288B (en) | Three-dimensional reconstruction method integrating laser radar and oblique photography | |
CN114143528A (en) | Multi-video stream fusion method, electronic device and storage medium | |
CN117665841B (en) | Geographic space information acquisition mapping method and device | |
CN111126418A (en) | Oblique image matching method based on planar perspective projection | |
CN105466399A (en) | Quick semi-global dense matching method and device | |
CN115619623A (en) | Parallel fisheye camera image splicing method based on moving least square transformation | |
CN114331835A (en) | Panoramic image splicing method and device based on optimal mapping matrix | |
CN108269234A (en) | A kind of lens of panoramic camera Attitude estimation method and panorama camera | |
CN112258581B (en) | On-site calibration method for panoramic camera with multiple fish glasses heads | |
Bergmann et al. | Gravity alignment for single panorama depth inference | |
CN107256563A (en) | Underwater 3 D reconstructing system and its method based on difference liquid level image sequence | |
Zhang et al. | Tests and performance evaluation of DMC images and new methods for their processing | |
CN108830781A (en) | A kind of wide Baseline Images matching line segments method under Perspective transformation model | |
CN117392317B (en) | Live three-dimensional modeling method, device, computer equipment and storage medium | |
CN114862934B (en) | Scene depth estimation method and device for billion pixel imaging | |
CN111598997B (en) | Global computing imaging method based on focusing stack single data subset architecture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |