CN111192194B - Panoramic image stitching method for curtain wall building facade - Google Patents
Panoramic image stitching method for curtain wall building facade
- Publication number
- CN111192194B (application CN201911234283.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- points
- registration
- feature
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 30
- 230000009466 transformation Effects 0.000 claims abstract description 35
- 239000011159 matrix material Substances 0.000 claims abstract description 28
- 238000007781 pre-processing Methods 0.000 claims abstract description 4
- 230000003993 interaction Effects 0.000 claims description 9
- 238000000605 extraction Methods 0.000 claims description 8
- 238000010845 search algorithm Methods 0.000 claims description 6
- 238000004364 calculation method Methods 0.000 claims description 3
- 238000009826 distribution Methods 0.000 claims description 3
- 230000005484 gravity Effects 0.000 claims description 3
- 238000012634 optical imaging Methods 0.000 claims description 3
- 230000008859 change Effects 0.000 claims description 2
- 238000001914 filtration Methods 0.000 claims description 2
- 238000005286 illumination Methods 0.000 claims description 2
- 238000012887 quadratic function Methods 0.000 claims description 2
- 230000004044 response Effects 0.000 claims description 2
- 239000013598 vector Substances 0.000 claims description 2
- 239000011521 glass Substances 0.000 abstract description 6
- 230000000007 visual effect Effects 0.000 abstract description 3
- 230000007547 defect Effects 0.000 abstract description 2
- 238000005516 engineering process Methods 0.000 description 4
- 238000013016 damping Methods 0.000 description 3
- 238000001514 detection method Methods 0.000 description 3
- 238000010276 construction Methods 0.000 description 2
- 238000012937 correction Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 238000013473 artificial intelligence Methods 0.000 description 1
- 238000013527 convolutional neural network Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 230000036541 health Effects 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 230000035699 permeability Effects 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
- 238000009827 uniform distribution Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
A panoramic image stitching method for curtain wall building facades comprises the following steps: obtaining a high-definition image sequence of a curtain wall building facade; determining image key regions (Region of Interest, ROI) after image preprocessing; extracting feature points of the facade key regions with a visual feature recognition algorithm; calculating and dividing the overlapping regions between images in the sequence; performing preliminary registration after eliminating irrelevant feature points; estimating the parameters of the feature point registration model to complete accurate registration of feature points between different images and solving the homography transformation matrix; and, after correcting and optimizing the transformation model, determining the optimal stitching seam and stitching the images. The method overcomes the difficulty that traditional panoramic image stitching algorithms are hard to apply to scenes such as glass curtain wall buildings, and provides an efficient means of detecting and evaluating appearance quality damage of curtain wall buildings.
Description
Technical Field
The invention relates to the technical fields of structural health monitoring and image feature recognition and registration, and in particular to a panoramic image stitching method for curtain wall building facades.
Background
Computer vision has developed rapidly in recent years and has become one of the most popular directions in artificial intelligence. Image stitching is a research hotspot in computer vision and image processing: multiple image sequences with overlapping areas are spatially registered and, after image fusion, form a single high-resolution, wide-view image containing the information of each sequence. To date, no complete and feasible panoramic high-definition image stitching method exists for scenes such as curtain wall buildings.
Image registration is the key and core of image stitching technology: an affine transformation model between two images is solved from the consistency of their overlapping areas, i.e., the geometric model of one image is transformed onto the coordinate plane of the other so that the overlapping areas align. The main approaches are feature-based, gray-scale-based, and transform-domain-based image registration. Feature-based registration is currently the main research trend: it registers the feature attributes of feature points between two images, and has the advantages of low computational cost, affine invariance and stability. The principal feature point identification algorithms include SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), FAST (Features from Accelerated Segment Test), Harris and KAZE; feature points can also be defined through feature maps produced by CNNs, as in the LIFT (Learned Invariant Feature Transform) model.
For accurate image registration, the RANSAC (RANdom SAmple Consensus) algorithm is most commonly used, but because the number of initial feature point pairs is often large while the matched inliers are comparatively few, RANSAC executes relatively inefficiently. Furthermore, RANSAC is sensitive to the choice of the correct noise threshold; the MSAC (M-estimator Sample Consensus) algorithm achieves better results by modifying how the loss function is calculated.
Because curtain wall glass is highly transparent, light is reflected and refracted as it passes through the glass, so shooting suffers from exposure differences between images, reflections on the glass surface and similar problems, which increases the difficulty of visual feature identification and registration. The prior art therefore cannot provide image stitching suitable for glass curtain wall buildings.
In the prior art, damage detection and evaluation of curtain wall building facade quality mainly rely on manual visual inspection by an inspector from a hanging basket, which is time-consuming and labor-intensive; alternatively, video or image data are shot by UAVs for remote inspection, but to acquire high-definition data only local areas can be shot, making it difficult to evaluate the overall damage condition of the building structure.
Disclosure of Invention
The invention aims to overcome the drawback that existing feature-based image stitching technology performs poorly, or fails entirely, on curtain wall buildings, and provides a panoramic image stitching approach and method for curtain wall building facades.
The technical scheme of the invention is as follows:
a panoramic image stitching method for curtain wall building facades is characterized by comprising the following steps:
s1, acquiring an image sequence of a curtain wall building structure shot by optical imaging equipment, arranging the image sequence according to the shooting sequence of an outer vertical surface of a building from bottom to top, and providing the ordered image sequence for S2;
s2, preprocessing an input image sequence, and providing a preprocessed image result for S3;
s3, defining a target Region (ROI) aiming at an input image to screen out irrelevant regions; identifying and extracting feature points in a target area of the image by using a SIFT feature point identification algorithm, determining an overlapping area between the images according to the distribution of the feature points in the images, and eliminating irrelevant feature points outside the overlapping area; providing the accurate pixel coordinates and feature descriptors of the feature points of the overlapping area between the images to S4;
s4, feature point data (accurate pixel coordinates and feature descriptors) of the overlapping areas between the images are upscaled from a one-dimensional array data structure to a KD-Tree multi-dimensional data structure, feature point preliminary registration between the overlapping areas of the images is carried out based on the KD-Tree data structure and a fast nearest neighbor search algorithm, and a preliminary registration result is provided for S5;
S5, performing registration model fitting on the feature point preliminary registration result by using an MSAC algorithm, distinguishing incorrectly registered feature points ('outliers') from correctly registered feature points ('inliers'), and obtaining model fitting parameters; requiring the inliers obtained by model fitting to satisfy the uniform-distribution judgment conditions of the feature point registration area and the multi-level constraint relation, and performing registration model fitting with the MSAC algorithm again if they do not, thereby realizing accurate registration of feature points between images; solving a homography transformation matrix between images according to the feature point registration model; providing the inter-image homography transformation matrix to S6;
S6, correcting and optimizing the homography transformation matrix between the images, reducing the distortion error of image stitching so that the image sequences are stitched accurately in the overlapping area, and providing the optimized homography transformation matrix to S7;
S7, transforming the images by the homography transformation matrix; in order to eliminate visible stitching seams caused by differences in brightness, exposure and the like between images, determining an optimal stitching seam between images by means of an optimal stitching seam search algorithm based on dynamic programming, and stitching the images to finally obtain the stitching result.
In S3, defining the target region (ROI) mainly comprises the following steps: selecting an ROI definition mode, and selecting rectangular regions in the input image through man-machine interaction to determine the target region for feature point identification. The ROI definition mode comprises a 'remove' mode and a 'retain' mode, which differ in how many rectangle selections are allowed: in 'remove' mode, all feature points detected inside the selected rectangles are judged irrelevant and eliminated, the rectangle selection may be repeated several times, and feature point identification and extraction are finally performed on the parts of the image outside all rectangles; in 'retain' mode, all feature points detected inside the selected rectangle are judged relevant and retained, the selection is not repeatable, and feature point identification and extraction are finally performed only on the part of the image inside the selected rectangle.
In S5, the uniform-distribution judgment conditions for the feature point registration area are: the number of inliers is not less than 4; the width and height of the minimum rectangle enclosing the inliers are not less than a certain proportion of the width and height of the original image; and the maximum distance between inliers is not less than a certain proportion of the diagonal of that minimum rectangle:

$$\begin{cases} num_{points} \ge 4 \\ width_{rect} \ge \alpha_{1}\cdot Width_{image},\quad height_{rect} \ge \alpha_{2}\cdot Height_{image} \\ \max(dist_{points}) \ge \alpha_{3}\cdot diag_{rect} \end{cases}$$

where $num_{points}$ denotes the number of accurately registered feature points; $width_{rect}$ and $height_{rect}$ denote the width and height of the minimum rectangle enclosed by the feature points, and $diag_{rect}$ its diagonal length; $Width_{image}$ and $Height_{image}$ denote the width and height of the original image; $\max(dist_{points})$ denotes the maximum distance between registered feature points; and $\alpha_1, \alpha_2, \alpha_3$ are proportion coefficients: the larger their values, the more accurate the registration result, but the more easily the conditions fail to be met.
In S5, the multi-level constraint relation comprises the feature registration of the midpoints between inliers within the same image, together with geometric features such as the Hu moments; if the multi-level constraint relation is satisfied, the feature point registration is judged correct, otherwise the method returns to S5 and the registration model fitting with the MSAC algorithm is carried out again.
The multi-level constraint relation may be characterized as follows: assume the left and right rectangular frames represent two images to be stitched that share an overlapping area; after accurate registration the two images contain two accurately registered feature point pairs (A, A′) and (B, B′), and m and m′ denote the midpoints of A, B and of A′, B′ in the two images, respectively. The multi-level constraint relation is then expressed as:
in curvatures X Representing the curvature of the point X calculated based on the Hessian matrix; gradient X Representing gradient values in the neighborhood of the point X, which are calculated by the formula (1) in the step (S3.3); direction X Representing the direction value in the neighborhood of the point X, which is calculated by the formula (2) in the step (S3.3); movement X Representing the moment characteristics in the neighborhood of the point X, and the first two orders of invariant moment M of the Hu moment 1 、M 2 The calculation formula is as follows:
where f (X, y) represents a neighborhood image function of the point X; m is m pq Representing the standard moment of the p+q order of the neighborhood, mu pq Representing the p+q order central moment of the neighborhood; m, N the width and length of the neighborhood, respectively; representing the center of gravity of the neighborhood,M 1 、M 2 respectively represent HuThe first two orders in the moment have better invariance.
Compared with the prior art, the invention has the following outstanding substantial characteristics and remarkable advantages: the method overcomes the application limitation of the feature-based image stitching technology in scenes such as curtain wall glass, is suitable for panoramic image stitching of curtain wall building facades, realizes unification of the whole information of the curtain wall building facades and local high resolution, and also retains the high-definition image information of local structural damage under the condition that the whole position information of the structural damage is not lost.
Drawings
FIG. 1 is a flow chart of the method of the present invention
FIG. 2 is a schematic diagram of a multi-level constraint relationship for image registration
FIG. 3 is a schematic view of the panoramic image stitching result for a curtain wall building
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below with reference to the accompanying drawings. It is noted that the embodiments use the following known algorithms: the SIFT feature point recognition algorithm, the fast nearest neighbor search algorithm, the MSAC algorithm, the L-M (Levenberg-Marquardt) algorithm, and the optimal stitching seam search algorithm based on dynamic programming, but are not limited to these algorithms.
The main flow chart is shown in fig. 1.
S1, acquiring an image sequence of a curtain wall building structure shot by optical imaging equipment, arranging the image sequence according to the shooting sequence of the outer vertical surface of the building from bottom to top, and providing the ordered image sequence for S2.
S2, the input image sequence is preprocessed by conventional techniques, including distortion correction, image graying, image denoising, exposure equalization and image enhancement. The preprocessed image result is provided to S3.
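A minimal sketch of these conventional preprocessing steps using OpenCV, assuming `camera_matrix` and `dist_coeffs` come from a prior calibration; the Gaussian kernel size and the CLAHE parameters are illustrative choices, not values fixed by the invention:

```python
import cv2

def preprocess(image, camera_matrix, dist_coeffs):
    """Conventional preprocessing: distortion correction, graying,
    denoising and exposure equalization (illustrative pipeline)."""
    undistorted = cv2.undistort(image, camera_matrix, dist_coeffs)  # distortion correction
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)            # image graying
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)                    # image denoising
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))     # exposure equalization
    return clahe.apply(denoised)                                    # enhanced gray image
```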
S3, a target region (ROI) is defined for the input image (one of the innovations of the invention) to screen out irrelevant regions such as the building background; feature points in the image target region are identified and extracted with the SIFT feature point identification algorithm, the overlapping area between images is determined from the distribution of feature points in the images, and irrelevant feature points outside the overlapping area are eliminated. The accurate pixel coordinates and feature descriptors of the feature points in the overlapping area between images are provided to S4.
Defining the target region (ROI) mainly comprises the following steps: selecting an ROI definition mode, and selecting rectangular regions in the input image through man-machine interaction to determine the target region for feature point identification.
The ROI definition mode comprises a 'remove' mode and a 'retain' mode, which differ in how many rectangle selections are allowed:

In 'remove' mode, all feature points detected inside the rectangles selected through man-machine interaction are judged irrelevant and eliminated; the rectangle selection may be repeated several times, and feature point identification and extraction are finally performed on the parts of the image outside all rectangles.

In 'retain' mode, all feature points detected inside the selected rectangle are judged relevant and retained; the selection is not repeatable, and feature point identification and extraction are finally performed only on the part of the image inside the selected rectangle.
The SIFT feature extraction algorithm is invariant to rotation, scale and illumination changes, remains stable to a certain extent under viewing angle change, affine transformation and noise, and yields a highly robust image registration result. The algorithm itself is prior art and mainly comprises the following steps:
(S3.1) constructing a linear scale space based on Gaussian kernel convolution operation, and searching a maximum value in a space neighborhood by calculating DoG (Difference of Gaussian) response values; the maximum result is provided (S3.2).
(S3.2) fitting the characteristic points by using a three-dimensional quadratic function to obtain accurate positioning and scale values of the characteristic points, calculating principal curvature based on a Hessian matrix, and filtering extreme points on the edges; the feature point accurate pixel coordinates are provided (S3.3).
(S3.3) Calculating the gradient magnitude M(x, y) and direction θ(x, y) at each point in the neighborhood of a feature point:

$$M(x,y)=\sqrt{\big(L(x+1,y)-L(x-1,y)\big)^{2}+\big(L(x,y+1)-L(x,y-1)\big)^{2}} \tag{1}$$

$$\theta(x,y)=\arctan\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)} \tag{2}$$

where L(x, y) is the scale-space function of the feature point.
Gradient and direction data in the neighborhood of the feature point are counted and a histogram is constructed; the main and auxiliary directions of the feature point are determined from the peaks of the histogram, and the main direction and gradient information of the feature point are provided to (S3.4).
(S3.4) The coordinate axes of the scale image are rotated to the main direction, 16 sub-areas (4 × 4) are defined in the neighborhood of the feature point, each sub-area is divided into 8 sub-directions (each 45° wide), and the gradient strength in each sub-direction is counted, finally generating a 4 × 4 × 8 = 128-dimensional feature descriptor (i.e., the description vector).
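In practice, steps S3.1–S3.4 are bundled into a single call; a sketch using OpenCV's built-in SIFT implementation, assuming `gray_image` and `roi_mask` come from S2 and the ROI step above:

```python
import cv2

# SIFT detection and description restricted to the ROI mask; OpenCV's
# implementation follows the same DoG / 128-D descriptor scheme as S3.1-S3.4
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray_image, roi_mask)
# keypoints carry sub-pixel coordinates, scale and main direction;
# descriptors is an N x 128 float32 array (one 4x4x8 vector per point)
```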
S4, the feature point data of the overlapping area between images (accurate pixel coordinates and feature descriptors) are raised from a one-dimensional array data structure into a KD-Tree multi-dimensional data structure. Preliminary registration of feature points between the image overlapping areas is performed on the KD-Tree data structure together with a fast nearest neighbor search algorithm, and the preliminary registration result is provided to S5.
The fast nearest neighbor search algorithm is prior art; its basic idea is as follows:

(S4.1) Descend the binary tree (compare the value of the query point with that of the split node in the splitting dimension, entering the left subtree if it is smaller than or equal and the right subtree if larger, until a leaf node is reached) to quickly find an approximate nearest neighbor along the search path, i.e., the leaf node lying in the same subspace as the query point; continue with (S4.2).

(S4.2) Backtrack along the search path and judge whether other child-node spaces of nodes on the path could contain data points closer to the query point; if so, move into that child-node space to search (adding the other child nodes to the search path), and continue with (S4.3).
(S4.3) repeating (S4.1) and (S4.2) until the search path is empty.
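A sketch of S4 using OpenCV's FLANN matcher, which builds the KD-Tree index and runs the approximate nearest neighbor search internally; the concluding ratio test is a common filtering convention and an assumption here, not a step prescribed by the invention:

```python
import cv2

index_params = dict(algorithm=1, trees=4)   # 1 = FLANN_INDEX_KDTREE
search_params = dict(checks=50)              # bounds the backtracking of S4.2
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(desc1, desc2, k=2)  # two nearest neighbors per descriptor

# Lowe's ratio test: keep a match only if it is clearly better than the
# runner-up (the 0.7 threshold is illustrative)
preliminary = [m for m, n in matches if m.distance < 0.7 * n.distance]
```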
S5, registration model fitting is performed on the preliminary feature point registration result using the MSAC algorithm, distinguishing incorrectly registered feature points ('outliers') from correctly registered feature points ('inliers') and obtaining the model fitting parameters. The inliers obtained by model fitting are required to satisfy the uniform-distribution judgment conditions of the feature point registration area (the second innovation) and the multi-level constraint relation (the third innovation); if these conditions are not met, registration model fitting with the MSAC algorithm is performed again, thereby realizing accurate registration of feature points between images. The homography transformation matrix between images is solved according to the feature point registration model and provided to S6.

The MSAC algorithm is prior art; its basic steps are as follows:
(S5.1) Randomly draw 4 matching point pairs from the sample set as one sample, and continue with (S5.2).

(S5.2) Calculate the transformation matrix M from the 4 matching point pairs. If the matrix is computed successfully, continue with (S5.3); otherwise update the current error probability p, return to (S5.1), and draw a new random sample for the next iteration.
(S5.3) From the sample set, the transformation matrix M and the error loss function, compute the consensus set satisfying the current transformation matrix, return the number of its elements, and continue with (S5.4).
The error loss function is

$$L(e)=\begin{cases} e, & e < c \\ c, & e \ge c \end{cases}$$

where e denotes the error and c the error threshold: unlike RANSAC's 0/1 scoring, errors below the threshold contribute their own magnitude to the loss, and all larger errors are truncated at c.
(S5.4) Judge whether the current consensus set is the optimal (largest) one; if so, update the current optimal consensus set, and continue with (S5.5).

(S5.5) Update the current error probability p; while p is greater than the minimum allowed error probability, repeat (S5.1)–(S5.4) until p falls below that minimum.
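A compact NumPy/OpenCV sketch of this loop under the truncated loss above; the fixed iteration count stands in for the adaptive update of the error probability p, and degenerate 4-point samples are simply skipped:

```python
import numpy as np
import cv2

def msac_homography(src, dst, c=3.0, iters=2000):
    """MSAC: like RANSAC, but each hypothesis is scored with the
    truncated loss min(e, c) instead of a 0/1 inlier count.
    src, dst: N x 2 arrays of preliminarily matched coordinates."""
    best_cost, best_H, best_inliers = np.inf, None, None
    for _ in range(iters):
        idx = np.random.choice(len(src), 4, replace=False)   # S5.1: draw 4 pairs
        try:
            H = cv2.getPerspectiveTransform(src[idx].astype(np.float32),
                                            dst[idx].astype(np.float32))
        except cv2.error:                                    # degenerate sample
            continue
        proj = cv2.perspectiveTransform(
            src.reshape(-1, 1, 2).astype(np.float32), H).reshape(-1, 2)
        err = np.linalg.norm(proj - dst, axis=1)             # reprojection error
        cost = np.minimum(err, c).sum()                      # S5.3: truncated loss
        if cost < best_cost:                                 # S5.4: best consensus
            best_cost, best_H, best_inliers = cost, H, err < c
    return best_H, best_inliers
```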
The uniform-distribution judgment conditions for the feature point registration area are: the number of inliers is not less than 4; the width and height of the minimum rectangle enclosing the inliers are not less than a certain proportion of the width and height of the original image (the proportion being governed by the degree of overlap between images); and the maximum distance between inliers is not less than a certain proportion of the diagonal of that minimum rectangle:

$$\begin{cases} num_{points} \ge 4 \\ width_{rect} \ge \alpha_{1}\cdot Width_{image},\quad height_{rect} \ge \alpha_{2}\cdot Height_{image} \\ \max(dist_{points}) \ge \alpha_{3}\cdot diag_{rect} \end{cases}$$

where $num_{points}$ denotes the number of accurately registered feature points; $width_{rect}$ and $height_{rect}$ denote the width and height of the minimum rectangle enclosed by the feature points, and $diag_{rect}$ its diagonal length; $Width_{image}$ and $Height_{image}$ denote the width and height of the original image; $\max(dist_{points})$ denotes the maximum distance between registered feature points; and $\alpha_1, \alpha_2, \alpha_3$ are proportion coefficients: the larger their values, the more accurate the registration result, but the more easily the conditions fail to be met.
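A direct check of the three conditions, taking the inliers as an N × 2 coordinate array; the default α values below are purely illustrative, since the invention ties them to the degree of image overlap:

```python
import numpy as np

def inliers_well_distributed(pts, img_w, img_h, a1=0.3, a2=0.3, a3=0.5):
    """Return True if the inliers satisfy the three uniform-distribution
    judgment conditions (alpha values are illustrative defaults)."""
    if len(pts) < 4:                                  # condition 1: at least 4 inliers
        return False
    w_rect = pts[:, 0].max() - pts[:, 0].min()        # minimum enclosing rectangle
    h_rect = pts[:, 1].max() - pts[:, 1].min()
    if w_rect < a1 * img_w or h_rect < a2 * img_h:    # condition 2: rectangle size
        return False
    diag = np.hypot(w_rect, h_rect)
    d_max = max(np.linalg.norm(p - q) for p in pts for q in pts)
    return d_max >= a3 * diag                         # condition 3: point spread
```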
The multi-level constraint relation comprises the feature registration of the midpoints between inliers within the same image, together with geometric features such as the Hu moments. If the multi-level constraint relation is satisfied, the feature point registration is judged correct; otherwise the method returns to S5 and the registration model fitting with the MSAC algorithm is carried out again.

The multi-level constraint relation may be explained as follows: in FIG. 2, assume the left and right rectangular frames represent two images to be stitched that share an overlapping area; after accurate registration the two images contain two accurately registered feature point pairs (A, A′) and (B, B′), and m and m′ denote the midpoints of A, B and of A′, B′ in the two images, respectively. The multi-level constraint relation may then be expressed as:
in curvatures X Representing the curvature of the point X calculated based on the Hessian matrix; gradient X Representing gradient values in the neighborhood of the point X, which are calculated by the formula (1) in the step (S3.3); direction X Representing the direction value in the neighborhood of the point X, which is calculated by the formula (2) in the step (S3.3); movement X Representing the moment characteristics in the neighborhood of the point X, and the first two orders of invariant moment M of the Hu moment 1 、M 2 The calculation formula is as follows:
where f (X, y) represents a neighborhood image function of the point X; m is m pq Representing the standard moment of the p+q order of the neighborhood, mu pq Representing the p+q order central moment of the neighborhood; m, N the width and length of the neighborhood, respectively;and->Representing the center of gravity of the neighborhood,M 1 、M 2 the first two moments of Hu are shown with better invariance.
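The first two Hu moments of a neighborhood patch can be computed directly; in OpenCV's `cv2.HuMoments` output, `hu[0]` and `hu[1]` are exactly the M₁ and M₂ defined above:

```python
import cv2
import numpy as np

def first_two_hu(patch):
    """First two Hu invariant moments of a neighborhood image patch:
    M1 = eta20 + eta02,  M2 = (eta20 - eta02)^2 + 4*eta11^2."""
    m = cv2.moments(patch.astype(np.float64))  # standard and central moments
    hu = cv2.HuMoments(m).flatten()            # seven Hu invariants
    return hu[0], hu[1]                        # M1, M2
```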
S6, correcting and optimizing the homography transformation matrix among the images, reducing the distortion error of image stitching, enabling the image sequences to be stitched to be accurately stitched at the overlapping area, and providing the optimized homography transformation matrix for S7.
The correction optimization method comprises the following steps: correcting, adjusting and optimizing the homography transformation matrix by means of an L-M algorithm.
The L-M algorithm is the most widely used nonlinear least-squares algorithm and is a modification of the Gauss-Newton method. Its core idea is to add a damping term when computing the update of the unknown parameters; the search direction at the current iterate is

$$d_k = -\left(J_k^{\top} J_k + \mu_k I\right)^{-1} J_k^{\top} F_k$$

where $\mu_k$ is the damping constant, $J_k$ is the Jacobian matrix of the nonlinear equation system F(x) = 0, and $F_k = F(x_k)$.
The optimal parameters are found by appropriately adjusting $\mu_k$.
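A sketch of this refinement with SciPy's Levenberg-Marquardt solver over the eight free homography parameters (h₃₃ fixed at 1); using the reprojection error of the matched inliers as the residual is an assumed, though common, choice of objective:

```python
import numpy as np
from scipy.optimize import least_squares

def refine_homography(H0, src, dst):
    """Correct and optimize an initial homography H0 by minimizing the
    reprojection error over its 8 free parameters with L-M."""
    def residuals(h):
        H = np.append(h, 1.0).reshape(3, 3)
        p = np.column_stack([src, np.ones(len(src))]) @ H.T
        p = p[:, :2] / p[:, 2:3]                 # projective normalization
        return (p - dst).ravel()                 # per-point reprojection residuals
    sol = least_squares(residuals, H0.ravel()[:8], method="lm")
    return np.append(sol.x, 1.0).reshape(3, 3)
```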
S7, the images are transformed using the homography transformation matrix. To eliminate visible stitching seams caused by differences in brightness, exposure, etc. between images, the optimal stitching seam between images is determined with a dynamic-programming-based optimal seam search algorithm and the images are stitched; the final stitching result is shown in FIG. 3.
The basic idea of the dynamic-programming-based optimal stitching seam search strategy is to find, on the energy function map (the smaller the energy value of a point, the more similar the color and texture there), the series of coordinate points forming the seam whose sum of energy values is minimal. The basic steps are as follows:
(S7.1) Assume each pixel column of the first row of the energy function map starts a stitching seam; initialize the energy value of each seam to the energy value of its current point, and continue with (S7.2).

(S7.2) Starting from the second row, select the best path node in the previous row for each point: compare the energy values of the 3 adjacent points in the previous row opposite the current point, record the column of the minimum, and add that minimum to the energy value of the current point to update the seam energy:
M(i, j) = E(i, j) + min(M(i−1, j−1), M(i−1, j), M(i−1, j+1)) (8)
wherein: m (i, j) represents the current energy value of the splicing seam; e (i, j) represents the energy value corresponding to the current point.
Continuing (S7.3)
(S7.3) If the current seam point lies in the last row of the map, execute (S7.4); otherwise return to (S7.2) for the next expansion.

(S7.4) Select the route with the smallest energy value as the optimal seam: once every column of the overlapping region has produced a seam by the above procedure, the seam with the smallest total energy is selected as the optimal stitching seam.
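A sketch of recurrence (8) with backtracking over an energy map E; the clamping of the three-neighbor window at the left and right borders is an implementation choice:

```python
import numpy as np

def optimal_seam(E):
    """Dynamic-programming seam search: M(i,j) = E(i,j) + min of the
    three upper neighbors; returns one column index per row."""
    rows, cols = E.shape
    M = E.astype(np.float64).copy()
    back = np.zeros((rows, cols), dtype=np.int64)
    for i in range(1, rows):                              # S7.2: expand row by row
        for j in range(cols):
            lo, hi = max(j - 1, 0), min(j + 1, cols - 1)
            k = lo + int(np.argmin(M[i - 1, lo:hi + 1]))  # best upper neighbor
            M[i, j] = E[i, j] + M[i - 1, k]
            back[i, j] = k
    seam = [int(np.argmin(M[-1]))]                        # S7.4: minimal total energy
    for i in range(rows - 1, 0, -1):                      # trace the path back upward
        seam.append(int(back[i, seam[-1]]))
    return seam[::-1]
```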
Claims (5)
1. A panoramic image stitching method for curtain wall building facades is characterized by comprising the following steps:
s1, acquiring an image sequence of a curtain wall building structure shot by optical imaging equipment, arranging the image sequence according to the shooting sequence of an outer vertical surface of a building from bottom to top, and providing the ordered image sequence for S2;
s2, preprocessing an input image sequence, and providing a preprocessed image result for S3;
s3, defining a target region ROI aiming at an input image to screen out irrelevant regions; identifying and extracting feature points in a target area of the image by using a SIFT feature point identification algorithm, determining an overlapping area between the images according to the distribution of the feature points in the images, and eliminating irrelevant feature points outside the overlapping area; providing the accurate pixel coordinates and feature descriptors of the feature points of the overlapping area between the images to S4;
the SIFT feature extraction algorithm is invariant to rotation, scale and illumination changes, remains stable to a certain extent under viewing angle change, affine transformation and noise, and yields a highly robust image registration result; it is prior art and mainly comprises the following steps:
s3.1, constructing a linear scale space based on Gaussian kernel convolution operation, and searching a maximum value in a space neighborhood by calculating a DoG response value; providing the maximum result to S3.2;
s3.2, fitting the characteristic points by using a three-dimensional quadratic function to obtain accurate positioning and scale values of the characteristic points, calculating principal curvature based on a Hessian matrix, and filtering extreme points on edges; providing the accurate pixel coordinates of the feature points to S3.3;
s3.3, calculating the gradient magnitude M(x, y) and direction θ(x, y) at each point in the neighborhood of a feature point:

$$M(x,y)=\sqrt{\big(L(x+1,y)-L(x-1,y)\big)^{2}+\big(L(x,y+1)-L(x,y-1)\big)^{2}} \tag{1}$$

$$\theta(x,y)=\arctan\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)} \tag{2}$$

where L(x, y) is the scale-space function of the feature point;
counting gradient and direction data in the neighborhood region of the feature point, constructing a histogram, determining a main direction and an auxiliary direction of the feature point according to a peak value in the histogram, and providing the main direction and gradient information of the feature point to S3.4;
s3.4, rotating the coordinate axis of the scale image to a main direction, defining 16 sub-areas of 4 multiplied by 4 in the neighborhood range of the feature points, dividing each sub-area into 8 sub-directions, namely, the width of each sub-direction is 45 degrees, and counting the gradient strength of each sub-direction to finally generate 4 multiplied by 8=128-dimensional feature descriptors, namely, description vectors;
s4, the feature point data of the overlapping areas between images (the accurate pixel coordinates and feature descriptors) are raised from a one-dimensional array data structure into a KD-Tree multi-dimensional data structure; preliminary registration of the feature points between the image overlapping areas is performed based on the KD-Tree data structure and a fast nearest neighbor search algorithm, and the preliminary registration result is provided to S5;
s5, performing registration model fitting on the preliminary feature point registration result by using an MSAC algorithm, distinguishing incorrectly registered feature points ('outliers') from correctly registered feature points ('inliers'), and obtaining model fitting parameters; requiring the inliers obtained by model fitting to satisfy the uniform-distribution judgment conditions of the feature point registration area and the multi-level constraint relation, and performing registration model fitting with the MSAC algorithm again if they do not, thereby realizing accurate registration of feature points between images; solving a homography transformation matrix between images according to the feature point registration model; providing the inter-image homography transformation matrix to S6;
s6, correcting and optimizing the homography transformation matrix between the images, reducing the distortion error of image stitching so that the image sequences are stitched accurately in the overlapping area, and providing the optimized homography transformation matrix to S7;
s7, transforming the images by the homography transformation matrix; in order to eliminate visible stitching seams caused by brightness and exposure differences between images, determining an optimal stitching seam between images by means of an optimal stitching seam search algorithm based on dynamic programming, and performing image stitching to finally obtain the stitching result.
2. The panoramic image stitching method for curtain wall building facades according to claim 1, wherein in S3, defining the target region ROI mainly comprises the following steps: selecting an ROI definition mode, and selecting rectangular regions in the input image through man-machine interaction to determine the target region for feature point identification; the ROI definition mode comprises a 'remove' mode and a 'retain' mode, which differ in how many rectangle selections are allowed: in 'remove' mode, all feature points detected inside the selected rectangles are judged irrelevant and eliminated, the rectangle selection may be repeated several times, and feature point identification and extraction are finally performed on the parts of the image outside all rectangles; in 'retain' mode, all feature points detected inside the selected rectangle are judged relevant and retained, the selection is not repeatable, and feature point identification and extraction are finally performed only on the part of the image inside the selected rectangle.
3. The panoramic image stitching method for curtain wall building facades according to claim 1, wherein in S5 the uniform-distribution judgment conditions for the feature point registration area are: the number of inliers is not less than 4; the width and height of the minimum rectangle enclosing the inliers are not less than a certain proportion of the width and height of the original image; and the maximum distance between inliers is not less than a certain proportion of the diagonal of that minimum rectangle:

$$\begin{cases} num_{points} \ge 4 \\ width_{rect} \ge \alpha_{1}\cdot Width_{image},\quad height_{rect} \ge \alpha_{2}\cdot Height_{image} \\ \max(dist_{points}) \ge \alpha_{3}\cdot diag_{rect} \end{cases}$$

where $num_{points}$ denotes the number of accurately registered feature points; $width_{rect}$ and $height_{rect}$ denote the width and height of the minimum rectangle enclosed by the feature points, and $diag_{rect}$ its diagonal length; $Width_{image}$ and $Height_{image}$ denote the width and height of the original image; $\max(dist_{points})$ denotes the maximum distance between registered feature points; and $\alpha_1, \alpha_2, \alpha_3$ are proportion coefficients: the larger their values, the more accurate the registration result, but the more easily the conditions fail to be met.
4. The panoramic image stitching method for curtain wall building facades according to claim 1 or 3, wherein in S5 the multi-level constraint relation comprises the feature registration of the midpoints between inliers within the same image, together with geometric features such as the Hu moments; if the multi-level constraint relation is satisfied, the feature point registration is judged correct; otherwise the method returns to S5 and the registration model fitting with the MSAC algorithm is performed again.
5. The panoramic image stitching method for curtain wall building facades according to claim 4, wherein the multi-level constraint relation is characterized as follows: assume the left and right rectangular frames represent two images to be stitched that share an overlapping area; after accurate registration the two images contain two accurately registered feature point pairs (A, A′) and (B, B′), and m and m′ denote the midpoints of A, B and of A′, B′ in the two images, respectively; the multi-level constraint relation is then expressed as:
in curvatures X Representing the curvature of the point X calculated based on the Hessian matrix; gradient X Representing gradient values in the neighborhood of the point X, which are calculated by the formula (1) in S3.3; direction X Representing the direction value in the neighborhood of the point X, which is calculated by the formula (2) in S3.3; movement X Representing the moment characteristics in the neighborhood of the point X, and the first two orders of invariant moment M of the Hu moment 1 、M 2 The calculation formula is as follows:
M 1 =η 20 +η 02
M 2 =(η 20 -η 02 ) 2 +4η 11 2
where f (X, y) represents a neighborhood image function of the point X; m is m pq Representing the standard moment of the p+q order of the neighborhood, mu pq Representing the p+q order central moment of the neighborhood; m, N the width and length of the neighborhood, respectively;and->Representing the center of gravity of the neighborhood,M 1 、M 2 the first two moments of Hu are shown with better invariance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911234283.5A CN111192194B (en) | 2019-12-05 | 2019-12-05 | Panoramic image stitching method for curtain wall building facade |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111192194A CN111192194A (en) | 2020-05-22 |
CN111192194B true CN111192194B (en) | 2023-08-08 |
Family
ID=70709504
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911234283.5A Active CN111192194B (en) | 2019-12-05 | 2019-12-05 | Panoramic image stitching method for curtain wall building facade |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111192194B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113689331B (en) * | 2021-07-20 | 2023-06-23 | 中国铁路设计集团有限公司 | Panoramic image stitching method under complex background |
CN113689332B (en) * | 2021-08-23 | 2022-08-02 | 河北工业大学 | Image splicing method with high robustness under high repetition characteristic scene |
CN114235814B (en) * | 2021-12-02 | 2024-07-16 | 福州市建筑科学研究院有限公司 | Crack identification method for building glass curtain wall |
CN114387153B (en) * | 2021-12-13 | 2023-07-04 | 复旦大学 | Visual field expanding method for intubation robot |
CN115035281B (en) * | 2022-05-27 | 2023-11-07 | 哈尔滨工程大学 | Rapid infrared panoramic image stitching method |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016086754A1 (en) * | 2014-12-03 | 2016-06-09 | 中国矿业大学 | Large-scale scene video image stitching method |
CN108416732A (en) * | 2018-02-02 | 2018-08-17 | 重庆邮电大学 | A kind of Panorama Mosaic method based on image registration and multi-resolution Fusion |
Non-Patent Citations (1)
Title |
---|
Research on Panoramic Image Stitching Based on an Improved Image Registration Method (基于改进的图像配准方法的全景图像拼接研究); Li Zhiyuan; Wanfang China Dissertation Database; full text *
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |