CN111192194A - Panoramic image splicing method for curtain wall building vertical face - Google Patents
- Publication number
- CN111192194A (application CN201911234283.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- points
- feature points
- registration
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 29
- 230000009466 transformation Effects 0.000 claims abstract description 29
- 239000011159 matrix material Substances 0.000 claims abstract description 27
- 238000007781 pre-processing Methods 0.000 claims abstract description 3
- 230000003993 interaction Effects 0.000 claims description 9
- 238000010845 search algorithm Methods 0.000 claims description 6
- 238000004364 calculation method Methods 0.000 claims description 5
- 238000009826 distribution Methods 0.000 claims description 3
- 230000005484 gravity Effects 0.000 claims description 3
- 238000012634 optical imaging Methods 0.000 claims description 3
- 239000011521 glass Substances 0.000 abstract description 6
- 230000000007 visual effect Effects 0.000 abstract description 3
- 230000007547 defect Effects 0.000 abstract description 2
- 238000001514 detection method Methods 0.000 description 3
- 238000010586 diagram Methods 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 3
- 238000012937 correction Methods 0.000 description 2
- 238000013016 damping Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 238000009827 uniform distribution Methods 0.000 description 2
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000013527 convolutional neural network Methods 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 230000036541 health Effects 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 230000000149 penetrating effect Effects 0.000 description 1
- 230000035699 permeability Effects 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 238000012887 quadratic function Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
A panoramic image splicing method for curtain wall building facades comprises the following steps: acquiring a high-definition image sequence of the curtain wall building facade; after image preprocessing, defining a region of interest (ROI) in each image; extracting feature points of the ROI with a visual feature recognition algorithm; calculating and dividing the overlapping regions between images of the sequence; performing preliminary registration after removing irrelevant feature points; estimating the parameters of the feature point registration model, thereby completing accurate registration of feature points between different images and solving the homography transformation matrix; and, after correcting and optimizing the transformation model, determining the optimal splicing seam and performing image splicing. The method overcomes the defect that traditional panoramic image splicing algorithms are difficult to apply to scenes such as glass curtain wall buildings, and provides an efficient method for detecting and evaluating appearance quality damage of curtain wall buildings.
Description
Technical Field
The invention relates to the field of structural health monitoring and the technical field of image feature identification and registration, in particular to a panoramic image splicing method for a curtain wall building facade.
Background
Computer vision has developed rapidly in recent years and is one of the most popular directions of artificial intelligence. Image stitching is a research hotspot in the fields of computer vision and image processing: a plurality of image sequences with overlapping regions are spatially registered and, after image fusion, form a single image that contains the information of every image in the sequence, covers a wide viewing angle and has high resolution. At present there is no complete and feasible panoramic high-definition image splicing method for scenes such as curtain wall buildings.
Image registration is the key and core of image splicing technology: an affine transformation model between two images is solved according to the consistency of their overlapping areas, i.e., the geometric model of one image is transformed onto the coordinate plane of the other so that the overlapping areas of the images are aligned. The main approaches are feature-based image registration, gray-scale-based image registration and transform-domain-based image registration. At present, feature-based image registration is the main trend of research: it registers the feature attributes of feature points between two images, requires little calculation, and offers affine invariance and stability. The main feature point recognition algorithms are SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), FAST (Features from Accelerated Segment Test), Harris, KAZE, and the like; feature points can also be defined by the feature maps produced by convolutional neural networks, such as the LIFT (Learned Invariant Feature Transform) model.
For precise registration of images, the most common method is the RANSAC (RANdom SAmple Consensus) algorithm, but since the number of initial feature point pairs is often large and the number of correctly matched inlier pairs is smaller than the number of outlier pairs, the RANSAC algorithm executes inefficiently. In addition, the RANSAC algorithm is sensitive to the choice of the correct noise threshold; for this reason, the MSAC (M-estimator SAmple Consensus) algorithm achieves a better result by modifying the calculation of the loss function.
Because curtain wall glass is transparent, light is reflected and refracted when passing through the glass, so exposure differences between images, reflections on the glass surface and similar problems arise during shooting, which increases the difficulty of visual feature identification and registration. Therefore, the prior art cannot solve image splicing for glass curtain wall buildings.
In the prior art, quality damage detection and evaluation of the outer facade of a curtain wall building mainly depends on manual visual inspection by inspectors working from hanging baskets, which is time-consuming and labor-intensive; alternatively, UAVs are used to shoot video or image data for remote inspection, but to obtain high-definition data only local areas can be shot, and the overall damage condition of the building structure is difficult to evaluate.
Disclosure of Invention
The invention aims to overcome the defect that existing feature-based image splicing technology applies poorly, or cannot splice at all, on curtain wall buildings, and provides a panoramic image splicing concept and method for curtain wall building facades.
The technical scheme of the invention is as follows:
a panoramic image splicing method for curtain wall building facades is characterized by comprising the following steps:
S1, acquiring an image sequence of the curtain wall building structure shot by optical imaging equipment, arranging the images according to the shooting order of the building facade from bottom to top, and providing the sorted image sequence to S2;
S2, preprocessing the input image sequence, and providing the preprocessed image result to S3;
S3, for the input image, defining a target region (ROI) to screen out irrelevant regions; recognizing and extracting feature points in the image target region with the SIFT feature point recognition algorithm, determining the overlapping region between images according to the distribution of the feature points in the images, and removing irrelevant feature points outside the overlapping region; providing the precise pixel coordinates and feature descriptors of the feature points of the overlapping areas between images to S4;
S4, raising the feature point data of the image overlapping regions (precise pixel coordinates and feature descriptors) from a one-dimensional array data structure to a KD-Tree multi-dimensional data structure, performing preliminary registration of the feature points of the image overlapping regions based on the KD-Tree data structure and a fast nearest neighbor search algorithm, and providing the preliminary registration result to S5;
S5, performing registration model fitting on the preliminary registration result of the feature points with the MSAC algorithm, distinguishing wrongly registered feature points ("outer points") from correctly registered feature points ("inner points"), and obtaining the model fitting parameters; requiring the "inner points" obtained by model fitting to satisfy the uniform distribution judgment condition and the multi-level constraint relation of the feature point registration area, and, if these conditions are not satisfied, repeating the registration model fitting with the MSAC algorithm, thereby achieving accurate registration of the feature points between images; solving the homography transformation matrix between images from the feature point registration model; providing the inter-image homography transformation matrix to S6;
S6, correcting and optimizing the homography transformation matrix between images to reduce the distortion errors of image splicing so that the image sequence can be spliced accurately at the overlapping areas, and providing the optimized homography transformation matrix to S7;
S7, carrying out image transformation with the homography transformation matrix; to eliminate visible splicing seams caused by differences in brightness, exposure and the like between images, determining the optimal splicing seam between images with an optimal splicing seam search algorithm based on dynamic programming, splicing the images, and finally obtaining the splicing result.
In S3, defining the target region (ROI) mainly comprises: selecting an ROI definition mode, and selecting rectangular regions in the input image by human-computer interaction to determine the target region for feature point identification. The ROI definition modes are a removing mode and a retaining mode, selected according to how many times a rectangular area must be selected. In the removing mode, all detected feature points inside an interactively selected rectangular area are judged to be irrelevant feature points and removed; the rectangle selection can be repeated many times, and feature points are finally identified and extracted from all parts of the image outside the selected rectangles. In the retaining mode, all detected feature points inside the interactively selected rectangular area are judged to be relevant feature points and retained; the rectangle selection is not repeated, and feature points are finally identified and extracted only from the part of the image inside the selected rectangle.
In S5, the judgment conditions for uniform distribution of the feature point registration areas comprise: the number of "inner points" is not less than 4; the length and width of the minimum rectangle enclosed by the "inner points" are not less than a certain proportion of the length and width of the original image; and the maximum distance between "inner points" is not less than a certain proportion of the diagonal of the minimum rectangle:

$$num_{points} \geq 4,\qquad width_{rect} \geq \alpha_{1}\cdot Width_{image},\qquad height_{rect} \geq \alpha_{2}\cdot Height_{image},\qquad \max(dist_{points}) \geq \alpha_{3}\cdot Diagonal_{rect}$$

where $num_{points}$ is the number of accurately registered feature points; $width_{rect}$ and $height_{rect}$ are the width and length of the minimum rectangle enclosed by the feature points, and $Diagonal_{rect}$ is the diagonal length of that rectangle; $Width_{image}$ and $Height_{image}$ are the width and length of the original image; $\max(dist_{points})$ is the maximum distance between registered feature points; and $\alpha_{1}, \alpha_{2}, \alpha_{3}$ are scale factors: the larger their values, the more accurate the registration result, but the harder the conditions are to satisfy.
In S5, the multi-level constraint relation comprises geometric features, such as feature registration and the Hu moments of the midpoints between "inner points" in the same image; if all levels of the constraint relation are satisfied, the feature point registration is judged correct; otherwise, the method returns to S5 and performs the registration model fitting again with the MSAC algorithm.
The multi-level constraint relation can be characterized as follows: suppose the left and right rectangular frames represent two images to be spliced that share an overlapping region; after accurate registration there are two pairs of accurately registered feature points, (A, A') and (B, B'), in the two images, and m and m' denote the midpoints of AB and A'B' in the two images respectively. The multi-level constraint relation is then expressed as:
in the formula, curlXRepresenting the curvature of the point X calculated based on the Hessian matrix; gradientXRepresenting the gradient value obtained by the formula (1) in the (S3.3) in the neighborhood of the point X; directionXRepresenting the direction value obtained by the formula (2) in the (S3.3) in the neighborhood of the point X; momentXRepresenting the characteristics of moments in X neighborhood, namely the first two-order invariant moment M of Hu moment1、M2The calculation formula is as follows:
in the formula, f (X, y) represents a neighborhood image function of the point X; m ispqThe standard moment of order p + q, mu, representing the neighborhoodpqRepresenting the p + q order central moment of the neighborhood; m, N denote the width and length of the neighborhood, respectively; the center of gravity of the neighborhood is represented,M1、M2respectively represents the two invariant moments with better invariance of the first two orders in the Hu moment.
Compared with the prior art, the invention has the following prominent substantive features and obvious advantages: the method overcomes the application limits of feature-based image splicing technology in scenes such as curtain wall glass, is suitable for splicing panoramic images of curtain wall building facades, unifies the overall information and the local high resolution of the facade, and retains high-definition image information of local structural damage without losing the overall position information of that damage.
Drawings
FIG. 1 is a flow chart of the method of the present invention
FIG. 2 is a diagram illustrating a multi-level constraint relationship for image registration
FIG. 3 is a schematic diagram of a splicing result of panoramic images of a curtain wall building
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below with reference to the accompanying drawings. It is noted that the embodiment uses the following known algorithms: the SIFT feature point recognition algorithm, the fast nearest neighbor search algorithm, the MSAC algorithm, the L-M (Levenberg-Marquardt) algorithm, and the optimal splicing seam search algorithm based on dynamic programming, among others, but is not limited to these algorithms.
The main flow diagram is shown in fig. 1.
S1, acquiring an image sequence of the curtain wall building structure shot by the optical imaging device, arranging the image sequence according to the shooting sequence of the building facade from bottom to top, and providing the sequenced image sequence to S2.
S2, performing conventional preprocessing on the input image sequence, including distortion correction, image graying, image denoising, exposure equalization and image enhancement. The preprocessed image result is provided to S3.
S3, aiming at an input image, defining a target Region (ROI) (one of the innovation points of the invention) to screen out irrelevant regions such as a building background; and identifying and extracting the feature points in the image target region by using an SIFT feature point identification algorithm, determining an overlapping region between the images according to the distribution of the feature points in the images, and removing irrelevant feature points outside the overlapping region. The precise pixel coordinates of the feature points of the overlap region between the images and the feature descriptors are supplied to S4.
The method for defining the target region (ROI) mainly comprises: selecting an ROI definition mode, and selecting rectangular regions in the input image by human-computer interaction to determine the target region for feature point identification.
The ROI definition modes are a removing mode and a retaining mode, selected according to how many times a rectangular area must be selected:
In the removing mode, all detected feature points inside an interactively selected rectangular area are judged to be irrelevant feature points and removed; the rectangle selection can be repeated many times, and feature points are finally identified and extracted from all parts of the image outside the selected rectangles.
In the retaining mode, all detected feature points inside the interactively selected rectangular area are judged to be relevant feature points and retained; the rectangle selection is not repeated, and feature points are finally identified and extracted only from the part of the image inside the selected rectangle.
The SIFT feature extraction algorithm is invariant to rotation, scale and illumination transformations, remains stable to a certain degree under viewpoint change, affine transformation and noise, and yields image registration results of high robustness. It is prior art and mainly comprises the following steps:
(S3.1) constructing a linear scale space based on Gaussian kernel convolution, and searching for extreme values in the scale-space neighborhood by calculating the DoG (Difference of Gaussians) response values; the extremum results are provided to (S3.2).
(S3.2) fitting the feature points with a three-dimensional quadratic function to obtain their precise locations and scale values, calculating the principal curvature based on the Hessian matrix, and filtering out extreme points that lie on edges; the precise pixel coordinates of the feature points are provided to (S3.3).
(S3.3) calculating the gradient magnitude M(x, y) and direction θ(x, y) at each point in the neighborhood of a feature point:

$$M(x,y)=\sqrt{(L(x+1,y)-L(x-1,y))^{2}+(L(x,y+1)-L(x,y-1))^{2}} \tag{1}$$

$$\theta(x,y)=\arctan\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)} \tag{2}$$

where L(x, y) is the scale-space function at the feature point's scale.
(S3.4) collecting the gradient and direction data in the neighborhood of the feature point, constructing a histogram, and determining the principal direction and auxiliary direction of the feature point from the peaks of the histogram; the principal direction and gradient information of the feature point are provided to (S3.5).
(S3.5) rotating the coordinate axes of the scale image to the principal direction, dividing the neighborhood of the feature point into 4 × 4 = 16 sub-regions, dividing each sub-region into 8 sub-directions (each 45° wide), counting the gradient strength in each sub-direction, and finally generating the 4 × 4 × 8 = 128-dimensional feature descriptor (description vector).
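Because (S3.1) through (S3.5) describe the standard SIFT pipeline, the whole sequence can be exercised with OpenCV's built-in implementation. The sketch below is illustrative, with assumed parameter values; it returns exactly the sub-pixel coordinates and 128-dimensional descriptors that S3 passes on to S4:

```python
import cv2

def detect_sift_features(gray_image, roi_mask=None):
    # contrastThreshold/edgeThreshold govern the extremum filtering of
    # (S3.2); the values here are OpenCV defaults, used as assumptions.
    sift = cv2.SIFT_create(contrastThreshold=0.04, edgeThreshold=10)
    # keypoints carry sub-pixel coordinates, scale and principal direction
    # ((S3.2)-(S3.4)); descriptors is an N x 128 array of the 4x4x8
    # gradient histograms of (S3.5)
    keypoints, descriptors = sift.detectAndCompute(gray_image, roi_mask)
    return keypoints, descriptors
```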
S4, raising the feature point data of the overlapping area between images (precise pixel coordinates and feature descriptors) from a one-dimensional array data structure to a KD-Tree multi-dimensional data structure. Based on the KD-Tree data structure combined with the fast nearest neighbor search algorithm, preliminary registration of the feature points between the image overlapping regions is performed, and the preliminary registration result is provided to S5.
The fast nearest neighbor search algorithm is the prior art, and the basic idea is as follows:
(S4.1) by binary tree search (comparing the value of the query node with that of the splitting node in the splitting dimension, entering the left subtree branch if it is less than or equal and the right subtree branch if it is greater, down to a leaf node), the approximate nearest neighbor, i.e. the leaf node lying in the same subspace as the query node, is found quickly along the search path; execution continues with (S4.2).
(S4.2) backtracking the search path and judging whether a closer data point might exist in the other child-node spaces of the nodes on the path; if so, moving into those child-node spaces to search (adding the other child nodes to the search path); execution continues with (S4.3).
(S4.3) repeating (S4.1) and (S4.2) until the search path is empty.
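A minimal sketch of the S4 step, assuming FLANN's KD-Tree index as the fast nearest neighbor search; the ratio test used here to discard ambiguous nearest-neighbor pairs is a common convention and an assumption, not taken from the patent text:

```python
import cv2

def preliminary_registration(desc1, desc2, ratio=0.7):
    # algorithm=1 selects FLANN's KD-Tree index over the descriptors
    matcher = cv2.FlannBasedMatcher(dict(algorithm=1, trees=4),
                                    dict(checks=64))
    # two nearest neighbors per query descriptor ((S4.1)-(S4.3))
    knn = matcher.knnMatch(desc1, desc2, k=2)
    # keep a match only if it is clearly better than the runner-up
    return [m for m, n in knn if m.distance < ratio * n.distance]
```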
S5, performing registration model fitting on the preliminary registration result of the feature points with the MSAC algorithm, distinguishing wrongly registered feature points ("outer points") from correctly registered feature points ("inner points"), and obtaining the model fitting parameters. The "inner points" obtained by model fitting are required to satisfy the uniform distribution judgment condition of the feature point registration area (the second innovation point of the invention) and the multi-level constraint relation (the third innovation point of the invention); if they do not, the registration model fitting is repeated with the MSAC algorithm, thereby achieving accurate registration of the feature points between images. The homography transformation matrix between images is solved from the feature point registration model and provided to S6.
The MSAC algorithm is the prior art, and comprises the following basic steps:
(S5.1) randomly drawing 4 pairs of matched points from the sample set to form one sample; execution continues with (S5.2).
(S5.2) calculating the transformation matrix M from the 4 pairs of matched points. If the transformation matrix is computed successfully, execution continues with (S5.3); if it cannot be computed, the current error probability p is updated and execution returns to (S5.1) to draw another random sample.
And (S5.3) calculating a consistent set meeting the current transformation matrix according to the sample set, the transformation matrix M and the error loss function, returning the number of elements in the consistent set, and continuing to execute (S5.4).
The error loss function is:

$$Loss=\sum_{i}\rho(e_{i}),\qquad \rho(e)=\begin{cases}e, & e<c\\ c, & e\geq c\end{cases}$$

where e represents the error of a matched pair and c the error threshold: errors below the threshold contribute their own value, while errors at or above it contribute the constant penalty c.
(S5.4) judging from its number of elements whether the current consistent set is the optimal (largest) consistent set; if so, updating the current optimal consistent set; execution continues with (S5.5).
(S5.5) updating the current error probability p; if p is larger than the allowed minimum error probability, repeating (S5.1) to (S5.4) until p falls below the minimum error probability.
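The loop (S5.1) through (S5.5) can be sketched as follows. This is a simplified illustration rather than the patented code: the adaptive update of the error probability p is replaced by a fixed iteration count, and the error threshold c is an assumed pixel distance.

```python
import numpy as np
import cv2

def msac_homography(src, dst, c=3.0, iters=2000, seed=0):
    """src, dst: float32 arrays of shape (N, 2) of matched coordinates."""
    rng = np.random.default_rng(seed)
    best_H, best_cost = None, np.inf
    for _ in range(iters):
        idx = rng.choice(len(src), size=4, replace=False)   # (S5.1)
        H, _ = cv2.findHomography(src[idx], dst[idx])       # (S5.2)
        if H is None:
            continue
        proj = cv2.perspectiveTransform(src.reshape(-1, 1, 2), H)
        err = np.linalg.norm(proj.reshape(-1, 2) - dst, axis=1)
        cost = np.minimum(err, c).sum()     # truncated loss of (S5.3)
        if cost < best_cost:                # (S5.4): keep the best set
            best_H, best_cost = H, cost
    inliers = None
    if best_H is not None:
        proj = cv2.perspectiveTransform(src.reshape(-1, 1, 2), best_H)
        inliers = np.linalg.norm(proj.reshape(-1, 2) - dst, axis=1) < c
    return best_H, inliers
```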
The judgment conditions for uniform distribution of the feature point registration areas comprise: the number of "inner points" is not less than 4; the length and width of the minimum rectangle enclosed by the "inner points" are not less than a certain proportion of the length and width of the original image (the proportion is controlled by the degree of overlap between images); and the maximum distance between "inner points" is not less than a certain proportion of the diagonal of the minimum rectangle:

$$num_{points} \geq 4,\qquad width_{rect} \geq \alpha_{1}\cdot Width_{image},\qquad height_{rect} \geq \alpha_{2}\cdot Height_{image},\qquad \max(dist_{points}) \geq \alpha_{3}\cdot Diagonal_{rect}$$

where $num_{points}$ is the number of accurately registered feature points; $width_{rect}$ and $height_{rect}$ are the width and length of the minimum rectangle enclosed by the feature points, and $Diagonal_{rect}$ is the diagonal length of that rectangle; $Width_{image}$ and $Height_{image}$ are the width and length of the original image; $\max(dist_{points})$ is the maximum distance between registered feature points; and $\alpha_{1}, \alpha_{2}, \alpha_{3}$ are scale factors: the larger their values, the more accurate the registration result, but the harder the conditions are to satisfy.
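A sketch of these judgment conditions, with illustrative (assumed) values for the scale factors α1, α2, α3:

```python
import numpy as np

def inliers_uniformly_distributed(pts, img_w, img_h, a1=0.5, a2=0.5, a3=0.8):
    """pts: array of shape (N, 2) holding the 'inner point' coordinates."""
    if len(pts) < 4:                            # num_points >= 4
        return False
    w_rect = pts[:, 0].max() - pts[:, 0].min()  # minimum enclosing rectangle
    h_rect = pts[:, 1].max() - pts[:, 1].min()
    if w_rect < a1 * img_w or h_rect < a2 * img_h:
        return False
    # the farthest pair of inliers must span the rectangle's diagonal
    pairwise = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return pairwise.max() >= a3 * np.hypot(w_rect, h_rect)
```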
The multi-level constraint relation comprises geometric features such as feature registration and the Hu moments of the midpoints between "inner points" in the same image. If all levels of the constraint relation are satisfied, the feature point registration is judged correct; otherwise, the method returns to S5 and performs the registration model fitting again with the MSAC algorithm.
The multi-level constraint relation may be interpreted as follows: in fig. 2, suppose the left and right rectangular boxes represent two images to be spliced that share an overlapping region; after accurate registration there are two pairs of accurately registered feature points, (A, A') and (B, B'), in the two images, and m and m' denote the midpoints of AB and A'B' in the two images respectively. The multi-level constraint relation may then be represented as:
in the formula, curlXRepresenting the curvature of the point X calculated based on the Hessian matrix; gradientXRepresenting the gradient value obtained by the formula (1) in the (S3.3) in the neighborhood of the point X; directionXRepresenting the direction value obtained by the formula (2) in the (S3.3) in the neighborhood of the point X; momentXRepresenting the characteristics of moments in X neighborhood, namely the first two-order invariant moment M of Hu moment1、M2The calculation formula is as follows:
in the formula, f (X, y) represents a neighborhood image function of the point X; m ispqThe standard moment of order p + q, mu, representing the neighborhoodpqRepresenting the p + q order central moment of the neighborhood; m, N denote the width and length of the neighborhood, respectively;andthe center of gravity of the neighborhood is represented,M1、M2respectively represents the two invariant moments with better invariance of the first two orders in the Hu moment.
S6, correcting and optimizing the homography transformation matrix between images, reducing the distortion errors of image splicing so that the image sequence can be spliced accurately at the overlapping areas, and providing the optimized homography transformation matrix to S7.
The correction optimization method is: correcting, adjusting and optimizing the homography transformation matrix by means of the L-M algorithm.
The L-M algorithm is the most widely used nonlinear least squares algorithm and is a modification of the Gauss-Newton method. Its core idea is to add a damping term at the current iteration point when solving for the unknown parameter offset; the search direction is:

$$d_{k}=-\left(J_{k}^{T}J_{k}+\mu_{k}I\right)^{-1}J_{k}^{T}F_{k}$$

where $\mu_{k}$ is the damping term constant, $J_{k}$ is the Jacobian matrix of the nonlinear system of equations F(x) = 0, and $F_{k}=F(x_{k})$. By appropriately adjusting $\mu_{k}$, the optimal parameters are found.
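A sketch of the S6 correction step under common assumptions: the eight free entries of H (with the last entry normalized to 1) are refined by minimizing the reprojection residual with SciPy's Levenberg-Marquardt solver.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_homography(H0, src, dst):
    """Refine an initial 3x3 homography H0 over matched points src -> dst."""
    def residual(p):
        H = np.append(p, 1.0).reshape(3, 3)
        pts = np.hstack([src, np.ones((len(src), 1))]) @ H.T
        proj = pts[:, :2] / pts[:, 2:3]   # normalize homogeneous coordinates
        return (proj - dst).ravel()
    # method="lm" is SciPy's Levenberg-Marquardt implementation
    sol = least_squares(residual, (H0 / H0[2, 2]).ravel()[:8], method="lm")
    return np.append(sol.x, 1.0).reshape(3, 3)
```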
S7, carrying out image transformation with the homography transformation matrix. To eliminate visible splicing seams caused by differences in brightness, exposure and the like between images, the optimal splicing seam between images is determined with the optimal splicing seam search algorithm based on dynamic programming and the images are spliced; the final splicing result is shown in fig. 3.
The basic idea of the optimal splicing seam search strategy based on dynamic programming is to find, on the energy function (points with smaller energy values indicate more similar color and texture), the series of coordinate points forming the splicing seam with the minimum sum of energy values. The basic steps are as follows:
(S7.1) assuming that every pixel point in the first row of the energy function map begins a splicing seam, initializing each seam's energy value to the energy value of its current point, and continuing with (S7.2).
(S7.2) starting from the second row, selecting for each point the best path node in the previous row. Specifically, the energy values of the 3 adjacent points in the previous row directly above the current point are compared, the column corresponding to the minimum value is recorded, and the minimum value is added to the energy value of the current point to update the energy value of the splicing seam:

M(i, j) = E(i, j) + min(M(i-1, j-1), M(i-1, j), M(i-1, j+1))   (8)

where M(i, j) is the current energy value of the splicing seam and E(i, j) is the energy value of the current point.
Execution continues with (S7.3).
(S7.3) if the current point of the splicing seam lies in the last row of the map, executing (S7.4); otherwise, returning to (S7.2) for the next expansion.
(S7.4) selecting the path with the minimum energy value as the optimal splicing seam: once every column in the overlapping area has produced a splicing seam by the above steps, the seam with the minimum total energy value is selected as the optimal splicing seam.
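The recurrence of equation (8) plus the backtracking of (S7.3)-(S7.4) can be sketched as follows; the energy map E (for example, the squared color difference of the two warped images over their overlap) is an assumed input.

```python
import numpy as np

def optimal_seam(E):
    """E: 2-D energy map over the overlap; returns one column index per row."""
    rows, cols = E.shape
    M = E.astype(float).copy()          # (S7.1): first row initialized
    back = np.zeros((rows, cols), dtype=int)
    for i in range(1, rows):            # (S7.2): equation (8)
        for j in range(cols):
            lo, hi = max(j - 1, 0), min(j + 1, cols - 1)
            k = lo + int(np.argmin(M[i - 1, lo:hi + 1]))
            M[i, j] = E[i, j] + M[i - 1, k]
            back[i, j] = k
    seam = [int(np.argmin(M[-1]))]      # (S7.4): minimum-energy endpoint
    for i in range(rows - 1, 0, -1):    # backtrack to the first row
        seam.append(int(back[i, seam[-1]]))
    return seam[::-1]
```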
Claims (5)
1. A panoramic image splicing method for curtain wall building facades is characterized by comprising the following steps:
S1, acquiring an image sequence of the curtain wall building structure shot by optical imaging equipment, arranging the images according to the shooting order of the building facade from bottom to top, and providing the sorted image sequence to S2;
S2, preprocessing the input image sequence, and providing the preprocessed image result to S3;
S3, for the input image, defining a target region (ROI) to screen out irrelevant regions; recognizing and extracting feature points in the image target region with the SIFT feature point recognition algorithm, determining the overlapping region between images according to the distribution of the feature points in the images, and removing irrelevant feature points outside the overlapping region; providing the precise pixel coordinates and feature descriptors of the feature points of the overlapping areas between images to S4;
S4, raising the feature point data of the image overlapping regions (precise pixel coordinates and feature descriptors) from a one-dimensional array data structure to a KD-Tree multi-dimensional data structure, performing preliminary registration of the feature points of the image overlapping regions based on the KD-Tree data structure and a fast nearest neighbor search algorithm, and providing the preliminary registration result to S5;
S5, performing registration model fitting on the preliminary registration result of the feature points with the MSAC algorithm, distinguishing wrongly registered feature points ("outer points") from correctly registered feature points ("inner points"), and obtaining the model fitting parameters; requiring the "inner points" obtained by model fitting to satisfy the uniform distribution judgment condition and the multi-level constraint relation of the feature point registration area, and, if these conditions are not satisfied, repeating the registration model fitting with the MSAC algorithm, thereby achieving accurate registration of the feature points between images; solving the homography transformation matrix between images from the feature point registration model; providing the inter-image homography transformation matrix to S6;
S6, correcting and optimizing the homography transformation matrix between images to reduce the distortion errors of image splicing so that the image sequence can be spliced accurately at the overlapping areas, and providing the optimized homography transformation matrix to S7;
S7, carrying out image transformation with the homography transformation matrix; to eliminate visible splicing seams caused by differences in brightness, exposure and the like between images, determining the optimal splicing seam between images with an optimal splicing seam search algorithm based on dynamic programming, splicing the images, and finally obtaining the splicing result.
2. The panoramic image splicing method for curtain wall building facades according to claim 1, wherein in S3, defining the target region (ROI) comprises: selecting an ROI definition mode, and selecting rectangular regions in the input image by human-computer interaction to determine the target region for feature point identification; the ROI definition modes are a removing mode and a retaining mode, selected according to how many times a rectangular area must be selected, wherein: in the removing mode, all detected feature points inside an interactively selected rectangular area are judged to be irrelevant feature points and removed, the rectangle selection can be repeated many times, and feature points are finally identified and extracted from all parts of the image outside the selected rectangles; in the retaining mode, all detected feature points inside the interactively selected rectangular area are judged to be relevant feature points and retained, the rectangle selection is not repeated, and feature points are finally identified and extracted only from the part of the image inside the selected rectangle.
3. The panoramic image splicing method for curtain wall building facades according to claim 1, wherein in S5 the judgment conditions for uniform distribution of the feature point registration areas comprise: the number of "inner points" is not less than 4; the length and width of the minimum rectangle enclosed by the "inner points" are not less than a certain proportion of the length and width of the original image; and the maximum distance between "inner points" is not less than a certain proportion of the diagonal of the minimum rectangle:

$$num_{points} \geq 4,\qquad width_{rect} \geq \alpha_{1}\cdot Width_{image},\qquad height_{rect} \geq \alpha_{2}\cdot Height_{image},\qquad \max(dist_{points}) \geq \alpha_{3}\cdot Diagonal_{rect}$$

where $num_{points}$ is the number of accurately registered feature points; $width_{rect}$ and $height_{rect}$ are the width and length of the minimum rectangle enclosed by the feature points, and $Diagonal_{rect}$ is the diagonal length of that rectangle; $Width_{image}$ and $Height_{image}$ are the width and length of the original image; $\max(dist_{points})$ is the maximum distance between registered feature points; and $\alpha_{1}, \alpha_{2}, \alpha_{3}$ are scale factors: the larger their values, the more accurate the registration result, but the harder the conditions are to satisfy.
4. The panoramic image splicing method for curtain wall building facades according to claim 1 or 3, wherein in S5 the multi-level constraint relation comprises geometric features such as feature registration and the Hu moments of the midpoints between "inner points" in the same image; if all levels of the constraint relation are satisfied, the feature point registration is judged correct; otherwise, the method returns to S5 and the registration model fitting is performed again with the MSAC algorithm.
5. The panoramic image splicing method for curtain wall building facades according to claim 4, wherein the multi-level constraint relation is characterized as follows: suppose the left and right rectangular frames represent two images to be spliced that share an overlapping region; after accurate registration there are two pairs of accurately registered feature points, (A, A') and (B, B'), in the two images, and m and m' denote the midpoints of AB and A'B' in the two images respectively; the multi-level constraint relation is then expressed as:

$$curl_{m}=curl_{m'},\qquad gradient_{m}=gradient_{m'},\qquad direction_{m}=direction_{m'},\qquad moment_{m}=moment_{m'}$$

where $curl_{X}$ is the curvature at point X calculated from the Hessian matrix; $gradient_{X}$ is the gradient value in the neighborhood of point X obtained by formula (1) of (S3.3); $direction_{X}$ is the direction value in the neighborhood of point X obtained by formula (2) of (S3.3); and $moment_{X}$ denotes the moment features of the neighborhood of X, namely the first two invariant Hu moments $M_{1}$ and $M_{2}$, calculated as:

$$m_{pq}=\sum_{x=1}^{M}\sum_{y=1}^{N}x^{p}y^{q}f(x,y),\qquad \mu_{pq}=\sum_{x=1}^{M}\sum_{y=1}^{N}(x-\bar{x})^{p}(y-\bar{y})^{q}f(x,y)$$

$$M_{1}=\eta_{20}+\eta_{02},\qquad M_{2}=(\eta_{20}-\eta_{02})^{2}+4\eta_{11}^{2},\qquad \eta_{pq}=\mu_{pq}/\mu_{00}^{1+(p+q)/2}$$

where f(x, y) is the neighborhood image function of point X; $m_{pq}$ is the (p+q)-order standard moment of the neighborhood and $\mu_{pq}$ its (p+q)-order central moment; M and N are the width and length of the neighborhood; $\bar{x}=m_{10}/m_{00}$ and $\bar{y}=m_{01}/m_{00}$ are the center of gravity of the neighborhood; and $M_{1}$, $M_{2}$ are the first two Hu invariant moments, which have the best invariance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911234283.5A CN111192194B (en) | 2019-12-05 | 2019-12-05 | Panoramic image stitching method for curtain wall building facade |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111192194A true CN111192194A (en) | 2020-05-22 |
CN111192194B CN111192194B (en) | 2023-08-08 |
Family
ID=70709504
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911234283.5A Active CN111192194B (en) | 2019-12-05 | 2019-12-05 | Panoramic image stitching method for curtain wall building facade |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111192194B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016086754A1 (en) * | 2014-12-03 | 2016-06-09 | 中国矿业大学 | Large-scale scene video image stitching method |
CN108416732A (en) * | 2018-02-02 | 2018-08-17 | 重庆邮电大学 | A kind of Panorama Mosaic method based on image registration and multi-resolution Fusion |
Non-Patent Citations (1)
Title |
---|
Li Zhiyuan, "Research on Panoramic Image Stitching Based on an Improved Image Registration Method", Wanfang Dissertation Database of China *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113689331A (en) * | 2021-07-20 | 2021-11-23 | 中国铁路设计集团有限公司 | Panoramic image splicing method under complex background |
CN113689331B (en) * | 2021-07-20 | 2023-06-23 | 中国铁路设计集团有限公司 | Panoramic image stitching method under complex background |
CN113689332A (en) * | 2021-08-23 | 2021-11-23 | 河北工业大学 | Image splicing method with high robustness under high repetition characteristic scene |
CN114235814A (en) * | 2021-12-02 | 2022-03-25 | 福州市建筑科学研究院有限公司 | Crack identification method for building glass curtain wall |
CN114235814B (en) * | 2021-12-02 | 2024-07-16 | 福州市建筑科学研究院有限公司 | Crack identification method for building glass curtain wall |
CN114387153A (en) * | 2021-12-13 | 2022-04-22 | 复旦大学 | Visual field expanding method for intubation robot |
CN115035281A (en) * | 2022-05-27 | 2022-09-09 | 哈尔滨工程大学 | Rapid infrared panoramic image splicing method |
CN115035281B (en) * | 2022-05-27 | 2023-11-07 | 哈尔滨工程大学 | Rapid infrared panoramic image stitching method |
Also Published As
Publication number | Publication date |
---|---|
CN111192194B (en) | 2023-08-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |