
CN104537707B - Image space type stereoscopic vision online mobile real-time measurement system - Google Patents


Info

Publication number
CN104537707B
CN104537707B CN201410745020.1A CN201410745020A CN104537707B CN 104537707 B CN104537707 B CN 104537707B CN 201410745020 A CN201410745020 A CN 201410745020A CN 104537707 B CN104537707 B CN 104537707B
Authority
CN
China
Prior art keywords
stereo
image
dimensional
camera
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410745020.1A
Other languages
Chinese (zh)
Other versions
CN104537707A (en)
Inventor
邢帅
王栋
徐青
葛忠孝
李鹏程
耿迅
张军军
侯晓芬
周杨
夏琴
江腾达
李建胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PLA Information Engineering University
Original Assignee
PLA Information Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PLA Information Engineering University filed Critical PLA Information Engineering University
Priority to CN201410745020.1A priority Critical patent/CN104537707B/en
Publication of CN104537707A publication Critical patent/CN104537707A/en
Application granted granted Critical
Publication of CN104537707B publication Critical patent/CN104537707B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The present invention relates to an image space type stereoscopic vision online mobile real-time measurement system. Camera calibration is carried out first; during the movement of the stereo camera, several groups of stereo images are acquired; the images are preprocessed; features are extracted and stereo matching is performed; three-dimensional reconstruction is carried out; and the stereo image models are connected: for the stereo image at any instant, the corresponding image points in the stereo image of the adjacent instant are obtained and used as tie points between the two groups of stereo images, the homonymous model points of the two groups of stereo models are calculated by forward intersection, and the two groups of stereo models are transformed into the same space coordinate system by a spatial similarity transformation. The stereo image of the next instant is processed in the same way in turn, and all stereo image models are connected into an integral model of the whole scene.

Description

Image space type stereo vision on-line mobile real-time measuring system
Technical Field
The invention relates to an image space type stereoscopic vision on-line mobile real-time measurement system.
Background
The basic principle of stereo vision is to observe the same scene from two or more viewpoints, obtaining images of the object from different viewing angles, and to calculate the positional deviation between corresponding image points (i.e., the parallax) by the principle of triangulation so as to obtain three-dimensional information. Stereo vision measurement methods have been known for a long time and are widely used in the fields of industrial measurement and photogrammetry.
The "certain robot target location based on stereoscopic vision" (master paper of Nanjing university of science and technology) summarizes the main steps of the existing stereoscopic vision, including:
1) acquiring an image;
2) calibrating a camera;
3) image preprocessing and feature extraction;
4) stereo matching;
5) and (4) three-dimensional reconstruction.
The above method mainly has the following problems:
(1) The acquired three-dimensional scene information is incomplete. Some systems acquire three-dimensional information only for a few markers in the scene, some only for part of the objects in the scene, and some acquire three-dimensional information of the whole scene but at a density insufficient to express the detailed information in the scene.
(2) The acquired three-dimensional information of the scene cannot be integrated. Every stereo image shot by the stereo camera can generate the three-dimensional information of the corresponding scene, and photography over a larger space can be completed as the stereo camera moves; however, existing systems process the scene three-dimensional information generated at each moment in isolation, without considering the relationships among the scene information generated at different moments, so integral measurement of a large-range scene cannot be realized.
(3) The systems are not real-time enough. Among the above systems, those that acquire only the three-dimensional information of a few markers in the scene can basically respond in real time, but those that acquire all the fine three-dimensional information of the scene usually run offline and cannot meet real-time requirements.
The image space type stereoscopic vision on-line mobile real-time measurement system designed in this project realizes continuous observation of the same target and generates complete and accurate three-dimensional information of the whole target in real time, by optimizing the stereo image processing and matching algorithms and combining continuous acquisition of stereo images from the moving camera with the stereo image model connection algorithm for visual measurement.
Disclosure of Invention
The invention aims to provide an image space type stereoscopic vision online mobile real-time measurement system, in order to solve the prior-art problems that the acquired scene information is incomplete and the scene three-dimensional information cannot be integrated, and further to improve real-time performance.
In order to achieve the above object, the scheme of the invention comprises:
the system comprises a stereo camera consisting of two cameras and a camera fixing and distance adjusting device, wherein the stereo camera is connected with a control and calculation device, and the control and calculation device is used for controlling the cameras and for storing, processing and outputting the acquired data and the processing results; the measurement process is as follows:
1) calibrating a camera; 2) acquiring a plurality of groups of stereo images in the moving process of the stereo camera; 3) preprocessing an image; 4) feature extraction and stereo matching; 5) three-dimensional reconstruction; 6) connecting the three-dimensional image models; for the stereo image at any moment, acquiring homonymous image points in the stereo images at adjacent moments, taking the homonymous image points as connection points of two groups of stereo images, obtaining homonymous model points of two groups of stereo models through forward intersection calculation, and transforming the two groups of stereo models to the same space coordinate system through space similarity transformation; and sequentially carrying out the same treatment on the stereo images at the next moment, and connecting all the stereo image models into an integral model aiming at the whole scene.
The camera calibration method comprises the following steps: simultaneously acquiring stereo images; extracting the corner points of the calibration plate; and performing the calibration solution.
The image preprocessing comprises filtering and gray level equalization processing.
The feature extraction and stereo matching comprises: obtaining connection points between the three-dimensional models by utilizing an SURF operator; calculating the relative position and posture between the two images, and carrying out relative orientation calculation; determining the epipolar line relationship among the stereo images, and correcting the stereo images to obtain stereo images arranged according to the epipolar line; and carrying out dense matching on the stereo images by using an SGM algorithm to generate dense homonymous image points.
Three-dimensional reconstruction is realized by reconstructing the three-dimensional information of the target scene with a multi-image forward intersection method from the dense homonymous image points obtained by matching.
The measuring method of the invention is to continuously acquire stereo images and reconstruct the stereo model of the target scene in real time during the movement of the stereo camera. The results of each reconstruction need to be connected in order to form a complete scene model. The principle of stereo image model connection is that the homonymous image points of two groups of stereo images at adjacent moments are obtained and used as connection points of the two groups; the homonymous model points of the two stereo models are obtained by forward intersection calculation; and finally the two stereo models are transformed into the same space coordinate system by a spatial similarity transformation. The stereo images obtained at subsequent moments are processed in the same way, so that all stereo image models can be connected into an integral model of the whole scene.
Drawings
FIG. 1 is a camera mount;
FIG. 2 is a design schematic of a hardware platform;
FIG. 3 is a software workflow diagram;
FIG. 4 is a checkerboard calibration plate and its coordinate system;
FIG. 5 is a SURF operator flow diagram;
FIG. 6 is the intersection of the homonymous projection rays at the target point after the relative orientation is completed;
FIG. 7 is a flow chart of individual image pair relative orientation;
FIG. 8 is an epipolar geometry for a parallel binocular stereo vision system;
FIG. 9 is a SGM matching algorithm flow diagram;
FIG. 10 illustrates the forward intersection measurement principle;
fig. 11 is a stereoscopic image model connection flowchart;
fig. 12 is a schematic diagram of the process of the system for making real-time measurements of movement.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
During the movement of the stereo camera, stereo images can be acquired continuously and the stereo model of the target scene reconstructed in real time. However, each reconstruction obtains only a local model of the target within the current field of view, and the results of the reconstructions need to be connected to form a complete scene model; this is realized by the basic scheme of the present invention.
The basic scheme of the invention comprises the following steps: 1) calibrating a camera; 2) acquiring a plurality of groups of stereo images in the moving process of the stereo camera; 3) preprocessing an image; 4) feature extraction and stereo matching; 5) three-dimensional reconstruction; 6) connecting the three-dimensional image models; for the stereo image at any moment, acquiring homonymous image points in the stereo images at adjacent moments, taking the homonymous image points as connection points of two groups of stereo images, obtaining homonymous model points of two groups of stereo models through forward intersection calculation, and transforming the two groups of stereo models to the same space coordinate system through space similarity transformation; and sequentially carrying out the same treatment on the stereo images at the next moment, and connecting all the stereo image models into an integral model aiming at the whole scene.
Through the connection of the three-dimensional image models, the target three-dimensional information reconstructed by the system in the moving process can be integrated into a coordinate system of a first group of three-dimensional image models to form a complete geometric model of the whole scene.
An image space type stereo vision on-line mobile real-time measurement system is described in detail below. The system consists mainly of two parts, hardware and software (the latter embodying the method scheme of the invention). As shown in Fig. 1, the hardware mainly includes two cameras, lenses, data lines, acquisition and conversion equipment, a camera fixing device, a control and calculation device, and so on; the software includes camera calibration, image preprocessing, feature extraction and matching, relative orientation, stereo image model connection, epipolar image generation, dense matching, three-dimensional reconstruction, and so on.
1.1 hardware part
As shown in FIG. 1, the hardware required for the construction of the system is as follows:
(1) Camera: a digital industrial camera with a resolution greater than 1 million pixels and an acquisition speed of not less than 30 frames per second, powered through the USB interface and transmitting data through a network interface, FireWire interface or USB interface.
(2) Lens: a standard-interface lens that can be mounted on the camera, with a focal length of not less than 24 mm.
(3) Data lines: Gigabit network cable, FireWire cable and USB data cable with standard interfaces.
(4) Acquisition and conversion equipment: the conversion devices and protocols for connecting the data lines to the computer.
(5) Camera fixing device: a fixed frame of a certain length on which the two cameras can be mounted (as shown in Fig. 1); the distance between the two cameras can be adjusted by adjusting the spacing between the mounting platforms.
(6) Control and computing device: used for controlling the cameras and for storing, processing and outputting the collected data and processing results; a tablet or notebook computer with the corresponding interfaces is recommended.
The design schematic diagram of the hardware platform of the system is shown in FIG. 2.
1.2 software part
The system software consists of 8 modules: camera calibration, image preprocessing, feature extraction and matching, relative orientation, stereo image model connection, epipolar image generation, dense matching and three-dimensional reconstruction; its workflow is shown in Fig. 3.
The key techniques and methods involved in each module are as follows.
1.2.1 Camera calibration
The calibration of the stereo camera acquires the internal parameters of each camera and the positional relationship between the cameras. The internal parameters comprise the focal length of the camera, the image principal point coordinates, the distortion parameters of the lens and so on; the positional relationship comprises the rotation matrix and the translation vector between the cameras.
At present, common camera calibration methods include the test-field method, the Zhang Zhengyou method, the Tsai two-step method, the self-calibration method and the like. Comparison shows that the Zhang Zhengyou method and the Tsai two-step method are convenient to operate, stable, reliable and of good accuracy, but the Tsai two-step method solves for fewer distortion parameters than the Zhang Zhengyou method; this system therefore adopts the Zhang Zhengyou method as its camera calibration method, achieving a better distortion correction effect. The steps and conditions involved in the calibration process are as follows:
The first step: acquire stereo images simultaneously. The image information includes the size and color (gray level) of the images; the scene contains a 12 × 9 checkerboard calibration board with a fixed grid edge length (30 mm), which defines an object coordinate system (as shown in Fig. 4); the calibration board should be located in the center of the common-coverage area of the stereo images and be distributed relatively uniformly.
The second step: extract the corner points of the calibration board. The two-dimensional image coordinates (x, y) of the 88 corner points are extracted from each image of the stereo pair, with a positioning accuracy generally reaching sub-pixel level (for the corner extraction method see [Zhang Guangjun. Vision Measurement [M]. Science Press, 2008: 55-61]).
The third step: calibration solution. The intrinsic parameters of the two cameras are obtained by the Zhang Zhengyou calibration method, and comprise the focal length $f$, the image principal point coordinates $(c_x, c_y)$, the radial distortion $k_1, k_2, k_3$ and the tangential distortion $p_1, p_2$; the extrinsic parameters comprise the rotation matrix $R$ and the translation vector $T$ of the camera. The specific calibration solution principle is given below.
1. Internal parameter calibration
Let $M$ be the camera intrinsic matrix, containing the focal length $f$ and the image principal point coordinates $(c_x, c_y)$; let the image point coordinates be $(x, y)$ and the object point coordinates $(X, Y, Z)$ (since the checkerboard is defined as the plane $Z = 0$). The relationship between them is

$$s \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = M \begin{bmatrix} r_1 & r_2 & T \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = H \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} \tag{3}$$

where $[r_1\ r_2\ r_3] = R$ is the rotation matrix and $H = s M [r_1\ r_2\ T]$ is a homography matrix.

Since the object coordinates of the corner points on the calibration board are known, and the corresponding image coordinates can be obtained by image processing, the matrix $H$ can be solved according to equation (3). Expressed in column-vector form $H = [h_1\ h_2\ h_3]$, the result is

$$[h_1\ h_2\ h_3] = \lambda M [r_1\ r_2\ T]$$

where $\lambda = 1/s$. From the properties of the rotation matrix ($r_1^T r_2 = 0$ and $r_1^T r_1 = r_2^T r_2$) one can derive

$$h_1^T M^{-T} M^{-1} h_2 = 0, \qquad h_1^T M^{-T} M^{-1} h_1 = h_2^T M^{-T} M^{-1} h_2 \tag{4}$$

Let $B = M^{-T} M^{-1}$; $B$ is a symmetric matrix, fully described by the six-element vector

$$b = [B_{11},\ B_{12},\ B_{22},\ B_{13},\ B_{23},\ B_{33}]^T$$

Thus equation (4) can be expressed using dot products with this 6-element vector: writing $h_i^T B h_j = v_{ij}^T b$, the two constraints become

$$\begin{bmatrix} v_{12}^T \\ (v_{11} - v_{22})^T \end{bmatrix} b = 0$$

If the camera acquires $K$ checkerboard images, a system of equations $Vb = 0$ can be constructed, where $V$ is a $2K \times 6$ matrix computed from the homography matrices $H$. Since each image provides two constraint equations, when $K \geq 3$ the vector $b$ can be solved by least squares, after which the camera intrinsic parameters are recovered from $b$ in closed form. Combined with the homography matrix of each image, the extrinsic parameters of each view can then be calculated as $r_1 = \lambda M^{-1} h_1$, $r_2 = \lambda M^{-1} h_2$, $r_3 = r_1 \times r_2$, $T = \lambda M^{-1} h_3$.

Taking the influence of lens distortion into account, the intrinsic and extrinsic parameters obtained above are used as initial values of the following nonlinear system. Let $(x_p, y_p)$ be the ideal normalized coordinates of a point and $(x_d, y_d)$ its distorted real coordinates; the distortion model is

$$\begin{aligned} x_d &= x_p \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right) + 2 p_1 x_p y_p + p_2 \left( r^2 + 2 x_p^2 \right) \\ y_d &= y_p \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right) + p_1 \left( r^2 + 2 y_p^2 \right) + 2 p_2 x_p y_p \end{aligned}$$

where $r^2 = x_p^2 + y_p^2$. After re-estimating the intrinsic and extrinsic parameters, this system of equations is solved iteratively by least squares, finally yielding intrinsic parameters of higher accuracy.
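As a concrete illustration of this procedure, the sketch below runs the Zhang Zhengyou method through OpenCV's cv2.calibrateCamera (which implements it); the 11 × 8 interior-corner layout and 30 mm square size follow the 12 × 9 board described above, while the file-name pattern and termination criteria are assumptions, not taken from the patent.

```python
import glob

import cv2
import numpy as np

ROWS, COLS, SQUARE = 8, 11, 30.0   # 88 interior corners of the 12 x 9 board, 30 mm edges

# Object coordinates of the corners on the Z = 0 checkerboard plane (Fig. 4)
objp = np.zeros((ROWS * COLS, 3), np.float32)
objp[:, :2] = np.mgrid[0:COLS, 0:ROWS].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for fname in glob.glob("calib_left_*.png"):        # K >= 3 views are required
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (COLS, ROWS))
    if found:
        # Refine corner positions to sub-pixel accuracy (second calibration step)
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# camera_matrix holds f and (cx, cy); dist holds k1, k2, p1, p2, k3
rms, camera_matrix, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```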
2. External parameter calibration
Let $R$ be the rotation matrix and $T$ the translation vector between the coordinate systems of the stereo cameras. Given the coordinates of some object-space point $P$, its coordinates in the left and right camera coordinate systems are

$$P_l = R_l P + T_l, \qquad P_r = R_r P + T_r \tag{10}$$

where $R_l, T_l$ and $R_r, T_r$ are the rotation and translation of the point $P$ from the world coordinate system to the corresponding camera coordinate system. Since $P_l$ and $P_r$ represent the same point in space,

$$P_l = R^T (P_r - T) \tag{11}$$

Matrix manipulation of equations (10) and (11) gives

$$R = R_r R_l^T, \qquad T = T_r - R\, T_l$$

where $R_l$, $R_r$, $T_l$ and $T_r$ are obtained from the preceding intrinsic parameter calibration.
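For the extrinsic step, a hedged sketch with cv2.stereoCalibrate follows; with the per-camera intrinsics held fixed, it solves exactly for the $R$ and $T$ of equations (10) and (11). The inputs (K_left, dist_left, and so on) are assumed to come from the previous intrinsic sketch, run once per camera.

```python
import cv2

# obj_points / img_points_left / img_points_right: corner lists gathered as in
# the intrinsic sketch; K_*, dist_*: per-camera results of cv2.calibrateCamera.
flags = cv2.CALIB_FIX_INTRINSIC          # keep the intrinsics from step 1 fixed
rms, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
    obj_points, img_points_left, img_points_right,
    K_left, dist_left, K_right, dist_right,
    image_size, flags=flags)
print("stereo RMS:", rms)
print("R =", R)
print("T =", T.ravel())
```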
1.2.2 image preprocessing
When the stereo camera collects images, image quality is degraded by lens quality, lighting changes and other factors, which negatively affects the subsequent reconstruction. To mitigate this effect, the images must be preprocessed. The image preprocessing of this system mainly comprises filtering and gray-level equalization, aimed at removing noise and enhancing the images.
1. Filtering
In order to reduce the influence of image noise, a smoothing filter is often used to improve the image; the basic idea is to remove or weaken gray-level discontinuities through an operation combining the target pixel with several surrounding pixels. The filtering operation must select a template suited to the noise characteristics; common 3 × 3 templates include, for example, the mean template and the Gaussian template

$$\frac{1}{9} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}, \qquad \frac{1}{16} \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}$$

where the last template is a Gaussian template. During the filtering process, the convolution operation

$$g(x, y) = \sum_{s} \sum_{t} h(s, t)\, f(x - s, y - t)$$

is applied to obtain the smoothed image.
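For illustration, the smoothing step might look as follows with OpenCV's cv2.filter2D; the kernels are the mean and Gaussian templates above, and the input file name is an assumption.

```python
import cv2
import numpy as np

mean_kernel = np.ones((3, 3), np.float32) / 9.0           # 3 x 3 mean template
gauss_kernel = np.array([[1, 2, 1],
                         [2, 4, 2],
                         [1, 2, 1]], np.float32) / 16.0   # 3 x 3 Gaussian template

img = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)        # file name is an assumption
smoothed_mean = cv2.filter2D(img, -1, mean_kernel)        # the convolution above
smoothed_gauss = cv2.filter2D(img, -1, gauss_kernel)
```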
2. Gray level equalization
Because of errors in the manufacturing process of the stereo camera, the radiometry of the acquired stereo images may be inconsistent, mainly reflected in a certain overall gray-level difference between the left and right images, so gray-level equalization must be applied to them before use. The system uses a histogram equalization method to stretch the gray levels of the left and right images separately.
The gray histogram is a function describing the gray values, i.e., a count of the number of pixels in the image having each gray value; the abscissa represents the gray level of the pixels and the ordinate the frequency of occurrence of that gray value. The purpose of gray histogram equalization is to transform the histogram of the original image into a uniformly distributed form, so that the gray ranges of the stereo images become consistent and homonymous image points acquired by the stereo camera have essentially the same gray values.
In fact, the relative positional relationship between the stereo cameras means that the scene content of the left and right images does not overlap completely, so the overlapping area of the stereo images must first be obtained by image matching. The method performs gray histogram equalization on the basis of the stereo image overlapping area, with the following specific steps:

The first step: scan the whole image over the gray range $[0, 255]$ and count the number of occurrences $n_k$ of the $k$-th gray level, $k \in [0, 255]$;

The second step: replace probability values by their frequency approximations, i.e., normalize the histogram: $P_r(r_k) = n_k / n$, $0 \leq r_k \leq 1$, $k = 0, 1, \ldots, 255$, where $P_r(r_k)$ denotes the probability of occurrence of the gray value $r_k$; the sum of all components of the normalized histogram equals 1;

The third step: calculate the transformed gray value by the cumulative transformation formula

$$s_k = T(r_k) = \sum_{j=0}^{k} P_r(r_j), \qquad k = 0, 1, \ldots, 255$$
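A minimal NumPy sketch of these three steps is given below; cv2.equalizeHist performs the same transform in one call. Restricting it to the matched overlap area, as the text prescribes, only changes which pixel block is passed in.

```python
import numpy as np

def equalize(gray):
    """gray: uint8 array (a whole image or the stereo overlap block)."""
    n = gray.size
    hist = np.bincount(gray.ravel(), minlength=256)  # step 1: counts n_k
    p = hist / n                                     # step 2: P_r(r_k) = n_k / n
    cdf = np.cumsum(p)                               # step 3: s_k = sum_j P_r(r_j)
    lut = np.round(255 * cdf).astype(np.uint8)       # map back to the 0..255 range
    return lut[gray]
```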
1.2.3 feature extraction and matching
The SURF operator is a feature point extraction and matching algorithm widely applied in the field of computer vision, characterized by good stability, high speed and high accuracy. Therefore, the system adopts the SURF operator to obtain the connection points between the stereo models.
The core of the SURF operator is the calculation of the Hessian matrix. For a function $f(x, y)$, the Hessian matrix $H$ is composed of the partial derivatives of the function:

$$H(f(x, y)) = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x^2} & \dfrac{\partial^2 f}{\partial x \partial y} \\ \dfrac{\partial^2 f}{\partial x \partial y} & \dfrac{\partial^2 f}{\partial y^2} \end{bmatrix}$$

with discriminant

$$\det(H) = \frac{\partial^2 f}{\partial x^2} \frac{\partial^2 f}{\partial y^2} - \left( \frac{\partial^2 f}{\partial x \partial y} \right)^2$$

The value of the discriminant is the product of the eigenvalues of the $H$ matrix; all points can be classified by the sign of this result, judging whether a point is an extreme point according to whether the discriminant is positive or negative. In the SURF operator, the image pixel value $I(x, y)$ replaces the function value $f(x, y)$, and a second-order standard Gaussian filter computes the second-order partial derivatives by convolution with specific kernels, yielding the three elements $L_{xx}, L_{yy}, L_{xy}$ of the $H$ matrix:

$$L(X, t) = G(t) * I(X) \tag{19}$$

$L_{xx}(X, t)$ is a representation of the image at different resolutions, realized as the convolution of the Gaussian kernel $G(t)$ (differentiated twice in $x$) with the image function $I(X)$ at the point $X = (x, y)$, where the kernel function is

$$G(t) = \frac{1}{2 \pi t} e^{-\frac{x^2 + y^2}{2t}} \tag{20}$$

with $t$ the Gaussian variance; $L_{yy}$ and $L_{xy}$ are defined in the same way. By this means the determinant of the $H$ matrix can be calculated for every pixel in the image, and interest points are distinguished by the determinant value. For ease of application, Herbert Bay proposed box-filter approximations $D_{xx}, D_{yy}, D_{xy}$ in place of $L_{xx}, L_{yy}, L_{xy}$; introducing a weight $w$ to balance the error between the exact and approximate values (the weight varies with scale), the $H$ matrix discriminant can be expressed as

$$\det(H_{approx}) = D_{xx} D_{yy} - (w D_{xy})^2 \tag{21}$$
the detailed flow of the SURF operator is shown in fig. 5.
1.2.4 relative orientation
The relative position and attitude between the two images can be calculated from the connection points between the stereo images obtained by matching; this is an important step in three-dimensional reconstruction from stereo images.

The purpose of relative orientation is to determine the relative orientation of the stereo pair in space, described by 5 relative orientation elements. The principle is that once the relative orientation of the stereo pair has been recovered, homonymous rays must be coplanar with the baseline, i.e., the projection rays of corresponding image points intersect within their epipolar plane (as shown in Fig. 6).

As can be seen from Fig. 6, the coplanarity condition equation, expressed in vector form, has the basic form

$$\mathbf{B} \cdot (\mathbf{d}_1 \times \mathbf{d}_2) = 0$$

Let the coordinates of $S_2$ in the $S_1\text{-}X_1 Y_1 Z_1$ coordinate system be $(B_X, B_Y, B_Z)$, the coordinates of $\mathbf{d}_1$ in the $S_1\text{-}X_1 Y_1 Z_1$ coordinate system be $(X_1, Y_1, Z_1)$, and the coordinates of $\mathbf{d}_2$ in the $S_2\text{-}X_2 Y_2 Z_2$ coordinate system be $(X_2, Y_2, Z_2)$. Its coordinate expression is then the determinant condition

$$F = \begin{vmatrix} B_X & B_Y & B_Z \\ X_1 & Y_1 & Z_1 \\ X_2 & Y_2 & Z_2 \end{vmatrix} = 0$$

where $\mathbf{d}_1$ and $\mathbf{d}_2$ are the projection ray vectors formed from the image point coordinates and the focal length, the latter rotated into the $S_1$ system by the relative rotation matrix.

In addition, the data required for the relative orientation of a single image pair are the image-space coordinates of the corresponding image points in the stereo image; their number should generally be more than 6, preferably uniformly distributed. The specific orientation solution flow is shown in Fig. 7.

The solution finally yields the 5 parameters of the relative orientation; for the detailed solution process see (Zhang Baoming et al., Photogrammetry).
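The patent solves the five relative orientation elements from the coplanarity condition above; the sketch below is an equivalent formulation, not the patent's algorithm: with calibrated intrinsics, the essential matrix recovered from at least six correspondences yields the same relative rotation and baseline direction. pts1, pts2 (N x 2 arrays of corresponding image points) and the intrinsic matrix K are assumed inputs.

```python
import cv2

E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
# Decompose E into the relative rotation R and the unit-length baseline t
retval, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
```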
1.2.5 epipolar image generation
According to the geometric principle of stereo camera imaging, homonymous image points on the stereo images necessarily lie on homonymous epipolar lines, and the image points on homonymous epipolar lines correspond one to one, which is of great significance for the subsequent dense matching. It is therefore necessary first to determine the epipolar relationship between the stereo images, and then to resample the stereo images into images arranged along epipolar lines, in preparation for the subsequent dense matching.
Based on the fundamental matrix obtained from relative orientation, the system rectifies the stereo images with the Hartley algorithm, applying a pair of two-dimensional projective transformations to the image pair so that the epipolar lines become matched and coincide with the image scan lines. Through this two-dimensional projective transformation, the v-direction coordinates of matching point pairs in the two images become identical and their u-direction coordinates as close to each other as possible, i.e., the horizontal parallax is kept small, which reduces the search space during matching. The algorithm uses only the fundamental matrix of the image pair and does not require knowledge of the camera projection matrices.
The positional relationship of the rectified stereo image pair is shown in Fig. 8: the epipolar lines of the rectified images coincide with the image scan lines. To achieve this, a projective transformation must be found that maps the epipole of each image to a point at infinity while minimizing the shear distortion the transformation introduces. To meet this requirement, let $u_0$ be the image center; it is desirable that the transformation matrix $H$ act approximately as a rotation plus translation in the neighborhood of $u_0$, so that the distortion of the image remains small. Take $u_0$ as the origin and let the epipole be the point $p = (f, 0, 1)^T$ on the x-axis; then the transformation

$$G = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1/f & 0 & 1 \end{bmatrix}$$

moves the epipole $p$ to the point at infinity $(f, 0, 0)^T$, and near the origin this matrix is approximately the identity. For an arbitrary image point and epipole there is $H = GRT$: the matrix $T$ translates $u_0$ to the origin, $R$ is a rotation matrix that moves the epipole onto the point $(f, 0, 1)^T$ on the x-axis, and $G$ moves $(f, 0, 1)^T$ to infinity. The three transformation matrices combined form the required projective transformation.

Let the image pair to be rectified be $J$ and $J'$, let a pair of two-dimensional projective transformations $H$ and $H'$ act on the two images respectively, and let $\lambda$ and $\lambda'$ be a pair of homonymous epipolar lines; the required transformations must satisfy

$$H^{*} \lambda = H'^{*} \lambda' \tag{25}$$

which states that the transformed epipolar lines coincide.

Here $H$ is the point-mapping transformation matrix and $H^{*}$ the line-mapping transformation matrix corresponding to $H$. Transformations satisfying equation (25) are called matched transformations. The procedure is first to find a transformation $H'$ that moves the epipole $p'$ to the point at infinity, and then to find a transformation matrix $H$ matched to $H'$ that minimizes the total displacement between transformed corresponding points:

$$\min_H \sum_i d(H m_{1i}, H' m_{2i})^2$$

To find the transformation matrix $H$ matched to $H'$, the following theorem is introduced.

Theorem: let the fundamental matrix of the image pair $J$ and $J'$ be $F = [p']_{\times} M$, and let $H'$ be a projective transformation applied to $J'$. A projective transformation $H$ applied to $J$ is matched to $H'$ if and only if $H$ has the form

$$H = (I + H' p' a^T) H' M \tag{27}$$

where $a$ is an arbitrary vector.

When the transformation matrix $H'$ has moved the epipole $p'$ to the point at infinity $(1, 0, 0)^T$,

$$I + H' p' a^T = \begin{bmatrix} a_1 & a_2 & a_3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = A$$

Let $H_0 = H'M$; then $H = A H_0$.

Writing $\hat m_{1i} = H_0 m_{1i} = (\hat u_{1i}, \hat v_{1i}, 1)^T$ and $\hat m_{2i} = H' m_{2i} = (\hat u_{2i}, \hat v_{2i}, 1)^T$, the above minimization problem can be expressed in the form

$$\min_a \sum_i \left( a_1 \hat u_{1i} + a_2 \hat v_{1i} + a_3 - \hat u_{2i} \right)^2$$

which is a linear least squares problem in $a$. Solving the minimization problem yields a pair of two-dimensional projective transformation matrices $H$ and $H'$ meeting the requirements; the original images are then resampled and gray-interpolated to generate a new stereo image pair.

The accuracy of the algorithm depends on the accuracy with which the epipolar geometry is recovered, so the epipolar geometry can be recovered offline in advance to guarantee the rectification accuracy.

The rectification algorithm steps are as follows:

The first step: recover the epipolar geometry with high accuracy in offline mode and find the epipoles $p$ and $p'$ of the two images;

The second step: find the projective transformation $H'$ mapping the epipole $p'$ to the point at infinity $(1, 0, 0)^T$;

The third step: find the projective transformation $H$ matched to $H'$ such that it satisfies

$$\min_H \sum_i d(H m_{1i}, H' m_{2i})^2$$

where $m_{1i} = (u_{1i}, v_{1i}, 1)^T$ and $m_{2i} = (u_{2i}, v_{2i}, 1)^T$;

The fourth step: resample the two original images according to the projective transformations $H$ and $H'$ to obtain the rectified stereo image pair.
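OpenCV's cv2.stereoRectifyUncalibrated implements the Hartley algorithm described above, computing the pair of matched projective transforms from point correspondences and the fundamental matrix alone; the sketch below mirrors the four steps, with image and point names as assumptions.

```python
import cv2

# pts1, pts2: corresponding image points; img_size: (width, height) of the images
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)          # step 1
ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, img_size)  # steps 2-3

# Step 4: resample the original images with the recovered transforms
rect_left = cv2.warpPerspective(left_img, H1, img_size)
rect_right = cv2.warpPerspective(right_img, H2, img_size)
```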
1.2.6 dense matching
Among existing stereo image matching methods such as the GC algorithm, the SGM algorithm and the BP algorithm, the SGM algorithm is fast, accurate and stable. Therefore the system adopts the SGM algorithm for dense matching of the stereo images, so as to generate dense three-dimensional scene information.

The basic idea of the SGM algorithm is to perform a pixel-by-pixel cost calculation based on mutual information, and then to approximate a two-dimensional smoothness constraint by multiple one-dimensional smoothness constraints along different path directions.

Suppose the pixel $p$ of the base image has gray value $I_{bp}$, and its homonymous point $q$ on the image to be matched has gray value $I_{mq}$. The function $q = e_{bm}(p, d)$ gives the point on the epipolar line of the matching image corresponding to the pixel $p$ of the base image, with epipolar line parameter (disparity) $d$. The MI-based matching cost function is then

$$C_{MI}(p, d) = -\left( h_{I_b}(I_{bp}) + h_{I_m}(I_{mq}) - h_{I_b, I_m}(I_{bp}, I_{mq}) \right)$$

where $h_{I_b}$ and $h_{I_m}$ are the entropies of the image blocks centered on the pixels $p$ and $q$ respectively, and $h_{I_b, I_m}$ is the joint entropy of the two blocks.

Along the direction of a path $r$, the cost $L_r(p, d)$ of the pixel $p$ is defined recursively as

$$L_r(p, d) = C(p, d) + \min \left( L_r(p - r, d),\ L_r(p - r, d - 1) + P_1,\ L_r(p - r, d + 1) + P_1,\ \min_i L_r(p - r, i) + P_2 \right) - \min_k L_r(p - r, k)$$

where $P_1$ and $P_2$ are penalty factors. The total matching cost is obtained by summing the costs over all directions:

$$S(p, d) = \sum_r L_r(p, d)$$

Then, for each pixel $p$, the disparity is taken as $d_p = \arg\min_d S(p, d)$. Finally, a consistency check is performed, i.e., the disparity values of matching point pairs are compared, producing a disparity map with clear contours and rich information; the specific implementation flow is shown in Fig. 9.
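A sketch of the dense matching step with OpenCV's semi-global matcher follows. StereoSGBM follows Hirschmüller's SGM aggregation scheme but uses a block-based matching cost rather than mutual information, and all parameter values here are illustrative assumptions rather than the patent's settings.

```python
import cv2

block = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,              # must be divisible by 16
    blockSize=block,
    P1=8 * block * block,            # small-jump penalty P1
    P2=32 * block * block,           # large-jump penalty P2
    disp12MaxDiff=1,                 # left-right consistency check tolerance
    uniquenessRatio=10)              # default mode aggregates several 1-D paths

# rect_left / rect_right: the rectified 8-bit epipolar images from section 1.2.5
disparity = sgbm.compute(rect_left, rect_right).astype('float32') / 16.0
```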
1.2.7 three-dimensional reconstruction
According to the calibration and dense matching results of the stereo camera, the system reconstructs the three-dimensional information of the target scene using a multi-image forward intersection method.

The shape and position of a three-dimensional object are uniquely determined once all points on the object surface are available. As shown in Fig. 10, suppose an arbitrary space point $P$ has image points $p_1$ and $p_2$ under the two camera coordinate systems $C_1$ and $C_2$; $p_1$ and $p_2$ are the corresponding points of the same space point $P$ in the left and right images. With the cameras calibrated, the projection matrices are $M_1$ and $M_2$ respectively, so that

$$Z_{c_k} \begin{bmatrix} u_k \\ v_k \\ 1 \end{bmatrix} = M_k \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \qquad k = 1, 2$$

where $(u_1, v_1, 1)$ and $(u_2, v_2, 1)$ are the homogeneous image coordinates of the points $p_1$ and $p_2$ in their respective images, $(X, Y, Z, 1)$ is the homogeneous coordinate of the point $P$ in the world coordinate system, and $m^k_{ij}$ ($k = 1, 2$; $i = 1, \ldots, 3$; $j = 1, \ldots, 4$) is the element in row $i$ and column $j$ of $M_k$.

Eliminating $Z_{c_1}$ and $Z_{c_2}$ from the two formulas above yields four linear equations in $X$, $Y$, $Z$:

$$\begin{aligned} (u_k m^k_{31} - m^k_{11}) X + (u_k m^k_{32} - m^k_{12}) Y + (u_k m^k_{33} - m^k_{13}) Z &= m^k_{14} - u_k m^k_{34} \\ (v_k m^k_{31} - m^k_{21}) X + (v_k m^k_{32} - m^k_{22}) Y + (v_k m^k_{33} - m^k_{23}) Z &= m^k_{24} - v_k m^k_{34} \end{aligned} \qquad k = 1, 2 \tag{36}$$

As known from analytic geometry, a plane in three-dimensional space is described by a linear equation, and two simultaneous plane equations describe a straight line in space (the line of intersection of the two planes); the geometric meaning of equation (36) is therefore the projection rays $O_1 p_1$ and $O_2 p_2$.

There are now 4 equations in only 3 unknowns; considering the presence of data noise, they can be solved by least squares. Rewriting equation (36) in matrix form gives

$$\begin{bmatrix} u_1 m^1_{31} - m^1_{11} & u_1 m^1_{32} - m^1_{12} & u_1 m^1_{33} - m^1_{13} \\ v_1 m^1_{31} - m^1_{21} & v_1 m^1_{32} - m^1_{22} & v_1 m^1_{33} - m^1_{23} \\ u_2 m^2_{31} - m^2_{11} & u_2 m^2_{32} - m^2_{12} & u_2 m^2_{33} - m^2_{13} \\ v_2 m^2_{31} - m^2_{21} & v_2 m^2_{32} - m^2_{22} & v_2 m^2_{33} - m^2_{23} \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} m^1_{14} - u_1 m^1_{34} \\ m^1_{24} - v_1 m^1_{34} \\ m^2_{14} - u_2 m^2_{34} \\ m^2_{24} - v_2 m^2_{34} \end{bmatrix} \tag{37}$$

Equation (37) may be abbreviated as

$$KX = U \tag{38}$$

where $K$ is the $4 \times 3$ matrix on the left of equation (37), $X$ is the unknown three-dimensional vector, and $U$ is the $4 \times 1$ vector on the right. With $K$ and $U$ known, the least squares solution of equation (37) is

$$X = (K^T K)^{-1} K^T U \tag{39}$$
Reconstruction in the usual Euclidean geometric sense requires strict calibration of the cameras, which is accomplished by the camera calibration method described above.
The two-dimensional images and the three-dimensional scene are related by perspective projection, described by the projection matrices (i.e., the camera parameter matrices). First, the projection matrices are recovered from the three-dimensional information of a small number of image points; then the three-dimensional information of each point is recovered by least squares using the projection matrices of the two cameras, thereby restoring the three-dimensional shape of the object.
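A minimal NumPy sketch of the forward intersection of equations (36)-(39) follows: for one pair of homonymous image points, the four linear equations are stacked and $KX = U$ is solved by least squares; in the system this would run once per dense match.

```python
import numpy as np

def forward_intersect(u1, v1, u2, v2, M1, M2):
    """M1, M2: 3x4 projection matrices; returns the 3D point (X, Y, Z)."""
    rows, rhs = [], []
    for (u, v), M in (((u1, v1), M1), ((u2, v2), M2)):
        rows.append(u * M[2, :3] - M[0, :3])   # (u m31 - m11, ...) row of K
        rhs.append(M[0, 3] - u * M[2, 3])
        rows.append(v * M[2, :3] - M[1, :3])   # (v m31 - m21, ...) row of K
        rhs.append(M[1, 3] - v * M[2, 3])
    K = np.asarray(rows)                       # the 4 x 3 matrix of equation (38)
    U = np.asarray(rhs)                        # the 4 x 1 right-hand side
    X, *_ = np.linalg.lstsq(K, U, rcond=None)  # equivalent to (K^T K)^-1 K^T U
    return X
```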
1.2.8 stereo image model connection
During the movement of the stereo camera, the system continuously acquires stereo images and reconstructs the stereo model of the target scene in real time. However, each reconstruction obtains only a local model of the target within the current field of view, and the results of the reconstructions must be connected in order to form a complete scene model; this is the main task of the stereo image model connection module.
The principle of stereo image model connection is as follows: the homonymous image points of two groups of stereo images at adjacent moments are obtained and used as connection points of the two groups; the homonymous model points of the two stereo models are obtained by forward intersection calculation; and finally the two stereo models are transformed into the same space coordinate system by a spatial similarity transformation. The stereo images obtained at subsequent moments are processed in the same way, so that all stereo image models are connected into an integral model of the whole scene. The calculation flow of the stereo image model connection is shown in Fig. 11.
The spatial similarity transformation adopted by the system is

$$\begin{bmatrix} X_T \\ Y_T \\ Z_T \end{bmatrix} = \lambda \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + \begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix} \tag{40}$$

where $(X_T, Y_T, Z_T)$ are the coordinates of a model point in the coordinate system of the previous group's stereo image model; $(X, Y, Z)$ are the coordinates of the homonymous model point of the next adjacent group's stereo image model in its own model coordinate system; $(X_0, Y_0, Z_0)$ are the coordinates of the origin of the next group's model coordinate system in the previous group's model coordinate system; $\lambda$ is the scale factor between the two stereo image models; and $a_i, b_i, c_i$ are the elements of the rotation matrix, functions of the angular elements $\varphi, \omega, \kappa$. If these 7 parameters are known, coordinates can be transformed between the two stereo image model coordinate systems.

Equation (40) contains 7 unknown parameters, and each pair of homonymous points provides 3 equations, so at least 3 homonymous feature points not lying on one straight line are required to solve for them. In actual processing, 4 or more homonymous feature points are usually used to solve the transformation parameters, to guarantee accuracy and reliability. Since equation (40) is nonlinear, it is linearized to obtain error equations of the standard form $V = A\,\delta - L$, which are solved iteratively by least squares.
Through the connection of the stereo image models, the target three-dimensional information reconstructed by the system during its movement is integrated into the coordinate system of the first group's stereo image model, forming a complete geometric model of the whole scene. If several marker points with known spatial coordinates exist in the scene, the geometric model of the whole scene can further be transformed to agree exactly with the position and size of the actual scene.
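For illustration, the seven parameters of equation (40) can also be estimated in closed form by the Umeyama/Horn method instead of the iterative linearized solution described above; the sketch below assumes at least three non-collinear homonymous model points as input.

```python
import numpy as np

def similarity_transform(src, dst):
    """src, dst: N x 3 arrays of homonymous model points (N >= 3, non-collinear).
    Returns lam, R, t such that dst ~ lam * R @ src + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = src_c.T @ dst_c / len(src)           # 3 x 3 cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) < 0:          # guard against a reflection
        D[2, 2] = -1.0
    R = Vt.T @ D @ U.T                         # rotation (a_i, b_i, c_i of eq. 40)
    lam = np.trace(np.diag(S) @ D) / src_c.var(0).sum()   # scale factor
    t = mu_d - lam * R @ mu_s                  # translation (X0, Y0, Z0)
    return lam, R, t
```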
2 operating process of system
Having completed the camera calibration beforehand, the system continuously moves the stereo camera platform relative to the photographed target to obtain a sequence of observation stereo pairs of the target from different angles, computes the three-dimensional reconstruction result of each stereo pair in real time during photography, and simultaneously connects the three-dimensional reconstruction results of all stereo pairs to generate the three-dimensional reconstruction model of the photographed target. Camera calibration is performed as described in section 1.2.1; the mobile real-time measurement process of the system is described in detail below (as shown in Fig. 12).
The first step: the stereo camera photographs and acquires a stereo pair. To obtain a complete model of the target, the system platform must be moved continuously to obtain a sequence of stereo images of the target, and the moving speed must be controlled so that adjacent stereo image pairs contain common homonymous feature points.
The second step: feature extraction and matching of the current stereo pair. For the currently acquired stereo image pair, a number of homonymous feature points on the left and right images (required to be no fewer than 6 and not to lie on one straight line) are obtained by the feature point extraction and image matching algorithms.
The third step: relative orientation of the current stereo pair. The relative position and attitude of the left and right images of the stereo pair are calculated from the homonymous feature points acquired in the previous step, combined with the camera calibration result; usually the left image is taken as the reference and the relative position and attitude of the right image with respect to the left image are calculated.
The fourth step: the left and right images of the current stereo pair are each rectified into epipolar images. According to the relative orientation result of the current stereo pair, the left and right images are resampled according to the epipolar relationship to generate the corresponding epipolar images, in which homonymous image points lie on the same image row.
The fifth step: dense matching of the left and right epipolar images. The left and right epipolar images are matched pixel by pixel using the algorithm of section 1.2.6 to obtain the coordinates of all homonymous image points on the two images.
The sixth step: three-dimensional reconstruction of the current stereo pair. According to the dense matching result, the space coordinates of the object point corresponding to each pair of homonymous image points are calculated by forward intersection, giving the three-dimensional information of the target corresponding to the current stereo pair.
The seventh step: connect the target three-dimensional models acquired from two adjacent stereo pairs. Repeating the first through sixth steps yields the next stereo pair and its corresponding target three-dimensional model, which partly overlaps the previous one. The geometric relationship between the target three-dimensional models obtained from the two adjacent stereo pairs is established by the method of section 1.2.8, and the target model corresponding to the later stereo pair is transformed into the unified coordinate system, integrating the two models.
The eighth step: acquire the complete three-dimensional geometric information of the target. Steps one through seven are repeated, integrating the three-dimensional models generated from each photographed stereo pair; when all surfaces of the target have been photographed, complete and accurate three-dimensional discrete point information of the target surface is obtained.
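To illustrate how steps seven and eight compose the per-pair results, the sketch below chains the successive similarity transforms so that every local model lands in the first model's coordinate system; similarity_transform is the sketch from section 1.2.8, and the input structures are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def integrate(models, tie_point_pairs):
    """models: list of N_i x 3 point clouds, one per stereo pair (step 6);
    tie_point_pairs[i]: (pts_prev, pts_next), homonymous model points shared
    by model i and model i+1, each expressed in its own model coordinates."""
    lam_acc, R_acc, t_acc = 1.0, np.eye(3), np.zeros(3)
    merged = [models[0]]                       # the first model fixes the datum
    for model, (pts_prev, pts_next) in zip(models[1:], tie_point_pairs):
        lam, R, t = similarity_transform(pts_next, pts_prev)  # next -> previous
        # Compose with the accumulated transform into the first model's frame
        lam_acc, t_acc = lam_acc * lam, lam_acc * (R_acc @ t) + t_acc
        R_acc = R_acc @ R
        merged.append(lam_acc * (model @ R_acc.T) + t_acc)
    return np.vstack(merged)                   # step 8: the integral model
```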
Specific embodiments are given above, but the present invention is not limited to the described embodiments. The basic idea of the present invention lies in the basic scheme above; for those skilled in the art, no creative effort is needed to design various modified models, formulas and parameters according to the teaching of the present invention. Variations, modifications, substitutions and alterations may be made to the embodiments without departing from the principle and spirit of the invention, and these still fall within the scope of the invention.

Claims (5)

1. An image space type stereoscopic vision online mobile real-time measurement system, characterized by comprising a stereo camera consisting of two cameras and a camera fixing and distance adjusting device, wherein the stereo camera is connected with a control and calculation device, and the control and calculation device is used for controlling the cameras and for storing, processing and outputting the acquired data and the processing results; the measurement process is as follows:
1) calibrating a camera; 2) acquiring a plurality of groups of stereo images in the moving process of the stereo camera; 3) preprocessing an image; 4) feature extraction and stereo matching; 5) three-dimensional reconstruction; 6) connecting the three-dimensional image models; for the stereo image at any moment, acquiring homonymous image points in the stereo images at adjacent moments, taking the homonymous image points as connection points of two groups of stereo images, obtaining homonymous model points of two groups of stereo models through forward intersection calculation, and transforming the two groups of stereo models to the same space coordinate system through space similarity transformation; sequentially carrying out the same treatment on the stereo images at the next moment, and connecting all the stereo image models into an integral model aiming at the whole scene; in the moving process of the stereo camera, a stereo image is continuously acquired and a stereo model of a target scene is reconstructed in real time, the stereo model of the target scene is a local model of a target in the current field of view, and the reconstructed results are required to be connected in order to form a complete scene model.
2. The image space type stereoscopic vision online mobile real-time measurement system according to claim 1, wherein the camera calibration method comprises: simultaneously acquiring stereo images; extracting the corner points of the calibration plate; and performing the calibration solution.
3. The image space type stereoscopic vision online mobile real-time measurement system according to claim 1, wherein the image preprocessing comprises filtering and gray-level equalization processing.
4. The system of claim 1, wherein the feature extraction and stereo matching comprises: obtaining connection points between the three-dimensional models by utilizing an SURF operator; calculating the relative position and posture between the two images, and carrying out relative orientation calculation; determining the epipolar line relationship among the stereo images, and correcting the stereo images to obtain stereo images arranged according to the epipolar line; and carrying out dense matching on the stereo images by using an SGM algorithm to generate dense homonymous image points.
5. The image space type stereoscopic vision online mobile real-time measurement system according to claim 1, wherein the three-dimensional reconstruction is achieved by reconstructing the three-dimensional information of the target scene with a multi-image forward intersection method from the dense homonymous image points obtained by matching.
CN201410745020.1A 2014-12-08 2014-12-08 Image space type stereoscopic vision moves real-time measurement system online Active CN104537707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410745020.1A CN104537707B (en) 2014-12-08 2014-12-08 Image space type stereoscopic vision moves real-time measurement system online

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410745020.1A CN104537707B (en) 2014-12-08 2014-12-08 Image space type stereoscopic vision moves real-time measurement system online

Publications (2)

Publication Number Publication Date
CN104537707A CN104537707A (en) 2015-04-22
CN104537707B true CN104537707B (en) 2018-05-04

Family

ID=52853226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410745020.1A Active CN104537707B (en) 2014-12-08 2014-12-08 Image space type stereoscopic vision moves real-time measurement system online

Country Status (1)

Country Link
CN (1) CN104537707B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469418B (en) * 2016-01-04 2018-04-20 中车青岛四方机车车辆股份有限公司 Based on photogrammetric big field-of-view binocular vision calibration device and method
CN106384368A (en) * 2016-09-14 2017-02-08 河南埃尔森智能科技有限公司 Distortion self-correction method for non-measurement type camera lens and light-sensing chip
CN108344369A (en) * 2017-01-22 2018-07-31 北京林业大学 A kind of method that mobile phone stereoscan measures forest diameter
CN107167077B (en) * 2017-07-07 2021-05-14 京东方科技集团股份有限公司 Stereoscopic vision measuring system and stereoscopic vision measuring method
CN107392898B (en) * 2017-07-20 2020-03-20 海信集团有限公司 Method and device for calculating pixel point parallax value applied to binocular stereo vision
CN107729824B (en) * 2017-09-28 2021-07-13 湖北工业大学 Monocular visual positioning method for intelligent scoring of Chinese meal banquet table
DE102017130897A1 (en) * 2017-12-21 2019-06-27 Pilz Gmbh & Co. Kg A method of determining range information from a map of a space area
CN107958469A (en) * 2017-12-28 2018-04-24 北京安云世纪科技有限公司 A kind of scaling method of dual camera, device, system and mobile terminal
CN110148205B (en) * 2018-02-11 2023-04-25 北京四维图新科技股份有限公司 Three-dimensional reconstruction method and device based on crowdsourcing image
CN108645426B (en) * 2018-04-09 2020-04-10 北京空间飞行器总体设计部 On-orbit self-calibration method for space target relative navigation vision measurement system
CN111336073B (en) * 2020-03-04 2022-04-05 南京航空航天大学 Wind driven generator tower clearance visual monitoring device and method
CN113379822B (en) * 2020-03-16 2024-03-22 天目爱视(北京)科技有限公司 Method for acquiring 3D information of target object based on pose information of acquisition equipment
CN111445528B (en) * 2020-03-16 2021-05-11 天目爱视(北京)科技有限公司 Multi-camera common calibration method in 3D modeling
CN112837411A (en) * 2021-02-26 2021-05-25 由利(深圳)科技有限公司 Method and system for realizing three-dimensional reconstruction of movement of binocular camera of sweeper

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102607533A (en) * 2011-12-28 2012-07-25 中国人民解放军信息工程大学 Block adjustment locating method of linear array CCD (Charge Coupled Device) optical and SAR (Specific Absorption Rate) image integrated local area network
CN102693542A (en) * 2012-05-18 2012-09-26 中国人民解放军信息工程大学 Image characteristic matching method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102607533A (en) * 2011-12-28 2012-07-25 中国人民解放军信息工程大学 Block adjustment locating method of linear array CCD (Charge Coupled Device) optical and SAR (Specific Absorption Rate) image integrated local area network
CN102693542A (en) * 2012-05-18 2012-09-26 中国人民解放军信息工程大学 Image characteristic matching method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"An Image Feature Extraction and Matching Algorithm Based on Image Correlation"; Wang Jianwen et al.; Science and Technology Innovation Herald; 31 Dec. 2008 (No. 21); full text *
"A New Method for Realizing Three-Dimensional Model Reconstruction"; Song Lihua et al.; Application Research of Computers; 30 June 2004 (No. 6); p. 148 left column paragraph 2; section 6 paragraph 2 lines 6-7; section 1 paragraph 1 lines 5-6; section 3 paragraph 1; section 4 paragraphs 1-2; section 6 *
"A Dense Stereo Image Matching Method Based on SURF and TPS"; Hou Wenguang et al.; Journal of Huazhong University of Science and Technology (Natural Science Edition); July 2010; Vol. 38 No. 7; full text *

Also Published As

Publication number Publication date
CN104537707A (en) 2015-04-22

Similar Documents

Publication Publication Date Title
CN104537707B (en) Image space type stereoscopic vision moves real-time measurement system online
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
CN106981083B (en) The substep scaling method of Binocular Stereo Vision System camera parameters
CN106780619B (en) Human body size measuring method based on Kinect depth camera
CN107063228B (en) Target attitude calculation method based on binocular vision
CN114399554B (en) Calibration method and system of multi-camera system
CN107155341B (en) Three-dimensional scanning system and frame
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
JP5285619B2 (en) Camera system calibration
CN110349251A (en) A kind of three-dimensional rebuilding method and device based on binocular camera
CN110728715A (en) Camera angle self-adaptive adjusting method of intelligent inspection robot
KR102709842B1 (en) System and method for efficient 3d reconstruction of objects with telecentric line-scan cameras
CN105043250B (en) A kind of double-visual angle data alignment method based on 1 common indicium points
CN109961485A (en) A method of target positioning is carried out based on monocular vision
EP2751521A1 (en) Method and system for alignment of a pattern on a spatial coded slide image
CN107610215B (en) High-precision multi-angle oral cavity three-dimensional digital imaging model construction method
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN112734863A (en) Crossed binocular camera calibration method based on automatic positioning
JPWO2020188799A1 (en) Camera calibration device, camera calibration method, and program
CN104167001B (en) Large-visual-field camera calibration method based on orthogonal compensation
Furferi et al. A RGB-D based instant body-scanning solution for compact box installation
CN109029379B (en) High-precision small-base-height-ratio three-dimensional mapping method
CN111432117B (en) Image rectification method, device and electronic system
JP2019032660A (en) Imaging system and imaging method
KR101673144B1 (en) Stereoscopic image registration method based on a partial linear method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant