CN112767338A - Assembled bridge prefabricated part hoisting and positioning system and method based on binocular vision - Google Patents
- Publication number: CN112767338A
- Application number: CN202110040531.3A
- Authority: CN (China)
- Prior art keywords: image, industrial, target, eye camera, point
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0004 (Industrial image inspection)
- G06T5/80 (Image enhancement or restoration; geometric correction)
- G06T7/13 (Edge detection)
- G06T7/136 (Segmentation; edge detection involving thresholding)
- G06T7/62 (Analysis of geometric attributes of area, perimeter, diameter or volume)
- G06T7/70 (Determining position or orientation of objects or cameras)
- G06T7/85 (Stereo camera calibration)
- G06T2207/10012 (Stereo images)
- G06T2207/20164 (Salient point detection; corner detection)
- G06T2207/30108 (Industrial image inspection)
Abstract
The invention discloses a binocular-vision-based hoisting and positioning system for prefabricated parts of assembled bridges, comprising: a fixed support, an industrial left-eye camera, an industrial right-eye camera, a computer, a trigger, an optical fiber data line, a calibration plate and a target combined structure. The invention also discloses a binocular-vision-based hoisting and positioning method for prefabricated parts of assembled bridges, comprising the following steps: step S1, calibrating the cameras and solving their internal and external parameters; step S2, mounting the target combined structure and acquiring a first image and a second image; step S3, correcting the first image and the second image; step S4, acquiring the pixel coordinate values of the circle-center positioning point of the target; and step S5, obtaining the spatial three-dimensional coordinates of the circle-center positioning point by the parallax method. By capturing with binocular vision the target mounted on a steel bar, the invention obtains the three-dimensional coordinates of the bar and provides those position coordinates for formulating a bar-adjustment scheme, saving time and cost in engineering.
Description
Technical Field
The invention relates to the field of computer vision and key target positioning of prefabricated components of an assembly type building, in particular to a system and a method for hoisting and positioning prefabricated components of an assembly type bridge based on binocular vision.
Background
With the development of new-type urbanization in China, promoting the modernization of the building industry is an inevitable trend, and the key is to adapt to the development requirements of new building industrialization and information technology. Fabricated buildings and bridges show clear benefits in energy conservation and emission reduction, environmental protection, shortened construction periods, cost reduction and quality improvement.
The hoisting process is an important component of fabricated construction; optimizing it is one of the key topics of fabricated-building research and bears directly on the healthy development of fabricated buildings.
Traditional hoisting positioning relies mainly on manual measurement and inspection: during hoisting, a trial assembly is first performed to ensure that the positions of the prefabricated holes match the component steel bars so that the component can be seated successfully, and in engineering practice such manual trial assembly and alignment checking is often time-consuming. When laser scanning technology is used for hoisting positioning and detection, the three-dimensional laser scanner is expensive, requires a specific environment, takes a long time to process the scanned point cloud, and is difficult to recalibrate once its accuracy drifts.
Therefore, developing a method for intelligently detecting key parts during hoisting is particularly important: it conveniently provides detection data for formulating an adjustment scheme during hoisting and saves time and cost in engineering. Aiming at the above problems, the invention develops a binocular-vision-based method and system for positioning the key hoisting parts of prefabricated bridge members, providing technical support for solving these problems.
Disclosure of Invention
In order to solve the problems in the actual engineering, the invention aims to provide a binocular vision-based prefabricated bridge component hoisting and positioning system and method, which are used for solving the problem of positioning of the steel bars of the prefabricated components in the actual hoisting engineering.
In order to achieve the above object, the present invention provides a binocular vision based prefabricated bridge member hoisting and positioning system, comprising: the system comprises a fixed support, an industrial left-eye camera, an industrial right-eye camera, a computer, a trigger, an optical fiber data line, a calibration plate and a target combined structure;
the industrial left-eye camera and the industrial right-eye camera are arranged on the fixed support in parallel, and are in communication connection with the trigger through the optical fiber data line respectively, and the trigger is in communication connection with the computer;
a target measuring program is installed in the computer, and the computer performs data processing on images acquired by the industrial left-eye camera and the industrial right-eye camera through the target measuring program;
the target combined structure comprises: a target veneer, a hollow rubber mounting handle and an adjusting knob; the hollow rubber mounting handle is connected with the target veneer through a spherical pan-tilt connector, the adjusting knob is arranged on the hollow rubber mounting handle, and a target is arranged on the front side of the target veneer;
the industrial left-eye camera and the industrial right-eye camera shoot the calibration plate or the target to acquire corresponding images.
Furthermore, the optical fiber data line is a USB 3.0 optical fiber data line, the calibration plate is a chessboard calibration plate, and both the surface of the calibration plate and the target are provided with retroreflective marker points.
The invention also provides a hoisting and positioning method of the prefabricated component of the assembled bridge based on binocular vision, which comprises the following steps: step S1, calibrating the industrial left-eye camera and the industrial right-eye camera, and solving internal and external parameters of the industrial left-eye camera and the industrial right-eye camera;
step S2, mounting a target combination structure at the position of a steel bar to be measured of a prefabricated part, shooting the target through the industrial left-eye camera to obtain a first image of the target, and shooting the target through the industrial right-eye camera to obtain a second image of the target;
step S3, performing a rectification process on the first image and the second image acquired in step S2, the rectification process including: distortion correction and stereo correction;
step S4, processing the corrected first image and the corrected second image, and acquiring and storing pixel coordinate values of a target circle center positioning point in the first image and the second image;
and step S5, obtaining the pixel coordinate value of the target circle center positioning point of the first image and the pixel coordinate value of the target circle center positioning point of the second image according to the step S4, and obtaining the space three-dimensional coordinate of the circle center positioning point by adopting a parallax method.
Further, the step S1 specifically comprises:
step S101, fixing the industrial left-eye camera and the industrial right-eye camera in front of the prefabricated part to be measured, and placing the calibration plate within the field of view of both cameras;
step S102, acquiring a plurality of calibration plate images at different angles and distances through the industrial left-eye camera and the industrial right-eye camera, and then placing the acquired images into a designated folder;
step S103, performing corner detection on the images in the designated folder through MATLAB and carrying out the calibration calculation to obtain the parameters, which comprise: rotation matrix, translation vector, internal parameter matrix, reprojection error, skew of the image axes, principal point coordinates and scale factors;
step S104, screening out of the designated folder the images that do not meet the reprojection-error requirement;
and step S105, performing a second calibration on the images remaining in the designated folder, and calculating and saving the parameters.
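The reprojection-error screening of steps S103 to S105 can be sketched as follows. This is a minimal illustration in Python rather than the MATLAB toolchain named above, distortion is ignored, and the function names and threshold are hypothetical:

```python
import math

def project(K, X):
    """Pinhole projection of a camera-frame 3-D point X = (Xc, Yc, Zc)
    using intrinsic matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]."""
    fx, cx, fy, cy = K[0][0], K[0][2], K[1][1], K[1][2]
    u = fx * X[0] / X[2] + cx
    v = fy * X[1] / X[2] + cy
    return (u, v)

def reprojection_error(K, points3d, observed):
    """Mean distance between projected (theoretical) and observed pixels."""
    total = 0.0
    for X, m in zip(points3d, observed):
        u, v = project(K, X)
        total += math.hypot(u - m[0], v - m[1])
    return total / len(points3d)

def screen_images(errors, threshold):
    """Step S104: keep only the indices of images whose mean error passes."""
    return [i for i, e in enumerate(errors) if e <= threshold]
```

After screening, the calibration is simply rerun on the surviving images, which is the second calibration of step S105.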
Further, in step S2, the target combined structure is installed at the steel bar to be measured of the prefabricated component, and the target-surface position is adjusted through the spherical pan-tilt connector so that the target surface directly faces the industrial left-eye camera and the industrial right-eye camera.
Further, in step S3, the distortion correction specifically comprises:
step S311, projecting three-dimensional space points in the first image and the second image to a normalized image plane;
step S312, correcting radial distortion and tangential distortion of the point on the normalized plane, specifically using formula (1) and formula (2), the expression is as follows:
x_corrected = x(1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p1*x*y + p2*(r^2 + 2x^2)   (1)
y_corrected = y(1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p2*x*y + p1*(r^2 + 2y^2)   (2)
in formulas (1) to (2), x is the abscissa of the space point in the image coordinate system; x_corrected is the abscissa of the corrected space point in the image coordinate system; y is the ordinate of the space point in the image coordinate system; y_corrected is the ordinate of the corrected space point in the image coordinate system; r^2 = x^2 + y^2; k1, k2, k3 are the radial distortion parameters of the camera; p1 and p2 are the tangential distortion parameters of the camera;
step S313, projecting the corrected point to the pixel plane through the internal parameter matrix to obtain the correct position of the point on the image, the expressions being as follows:
u = f_x * x_corrected + c_x   (3)
v = f_y * y_corrected + c_y   (4)
in formulas (3) to (4), c_x and c_y are the offsets of the camera optical axis in the image coordinate system; f_x and f_y are the focal lengths; u and v are the coordinates of the space point in the pixel coordinate system;
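Formulas (1) to (4) can be evaluated directly; the following Python sketch (function names are illustrative, not from the patent) applies the radial and tangential model to one normalized point and then projects it to pixels:

```python
def correct_point(x, y, k1, k2, k3, p1, p2):
    """Evaluate formulas (1)-(2): radial + tangential distortion terms
    for a point (x, y) on the normalized image plane."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xc = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yc = y * radial + 2 * p2 * x * y + p1 * (r2 + 2 * y * y)
    return xc, yc

def to_pixel(xc, yc, fx, fy, cx, cy):
    """Formulas (3)-(4): map the corrected normalized point to pixels."""
    return fx * xc + cx, fy * yc + cy
```

With all distortion coefficients zero, `correct_point` is the identity, which is a convenient sanity check on an implementation.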
in step S3, the stereo correction specifically includes:
step S321, splitting the rotation matrix R between the two cameras into two half-rotations r_zuo and r_you, with r_zuo = R^(1/2) applied to the left camera and r_you = R^(-1/2) applied to the right camera;
step S322, constructing e_1, e_2, e_3 from the offset matrix T so that the left and right epipolar lines become parallel, and then transforming the epipole of the left view to infinity through the constructed transformation matrix R_rect; where T = [T_x, T_y, T_z]^T, e_1 = T/||T||, e_2 = [-T_y, T_x, 0]^T / sqrt(T_x^2 + T_y^2), e_3 = e_1 × e_2, and R_rect is the matrix whose rows are e_1, e_2, e_3; T is the offset matrix; R_rect is the constructed transformation matrix; T_x, T_y, T_z are the components of the offset matrix; e_1, e_2, e_3 are the components of the transformation matrix;
step S323, multiplying the coordinate systems of the industrial left-eye camera and the industrial right-eye camera in turn by the corresponding overall rotation matrices so that the principal optical axes of the two coordinate systems become parallel, the overall rotation matrix being obtained by multiplying the half-rotation matrix by the transformation matrix; the expressions are as follows:
R_zuo = R_rect * r_zuo   (5)
R_you = R_rect * r_you   (6)
in formulas (5) to (6), R_zuo and R_you are the overall rotation matrices of the industrial left-eye camera and the industrial right-eye camera respectively; r_zuo and r_you are the half-rotations of step S321; R_rect is the matrix that transforms the camera epipoles to infinity.
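Assuming T is the stereo offset of step S322, the rectifying matrix R_rect can be built as in the following NumPy sketch (the function name is illustrative); its rows e_1, e_2, e_3 form an orthonormal basis whose first axis points along the baseline, which is what sends the epipole to infinity:

```python
import numpy as np

def rect_transform(T):
    """Step S322: build R_rect from the translation T = [Tx, Ty, Tz].
    Row e1 points along the baseline, e2 is orthogonal to it in the
    image plane, and e3 completes the right-handed basis."""
    T = np.asarray(T, dtype=float)
    e1 = T / np.linalg.norm(T)
    e2 = np.array([-T[1], T[0], 0.0]) / np.hypot(T[0], T[1])
    e3 = np.cross(e1, e2)
    return np.vstack([e1, e2, e3])
```

Because the rows are orthonormal, R_rect is a rotation, and it maps the baseline direction e_1 onto the image x-axis.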
Further, the step S4 specifically includes:
step S401, filtering the corrected first image and the corrected second image using a median filtering algorithm, whose output formula is:
f̂(x, y) = median{ g(s, t) : (s, t) ∈ S_xy }   (7)
in formula (7), g(s, t) is the gray value of the original image within the filter window S_xy centered on (x, y), and f̂(x, y) is the gray value of the filtered image;
step S402, sharpening the filtered image with the Laplacian operator, whose discrete expression is:
∇^2 f(x, y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)   (8)
formula (8) corresponds to a Laplacian mask whose center coefficient is -4; subtracting ∇^2 f from the image yields the sharpened result;
s403, setting a proper threshold value through a canny operator to perform edge detection on the image to obtain an image contour, and screening out target contours meeting conditions through detecting the contour by the perimeter, the area and the roundness of the image contour;
fitting the target contour meeting the conditions, obtaining the circle center coordinates of the first image and the second image, averaging the circle center coordinates of the first image and the second image, and storing the circle center coordinates of the average value;
the expressions of the Canny operator are as follows:
G(x, y, σ) = (1/(2πσ^2)) * exp(-(x^2 + y^2)/(2σ^2))   (9)
I(x, y) = G(x, y, σ) * f(x, y)   (10)
M = sqrt(I_X(x, y)^2 + I_Y(x, y)^2)   (11)
θ = arctan(I_Y(x, y)/I_X(x, y))   (12)
in formulas (9) to (12), G(x, y, σ) is a two-dimensional Gaussian function and σ is its standard deviation (the filter coverage area grows as σ increases); f(x, y) is the gray value of the original image; I(x, y) is the gray value of the filtered image; I_X(x, y) and I_Y(x, y) are the partial derivatives of I(x, y) in the x and y directions; M is the gradient strength at the point and θ is the direction of its gradient vector.
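The contour screening of step S403 relies on the classical circularity metric 4πA/P^2, which equals 1 for a perfect circle. A small self-contained sketch (the thresholds and the contour representation as (area, perimeter) pairs are illustrative simplifications, not from the patent):

```python
import math

def roundness(area, perimeter):
    """Circularity metric 4*pi*A/P^2: 1.0 for a perfect circle,
    smaller for elongated or ragged contours."""
    return 4.0 * math.pi * area / (perimeter * perimeter)

def screen_contours(contours, min_area, min_roundness):
    """Keep contours that look like the circular target; each contour
    is represented here by its (area, perimeter) pair."""
    return [c for c in contours
            if c[0] >= min_area and roundness(c[0], c[1]) >= min_roundness]
```

For comparison, a square of side a has roundness 4πa^2/(4a)^2 = π/4 ≈ 0.785, so a threshold around 0.9 separates circles from square-ish blobs.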
Further, step S5 specifically comprises:
step S501, obtaining the relation between parallax and the three-dimensional coordinates from the triangle-similarity principle:
u1 - u2 = f*b/Z_w   (13)
step S502, letting (u1 - u2) = X_zuo - X_you = d, obtaining:
Z_w = f*b/d   (14)
X_w = b*u1/d,  Y_w = b*v1/d,  Z_w = f*b/d   (15)
step S503, calculating the three-dimensional coordinates of the circle-center position according to formula (15);
in formulas (13) to (15), X_w is the abscissa of the circle-center point in the world coordinate system; Y_w is the ordinate of the circle-center point in the world coordinate system; Z_w is the depth coordinate of the circle-center point in the world coordinate system; X_zuo is the abscissa of the circle-center point in the image coordinate system of the industrial left-eye camera imaging plane; X_you is the abscissa of the circle-center point in the image coordinate system of the industrial right-eye camera imaging plane; u1 is the abscissa of the projection of the circle-center point in the first image; u2 is the abscissa of the projection of the circle-center point in the second image; v1 is the ordinate of the projection of the circle-center point in the first image; (u1 - u2) = X_zuo - X_you is the parallax d of the target point; f is the focal length of the two cameras; b is the baseline distance between the two cameras.
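The parallax relations above can be checked numerically. The sketch below assumes u1, v1 and u2 are measured relative to the principal point and that both cameras share the focal length f in pixels (the function name is illustrative):

```python
def triangulate(u1, v1, u2, f, b):
    """Recover (Xw, Yw, Zw) from the left-image projection (u1, v1)
    and the disparity d = u1 - u2, for focal length f (pixels) and
    baseline b; image coordinates are relative to the principal point."""
    d = u1 - u2
    Zw = f * b / d
    Xw = b * u1 / d
    Yw = b * v1 / d
    return Xw, Yw, Zw
```

A round-trip check: a point at (1.0, 0.5, 4.0) m seen with f = 1000 px and b = 0.2 m projects to u1 = 250, v1 = 125 in the left image and u2 = 200 in the right image, and triangulation recovers the original point.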
Compared with the prior art, the invention has the beneficial effects that:
according to the invention, the target images are acquired by the left and right cameras, the image data is processed and calculated, the two-dimensional coordinates are converted into world three-dimensional coordinates, the absolute position of the reinforcing steel bar is rapidly obtained, the automation of the detection of the large prefabricated part of the assembled bridge is realized, the workload of manual measurement is reduced, the measurement precision is improved, the cost is low, the measurement is convenient, and the result can be processed in real time.
Drawings
Fig. 1 is a schematic view of a hoisting positioning system provided in embodiment 1.
Fig. 2 is a schematic structural diagram of a target assembly structure provided in embodiment 1.
Fig. 3 is a schematic structural diagram of another angle of the target assembly structure provided in embodiment 1.
Fig. 4 is a flowchart of the method of step S1 in embodiment 2.
Fig. 5 is a flowchart of a method for hoisting and positioning the prefabricated assembly bridge member based on binocular vision, which is provided in embodiment 2.
Fig. 6 and 7 are schematic diagrams of the stereo correction in step S3 in embodiment 2.
Fig. 8 is a schematic diagram illustrating coordinate relationship conversion in different coordinate systems in step S3 in example 2.
Fig. 9 is a flowchart of the method of step S4 in embodiment 2.
In the figures: 001 - fixed support; 002 - industrial left-eye camera; 003 - industrial right-eye camera; 004 - computer; 005 - trigger; 006 - USB 3.0 optical fiber data line; 007 - chessboard calibration plate; 008 - target combined structure; 009 - target veneer; 010 - hollow rubber mounting handle; 011 - spherical pan-tilt connector; 012 - adjusting knob.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Referring to fig. 1 to 3, the present embodiment provides an assembly type bridge prefabricated part hoisting and positioning system based on binocular vision, including:
the system comprises a fixed support 001, an industrial left-eye camera 002, an industrial right-eye camera 003, a computer 004, a trigger 005, a USB3.0 optical fiber data line 006, a chessboard calibration plate 007 and a target combined structure 008; the industrial left-eye camera 002 and the industrial right-eye camera 003 are arranged on the fixing support 001 in parallel, and the fixing support 001 can adjust and read the distance between the industrial left-eye camera 002 and the industrial right-eye camera 003; the industrial left eye camera 002 and the industrial right eye camera 003 are in communication connection with the trigger 005 through the USB3.0 optical fiber data line 006 respectively, and the trigger 005 is used for ensuring that the industrial left eye camera 002 and the industrial right eye camera 003 can acquire images simultaneously; the trigger 005 is communicatively connected to the computer 004, specifically, connected through the USB3.0 optical fiber data line 006.
The target measuring program is installed in the computer 004, and the computer 004 processes the images acquired by the industrial left-eye camera 002 and the industrial right-eye camera 003 through the target measuring program.
Specifically, in this embodiment, the surface of the chessboard calibration plate 007 and the target are provided with retroreflective marker points, which reduce diffuse reflection of the returned light and allow corner detection to be performed more quickly and effectively.
When the system is in use, the industrial left-eye camera 002 and the industrial right-eye camera 003 are arranged in parallel on the fixed support 001 and photograph the target on the target veneer 009 and the chessboard calibration plate 007 to acquire the corresponding images. The images are transmitted to the computer 004 through the USB 3.0 optical fiber data line 006 and the trigger 005, and the computer 004 processes the received images through the built-in target measuring program. The processing specifically comprises: 1. calibrating the industrial left-eye camera 002 and the industrial right-eye camera 003 with the acquired images of the chessboard calibration plate 007, and solving the internal and external parameters of both cameras; 2. performing correction processing on the acquired target images, the correction processing comprising distortion correction and stereo correction; 3. processing the corrected target images and saving the pixel coordinate values of the circle-center positioning point of the target in each image; 4. obtaining the spatial three-dimensional coordinates of the circle-center positioning point from its pixel coordinate values by the parallax method.
More specifically, in the four processes above, the images of the chessboard calibration plate 007 are acquired by the industrial left-eye camera 002 and the industrial right-eye camera 003 simultaneously, and there are a plurality of them; likewise, the target images are a plurality of images acquired by the industrial left-eye camera 002 and the industrial right-eye camera 003 simultaneously.
Example 2
Referring to fig. 4 to 9, the embodiment provides a method for hoisting and positioning an assembly type bridge prefabricated part based on binocular vision, which includes the following steps:
step S1, calibrating the industrial left eye camera and the industrial right eye camera, and solving internal and external parameters of the industrial left eye camera and the industrial right eye camera;
specifically, step S1 specifically includes:
s101, fixing an industrial left-eye camera and an industrial right-eye camera in front of a prefabricated part to be tested, and placing a chessboard calibration plate in the visual field range of the industrial left-eye camera and the industrial right-eye camera;
step S102, acquiring 10-20 images of chessboard calibration plates with different angles and different distances through an industrial left-eye camera and an industrial right-eye camera, and then putting the acquired images into a formulated folder;
s103, carrying out corner point detection on the image in the formulated folder through matlab, and carrying out calibration calculation to obtain parameters, wherein the parameters comprise: rotation matrix, translation vector, internal parameter matrix, reprojection error, skewness of image axis, principal point coordinate and scale factor;
s104, screening out the images which do not meet the requirement of the reprojection error from a formulated folder;
and step S105, carrying out second calibration on the image in the formulated folder, and calculating and saving the parameters.
More specifically, the reprojection error is the difference between the theoretical position of the target point projected onto the imaging plane and the measured position on the image.
S2, mounting a target combination structure at the position of a steel bar to be measured of the prefabricated part, shooting a target through an industrial left-eye camera to obtain a first image of the target, and shooting the target through an industrial right-eye camera to obtain a second image of the target;
Specifically, in step S2, the target combined structure is installed at the steel bar to be measured of the prefabricated component, and the target-surface position is adjusted through the spherical pan-tilt connector so that the target surface directly faces the industrial left-eye camera and the industrial right-eye camera; the purpose of this operation is to reduce the error in the subsequent extraction of the circle-center coordinates on the target surface.
Step S3, performing a rectification process on the first image and the second image acquired in step S2, the rectification process including: distortion correction and stereo correction;
Specifically, image distortion directly affects the subsequent measurement accuracy. Owing to machining errors in optical components such as the camera lens, the camera exhibits a certain distortion in imaging.
The distortion correction is performed using the camera parameters and distortion coefficients obtained in step S1 together with the conversion relationship between pixel coordinates and image coordinates; the relationship between the different coordinate systems and the calibration parameters is expressed by the following formula:
s*[u, v, 1]^T = K*[R | t]*[X_w, Y_w, Z_w, 1]^T = M1*M2*[X_w, Y_w, Z_w, 1]^T
in the formula: a_x and a_y are the scale factors of the horizontal and vertical image axes respectively; K contains the focal length, principal point coordinates and other internal parameters, i.e. M1 = K is the camera internal parameter matrix; M2 = [R | t] contains the rotation matrix and the translation vector, whose parameters are determined by the position of the camera coordinate system relative to the world coordinate system; the product M = M1*M2 of the internal parameter matrix and the external parameter matrix is the projection matrix.
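The projection relation above (internal parameter matrix times external parameter matrix) can be sketched in NumPy; the numbers and function names are illustrative:

```python
import numpy as np

def projection_matrix(K, R, t):
    """M = M1 * M2: intrinsic matrix K times the extrinsic block [R | t]."""
    Rt = np.hstack([R, np.asarray(t, dtype=float).reshape(3, 1)])
    return K @ Rt

def world_to_pixel(M, Xw):
    """Project a homogeneous world point through M and dehomogenize."""
    m = M @ np.append(np.asarray(Xw, dtype=float), 1.0)
    return m[:2] / m[2]
```

With R the identity and t zero, the world and camera frames coincide and the projection reduces to the pure pinhole mapping, which gives a quick consistency check.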
More specifically, in step S3, the aberration correction specifically includes:
step S311, projecting three-dimensional space points in the first image and the second image to a normalized image plane;
step S312, correcting the radial distortion and tangential distortion of the point on the normalized plane, specifically through the following two formulas:
x_corrected = x(1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p1*x*y + p2*(r^2 + 2x^2)
y_corrected = y(1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p2*x*y + p1*(r^2 + 2y^2)
in the formulas, x is the abscissa of the space point in the image coordinate system; x_corrected is the abscissa of the corrected space point in the image coordinate system; y is the ordinate of the space point in the image coordinate system; y_corrected is the ordinate of the corrected space point in the image coordinate system; r^2 = x^2 + y^2; k1, k2, k3 are the radial distortion parameters of the camera; p1 and p2 are the tangential distortion parameters of the camera.
step S313, projecting the corrected point to the pixel plane through the internal parameter matrix to obtain the correct position of the point on the image, the expressions being as follows:
u = f_x * x_corrected + c_x
v = f_y * y_corrected + c_y
in the formulas, c_x and c_y are the offsets of the camera optical axis in the image coordinate system; f_x and f_y are the focal lengths; u and v are the coordinates of the space point in the pixel coordinate system.
In step S3, the stereo correction mathematically applies a projective transformation to the left and right views captured of the same scene so that the two imaging planes are parallel to the baseline and the same point lies in the same row in both views, which is referred to as coplanar row alignment. Only after coplanar row alignment is achieved can the three-dimensional coordinates be calculated by the triangulation principle; the correction uses the Bouguet epipolar rectification algorithm of OpenCV. It specifically comprises the following steps:
step S321, splitting the rotation matrix R between the two cameras into two half-rotations r_zuo and r_you, with r_zuo = R^(1/2) and r_you = R^(-1/2), so that each camera is rotated by half of the relative rotation;

Step S322, constructing e1, e2, e3 from the offset matrix T so that the left and right epipolar lines become parallel, and then transforming the epipole of the left view to infinity through the matrix R_rect; where T = [Tx, Ty, Tz]^T is the offset matrix; R_rect is the constructed transformation matrix whose rows are e1, e2, e3; Tx, Ty, Tz are the components of the offset matrix; e1 = T/||T||, e2 = [-Ty, Tx, 0]^T / sqrt(Tx^2 + Ty^2), and e3 = e1 × e2.
Step S323, multiplying the coordinate systems of the industrial left-eye camera and the industrial right-eye camera in turn by the corresponding overall rotation matrices so that the main optical axes of the two cameras become parallel, wherein each overall rotation matrix is obtained by multiplying the constructed transformation matrix by the corresponding half-rotation; the expressions are as follows:

R_zuo = R_rect · r_zuo

R_you = R_rect · r_you

in the formulas, R_zuo and R_you are the overall (synthetic) rotation matrices of the left and right cameras respectively; R_rect is the matrix that transforms the camera epipoles to infinity; r_zuo and r_you are the half-rotations obtained in step S321.
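The construction of R_rect in step S322 can be sketched as follows, assuming the standard Bouguet row-vector formulas (e1 along the baseline, e2 orthogonal to it in the image plane, e3 their cross product); the input vector is a made-up horizontal baseline:

```python
import math

def rect_matrix(T):
    """Build the rectification matrix R_rect from the offset (translation)
    vector T = [Tx, Ty, Tz]; its rows are e1, e2, e3."""
    Tx, Ty, Tz = T
    norm_T = math.sqrt(Tx**2 + Ty**2 + Tz**2)
    e1 = [Tx / norm_T, Ty / norm_T, Tz / norm_T]   # unit vector along the baseline
    s = math.sqrt(Tx**2 + Ty**2)
    e2 = [-Ty / s, Tx / s, 0.0]                    # orthogonal to e1, in the image plane
    e3 = [e1[1] * e2[2] - e1[2] * e2[1],           # e3 = e1 x e2
          e1[2] * e2[0] - e1[0] * e2[2],
          e1[0] * e2[1] - e1[1] * e2[0]]
    return [e1, e2, e3]

# Pure horizontal baseline: e1 points along the (negative) x axis.
R = rect_matrix([-0.12, 0.0, 0.0])
print(R[0])  # [-1.0, 0.0, 0.0]
```

Applying this R (combined with the half-rotations of step S321) maps the epipoles to infinity, which is what makes the epipolar lines horizontal and row-aligned.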
Step S4, processing the corrected first image and the corrected second image, and acquiring and storing pixel coordinate values of a target circle center positioning point in the first image and the second image;
specifically, step S4 includes:
step S401, filtering the corrected first image and second image with a median filtering algorithm. More specifically, a 3 × 3 filtering template window is selected so that the pixel to be filtered coincides with the center of the template; the template is moved across the image in sequence, and at each position all the gray values covered by the template are sorted from small to large and the middle value is selected. The gray value selected in this way differs least from the gray values of the surrounding pixels, so noise is removed effectively. The median filter output formula is:

f̂(x, y) = median{ g(s, t) : (s, t) ∈ S_xy }

where g(s, t) is the gray value of the original image within the template neighborhood S_xy centered at (x, y), and f̂(x, y) is the gray value of the filtered image.
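A minimal pure-Python sketch of the 3 × 3 median filter described in step S401 (border pixels are left unchanged here for brevity; this is an illustration, not the document's implementation):

```python
def median_filter_3x3(img):
    """3x3 median filter: sort the 9 gray values under the template and
    take the middle one (border pixels are copied through unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(img[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = window[4]  # middle of the 9 sorted values
    return out

# A single impulse-noise pixel (255) in a flat region is removed.
img = [[10] * 5 for _ in range(5)]
img[2][2] = 255
print(median_filter_3x3(img)[2][2])  # 10
```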
Step S402, sharpening the filtered image with the Laplacian operator to enhance detail; the filtered image is fused with its Laplacian response so that the background is preserved. The Laplacian is defined as:

∇²f = ∂²f/∂x² + ∂²f/∂y²

and its discrete expression in image processing is:

∇²f(x, y) = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y)

where the coefficient −4 of f(x, y) is the center coefficient of the Laplacian mask.
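The discrete Laplacian and the sharpening fusion of step S402 can be sketched as follows; the fusion rule g = f − ∇²f is the common convention for a mask with a negative center coefficient, and is an assumption since the document does not state the sign:

```python
def laplacian(img, i, j):
    """Discrete Laplacian at (i, j): 4-neighbour sum minus 4x the centre."""
    return (img[i + 1][j] + img[i - 1][j] +
            img[i][j + 1] + img[i][j - 1] - 4 * img[i][j])

def sharpen(img):
    """Fuse the image with its Laplacian response (g = f - lap), which
    enhances edges; interior flat regions have lap = 0 and are unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = img[i][j] - laplacian(img, i, j)
    return out

# A vertical step edge is overshot on both sides (classic sharpening ring).
img = [[0, 0, 100, 100] for _ in range(4)]
print(sharpen(img)[1][1], sharpen(img)[1][2])  # -100 200
```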
S403, setting a suitable threshold and performing edge detection on the image with the Canny operator to obtain the image contours, then screening out the target contours that meet the conditions by detecting the perimeter, area and roundness of each contour; with the roundness threshold set to 0.8, the contours of the circular targets are screened out well.

The qualifying target contours are fitted to obtain the circle center coordinates in the first image and the second image; because the target is designed as concentric circles, the circle center coordinates of the concentric circles in each of the first image and the second image are averaged, and the mean circle center coordinates are stored.
The expressions of the Canny operator are:

I(x, y) = G(x, y, σ) * f(x, y)

M = sqrt(I_X(x, y)² + I_Y(x, y)²)

θ = arctan(I_Y(x, y) / I_X(x, y))

in the formulas, G(x, y, σ) is a two-dimensional Gaussian function and σ its standard deviation (the filter coverage grows as σ increases); f(x, y) is the gray value of the original image; I(x, y) is the gray value of the filtered image; I_X(x, y) and I_Y(x, y) are the partial derivatives of I(x, y) in the x and y directions; M is the gradient magnitude at the point and θ the direction of its gradient vector.
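The roundness screening of step S403 can be sketched as follows. The document only states the 0.8 threshold; the metric 4πA/P², which equals 1.0 for a perfect circle, is a common choice and is assumed here:

```python
import math

def roundness(area, perimeter):
    """Roundness metric 4*pi*A / P^2: 1.0 for a perfect circle, smaller for
    elongated or ragged contours (formula assumed, not given in the patent)."""
    return 4 * math.pi * area / (perimeter ** 2)

def screen_contours(contours, min_round=0.8):
    """Keep only contours whose roundness meets the threshold."""
    return [c for c in contours
            if roundness(c["area"], c["perimeter"]) >= min_round]

r = 20.0
circle = {"area": math.pi * r * r, "perimeter": 2 * math.pi * r}  # roundness = 1.0
square = {"area": 400.0, "perimeter": 80.0}  # roundness = pi/4, about 0.785
kept = screen_contours([circle, square])
print(len(kept))  # 1
```

A square falls just below the 0.8 threshold, which is why that value separates circular target contours from rectangular clutter reasonably well.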
And step S5, obtaining the pixel coordinate value of the target circle center positioning point of the first image and the pixel coordinate value of the target circle center positioning point of the second image according to the step S4, and obtaining the space three-dimensional coordinate of the circle center positioning point by adopting a parallax method.
Specifically, step S5 includes:
step S501, obtaining the relation between the disparity and the three-dimensional coordinates from the similar-triangles principle:

Z_w = f·B / (X_zuo − X_you)

step S502, letting (u1 − u2) = X_zuo − X_you = d, which gives:

Z_w = f·B / d

step S503, calculating the three-dimensional coordinates of the circle center position from the following three formulas:

X_w = B·u1 / d,  Y_w = B·v1 / d,  Z_w = f·B / d

in the formulas, X_w is the abscissa of the center point in the world coordinate system; Y_w is the ordinate of the center point in the world coordinate system; Z_w is the depth coordinate of the center point in the world coordinate system; X_zuo is the abscissa of the center point in the image coordinate system of the industrial left-eye camera imaging plane; X_you is the abscissa of the center point in the image coordinate system of the industrial right-eye camera imaging plane; u1 is the abscissa of the projection of the center point in the first image; u2 is the abscissa of the projection of the center point in the second image; v1 is the ordinate of the projection of the center point in the first image; d = (u1 − u2) = X_zuo − X_you is the disparity of the target point; f is the focal length of the two cameras; B is the baseline distance between the two cameras.
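Steps S501–S503 reduce to three one-line formulas under the ideal rectified pinhole model; a sketch with hypothetical focal length, baseline, and pixel coordinates (and assuming u, v are measured relative to the principal point):

```python
def triangulate(u1, v1, u2, f, B):
    """Recover the 3D position of the target centre from its left/right
    projections: d = u1 - u2 is the disparity, Z = f*B/d, and X, Y follow
    from similar triangles (rectified, row-aligned cameras assumed)."""
    d = u1 - u2
    Zw = f * B / d
    Xw = B * u1 / d
    Yw = B * v1 / d
    return Xw, Yw, Zw

# f = 800 px, baseline B = 0.2 m, disparity 40 px  ->  depth 4 m
X, Y, Z = triangulate(u1=420.0, v1=100.0, u2=380.0, f=800.0, B=0.2)
print(X, Y, Z)  # 2.1 0.5 4.0
```

Note the inverse relationship: halving the disparity doubles the recovered depth, which is why a longer baseline B improves depth resolution for distant prefabricated components.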
While the invention has been described in connection with specific embodiments thereof, these embodiments are not to be construed as limiting the scope of the invention, which is defined by the following claims; any variation falling within the scope of the claims is intended to be embraced thereby.
Claims (8)
1. A binocular-vision-based hoisting and positioning system for assembled bridge prefabricated components, characterized by comprising: a fixed support, an industrial left-eye camera, an industrial right-eye camera, a computer, a trigger, an optical fiber data line, a calibration plate, and a target combined structure;
the industrial left-eye camera and the industrial right-eye camera are arranged on the fixed support in parallel, and are in communication connection with the trigger through the optical fiber data line respectively, and the trigger is in communication connection with the computer;
a target measuring program is installed in the computer, and the computer performs data processing on images acquired by the industrial left-eye camera and the industrial right-eye camera through the target measuring program;
the target assembly structure includes: a target veneer, a hollow rubber mounting handle and an adjusting knob, wherein the hollow rubber mounting handle is connected with the target veneer through a spherical pan-tilt connector, the adjusting knob is arranged on the hollow rubber mounting handle, and a target is arranged on the front side of the target veneer;
the industrial left-eye camera and the industrial right-eye camera shoot the calibration plate or the target to acquire corresponding images.
2. The binocular-vision-based assembled bridge prefabricated part hoisting and positioning system according to claim 1, characterized in that the optical fiber data line is a USB 3.0 optical fiber data line, the calibration plate is a chessboard calibration plate, retro-reflective marker points are arranged on the surface of the calibration plate, and retro-reflective marker points are arranged on the targets.
3. A binocular-vision-based method for hoisting and positioning assembled bridge prefabricated parts, using the system as claimed in any one of claims 1 to 2, characterized by comprising the following steps:
step S1, calibrating the industrial left-eye camera and the industrial right-eye camera, and solving internal and external parameters of the industrial left-eye camera and the industrial right-eye camera;
step S2, mounting a target combination structure at the position of a steel bar to be measured of a prefabricated part, shooting the target through the industrial left-eye camera to obtain a first image of the target, and shooting the target through the industrial right-eye camera to obtain a second image of the target;
step S3, performing a rectification process on the first image and the second image acquired in step S2, the rectification process including: distortion correction and stereo correction;
step S4, processing the corrected first image and the corrected second image, and acquiring and storing pixel coordinate values of a target circle center positioning point in the first image and the second image;
and step S5, obtaining the pixel coordinate value of the target circle center positioning point of the first image and the pixel coordinate value of the target circle center positioning point of the second image according to the step S4, and obtaining the space three-dimensional coordinate of the circle center positioning point by adopting a parallax method.
4. The assembled bridge prefabricated part hoisting and positioning method based on binocular vision according to claim 3, wherein the step S1 is specifically as follows:
s101, fixing the industrial left-eye camera and the industrial right-eye camera in front of a prefabricated part to be tested, and placing the calibration plate in the visual field range of the industrial left-eye camera and the industrial right-eye camera;
step S102, acquiring a plurality of calibration plate images at different angles and different distances through the industrial left-eye camera and the industrial right-eye camera, and then putting the acquired images into a designated folder;
s103, carrying out corner point detection on the image in the formulated folder through matlab, and carrying out calibration calculation to obtain parameters, wherein the parameters comprise: rotation matrix, translation vector, internal parameter matrix, reprojection error, skewness of image axis, principal point coordinate and scale factor;
s104, screening out the images which do not meet the requirement of the reprojection error from a formulated folder;
and step S105, carrying out second calibration on the image in the formulated folder, and calculating and saving the parameters.
5. The binocular vision based assembled bridge prefabricated part hoisting and positioning method as claimed in claim 4, wherein in the step S2, the target combination structure is installed at the steel bar to be measured of the prefabricated part, and the target surface position of the target is adjusted through the spherical pan-tilt connector, so that the target surface is opposite to the industrial left-eye camera and the industrial right-eye camera.
6. The binocular vision based assembled bridge prefabricated part hoisting and positioning method as recited in claim 5, wherein in the step S3, the distortion correction specifically comprises:
step S311, projecting three-dimensional space points in the first image and the second image to a normalized image plane;
step S312, correcting the radial distortion and tangential distortion of the points on the normalized plane through formula (1) and formula (2), the expressions being as follows:

x_corrected = x(1 + k1·r^2 + k2·r^4 + k3·r^6) + 2·p1·x·y + p2·(r^2 + 2x^2) (1)

y_corrected = y(1 + k1·r^2 + k2·r^4 + k3·r^6) + 2·p2·x·y + p1·(r^2 + 2y^2) (2)

in formulas (1) to (2), x and y are the abscissa and ordinate of the space point in the image coordinate system; x_corrected and y_corrected are the corrected abscissa and ordinate in the image coordinate system; r^2 = x^2 + y^2; k1, k2, k3 are the radial distortion parameters of the camera; p1 and p2 are the tangential distortion parameters of the camera;
step S313, projecting the corrected point to the pixel plane through the internal parameter matrix to obtain the correct position of the point on the image, the expressions being as follows:
u = fx·x_corrected + cx (3)

v = fy·y_corrected + cy (4)

in formulas (3) to (4), cx and cy are the offsets of the camera optical axis in the image coordinate system; fx and fy are the focal lengths; u and v are the coordinates of the space point in the pixel coordinate system;
in step S3, the stereo correction specifically includes:
step S321, splitting the rotation matrix R between the two cameras into two half-rotations r_zuo and r_you, with r_zuo = R^(1/2) and r_you = R^(-1/2);

Step S322, constructing e1, e2, e3 from the offset matrix T so that the left and right epipolar lines become parallel, and then transforming the epipole of the left view to infinity through the constructed transformation matrix R_rect; where T = [Tx, Ty, Tz]^T is the offset matrix; R_rect is the constructed transformation matrix whose rows are e1, e2, e3; Tx, Ty, Tz are the components of the offset matrix; e1 = T/||T||, e2 = [-Ty, Tx, 0]^T / sqrt(Tx^2 + Ty^2), and e3 = e1 × e2;
step S323, multiplying the coordinate systems of the industrial left-eye camera and the industrial right-eye camera in turn by the corresponding overall rotation matrices so that the main optical axes of the two cameras become parallel, wherein each overall rotation matrix is obtained by multiplying the constructed transformation matrix by the corresponding half-rotation; the expressions are as follows:

R_zuo = R_rect · r_zuo (5)

R_you = R_rect · r_you (6)

in formulas (5) to (6), R_zuo and R_you are the overall rotation matrices of the industrial left-eye camera and the industrial right-eye camera respectively; R_rect is the matrix that transforms the camera epipoles to infinity.
7. The assembled bridge prefabricated part hoisting and positioning method based on binocular vision according to claim 6, wherein the step S4 specifically comprises:
step S401, filtering the corrected first image and second image with a median filtering algorithm, the median filter output formula being:

f̂(x, y) = median{ g(s, t) : (s, t) ∈ S_xy } (7)

in formula (7), g(s, t) is the gray value of the original image within the template neighborhood S_xy centered at (x, y), and f̂(x, y) is the gray value of the filtered image;
step S402, sharpening the filtered image with the Laplacian operator, the expression being as follows:

∇²f(x, y) = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y) (8)

in formula (8), the coefficient −4 of f(x, y) is the center coefficient of the Laplacian mask;
s403, setting a proper threshold value through a canny operator to perform edge detection on the image to obtain an image contour, and screening out target contours meeting conditions through detecting the contour by the perimeter, the area and the roundness of the image contour;
fitting the target contour meeting the conditions, obtaining the circle center coordinates of the first image and the second image, averaging the circle center coordinates of the first image and the second image, and storing the circle center coordinates of the average value;
the expressions of the Canny operator are as follows:

G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)) (9)

I(x, y) = G(x, y, σ) * f(x, y) (10)

M = sqrt(I_X(x, y)² + I_Y(x, y)²) (11)

θ = arctan(I_Y(x, y) / I_X(x, y)) (12)

in formulas (9) to (12), G(x, y, σ) is a two-dimensional Gaussian function and σ its standard deviation (the filter coverage grows as σ increases); f(x, y) is the gray value of the original image; I(x, y) is the gray value of the filtered image; I_X(x, y) and I_Y(x, y) are the partial derivatives of I(x, y) in the x and y directions; M is the gradient magnitude at the point and θ the direction of its gradient vector.
8. The assembled bridge prefabricated part hoisting and positioning method based on binocular vision according to claim 7, wherein the step S5 specifically comprises:
step S501, obtaining the relation between the disparity and the three-dimensional coordinates from the similar-triangles principle:

Z_w = f·B / (X_zuo − X_you) (13)

step S502, letting (u1 − u2) = X_zuo − X_you = d, which gives:

Z_w = f·B / d (14)

step S503, calculating the three-dimensional coordinates of the circle center position according to formula (15):

X_w = B·u1 / d,  Y_w = B·v1 / d,  Z_w = f·B / d (15)

in formulas (13) to (15), X_w is the abscissa of the center point in the world coordinate system; Y_w is the ordinate of the center point in the world coordinate system; Z_w is the depth coordinate of the center point in the world coordinate system; X_zuo is the abscissa of the center point in the image coordinate system of the industrial left-eye camera imaging plane; X_you is the abscissa of the center point in the image coordinate system of the industrial right-eye camera imaging plane; u1 is the abscissa of the projection of the center point in the first image; u2 is the abscissa of the projection of the center point in the second image; v1 is the ordinate of the projection of the center point in the first image; d = (u1 − u2) = X_zuo − X_you is the disparity of the target point; f is the focal length of the two cameras; B is the baseline distance between the two cameras.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110040531.3A CN112767338A (en) | 2021-01-13 | 2021-01-13 | Assembled bridge prefabricated part hoisting and positioning system and method based on binocular vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112767338A true CN112767338A (en) | 2021-05-07 |
Family
ID=75699898
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110040531.3A Pending CN112767338A (en) | 2021-01-13 | 2021-01-13 | Assembled bridge prefabricated part hoisting and positioning system and method based on binocular vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112767338A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130058581A1 (en) * | 2010-06-23 | 2013-03-07 | Beihang University | Microscopic Vision Measurement Method Based On Adaptive Positioning Of Camera Coordinate Frame |
CN107133983A (en) * | 2017-05-09 | 2017-09-05 | 河北科技大学 | Bundled round steel end face binocular vision system and space orientation and method of counting |
CN109166153A (en) * | 2018-08-21 | 2019-01-08 | 江苏德丰建设集团有限公司 | Tower crane high altitude operation 3-D positioning method and positioning system based on binocular vision |
CN109493313A (en) * | 2018-09-12 | 2019-03-19 | 华中科技大学 | A kind of the coil of strip localization method and equipment of view-based access control model |
CN110189382A (en) * | 2019-05-31 | 2019-08-30 | 东北大学 | A kind of more binocular cameras movement scaling method based on no zone of mutual visibility domain |
CN111062990A (en) * | 2019-12-13 | 2020-04-24 | 哈尔滨工程大学 | Binocular vision positioning method for underwater robot target grabbing |
- 2021-01-13: CN application CN202110040531.3A filed (patent CN112767338A/en); status: active, Pending
Non-Patent Citations (1)
Title |
---|
WANG Yang: "Research on Workpiece Positioning and Dimension Measurement Based on Binocular Vision", China Master's Theses Full-text Database *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114612565A (en) * | 2022-05-12 | 2022-06-10 | 中建安装集团有限公司 | Prefabricated part positioning method and system based on binocular vision |
CN115082557A (en) * | 2022-06-29 | 2022-09-20 | 中交第二航务工程局有限公司 | Tower column hoisting relative attitude measurement method based on binocular vision |
CN115082557B (en) * | 2022-06-29 | 2024-03-15 | 中交第二航务工程局有限公司 | Binocular vision-based tower column hoisting relative attitude measurement method |
CN115126267A (en) * | 2022-07-25 | 2022-09-30 | 中建八局第三建设有限公司 | Optical positioning control system and method applied to concrete member embedded joint bar alignment |
CN115126267B (en) * | 2022-07-25 | 2024-05-31 | 中建八局第三建设有限公司 | Optical positioning control system and method applied to pre-buried dowel bar alignment of concrete member |
CN115306165A (en) * | 2022-08-25 | 2022-11-08 | 中国建筑第二工程局有限公司 | Assembly type prefabricated part mounting system |
CN116310099A (en) * | 2023-03-01 | 2023-06-23 | 南京工业大学 | Three-dimensional reconstruction method of steel bridge component based on multi-view images |
CN116678337A (en) * | 2023-06-08 | 2023-09-01 | 交通运输部公路科学研究所 | Image recognition-based bridge girder erection machine girder front and rear pivot point position height difference and girder deformation monitoring and early warning system and method |
CN117846318A (en) * | 2023-12-11 | 2024-04-09 | 湖北工建集团第三建筑工程有限公司 | Integral hoisting construction method for large-span double-slope trapezoid steel roof truss |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112767338A (en) | Assembled bridge prefabricated part hoisting and positioning system and method based on binocular vision | |
CN110276808B (en) | Method for measuring unevenness of glass plate by combining single camera with two-dimensional code | |
CN110146038B (en) | Distributed monocular camera laser measuring device and method for assembly corner of cylindrical part | |
US10690492B2 (en) | Structural light parameter calibration device and method based on front-coating plane mirror | |
CN111612853B (en) | Camera parameter calibration method and device | |
CN109029299B (en) | Dual-camera measuring device and method for butt joint corner of cabin pin hole | |
CN109859272B (en) | Automatic focusing binocular camera calibration method and device | |
CN112907676A (en) | Calibration method, device and system of sensor, vehicle, equipment and storage medium | |
CN114705122B (en) | Large-view-field stereoscopic vision calibration method | |
CN110827360B (en) | Photometric stereo measurement system and method for calibrating light source direction thereof | |
Boochs et al. | Increasing the accuracy of untaught robot positions by means of a multi-camera system | |
CN104315978A (en) | Method and device for measuring pipeline end face central points | |
CN113324478A (en) | Center extraction method of line structured light and three-dimensional measurement method of forge piece | |
CN105809706B (en) | A kind of overall calibration method of the more camera systems of distribution | |
CN113781579B (en) | Geometric calibration method for panoramic infrared camera | |
WO2024138916A1 (en) | 2d area-array camera and line laser 3d sensor joint calibration method and apparatus | |
KR101597163B1 (en) | Method and camera apparatus for calibration of stereo camera | |
CN110595374B (en) | Large structural part real-time deformation monitoring method based on image transmission machine | |
CN111383264A (en) | Positioning method, positioning device, terminal and computer storage medium | |
CN116740187A (en) | Multi-camera combined calibration method without overlapping view fields | |
CN110458951B (en) | Modeling data acquisition method and related device for power grid pole tower | |
CN114963981B (en) | Cylindrical part butt joint non-contact measurement method based on monocular vision | |
CN116147477A (en) | Joint calibration method, hole site detection method, electronic device and storage medium | |
JPH09329440A (en) | Coordinating method for measuring points on plural images | |
CN115790366A (en) | Visual positioning system and method for large array surface splicing mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20210507 |