
CN105809706B - Global calibration method for a distributed multi-camera system - Google Patents

Global calibration method for a distributed multi-camera system Download PDF

Info

Publication number
CN105809706B
Authority
CN
China
Prior art keywords
target
camera
coordinate
coordinate system
calibration
Prior art date
Legal status
Active
Application number
CN201610354114.5A
Other languages
Chinese (zh)
Other versions
CN105809706A (en)
Inventor
吴晓龙
吴森堂
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201610354114.5A priority Critical patent/CN105809706B/en
Publication of CN105809706A publication Critical patent/CN105809706A/en
Application granted granted Critical
Publication of CN105809706B publication Critical patent/CN105809706B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10052 Images from lightfield camera

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a global calibration method for a distributed multi-camera system, comprising the following steps. A two-dimensional calibration target composed of two groups of mutually orthogonal parallel lines is used; the relative position and attitude (pose) of each camera and its target is estimated from vanishing points and the vanishing line: an initial pose value is obtained from the vanishing-line equation and the known target geometry, and the optimal value of the relative pose between the camera and the calibration target is obtained by minimizing a reprojection error function. An auxiliary camera photographs adjacent targets to obtain the coordinate transformation matrix between them; the initial relative pose of each calibration target with respect to the reference target is obtained by an incremental coordinate transformation method, and the optimal transformation matrix of each target with respect to the reference target is obtained by minimizing a reprojection error function based on the closed image sequence. Finally, through the corresponding coordinate transformations, the optimal transformation matrix of each camera with respect to the reference camera is obtained, completing the global calibration of the distributed multi-camera system.

Description

Global calibration method of distributed multi-camera system
Technical Field
The invention belongs to the technical field of photogrammetry of an optical system, and particularly relates to a global calibration method for a distributed multi-camera system.
Background
Optical measurement systems offer flexibility, versatility and high precision. A distributed multi-camera system is a typical optical measurement system: it has wide field coverage and can fuse images acquired by multiple cameras, so it is widely used in fields such as vision measurement and object detection. Global calibration of a multi-camera system acquires the relative position and attitude relationships between the cameras so that each camera coordinate system can be unified into a global coordinate system; it is one of the prerequisites for optical measurement.
At present, commonly used global calibration methods for multiple cameras include: calibration based on precision instruments, self-calibration based on feature matching, calibration based on overlapping fields of view constructed by mirror reflection, and target-based photogrammetry.
Calibration based on precision instruments is generally realized with a precision calibration target, several total stations, or a three-dimensional laser measuring device; its measurement precision is high, but the precision instruments are expensive. Self-calibration needs no special calibration target and works by detecting and matching features of the same scene from different viewing angles; however, it has low accuracy, requires a sufficiently large overlap between the fields of view of adjacent cameras, and is unsuitable for low-brightness scenes and scenes with little texture. To avoid occlusion of the field of view and obtain a better field distribution, multi-camera systems are usually distributed, so the field-of-view overlap between adjacent cameras is small and calibration is difficult to complete with such methods. The calibration method that constructs an overlapping field of view by mirror reflection has difficulty ensuring that every camera can clearly image the calibration target.
Disclosure of Invention
The invention aims to solve the problem of accumulated errors caused by multiple coordinate transformations in the global calibration of a distributed multi-camera system, and provides a global calibration method for such a system.
The invention designs a two-dimensional calibration target consisting of two groups of mutually orthogonal parallel straight lines, and provides a method for estimating the relative position and attitude (pose) of a camera and the target based on vanishing points and the vanishing line.
The invention further provides a global calibration method for the distributed targets: an auxiliary camera captures images of adjacent targets to obtain the coordinate transformation matrix between them, and the initial relative pose of each calibration target with respect to the reference target is obtained by an incremental coordinate transformation method; the optimal value of the transformation matrix of each target relative to the reference target is then obtained by minimizing the reprojection error function based on the closed image sequence. After the relative pose estimation of the cameras and the calibration targets and the global calibration of the distributed targets are completed, the optimal value of the transformation matrix of each camera relative to the reference camera is obtained through the corresponding coordinate transformations, completing the global calibration of the distributed multi-camera system.
The invention has the advantages that:
(1) the planar calibration target designed by the invention consists of two groups of mutually orthogonal parallel lines; compared with point features, straight-line features better overcome the influence of image noise;
(2) when the designed target is used for camera calibration, the attitude angle information is derived from the vanishing-line equation, and the translation vector is determined by combining the known length of the parallel lines;
(3) the method uses an auxiliary camera to obtain a closed image sequence of the adjacent targets, and is suitable for calibrating distributed multi-camera systems both with and without overlapping fields of view;
(4) in order to correct the accumulated errors caused by multiple coordinate transformations, the invention minimizes a reprojection error function under the closed-image-sequence constraint, thereby obtaining the optimal value of the transformation matrix of each target relative to the reference target.
Drawings
FIG. 1 is a flow chart of a global calibration method for a distributed camera system;
FIG. 2 is a top view of a calibration target;
FIG. 3 is a schematic diagram of global calibration of a distributed camera system;
FIG. 4 is a schematic representation of adjacent targets (i, j) taken using an auxiliary camera;
FIG. 5 is a schematic representation of the vanishing point and vanishing line of the planar target;
FIG. 6 shows the errors of the global calibration results.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
The distributed multi-camera system is composed of M cameras. C_k CF (1 ≤ k ≤ M) and A_i CF (1 ≤ i ≤ M) denote the camera coordinate system of the k-th camera and the camera coordinate system of the auxiliary camera used to photograph the adjacent targets (i, j), respectively; I_k CF (1 ≤ k ≤ M) denotes the image pixel coordinate system of the k-th camera, whose coordinate origin is located at the center of the image plane.
As shown in FIG. 2, the target consists of two mutually orthogonal sets of parallel lines of known length: the parallel lines have length L_1 and spacing L_2, and the m-th corner point and the m-th feature line of target k are referred to below (1 ≤ m ≤ 6). T_k CF (1 ≤ k ≤ M) denotes the coordinate system of target k, o_t is its coordinate origin, and x_t, z_t denote its x-axis and z-axis, respectively. ECF denotes the ground coordinate system; its origin is fixed to the ground, and its axes are north-east aligned.
Measurement model
Let p = [u, v]^T and P = [X, Y, Z]^T denote a point in the two-dimensional image plane and a point in three-dimensional space, respectively, where u and v are pixel coordinates in the image coordinate system and X, Y, Z are three-dimensional coordinates; the corresponding homogeneous coordinates are [u, v, 1]^T and [X, Y, Z, 1]^T. The projection of a point in the target coordinate system TCF onto the camera image plane can be expressed as:
s · [u, v, 1]^T = K · [R t] · [X, Y, Z, 1]^T,  K = [f_x 0 u_0; 0 f_y v_0; 0 0 1]    (1)
where: s represents a scale factor, K is the intrinsic parameter matrix, f_x and f_y are the equivalent focal lengths, and (u_0, v_0) are the principal point coordinates. T denotes the coordinate transformation matrix from the target coordinate system to the camera coordinate system, R is a 3 × 3 rotation matrix, and t is a 3 × 1 translation vector. The rotation matrix R can be represented by Y-X-Z Euler angles, namely the yaw angle ψ, pitch angle θ and roll angle φ (formula (2)).
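As an illustration of the projection model in formula (1), the following minimal sketch (not part of the patent; the intrinsic values, pose and corner coordinates are placeholder assumptions) projects target-frame corner points into the image:

```python
import numpy as np

def project_points(P_target, K, R, t):
    """Project 3-D points given in the target frame onto the image plane,
    following formula (1): s * [u, v, 1]^T = K [R t] [X, Y, Z, 1]^T."""
    P_cam = (R @ P_target.T).T + t          # target frame -> camera frame
    uv = (K @ P_cam.T).T                    # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]           # divide by the scale factor s

# Placeholder intrinsics and pose (illustrative values only).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 480.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 3000.0])            # target 3 m in front of the camera
corners = np.array([[0.0, 0.0, 0.0],
                    [500.0, 0.0, 0.0],       # L_1 = 500 mm along the target x-axis
                    [0.0, 0.0, 200.0]])      # L_2 = 200 mm along the target z-axis
print(project_points(corners, K, R, t))
```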
in the present invention, the coordinate transformation matrix is defined as follows:
TABLE 1 definition of coordinate transformation matrix
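Because the method repeatedly composes and inverts such coordinate transformation matrices, a small helper for 4 × 4 homogeneous transforms is sketched below; it is an illustration only, with function names (make_T, inv_T) chosen here rather than taken from the patent:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def inv_T(T):
    """Invert a rigid transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Example: a point expressed in T_k CF mapped into C_k CF and back.
T_cam_from_target = make_T(np.eye(3), np.array([0.0, 0.0, 3000.0]))
p_target = np.array([100.0, 0.0, 50.0, 1.0])
p_cam = T_cam_from_target @ p_target
assert np.allclose(inv_T(T_cam_from_target) @ p_cam, p_target)
```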
The global calibration method provided by the invention is shown in FIG. 3. Camera 1 is selected as the reference camera and target 1 as the reference target. The specific implementation steps of the global calibration method of the distributed multi-camera system provided by the invention are as follows:
Step one, estimating the relative pose of each camera and its calibration target;
Using a two-dimensional calibration target composed of two groups of mutually orthogonal parallel straight lines, the relative position and attitude (pose) of the camera and the target is estimated from vanishing points and the vanishing line: an initial pose value is obtained from the vanishing-line equation and the known geometric dimensions of the target, and the optimal value of the relative pose between the camera and the calibration target is obtained by minimizing a reprojection error function;
Step 1.1, the internal parameters of each camera are calibrated separately; the internal parameters are regarded as fixed constants, and the pose of each camera is kept unchanged during the calibration process;
Step 1.2, a target is placed in the field of view of each camera with its symmetry axis pointing toward the corresponding camera; I_k denotes the image of target k captured by camera k, giving the image sequence I = {I_k | 1 ≤ k ≤ M};
Step 1.3, the transformation matrix from T_k CF to C_k CF is calculated from the image sequence I, which specifically comprises the following steps:
Step 1.3.1, obtaining the vanishing-line equation
As shown in FIG. 5, the two sets of parallel lines converge on the image plane to the vanishing points v_1 and v_2, respectively, and the straight line passing through v_1 and v_2 is the vanishing line. In the target coordinate system, the equations of two straight lines that are not parallel to each other are:
a_i·x + c_i·z + d_i = 0,  y = 0,  (i = 1, 2)    (3)
where: a_1·c_2 - a_2·c_1 ≠ 0, and a_i, c_i, d_i are the coefficients of the line equations.
V_1 and V_2 denote the points at infinity on the two straight lines, so that:
s_i · [u_i, v_i, 1]^T = K · R · V_i    (4)
where: s_i is a scale factor, [u_i, v_i, 1]^T is the homogeneous coordinate of the i-th vanishing point in the image coordinate system, V_i is the direction vector of the i-th line in the target coordinate system, and K represents the intrinsic parameter matrix of camera k.
From formulas (1) and (4):
where R_ij represents the element in the i-th row and j-th column of the rotation matrix R.
The vanishing line can then be calculated; according to equation (5):
According to equations (2) and (6), the vanishing-line equation can be expressed as:
Step 1.3.2, obtaining the rotation matrix
The vanishing-line equation is fitted by least squares to the feature points on the feature lines:
where the resulting coefficients are those of the fitted line equation.
From equations (7) and (8), the roll angle φ and the pitch angle θ can be obtained:
The vanishing point coordinates and the direction vectors of the parallel lines in the camera coordinate system satisfy:
s_i · [u_i, v_i, 1]^T = K · d_i    (10)
where d_i is the 3 × 1 direction vector of the corresponding line in C_k CF.
The directions d_1 and d_2 coincide with the z-axis and x-axis of T_k CF, respectively:
From formulas (2) and (11), the remaining unknowns can be obtained, thereby yielding the rotation matrix R of the transformation from T_k CF to C_k CF.
Step 1.3.3, obtaining the translation vector
A feature point is introduced as a virtual point on the target plane; because the projections of the corresponding corner-point vectors onto the vector d_2 are equal, we have:
From formulas (1) and (12):
where: z_1 and z_2 are the z-axis coordinates of the two points in C_k CF, the first of them being the corner point P_1^k.
Since the length of the corresponding line segment on the target is known:
From formulas (13) and (14), z_1 and z_2 are obtained, and thus the position coordinates of P_1^k in C_k CF; since P_1^k is the coordinate origin of T_k CF, the translation vector t equals the coordinate vector of P_1^k in C_k CF (formula (15)).
Step 1.3.4, nonlinear optimization
Let the homogeneous coordinates of the m-th corner point of target k in T_k CF be given, with corresponding homogeneous image coordinates in image I_k and 1 ≤ m ≤ 6; then:
where the corresponding scale factor appears as in formula (1).
Assuming the image points are corrupted by independent, identically distributed Gaussian noise, the maximum likelihood estimate is obtained by minimizing the sum of squared distances between the feature lines and the reprojected points; the minimum of this function is found with the Levenberg-Marquardt method, giving the optimal value of the transformation matrix from T_k CF to C_k CF:
where: the argument space of the function is the set of pose parameters of that transformation; the m-th and n-th feature lines of target k in image I_k enter the error terms, and d(·,·) denotes the point-to-line distance.
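A sketch of the nonlinear refinement of step 1.3.4 using Levenberg-Marquardt, with scipy standing in for the optimizer. For brevity it minimizes corner-to-corner reprojection residuals rather than the patent's point-to-line distances, and parameterizes the pose as a rotation vector plus translation:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_pose(K, P_target, uv_obs, R0, t0):
    """Levenberg-Marquardt refinement of (R, t) by minimizing reprojection
    residuals (corner-to-corner distances, a simplification of the patent's
    point-to-line error)."""
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        P_cam = (R @ P_target.T).T + t
        uv = (K @ P_cam.T).T
        uv = uv[:, :2] / uv[:, 2:3]
        return (uv - uv_obs).ravel()

    x0 = np.concatenate([Rotation.from_matrix(R0).as_rotvec(), t0])
    sol = least_squares(residuals, x0, method="lm")
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```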
Step two, global calibration of the distributed targets;
Images of adjacent targets are captured with an auxiliary camera to obtain the coordinate transformation matrix between adjacent targets; the initial relative pose of each calibration target with respect to the reference target is obtained by the incremental coordinate transformation method, and the optimal transformation matrix of each target with respect to the reference target is obtained by minimizing the reprojection error function based on the closed image sequence;
Step 2.1, a closed image sequence is acquired with the auxiliary camera, as shown in FIG. 4, in which the adjacent targets (i, j) are both visible; it follows that:
Step 2.2, as illustrated in FIG. 4, the transformation matrices from target i and from target j to the auxiliary camera coordinate system A_i CF are obtained by the same method as in Step 1.3, and from them the transformation matrix from T_i CF to T_j CF is obtained (formula (19)).
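A sketch of step 2.2: because both targets are observed by the same auxiliary camera, the transform from T_i CF to T_j CF follows from the two estimated target poses (formula (19) in spirit); the variable names are assumptions:

```python
import numpy as np

def adjacent_target_transform(T_Ai_Ti, T_Ai_Tj):
    """Transform from T_i CF to T_j CF through the shared auxiliary camera
    frame A_i CF. Both inputs are 4x4 homogeneous matrices estimated with the
    single-target method of step 1.3."""
    return np.linalg.inv(T_Ai_Tj) @ T_Ai_Ti
```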
Step 2.3, the transformation matrix of each target with respect to the reference target is calculated by the incremental coordinate transformation method and then optimized.
Its initial value is obtained through a chain of successive coordinate transformations (formula (20)).
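A sketch of the incremental composition behind formula (20): adjacent target-to-target transforms are chained to give each target's initial transform to the reference target 1. The indexing convention (adjacent[k] maps target k+1 into target k) is an assumption made for this illustration:

```python
import numpy as np

def chain_to_reference(adjacent):
    """Incremental initial values: adjacent[k] is the 4x4 transform from
    target k+1 to target k, for k = 1..M-1; returns the transform from each
    target to the reference target 1."""
    T_to_ref = {1: np.eye(4)}
    T = np.eye(4)
    for k, T_k_from_kplus1 in enumerate(adjacent, start=1):
        T = T @ T_k_from_kplus1          # compose: target (k+1) -> target 1
        T_to_ref[k + 1] = T
    return T_to_ref
```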
according to the imaging model, there are:
wherein:andthe mth angular points respectively representing the target i and the target j are arranged atThe coordinates of the image on the upper side,is a scale factor.
The minimum of the reprojection error function is computed with the Levenberg-Marquardt method, giving the optimal value of the transformation matrix of each target with respect to the reference target:
where: the argument space is the set of target-to-reference transformation matrices, the transformation of the reference target with respect to itself is fixed to the 4 × 4 identity matrix, and the iteration initial values are provided by formulas (20) and (21).
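A sketch of the global refinement (the role of formulas (22)-(23)): the target-to-reference transforms are refined jointly by minimizing reprojection residuals over the closed image sequence with Levenberg-Marquardt. For brevity the auxiliary-camera poses with respect to target i are held fixed at their step-1.3 estimates, and only the corners of target j are reprojected in each pair; the data layout and names are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, T, P):
    """Project Nx3 points P through the 4x4 transform T (point frame -> camera)."""
    Ph = np.c_[P, np.ones(len(P))]
    uv = (K @ (T @ Ph.T)[:3]).T
    return uv[:, :2] / uv[:, 2:3]

def pose_to_T(x):
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(x[:3]).as_matrix()
    T[:3, 3] = x[3:]
    return T

def T_to_pose(T):
    return np.r_[Rotation.from_matrix(T[:3, :3]).as_rotvec(), T[:3, 3]]

def refine_targets_globally(K_aux, corners, pairs, T_init):
    """Global refinement over the closed image sequence.

    corners : Nx3 corner coordinates of a target in its own frame.
    pairs   : list of (i, j, T_Ai_Ti, uv_j_obs), one per auxiliary image:
              indices of the adjacent targets, pose of target i in the
              auxiliary camera (held fixed here), observed corners of target j.
    T_init  : dict {k: 4x4 transform from target k to the reference target 1},
              with T_init[1] equal to the identity; targets 2..M are refined."""
    ids = sorted(k for k in T_init if k != 1)

    def unpack(x):
        T = {1: np.eye(4)}
        for n, k in enumerate(ids):
            T[k] = pose_to_T(x[6 * n:6 * n + 6])
        return T

    def residuals(x):
        T = unpack(x)
        res = []
        for i, j, T_Ai_Ti, uv_j_obs in pairs:
            # target j -> reference target -> target i -> auxiliary camera
            T_Ai_Tj = T_Ai_Ti @ np.linalg.inv(T[i]) @ T[j]
            res.append(project(K_aux, T_Ai_Tj, corners) - uv_j_obs)
        return np.concatenate([r.ravel() for r in res])

    x0 = np.concatenate([T_to_pose(T_init[k]) for k in ids])
    sol = least_squares(residuals, x0, method="lm")
    return unpack(sol.x)
```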
Step three, calculating the transformation matrix of each camera with respect to the reference camera and completing the global calibration.
Using the targets as intermediaries, the optimized value of the transformation matrix from each camera coordinate system C_k CF to the reference camera coordinate system C_1 CF is obtained by composing the camera-to-target, target-to-reference-target and reference-target-to-reference-camera transformations (formula (24)).
This yields the transformation matrix of each camera with respect to the reference camera and thereby completes the calibration of the distributed multi-camera system.
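A sketch of this final composition (formula (24) in spirit): the camera-to-reference-camera transform is obtained by passing through the corresponding targets; the argument names are assumptions:

```python
import numpy as np

def camera_to_reference(T_C1_T1, T_Ck_Tk, T_T1_Tk):
    """Compose target-mediated transforms to get the transform from camera k's
    frame C_k CF to the reference camera frame C_1 CF.

    T_C1_T1 : transform from T_1 CF to C_1 CF (reference camera vs. its target)
    T_Ck_Tk : transform from T_k CF to C_k CF (camera k vs. its target)
    T_T1_Tk : transform from T_k CF to T_1 CF (global target calibration)"""
    return T_C1_T1 @ T_T1_Tk @ np.linalg.inv(T_Ck_Tk)
```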
Example:
In this embodiment, the distributed multi-camera system consists of 8 cameras; the position coordinates of the cameras in ECF and their Euler angles with respect to ECF are shown in Table 1. The target dimensions are L_1 = 500 mm and L_2 = 200 mm. The accuracy of the proposed global calibration method is evaluated from the calibration errors.
TABLE 1 Camera position and attitude
Gaussian noise with zero mean and a standard deviation of 0.2 pixel is added to the feature point coordinates, and 100 independent experiments are carried out.
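A small sketch of the Monte-Carlo perturbation used in the experiment, assuming the noise-free corner projections are available:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_observations(uv_true, sigma=0.2, trials=100):
    """Add zero-mean Gaussian noise with a 0.2-pixel standard deviation to the
    true corner coordinates, repeated for the requested number of trials."""
    return [uv_true + rng.normal(0.0, sigma, size=uv_true.shape)
            for _ in range(trials)]
```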
To illustrate the effect of the proposed method, the calibration errors are computed with two methods: an incremental calibration method and the global calibration method. The only difference between them is whether the global nonlinear optimization of the targets represented by formula (23) is performed: in the incremental calibration method the transformation matrices are obtained directly from formula (21) without global optimization, while the global calibration method is the method provided by the invention.
As can be seen in FIG. 6, the rotation-matrix errors and translation-vector errors accumulate gradually as the number of coordinate transformations increases and reach a maximum at camera 5, because camera 5 is farthest from the reference camera (camera 1). The global calibration method provided by the invention effectively reduces the accumulated error caused by incremental coordinate transformation.
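For reference, error measures consistent with the rotation and translation errors plotted in FIG. 6 can be computed as follows; this is an illustration, not the patent's exact error definition:

```python
import numpy as np

def rotation_error_deg(R_est, R_true):
    """Angle of the residual rotation R_est * R_true^T, in degrees."""
    c = (np.trace(R_est @ R_true.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def translation_error(t_est, t_true):
    """Euclidean norm of the translation error (same units as t, e.g. mm)."""
    return np.linalg.norm(t_est - t_true)
```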

Claims (3)

1. A global calibration method for a distributed multi-camera system, wherein:
the distributed multi-camera system is composed of M cameras; C_k CF and A_i CF respectively denote the camera coordinate system of the k-th camera and the camera coordinate system of the auxiliary camera used to photograph the adjacent targets (i, j), with 1 ≤ k ≤ M and 1 ≤ i ≤ M; I_k CF denotes the image pixel coordinate system of the k-th camera, and its coordinate origin is located at the center of the image plane;
the target consists of two groups of mutually orthogonal parallel lines of known length; the length of the parallel lines is L_1 and their spacing is L_2; the m-th corner point and the m-th feature line of target k are referred to, with 1 ≤ m ≤ 6; T_k CF denotes the coordinate system of target k, o_t is its coordinate origin, and x_t, z_t respectively denote its x-axis and z-axis; ECF denotes the ground coordinate system, whose origin is fixed to the ground and whose axes are north-east aligned;
let p = [u, v]^T and P = [X, Y, Z]^T respectively denote a point in the two-dimensional image plane and a point in three-dimensional space, where u and v are pixel coordinates in the image coordinate system and X, Y, Z are three-dimensional coordinates, with corresponding homogeneous coordinates [u, v, 1]^T and [X, Y, Z, 1]^T; the projection of a point in the target coordinate system TCF onto the camera image plane is expressed as:
where: s represents a scale factor, K is the intrinsic parameter matrix, f_x and f_y are the equivalent focal lengths, and (u_0, v_0) are the principal point coordinates; T denotes the coordinate transformation matrix from the target coordinate system to the camera coordinate system, R is a 3 × 3 rotation matrix, and t is a 3 × 1 translation vector; the rotation matrix R is represented by Y-X-Z Euler angles, namely the yaw angle ψ, pitch angle θ and roll angle φ;
the coordinate transformation matrices from T_k CF to C_k CF, from T_i CF to A_i CF, from T_j CF to A_i CF, from T_i CF to T_j CF, and from C_i CF to C_j CF are defined;
the global calibration method of the distributed multi-camera system is characterized by comprising the following steps:
step one, estimating the relative pose of each camera and its calibration target;
using a two-dimensional calibration target composed of two groups of mutually orthogonal parallel straight lines, the relative position and attitude of the camera and the target is estimated from vanishing points and the vanishing line; an initial pose value is obtained from the vanishing-line equation and the known geometric dimensions of the target, and the optimal value of the relative pose between the camera and the calibration target is obtained by minimizing a reprojection error function;
the method specifically comprises the following steps:
step 1.1, calibrating the internal parameters of each camera separately, wherein the internal parameters are regarded as fixed constants and the pose of each camera is kept unchanged during the calibration process;
step 1.2, placing a target in the field of view of each camera with its symmetry axis pointing toward the corresponding camera; I_k denotes the image of target k captured by camera k, giving the image sequence I = {I_k | 1 ≤ k ≤ M};
step 1.3, calculating the transformation matrix from T_k CF to C_k CF from the image sequence I, which specifically comprises the following steps:
step 1.3.1, obtaining the vanishing-line equation;
the two groups of parallel lines converge on the image plane to the vanishing points v_1 and v_2, respectively, and the straight line passing through v_1 and v_2 is the vanishing line; in the target coordinate system, the equations of two straight lines that are not parallel to each other are:
a_i·x + c_i·z + d_i = 0,  y = 0    (3)
where: a_1·c_2 - a_2·c_1 ≠ 0, and a_i, c_i, d_i are the coefficients of the line equations, i = 1, 2;
V_1 and V_2 respectively denote the points at infinity on the two straight lines, so that:
where: s_i is a scale factor, the homogeneous coordinate of the i-th vanishing point in the image coordinate system is used, and K represents the intrinsic parameter matrix of camera k;
from formulas (1) and (4), there are:
where R_ij represents the element in the i-th row and j-th column of the rotation matrix R;
the vanishing line is then calculated; according to equation (5), there are:
according to equations (2) and (6), the vanishing-line equation is expressed as:
step 1.3.2, obtaining the rotation matrix;
the vanishing-line equation is obtained by a least-squares fit to the feature points on the feature lines:
where the resulting coefficients are those of the fitted line equation;
from equations (7) and (8), the roll angle φ and the pitch angle θ are obtained:
the vanishing point coordinates and the direction vectors of the parallel lines in the camera coordinate system satisfy the following relation:
where d_i is the 3 × 1 direction vector of the corresponding line in C_k CF;
the directions d_1 and d_2 respectively coincide with the z-axis and x-axis of T_k CF:
from formulas (2) and (11), the remaining unknowns are obtained, thereby obtaining the rotation matrix R of the transformation from T_k CF to C_k CF;
step 1.3.3, obtaining translation vector
Setting characteristic pointsIs a virtual point on the target plane, becauseAndin the vector d2The projection on is equal to that:
from formulas (1) and (12), there are:
wherein: z is a radical of1And z2Are respectively a point P1 kAndat CkZ-axis coordinates of CF;
due to the fact thatThe length of (a) is known as:
from the formulae (13) and (14), z is obtained1And z2To obtain P1 kAt CkPosition coordinates in CF; p1 kIs TkOrigin of coordinates of CF, thenIs represented as:
step 1.3.4, nonlinear optimization;
the homogeneous coordinates of the m-th corner point of target k in T_k CF and the corresponding homogeneous coordinates in image I_k are given, with 1 ≤ m ≤ 6, so that:
where: the corresponding scale factor appears as in formula (1);
assuming the image points are corrupted by independent, identically distributed Gaussian noise, the maximum likelihood estimate is obtained by minimizing the sum of squared distances between the feature lines and the reprojected points; the minimum of this function is found with the Levenberg-Marquardt method, giving the optimal value of the transformation matrix from T_k CF to C_k CF:
where: the argument space of the function is the set of pose parameters of that transformation, the m-th and n-th feature lines of target k in image I_k enter the error terms, and d(·,·) denotes the point-to-line distance;
step two, global calibration of the distributed targets;
capturing images of adjacent targets with an auxiliary camera to obtain the coordinate transformation matrix between adjacent targets, obtaining the initial relative pose of each calibration target with respect to the reference target by the incremental coordinate transformation method, and obtaining the optimal transformation matrix of each target with respect to the reference target by minimizing the reprojection error function based on the closed image sequence;
step three, calculating the transformation matrix of each camera with respect to the reference camera to finish the global calibration.
2. The global calibration method of the distributed multi-camera system according to claim 1, wherein the second step specifically comprises:
step 2.1, obtaining a closed image sequence with the auxiliary camera, in which the adjacent targets (i, j) are both visible, so that:
step 2.2, the transformation matrices from target i and from target j to the auxiliary camera coordinate system are obtained, and the transformation matrix from T_i CF to T_j CF is then:
step 2.3, calculating the transformation matrix of each target with respect to the reference target by the incremental coordinate transformation method and then optimizing it;
its initial value is obtained through a chain of coordinate transformations:
where 2 ≤ k_1 ≤ M; according to the imaging model, there are:
where: the image coordinates of the m-th corner points of target i and of target j in the auxiliary camera image are used, together with the corresponding scale factor;
the minimum of the reprojection error function is computed with the Levenberg-Marquardt method to obtain the optimal value of the transformation matrix of each target with respect to the reference target:
where: the argument space is the set of target-to-reference transformation matrices, and the transformation of the reference target with respect to itself is the 4 × 4 identity matrix.
3. The global calibration method of the distributed multi-camera system according to claim 1, wherein the third step specifically comprises:
using the targets as intermediaries, obtaining the optimized value of the transformation matrix from each camera coordinate system C_k CF to the reference camera coordinate system C_1 CF:
after the transformation matrix of each camera with respect to the reference camera is obtained, the calibration of the distributed multi-camera system is finished.
CN201610354114.5A 2016-05-25 2016-05-25 Global calibration method for a distributed multi-camera system Active CN105809706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610354114.5A CN105809706B (en) 2016-05-25 2016-05-25 Global calibration method for a distributed multi-camera system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610354114.5A CN105809706B (en) 2016-05-25 2016-05-25 Global calibration method for a distributed multi-camera system

Publications (2)

Publication Number Publication Date
CN105809706A CN105809706A (en) 2016-07-27
CN105809706B true CN105809706B (en) 2018-10-30

Family

ID=56452936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610354114.5A Active CN105809706B (en) 2016-05-25 2016-05-25 Global calibration method for a distributed multi-camera system

Country Status (1)

Country Link
CN (1) CN105809706B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106846393B (en) * 2017-01-22 2019-10-25 武汉大学 Vanishing point extracting method and system based on global search
CN107576282A (en) * 2017-09-01 2018-01-12 微鲸科技有限公司 Camera deflects angle measuring method and device
CN108648242B (en) * 2018-05-18 2020-03-24 北京航空航天大学 Two-camera calibration method and device without public view field based on assistance of laser range finder
CN108765498B (en) 2018-05-30 2019-08-23 百度在线网络技术(北京)有限公司 Monocular vision tracking, device and storage medium
CN111415387B (en) * 2019-01-04 2023-12-29 南京人工智能高等研究院有限公司 Camera pose determining method and device, electronic equipment and storage medium
CN111275770A (en) * 2020-01-20 2020-06-12 南昌航空大学 Global calibration method of four-eye stereoscopic vision system based on one-dimensional target rotation motion
CN113706610B (en) * 2021-09-03 2024-06-07 西安电子科技大学广州研究院 Pallet pose calculating method based on RGB-D camera

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101286235A (en) * 2008-06-10 2008-10-15 北京航空航天大学 Video camera calibration method based on flexible stereo target
CN104766292A (en) * 2014-01-02 2015-07-08 株式会社理光 Method and system for calibrating multiple stereo cameras

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101894366B (en) * 2009-05-21 2014-01-29 北京中星微电子有限公司 Method and device for acquiring calibration parameters and video monitoring system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101286235A (en) * 2008-06-10 2008-10-15 北京航空航天大学 Video camera calibration method based on flexible stereo target
CN104766292A (en) * 2014-01-02 2015-07-08 株式会社理光 Method and system for calibrating multiple stereo cameras

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhen Liu et al., "A global calibration method for multiple vision sensors based on multiple targets," Measurement Science and Technology, vol. 22, pp. 1-10, 19 Oct. 2011. *
Zhengzhong Wei et al., "Parallel-based calibration method for line-structured light vision sensor," Optical Engineering, vol. 53, no. 3, pp. 1-12, 17 Mar. 2014.

Also Published As

Publication number Publication date
CN105809706A (en) 2016-07-27


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant