
CN114663518A - Camera calibration method, system, terminal device and computer readable storage medium - Google Patents


Info

Publication number
CN114663518A
Authority
CN
China
Prior art keywords
camera
calibration
calibration object
sampling
expected
Prior art date
Legal status
Pending
Application number
CN202210138141.4A
Other languages
Chinese (zh)
Inventor
蔡晶鑫
陈宣成
陶元发
吴立见
李辉
Current Assignee
Shenzhen Ruben Technology Co ltd
Original Assignee
Shenzhen Ruben Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Ruben Technology Co ltd filed Critical Shenzhen Ruben Technology Co ltd
Priority to CN202210138141.4A
Publication of CN114663518A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The present application relates to the field of camera calibration technologies, and in particular, to a camera calibration method, a camera calibration system, a terminal device, and a computer-readable storage medium. The camera calibration method is applied to a camera calibration system and comprises the following steps: controlling the first driving mechanism to drive the calibration object to move to any sampling position; controlling the second driving mechanism to drive the camera to move to a plurality of desired sampling poses respectively, so that the calibration object is located at different positions in the field of view of the camera; controlling the camera to shoot the calibration object at the sampling position at each of the plurality of desired sampling poses, so as to obtain an image sequence of the calibration object; and acquiring a calibration result of the camera based on the image sequence. The method and the device have wide applicability, strong extensibility and high calibration efficiency.

Description

Camera calibration method, system, terminal device and computer readable storage medium
Technical Field
The present application relates to the field of camera calibration technologies, and in particular, to a camera calibration method, a camera calibration system, a terminal device, and a computer-readable storage medium.
Background
Camera calibration is mainly used to determine the relationship between the three-dimensional geometric position of a point on the surface of a space object and the corresponding point in an image, so that images subsequently taken by the camera can be corrected to obtain images with relatively small distortion. In image measurement and machine vision applications, the calibration of camera parameters is a very critical step: the accuracy of the calibration result and the stability of the algorithm directly affect the accuracy of the results produced by the camera. Camera calibration includes solving the intrinsic parameter matrix and the extrinsic parameter matrix of the camera. The intrinsic parameter matrix is

K = | α  γ  u0 |
    | 0  β  v0 |
    | 0  0   1 |

where α and β are the effective focal lengths along the u axis and v axis of the pixel coordinate system respectively, (u0, v0) is the optical center of the pixel coordinate system, and γ characterizes any possible skew between the sensor axes, caused by the sensor not being mounted perpendicular to the optical axis. R and T are the rotation matrix and the translation vector respectively, together called the extrinsic parameters of the camera.
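By way of illustration, the following minimal sketch (all numeric values are assumptions, not values from the patent) shows how the intrinsic matrix K and the extrinsic parameters R, T map a 3D point to a pixel:

```python
import numpy as np

# Illustrative intrinsic parameters (pixels); gamma = 0 means no skew.
alpha, beta = 1200.0, 1200.0
u0, v0, gamma = 640.0, 480.0, 0.0

K = np.array([[alpha, gamma, u0],
              [0.0,   beta,  v0],
              [0.0,   0.0,   1.0]])

R = np.eye(3)                      # rotation: world frame -> camera frame
T = np.array([0.0, 0.0, 1000.0])   # translation (mm)

X_w = np.array([50.0, -20.0, 0.0])  # a point on the calibration plane
X_c = R @ X_w + T                   # camera-frame coordinates
u, v, w = K @ X_c                   # homogeneous pixel coordinates
print(u / w, v / w)                 # pixel position of the point
```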
The inventor of the present application has found through long-term research that civil cameras are generally used for simple imaging and shooting, where the calibration accuracy requirement is not very high and simple calibration can be performed in a room covered with calibration patterns. Industrial cameras, however, require higher accuracy and impose stricter requirements on the calibration process, and 3D industrial cameras are even more complex to calibrate than 2D industrial cameras. In addition, because of their large fields of view, industrial cameras have flexible working distances and depth-of-field ranges, making it difficult to handle all calibration problems with one standard system. At present, therefore, calibrating an industrial camera often requires manually replacing calibration plates of different sizes according to the size of the camera's field of view; large calibration plates are heavy, the replacement process is complicated, and it poses certain safety risks. In short, the calibration process for industrial cameras is extremely cumbersome, depends on manual experience, has low calibration efficiency, and is costly.
Disclosure of Invention
The aim of the present application is to provide a camera calibration method, a camera calibration system, a terminal device and a computer-readable storage medium, so as to solve the problems that the calibration process of an industrial camera is extremely cumbersome, depends on manual experience, has low calibration efficiency and is costly.
In order to achieve the above object, a camera calibration method provided in an embodiment of the present application is applied to a camera calibration system, where the camera calibration system includes a calibration object, a first driving mechanism and a second driving mechanism, the first driving mechanism is used to drive the calibration object to move, and the second driving mechanism is used to drive a camera to move; the method comprises the following steps:
controlling the first driving mechanism to drive the calibration object to move to any sampling position; wherein the sampling location is within a working distance of the camera;
controlling the second driving mechanism to drive the camera to move to a plurality of desired sampling poses respectively, so that the calibration object is located at different positions in the field of view of the camera; the plurality of desired sampling poses are obtained through the conversion relationship between the camera motion and the motion of the calibration object's pixel points, and correspond one-to-one to different positions in the field of view of the camera;
controlling the camera to shoot the calibration object at the sampling position at each of the plurality of desired sampling poses, so as to obtain an image sequence of the calibration object;
and acquiring a calibration result of the camera based on the image sequence.
The camera calibration system provided by the embodiment of the application comprises:
a calibration object;
the first driving mechanism is used for driving the calibration object to move;
the second driving mechanism is used for driving the camera to move;
the processor is respectively connected with the first driving mechanism and the second driving mechanism; and
a memory coupled to the processor for storing one or more programs;
when the one or more programs are executed by the processor, the processor is enabled to implement the camera calibration method according to the above embodiment.
The camera calibration method provided by the embodiment of the application comprises the following steps:
controlling the calibration object to move to any sampling position; wherein the sampling location is within a working distance of the camera;
controlling the camera to move to a plurality of desired sampling poses respectively, so that the calibration object is located at different positions in the field of view of the camera; the plurality of desired sampling poses are obtained through the conversion relationship between the camera motion and the motion of the calibration object's pixel points, and correspond one-to-one to different positions in the field of view of the camera;
controlling the camera to shoot the calibration object at the sampling position at each of the plurality of desired sampling poses, so as to obtain an image sequence of the calibration object;
and acquiring a calibration result of the camera based on the image sequence.
The embodiment of the application provides a terminal device, including:
a processor; and
a memory coupled to the processor for storing one or more programs;
when the one or more programs are executed by the processor, the processor is enabled to implement the camera calibration method according to the above embodiment.
A computer-readable storage medium is provided in an embodiment of the present application, and a computer program is stored thereon, and when being executed by a processor, the computer program implements the camera calibration method according to the above embodiment.
Compared with the prior art, the camera calibration method, the camera calibration system, the terminal device and the computer readable storage medium in the embodiment of the application have the following beneficial effects:
the first driving mechanism is controlled to drive the calibration object to move to any sampling position, and the sampling position is located in the working distance of the camera, so that automatic calibration of 2D and 3D industrial cameras with different working distances and different depth of field ranges can be met. Meanwhile, the second driving mechanism is controlled to drive the camera to move to a plurality of expected sampling poses respectively, the plurality of expected sampling poses enable the calibration object to be located at different positions of the visual field range of the camera respectively, then the camera is controlled to shoot the calibration object located at the sampling position with the plurality of expected sampling poses respectively so as to obtain an image sequence of the calibration object, finally the image sequence is used as the input of a camera calibration algorithm, and the calibration result of the camera is output. In addition, the camera calibration system is high in automation degree, one-key calibration can be basically achieved, the calibration process is simple, manual experience is not relied on, the calibration efficiency is high, and the cost is low.
Drawings
The present application will now be described with reference to the accompanying drawings. The drawings in the present application are for the purpose of illustrating embodiments only. Other embodiments can readily be derived by those skilled in the art from the following description without departing from the principles of the present application.
Fig. 1 is a schematic view of a calibration scenario of a camera calibration system according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a camera calibration method according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a camera calibration method according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of a camera calibration method according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of a camera calibration method according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of a camera calibration method according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of a camera calibration method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of the positions the calibration object is required to take within the field of view of the camera according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of a camera calibration method according to an embodiment of the present application;
Fig. 10 is a schematic diagram of the interconversion between the camera rotation angle and the camera motion vector according to an embodiment of the present application;
FIG. 11 is a schematic flowchart of a camera calibration method according to an embodiment of the present application;
FIG. 12 is a schematic diagram of the calibration object at multiple positions within the field of view of the camera after the initial pose of the camera is changed multiple times, according to an embodiment of the present application;
FIG. 13 is a schematic flowchart of a camera calibration method according to an embodiment of the present application;
Fig. 14 is a schematic flowchart of a camera calibration method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms "first", "second", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Some technical terms referred to herein are explained below to facilitate understanding by those skilled in the art.
The working distance (WD) is the distance between the lowest mechanical surface of the lens and the object when the camera lens is focused on the object being shot; it is also called the operating distance.
The depth of field refers to the range of depths within which the camera lens appears to be in focus and the image is relatively sharp.
The field of view (FOV) refers to the shooting range of the camera lens at the working distance.
Referring to fig. 1, a camera calibration system 100 according to an embodiment of the present disclosure includes a calibration object 10, a first driving mechanism 20, a second driving mechanism 30, a processor, and a memory.
The first driving mechanism 20 is used for driving the calibration object 10 to move. The second driving mechanism 30 is used to drive the camera 200 to move. The processor is connected to the first drive mechanism 20 and the second drive mechanism 30, respectively. A memory is coupled to the processor for storing one or more programs. When the one or more programs are executed by the processor, the processor is enabled to implement the camera calibration method in any embodiment of the present application.
In this embodiment, the processor may be one or more processors for controlling the overall operation of the camera calibration system 100 to complete all or part of the steps of the camera calibration method of the present application. The memory is used to store various types of data to support operation of the camera calibration system 100, which may include, for example, instructions for any application or method operating on the camera calibration system 100, as well as application-related data. The Memory may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk, or optical disk.
The calibration object 10 may be a calibration board such as a black and white checkerboard, a code, a ring, a dot, etc. The camera 200 may be any camera to be calibrated, such as a film camera, a consumer digital camera, an industrial digital camera, etc., and is not limited herein.
The first driving mechanism 20 can drive the calibration object 10 to move close to or away from the camera 200, so as to change the sampling position of the calibration object 10, wherein the sampling position is located within the working distance of the camera 200, so as to meet the camera calibration requirements of different working distances and depths of field.
In one embodiment, if the working distance of the camera 200 is known, the method for acquiring the working distance of the camera 200 includes: the working distance is directly obtained from camera calibration condition parameters directly input by the user, or the working distance of the current camera 200 is called from a database.
In another embodiment, if the working distance of the camera 200 is unknown, the method for acquiring the working distance of the camera 200 includes: according to the existing camera working distance detection method, the camera 200 or the calibration object 10 is controlled to move back and forth, and the current working distance of the camera 200 is acquired.
In one embodiment, the first drive mechanism 20 includes a target platform 21 coupled to the processor, and the target 10 is disposed on the target platform 21.
The processor is configured to acquire calibration condition parameters of the camera 200, the calibration condition parameters including a working distance of the camera 200, and then acquire an initial pose of the camera 200 and an initial pose of the calibration object platform 21 based on the calibration condition parameters. The calibration object platform 21 is used for being controlled by the processor to move to the initial position of the calibration object platform 21, and the second driving mechanism 30 is used for being controlled by the processor to drive the camera 200 to move to the initial position of the camera 200, so that the calibration object 10 is in the visual field range of the camera 200.
The calibration condition parameters of the camera 200 include the depth of field in addition to the working distance of the camera 200. In general, the working distance and the depth of field of the camera 200 are a range, and the sampling positions of the calibration object 10 are designed so that the distance between the calibration object 10 and the camera 200 corresponding to each sampling position of the present application covers the distance range, and the movement of the calibration object 10 in the distance range is automatically completed by the calibration object platform 21.
Specifically, the initial pose of the calibration object platform 21 is a pose in which the plane of the calibration object 10 faces the camera 200 and the calibration object 10 lies within the working distance of the camera 200.
In the initialization stage of the camera calibration system 100, the calibration object platform 21 is moved back and forth to its initial pose according to the working distance L input by the user, so that the calibration object 10 is moved to the specified sampling position, lies within the effective working distance of the camera 200, and has its plane facing the camera 200. The direction in which the calibration object platform 21 moves away from the camera 200 is taken as negative, and the direction toward the camera 200 as positive. Combining the initial distance l1 between the calibration object platform 21 and the camera 200 with the initial position reading l2 of the calibration object platform 21, the distance Δl that the calibration object platform 21 needs to move can be calculated as Δl = L − l1 − l2, i.e. the working distance L minus the initial distance l1 minus the initial position reading l2. If Δl is positive, the calibration object platform 21 is controlled to move in the negative direction; if Δl is negative, it is controlled to move in the positive direction.
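A minimal sketch of this initialization calculation (the function name and numeric values are assumptions based on the description above):

```python
def platform_move(L, l1, l2):
    """Distance and direction the calibration object platform should move.

    L  : target working distance input by the user
    l1 : initial distance between the platform and the camera
    l2 : initial position reading of the platform
    """
    delta = L - l1 - l2
    # Positive delta -> the platform must move away from the camera, which is
    # the negative direction in the convention above.
    direction = "negative (away from camera)" if delta > 0 else "positive (toward camera)"
    return abs(delta), direction

distance, direction = platform_move(L=500.0, l1=300.0, l2=150.0)
print(distance, direction)  # 50.0 negative (away from camera)
```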
Further, the initial pose of the camera 200 is a pose in which the calibration object 10 lies within the field of view of the camera 200, and the initial image is an image of the calibration object taken by the camera 200 in the initial pose. The initial pose of the camera 200 can be determined from the working distance of the camera 200: for example, the second driving mechanism 30 drives the camera 200 to move, and when the distance between the camera 200 and the calibration object 10 equals the working distance, the camera 200 is deemed to have moved to its initial pose.
The user may also input the calibration distance L0 (L0 = l1 + l3) and adjust the initial pose of the camera 200 so that the initial position reading of the camera 200 is l3. From the camera's initial position reading l3 and the user-entered calibration distance L0, the initial distance l1 between the calibration object platform 21 and the camera 200 can be obtained. Then, using the initial position reading l2 of the calibration object platform 21, the distance Δl that the platform needs to move can be calculated, so that the calibration object 10 is moved to the specified sampling position L (L = l1 + l2) (1), placing the calibration object 10 within the effective working distance of the camera 200.
In one embodiment, the calibration object platform 21 comprises a moving platform 211 and a rotating platform 212 respectively connected to the processor, the rotating platform 212 is disposed on the moving platform 211, and a plurality of different types of calibration objects 10 are disposed on the rotating platform 212. The calibration condition parameters further include the corresponding type of calibration object 10, and the initial pose of the platform of the calibration object 10 includes the initial sampling position of the moving platform 211 and the initial pose of the rotating platform 212.
The processor is further configured to acquire a corresponding type of the calibration object 10 based on the working distance of the camera 200, and then acquire an initial sampling position of the mobile platform 211 and an initial pose of the rotary platform 212 based on the working distance of the camera 200 and the corresponding type of the calibration object 10, respectively. The rotating platform 212 is controlled by the processor to move to the initial pose of the rotating platform 212, so that the calibration object 10 of the type corresponding to the working distance of the camera 200 is opposite to the camera 200. The moving platform 211 is used for being controlled by the processor to drive the rotating platform 212 and the plurality of calibration objects 10 to move together to an initial sampling position of the platform of the calibration object 10.
In the present embodiment, the rotating platform 212 is disposed on the moving platform 211, and the plurality of calibration objects 10 with different size types are disposed on the rotating platform 212, so that the moving platform 211 can drive the rotating platform 212 and the plurality of calibration objects 10 to move together in a direction toward or away from the camera 200, thereby changing the sampling position of the calibration object 10.
The processor is also capable of selecting a calibration object 10 having a size type in a preset ratio to the field of view of the camera 200 and then controlling the rotating platform 212 to direct the selected calibration object 10 toward the camera 200.
Specifically, after the working distance L corresponding to the current sampling position of the calibration object 10 is obtained, other calibration condition parameters of the camera 200, such as the focal length f of the lens of the camera 200 and the size a of the photosensitive element of the camera 200, are further obtained, and then the size of the field of view of the camera 200 is calculated according to the following formula of the pinhole imaging model:
D=L*a/f (2);
other calibration condition parameters of the camera 200 may be directly input by the user, or may be further obtained through information input by the user, for example, when the user inputs the model of the camera 200, the focal length f of the lens and the size a of the photosensitive element of the camera 200 may be known.
In one embodiment, the preset ratio is about 1:3.
According to conventional calibration experience, a calibration object 10 whose size is closest to D/3 is best suited for calibration, because the image of the calibration object 10 then occupies about 1/3 of the whole image area of the camera 200.
For example, in a typical calibration scenario, calibrating a camera with a working distance of 250-350 mm requires a calibration object 10 of size A9, calibrating a camera with a working distance of 800-1000 mm requires a calibration object 10 of size A3, and calibrating a camera with a working distance of 2000-3000 mm requires a calibration object 10 of size A0.
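As a sketch of this selection rule, assuming the calibration plates follow the ISO 216 A-series long-side dimensions and using formula (2); the function name and input values are illustrative assumptions:

```python
# Pick the A-series plate whose size is closest to one third of the field of
# view D. Long-side dimensions (mm) follow ISO 216; treating them as the
# plate "size" is an assumption for illustration.
A_SERIES_MM = {"A0": 1189, "A1": 841, "A2": 594, "A3": 420, "A4": 297,
               "A5": 210, "A6": 148, "A7": 105, "A8": 74, "A9": 52}

def select_plate(L_mm, a_mm, f_mm):
    D = L_mm * a_mm / f_mm          # field of view, formula (2)
    target = D / 3.0                # preset ratio of about 1:3
    return min(A_SERIES_MM, key=lambda k: abs(A_SERIES_MM[k] - target))

print(select_plate(L_mm=900.0, a_mm=11.0, f_mm=8.0))   # -> "A3"
```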
In one embodiment, the rotating platform 212 includes a plurality of bearing surfaces, and different types of calibration objects 10 are respectively disposed on each bearing surface, and the number of calibration objects 10 corresponds to the number of bearing surfaces one to one. The processor can select the calibration objects 10 with the size type in a preset proportion to the visual field range of the camera 200, and then control the rotating platform 212 to enable the bearing surface where the selected calibration objects 10 are located to face the camera 200.
Illustratively, with 10 bearing surfaces, the calibration objects 10 are calibration boards in the 10 sizes A0-A9, each size of calibration board being disposed on one bearing surface.
In another embodiment, the rotating platform 212 includes a plurality of bearing surfaces, at least a portion of the calibration objects 10 with larger dimensions is fixed on one of the bearing surfaces, and at least a portion of the calibration objects 10 with smaller dimensions is detachably disposed on the other bearing surfaces.
In this embodiment, considering both the inconvenience of replacing large calibration plates and the simplicity of the calibration object selection algorithm, the calibration objects 10 of larger sizes are fixed on bearing surfaces, while the calibration objects 10 of smaller sizes are detachably arranged on the other bearing surfaces, so that the large calibration objects 10 never need to be replaced.
Illustratively, with four bearing surfaces and calibration plates in the 10 sizes A0-A9: the A0, A1 and A2 calibration plates are placed on three fixed bearing surfaces and need not be detached, while a calibration plate of any size from A3-A9 is placed on the remaining bearing surface and can be replaced manually to suit cameras 200 with different fields of view.
In one embodiment, at least some of the smaller sized markers 10 are combined on the same bearing surface.
It is understood that the manner in which the plurality of markers 10 are disposed on the rotating platform 212 includes, but is not limited to, the above-described embodiments.
The processor of this embodiment determines the appropriately sized calibration object 10 according to the field of view of the camera 200, and the rotating platform 212, controlled by the processor, automatically aligns the selected calibration object 10 with the camera 200 for the subsequent calibration task. Thus, when calibrating cameras with different fields of view, the rotating platform 212 automatically rotates to select the proper calibration object 10, eliminating the manual replacement step.
Since calibration requires the calibration object 10 to be located at different positions in the field of view of the camera 200, the second driving mechanism 30 drives the camera 200 to move to different desired sampling poses while the calibration object 10 is kept still, so that the calibration object 10 appears at different positions in the field of view of the camera 200.
In one embodiment, the second driving mechanism 30 includes a robot arm 31 and a base 32, the robot arm 31 is mounted on the base 32, the camera 200 is mounted at the end of the robot arm 31, and both the robot arm 31 and the camera 200 are connected to the processor. The processor is used to control the camera 200 on the end of the robotic arm 31 to move to an initial pose of the camera 200 and a plurality of desired sampling poses.
In one embodiment, the first driving mechanism 20 and the second driving mechanism 30 may be electrically driven, for example, by using a stepping motor as a driving motor.
Referring to fig. 2, a camera calibration method provided in the present embodiment is applied to the camera calibration system 100 in any of the above embodiments, and the method includes the following steps:
and S110, controlling the first driving mechanism 20 to drive the calibration object 10 to move to any sampling position. Wherein the sampling location is within a working distance of the camera 200.
Acquire the working distance range [Lmin, Lmax] of the camera 200, select the working distances Lmin, (Lmin + Lmax)/2 and Lmax respectively, and control the first driving mechanism 20 to drive the calibration object 10 to move to any one of the corresponding sampling positions, such as the one at Lmin. It is understood that in other embodiments, other sampling positions within the working distance range [Lmin, Lmax] may be selected, and the sampling positions of the calibration object 10 may be set so as to cover the entire depth-of-field space; this is not limited here.
To better illustrate the overall camera calibration process of the present application, the following embodiments take camera calibration at the sampling position corresponding to the working distance Lmin as an example.
After the working distance Lmin corresponding to the current sampling position is determined, the other calibration condition parameters of the camera 200 are further acquired, such as the focal length f of the lens of the camera 200 and the size a of the photosensitive element of the camera 200, and the field of view of the camera 200 at this time is then determined according to formula (2) of the pinhole imaging model as Dmin = Lmin * a / f.
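A minimal sketch of this sampling-position choice and the resulting field of view (function name and numeric values are assumptions):

```python
# Three sampling distances spanning the camera's working-distance range, so
# that the depth-of-field space is covered, plus the field of view at the
# nearest position from formula (2).
def sampling_distances(L_min, L_max):
    return [L_min, (L_min + L_max) / 2.0, L_max]

L_min, L_max = 800.0, 1000.0      # working distance range, mm (assumed)
a, f = 11.0, 8.0                  # sensor size and focal length, mm (assumed)
print(sampling_distances(L_min, L_max))   # [800.0, 900.0, 1000.0]
print(L_min * a / f)                      # D_min from formula (2): 1100.0 mm
```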
S120, controlling the second driving mechanism 30 to drive the camera 200 to move to a plurality of desired sampling poses respectively, so that the calibration object 10 is located at different positions in the field of view of the camera 200. The plurality of desired sampling poses are obtained through the conversion relationship between the motion of the camera 200 and the motion of the pixel points of the calibration object 10, and correspond one-to-one to different positions in the field of view of the camera 200.
During calibration, the processor may divide the field of view of the camera 200 into a plurality of regions. To improve the accuracy of the calibration result, the field of view of the camera 200 may be divided into N² regions, where N is a natural number greater than 1, for example 4, 9 or 16 regions. The more regions, the more complex the calculation process and the more accurate the calibration result. To improve calibration accuracy and success rate, the calibration object 10 should appear in each of the different regions of the field of view without ever leaving the field of view, while the imaging patterns of the calibration object 10 at the multiple positions jointly cover the whole pixel plane of the camera 200. To achieve this, the camera 200 must shoot the calibration object 10 at different desired sampling poses.
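For a 9-region division, the five desired sampling points used below can be picked as in this sketch (the helper and all values are illustrative assumptions):

```python
# Centers of the N x N regions of a W x H pixel plane; the desired sampling
# points are the centers of the central region and the four corner regions.
def region_centers(width, height, n):
    cw, ch = width / n, height / n            # region width and height
    return [((j + 0.5) * cw, (i + 0.5) * ch)  # (x, y) center of region (i, j)
            for i in range(n) for j in range(n)]

centers = region_centers(1280, 960, 3)        # 9 regions
# Indices 0, 2, 6, 8 are the corner regions; index 4 is the central region.
five_points = [centers[0], centers[2], centers[4], centers[6], centers[8]]
print(five_points)
```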
It should be noted that the calibration object 10 being located at different positions in the field of view of the camera 200 means that the calibration object 10 always lies entirely within the field of view of the camera 200; at no position does any part of the calibration object 10 fall outside the field of view of the camera 200.
To obtain the plurality of desired sampling poses, the conversion relationship between the motion of the camera 200 and the motion of the pixel points of the calibration object 10 must be determined. This conversion relationship describes motion on the pixel plane: if moving a calibration point on the calibration object 10 from one pixel position to another in the pixel plane of the camera 200 requires the pixel translation vector ΔP, the second driving mechanism 30 must move the camera 200 by the corresponding camera motion vector ΔD to reach the corresponding desired sampling pose.
Referring to fig. 3, in an embodiment, before the step S110 controls the first driving mechanism to drive the calibration object to move to any sampling position, the method further includes the following steps:
s210, obtaining calibration condition parameters of the camera 200, wherein the calibration condition parameters comprise the working distance of the camera 200;
s220, acquiring an initial pose of the camera 200 based on the calibration condition parameters, and controlling the second driving mechanism 30 to drive the camera 200 to move to the initial pose of the camera 200, so that the calibration object 10 is in the visual field range of the camera 200;
s230, controlling the camera 200 to shoot the calibration object 10 located at the sampling position at the initial pose of the camera 200 so as to obtain an initial image;
s240, based on the initial image, obtaining a conversion relation between the motion of the camera 200 and the motion of the pixel point of the calibration object 10.
Assume that the calibration object 10 moves by ΔP pixel points in the pixel plane, corresponding to an actual camera motion vector of ΔD meters, and assume that the relationship between ΔP and ΔD is linear; then:
ΔD=k*ΔP (3)。
by estimating the transformation ratio k and further by the transformation relationship between the preset camera coordinate system of the camera 200 and the reference coordinate system of the second driving mechanism 30, the transformation relationship between the motion of the camera 200 and the motion of the pixel point of the calibration object 10 can be obtained.
In order to obtain the conversion relationship between the motion of the camera 200 and the motion of the pixel point of the calibration object 10, the embodiment determines the conversion ratio k between the motion of the camera 200 and the motion of the pixel point of the calibration object 10 through the initial image.
Specifically, the initial pose of the camera 200 is a pose that enables the calibration object 10 to be within the visual field range of the camera 200, and the initial image is an image of the calibration object captured by the camera 200 in the initial pose. The initial pose of the camera 200 may be determined by the working distance of the camera 200.
Firstly, through the above formula of the pinhole imaging model, the field of view of the camera 200 is calculated as follows according to the working distance L of the camera 200, the focal length f of the lens of the camera 200 and the size a of the photosensitive element of the camera 200:
D=L*a/f (2);
the camera 200 or the calibration object 10 is then moved such that the sampling distance between the camera 200 and the calibration object 10 is the working distance L while the calibration object 10 is within the field of view of the camera 200. And then controlling the camera 200 to shoot the calibration object 10 at the sampling position with the initial pose to obtain an initial image.
Since the actual distances between all the marker points on the calibration object 10 are known, once the initial image is obtained, the pixel coordinates of all the marker points on the initial image are also known. The conversion ratio k between the motion of the camera 200 and the motion of the pixel points of the calibration object 10 can therefore be determined from the pixel coordinates and the actual distances of the marker points, and the conversion relationship between camera motion and pixel motion then follows from the preset conversion relationship between the camera coordinate system of the camera 200 and the reference coordinate system of the second driving mechanism 30.
Referring to fig. 4, in an embodiment, the step S240 specifically includes the following sub-steps:
s241, determining the pixel distance between any two mark points on the calibration object 10 based on the initial image;
s242, acquiring an actual distance between the two mark points, and then determining a conversion ratio of the pixel distance and the actual distance;
s243, obtaining a conversion relationship between the motion of the camera 200 and the motion of the pixel point of the calibration object 10 based on the conversion ratio between the pixel distance and the actual distance, and a preset conversion relationship between the camera coordinate system of the camera 200 and the reference coordinate system of the second driving mechanism 30.
Since the actual distance D' between two marker points on the calibration object 10 is known, and the two marker points and their corresponding pixel positions can be identified in the initial image by the calibration object detection algorithm, the pixel distance P' between the two marker points can be obtained from their pixel positions. The conversion ratio of the actual distance D' to the pixel distance P' can therefore be calculated as

k = D' / P'.

For example, taking the calibration points to be the centers of circular rings, the actual distance D' between two adjacent centers is known. The calibration object detection algorithm can detect the pixel position of each ring center on the calibration object 10 in the pixel plane, giving the pixel distance P' between two adjacent centers, and hence k = D' / P'.

With pixel translation vector ΔP and camera motion vector ΔD, the conversion ratio between the motion of the camera 200 and the pixel motion of the calibration object 10 is

k = ΔD / ΔP = D' / P'.
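A minimal sketch of this ratio estimate; the marker detection is stubbed out with assumed pixel positions (a real system would run a ring or checkerboard detector on the initial image):

```python
import math

# Known spacing of adjacent ring centers on the calibration object, in mm.
D_actual = 20.0
# Detected centers of two adjacent rings in the initial image, in pixels
# (assumed values standing in for detector output).
p1, p2 = (412.0, 300.5), (452.5, 300.9)

P_pixels = math.dist(p1, p2)   # pixel distance P'
k = D_actual / P_pixels        # mm of camera motion per pixel of image motion
print(k)                       # conversion ratio k = D'/P'
```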
In addition, since the motion of the pixel points of the calibration object 10 in the pixel plane is defined in the camera coordinate system of the camera 200, while the actual motion of the camera 200 is defined in the reference coordinate system of the second driving mechanism 30, a conversion relationship between the two coordinate systems must be established to determine how the second driving mechanism 30 should drive the camera 200 in each movement direction. Taking the second driving mechanism 30 to be the mechanical arm 31 as an example: the readings of the mechanical arm 31 only give the pose of the end reference coordinate system of the mechanical arm 31, while the pixel points of the calibration object 10 are defined in the camera coordinate system of the camera 200. The conversion relationship between the camera coordinate system and the end reference coordinate system, i.e. the hand-eye calibration relationship, is needed to convert the motion of the pixel points of the calibration object 10 into the end reference coordinate system of the mechanical arm 31.
The reference coordinate system of the second driving mechanism 30 is a tool coordinate system established at the end of the mechanical arm 31, whose origin is the Tool Center Point (TCP), i.e. the TCP tool coordinate system. In this embodiment, the tool mounted on the end of the mechanical arm 31 is the camera 200, and the preset conversion relationship between the camera coordinate system of the camera 200 and the reference coordinate system of the second driving mechanism 30 can be obtained by a hand-eye calibration method.
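As an illustration of applying such a hand-eye result, a sketch expressing a camera-frame motion vector in the TCP frame (the rotation matrix below is an assumed hand-eye calibration result, not a value from the patent):

```python
import numpy as np

# Assumed hand-eye rotation: camera frame -> TCP (arm end) frame.
R_tc = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])

dD_cam = np.array([12.5, -3.0, 0.0])  # desired camera motion, camera frame (mm)
dD_tcp = R_tc @ dD_cam                # same motion expressed in the TCP frame
print(dD_tcp)                         # what the arm would be commanded to execute
```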
Referring to fig. 5, the camera calibration method according to the embodiment of the present application further includes the following steps:
s250, respectively acquiring a plurality of expected sampling poses of the camera 200 based on a plurality of expected sampling points in the pixel plane of the camera 200 and the conversion relation.
To place the calibration object 10 at different positions in the field of view of the camera 200, the plurality of desired sampling points in the pixel plane of the camera 200 are at least pixel points in the central region and the four corner regions of the pixel plane, such as the center pixel of each region.
For ease of understanding, the following embodiments divide the field of view into 9 regions. In one embodiment, the calibration object 10 is placed in turn in the central region and the four corner regions of the field of view (as shown in FIG. 8), so the imaging pattern of the calibration object 10 must appear in the central region and the four corner regions of the whole pixel plane in turn. This yields 5 desired sampling positions, which correspond one-to-one to 5 desired sampling poses; the 5 desired sampling points are then the center pixels of the central region and of the four corner regions of the pixel plane.
Referring to fig. 6, the step S250 specifically includes the following sub-steps:
s251, determining a corresponding translation vector of an expected pixel point based on the pixel coordinates of a plurality of expected sampling points in the pixel plane of the camera 200 and the initial pixel coordinates of the calibration object 10 in the pixel plane;
s252, determining an expected camera motion vector corresponding to the expected pixel translation vector based on the expected pixel translation vector and the conversion relationship, so as to obtain a plurality of expected sampling poses corresponding to the camera 200 respectively.
After the conversion ratio k between the motion of the camera 200 and the pixel motion of the calibration object 10 has been obtained, and the pixel coordinates of the plurality of desired sampling points in the pixel plane of the camera 200 have been determined, the desired pixel translation vectors ΔP can be determined respectively. Substituting each desired pixel translation vector ΔP into the conversion relation (3) above yields the corresponding desired camera motion vector ΔD. The second driving mechanism 30 then controls the camera 200 to move by each desired camera motion vector ΔD, according to the preset conversion relationship between the camera coordinate system of the camera 200 and the reference coordinate system of the second driving mechanism 30, to the corresponding desired sampling pose, so that the calibration object 10 is located at a different position in the field of view of the camera 200 for each pose.
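A minimal sketch of steps S251 and S252 (all numeric values are assumptions): from a desired sampling point and the calibration object's current pixel position, derive the pixel translation vector and the camera motion vector:

```python
import numpy as np

k = 0.49                                   # mm per pixel, from the initial image
p_current = np.array([640.0, 480.0])       # calibration object center, pixels
p_desired = np.array([213.0, 160.0])       # desired sampling point, pixels

dP = p_desired - p_current                 # desired pixel translation vector
dD = k * dP                                # desired camera motion, relation (3)
print(dP, dD)
```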
Continuing with the example in which the 5 desired sampling points are the center pixels of the central region and the four corner regions of the pixel plane, in one embodiment the 5 desired sampling poses are obtained as follows:

Obtain the positions of the four corners of the calibration object 10 in the image coordinate system: Pupper_left, Pupper_right, Plower_left, Plower_right. These four positions are obtained by the calibration plate detection algorithm.

Obtain the positions of the four corners of the image in the image coordinate system: Iupper_left, Iupper_right, Ilower_left, Ilower_right. These four positions are determined by the width and height of the image: given the image width and height, the pixel coordinates of the four corners of the image are Iupper_left = (0, 0), Iupper_right = (width, 0), Ilower_left = (0, height), Ilower_right = (width, height).

For the calibration object 10 to be located in turn in the central region and the four corner regions of the image, with the calibration object image lying entirely within the image:

When the calibration object 10 is to be located at the upper right corner of the image, the desired pixel translation ΔP = (ΔPx, ΔPy) of the calibration plate in the image coordinate system is obtained using the minimum function min(). The meaning is that after the pose of the camera 200 is changed, neither the x nor the y coordinate of the upper right corner of the calibration object 10 may exceed the x and y coordinates of the upper right corner of the image, and the right boundary of the calibration object 10 may not exceed the right boundary of the image. Of the pixel translation vector that moves the upper right corner Pupper_right to Iupper_right and the pixel translation vector that moves the lower right corner Plower_right to Ilower_right, the smaller is selected and determines ΔPx; of the pixel translation vector that moves the upper right corner Pupper_right to Iupper_right and the pixel translation vector that moves the upper left corner Pupper_left to Iupper_left, the smaller is selected and determines ΔPy.

Similarly, when the calibration object 10 is to be located at the upper left corner of the image, the desired position of the calibration plate in the image coordinate system is obtained in the same way, with the left and top boundaries of the image as the constraints.

By analogy, the desired positions of the calibration plate in the image coordinate system for the central region, the lower left corner and the lower right corner of the image are obtained, determining 5 different desired sampling poses.
In one embodiment, all of the desired sampling poses may be determined before shooting.
Specifically, after the 5 desired sampling points in the central region and the four corner regions of the field of view required by the calibration object 10 are determined, all 5 desired sampling poses may be determined before shooting, based on the initial pixel coordinates of the calibration object 10 in the pixel plane. Each time the second driving mechanism 30 moves the camera 200 to one of the desired sampling poses and an image of the calibration object 10 is captured, the camera 200 may first be driven back to the initial pose and then driven to the next desired sampling pose to capture the next image of the calibration object 10, and so on until the central region and the four corner regions have been traversed. Since all 5 desired sampling poses are known in advance, no new computation is needed to return the camera 200 from a desired sampling pose to the initial pose, so the overall computational load on the processor is small.
In another embodiment, the corresponding desired sampling pose is calculated before each shot.
Specifically, after the 5 desired sampling points in the central region and the four corner regions of the field of view required by the calibration object 10 are determined, one desired sampling pose may be determined before shooting, based on the initial pixel coordinates of the calibration object 10 in the pixel plane; the second driving mechanism 30 then moves the camera 200 to that desired sampling pose, an image of the calibration object 10 is captured, and the camera 200 is driven back to the initial pose. Another desired sampling pose is determined before the next shot, the camera 200 is driven to it and another image of the calibration object 10 is captured, and so on until the central region and the four corner regions have been traversed.
In other embodiments, after the second driving mechanism 30 moves the camera 200 to one of the desired sampling poses and an image of the calibration object 10 is captured, a new desired sampling pose can also be determined from the current desired sampling point and the next desired sampling point; the second driving mechanism 30 then moves the camera 200 to the new desired sampling pose and captures another image of the calibration object 10, and so on until the central region and the four corner regions have been traversed.
After a desired sampling pose is obtained, the corresponding desired pixel translation vector can be derived from it; combining this with the conversion ratio k between the motion of the camera 200 and the pixel motion of the calibration object 10 gives the desired camera motion vector, and the preset conversion relationship between the camera coordinate system of the camera 200 and the reference coordinate system of the second driving mechanism 30 then determines the direction in which the second driving mechanism 30 must drive the camera 200. The second driving mechanism 30 can thus drive the camera 200 to each desired sampling pose according to the desired camera motion vector and movement direction.
Referring to fig. 7, in an embodiment, the step S120 of controlling the second driving mechanism 30 to drive the camera 200 to move to a plurality of desired sampling poses respectively, so that the calibration object 10 is located at different positions within the field of view of the camera 200, further includes the following sub-steps:
s121, when the desired camera motion vector is less than or equal to the movement threshold of the second driving mechanism 30, controlling the second driving mechanism 30 to drive the camera 200 to translate to the desired sampling pose corresponding to the desired camera motion vector;
and S122, when the expected camera motion vector is larger than the movement threshold of the second driving mechanism 30, controlling the second driving mechanism 30 to drive the camera 200 to rotate to the expected sampling pose corresponding to the expected camera motion vector.
The second driving mechanism 30 has a movement threshold limit, whose size is related to the structure of the second driving mechanism 30; for example, the mechanical arm 31 has a limited movement range. The mechanical arm 31 carrying the camera 200 can directly execute the desired camera motion vector ΔD when ΔD is less than or equal to the movement threshold, but when ΔD is greater than the movement threshold it is difficult to move the camera 200 to the desired pose because of the limited reach of the mechanical arm 31.
To solve this problem, a large, long-reach mechanical arm 31 could be deployed, but this would undoubtedly bring a series of problems such as greater deployment difficulty, larger space requirements and higher maintenance costs.
The present embodiment instead solves the problem of insufficient reach of the mechanical arm 31 from the perspective of the algorithm.
The second driving mechanism 30 has a rotational degree of freedom in addition to its translational degrees of freedom; exploiting this characteristic, the present embodiment chooses between translating the camera 200 and rotating the camera 200 according to the magnitude of the desired camera motion vector ΔD.
Specifically, referring to fig. 8, region 0' (shown by a solid line) is the position of the calibration object 10 within the field of view of the camera 200 when the camera 200 is in the initial pose p. Region 1' (shown by a dotted line) is the central position of the field of view of the camera 200 when the camera 200 is in the first desired sampling pose; the calibration object 10 needs to move from region 0' to region 1', the desired camera motion vector ΔD is small (less than or equal to the movement threshold of the second driving mechanism 30), and the calibration object 10 can be moved from region 0' to region 1' by translation.
Region 2' is the upper left corner position of the field of view of the camera 200 at which the calibration object 10 is located when the camera 200 is in the second desired sampling pose; region 3' is the upper right corner position for the third desired sampling pose; region 4' is the lower left corner position for the fourth desired sampling pose; region 5' is the lower right corner position for the fifth desired sampling pose. The calibration object 10 needs to move from region 0' to regions 2', 3', 4' and 5' respectively; the desired camera motion vector ΔD is large (greater than the movement threshold of the second driving mechanism 30) and translation is difficult, so the calibration object 10 can instead be moved from region 0' to regions 2', 3', 4' and 5' by rotation, which solves the problem of insufficient reach of the mechanical arm 31.
Referring to fig. 9, in an embodiment, the step S122 specifically includes the following sub-steps:
S122a, acquiring a camera rotation angle corresponding to the expected camera motion vector and a rotation direction corresponding to the expected sampling pose when the expected camera motion vector is greater than the movement threshold of the second driving mechanism 30;
S122b, controlling the second driving mechanism 30 to drive the camera 200 to rotate to the expected sampling pose corresponding to the expected camera motion vector based on the camera rotation angle and the rotation direction.
Referring to fig. 10, if the desired camera motion vector Δ D is obtained from the above embodiment, the calculation formula of the camera rotation angle θ is as follows:
θ = arctan(ΔD/L) (4);
where L is the working distance from the camera 200 to the calibration object 10.
As can be seen from the above equation (4), a rotation by the camera rotation angle θ produces the same change in the field of view as a translation by the expected camera motion vector ΔD, so the translation of the camera 200 can be converted into a rotation, which solves the problems of the limited reach and inconvenient deployment of the mechanical arm 31.
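As a worked illustration of equation (4) under the geometry described here (a camera viewing the calibration object plane from working distance L), the following sketch computes the equivalent rotation angle; the distance values are assumed examples, not parameters fixed by the present application.

```python
import math

def rotation_for_translation(delta_d, working_distance):
    """Equivalent rotation angle per equation (4): a rotation by theta
    shifts the viewed area by roughly L * tan(theta), so
    theta = arctan(delta_d / L)."""
    return math.atan(delta_d / working_distance)

# Assumed example: a 500 mm desired shift at a 1000 mm working distance
theta = rotation_for_translation(500.0, 1000.0)
print(math.degrees(theta))  # about 26.57 degrees
```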
Taking the second driving mechanism 30 as the mechanical arm 31 as an example, in the coordinate system o-xyz at the end of the mechanical arm 31, the z-axis is perpendicular to the plane of the calibration object 10, and the x-axis and the y-axis are parallel to the length direction and the width direction of the calibration object 10, respectively.
As can be seen from the above equation (4), the change in the field of view of the camera 200 caused by rotating by a positive angle θ about the x-axis of the end coordinate system of the mechanical arm 31 is identical to the change caused by translating the camera 200 by ΔD in the positive y-axis direction; therefore, when it is desired to translate the camera 200 by ΔD in the positive y-axis direction, it is sufficient to rotate the mechanical arm 31 about the x-axis of the end coordinate system by the corresponding positive angle, the camera rotation angle θ and its rotation direction being determined accordingly. Similarly, the change in the field of view caused by rotating by a positive angle θ about the y-axis of the end coordinate system is identical to the change caused by translating the camera 200 by ΔD in the positive x-axis direction; when it is desired to translate the camera 200 by ΔD in the positive x-axis direction, it is sufficient to rotate the camera 200 about the y-axis of the end coordinate system by the corresponding positive angle.
Continuing with the example of moving from the 0' region to the 2' region, the corresponding camera rotation angles and rotation directions are: first rotate about the x-axis by an angle θ1 to realize a translation ΔD1 along the positive y-axis, and then rotate about the y-axis by an angle θ2 to realize a translation ΔD2 along the negative x-axis:
θ1 = arctan(ΔD1/L), θ2 = arctan(ΔD2/L).
Conversely, rotating first by the angle θ2 about the y-axis and then by the angle θ1 about the x-axis can also realize the movement from the 0' region to the 2' region.
Similarly, the movements from the 0' region to the 3', 4' and 5' regions are realized in the same manner as described above.
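A minimal sketch of this two-rotation decomposition, under the sign convention of the end coordinate system given above (positive rotation about x mimics a positive-y translation, positive rotation about y mimics a positive-x translation); the helper name and example magnitudes are illustrative assumptions that would need checking against the real hardware.

```python
import math

def decompose_into_rotations(delta_x, delta_y, working_distance):
    """Split a desired in-plane shift into two successive rotations.

    theta1 (about the x-axis) realizes the y-component and theta2 (about
    the y-axis) realizes the x-component; either order reaches the same
    region of the field of view.
    """
    theta1 = math.atan(delta_y / working_distance)  # about x-axis
    theta2 = math.atan(delta_x / working_distance)  # about y-axis
    return theta1, theta2

# 0' -> 2' region (upper-left corner): shift +delta_y and -delta_x
t1, t2 = decompose_into_rotations(-300.0, 300.0, 1000.0)
```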
And S130, controlling the camera 200 to shoot the calibration object 10 at the sampling position respectively in a plurality of expected sampling poses so as to obtain an image sequence of the calibration object 10.
The camera 200 captures an image of the calibration object 10 at the sampling position in each of the expected sampling poses; if there are 5 expected sampling poses, the image sequence of the calibration object 10 includes 5 images of the calibration object 10.
S140, acquiring a calibration result of the camera 200 based on the image sequence.
The image sequence of the calibration object 10 is input to the camera calibration algorithm, which outputs the calibration result of the camera 200. The camera calibration algorithm may be a traditional camera calibration method, an active vision camera calibration method, or a camera self-calibration method, and may be pre-stored in the memory of the camera calibration system 100.
After the calibration result is calculated, the processor will automatically write the calibration parameters into the camera 200, and the whole calibration process is completed.
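The present application does not prescribe a particular calibration algorithm; as one concrete possibility, the following minimal sketch runs OpenCV's classic calibration over the captured sequence, assuming the calibration object is a 9x6 chessboard with 10 mm squares saved under an illustrative path. The pattern type, square size and path are assumptions, not fixed by the present application.

```python
import glob

import cv2
import numpy as np

pattern = (9, 6)      # inner corners per row and column (assumed)
square_size = 10.0    # mm (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for path in sorted(glob.glob("captures/*.png")):  # the acquired sequence
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix K and distortion coefficients form the calibration
# result that the processor would write back into the camera.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```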
Referring to fig. 11, in an embodiment, the camera calibration method further includes the following steps:
310. determining a field of view of the camera 200 based on the sampled position of the calibration object 10;
320. controlling the second driving mechanism 30 to change the initial pose of the camera 200 by a preset step length, wherein the ratio of the preset step length to the visual field range is less than 1;
330. controlling the camera 200 to shoot the calibration object 10 at the sampling position with the changed initial pose to obtain a corresponding initial image;
340. acquiring, based on the initial image, the conversion relation between the camera 200 motion and the calibration object 10 pixel point motion after the change.
Then the step S250 of respectively acquiring a plurality of expected sampling poses of the camera 200 based on a plurality of expected sampling points in the pixel plane of the camera 200 and the conversion relation comprises the following sub-steps:
S253, determining a corresponding expected pixel point translation vector based on the pixel coordinates of a plurality of expected sampling points in the pixel plane of the camera 200 and the initial pixel coordinates of the calibration object 10 in the pixel plane;
and S254, determining an expected camera motion vector corresponding to the expected pixel point translation vector based on the expected pixel point translation vector and the changed conversion relation between the camera 200 motion and the calibration object 10 pixel point motion, so as to respectively acquire a plurality of expected sampling poses corresponding to the camera 200.
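A minimal sketch of steps S253 and S254, under the assumption that the conversion relation reduces to a scalar millimetres-per-pixel ratio s together with a rotation R mapping image-plane axes into the reference frame of the second driving mechanism; both names and the example numbers are illustrative, not part of the original disclosure.

```python
import numpy as np

def expected_camera_motion(target_px, initial_px, s, R):
    """S253: pixel translation vector; S254: corresponding camera motion."""
    delta_p = np.asarray(target_px, float) - np.asarray(initial_px, float)
    return R @ (s * delta_p)

R = np.array([[1.0, 0.0], [0.0, -1.0]])  # assumed axis flip between frames
delta_d = expected_camera_motion((1200, 200), (640, 512), s=0.1, R=R)
```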
In order to implement uniform sampling, the initial pose of the camera 200 is changed once or multiple times in this embodiment; after each change, the camera 200 captures images of the calibration object 10 at the sampling position at a plurality of expected sampling poses respectively, according to the camera calibration method in any one of the above embodiments, to form an image sequence of the calibration object 10 corresponding to the changed initial pose.
Illustratively, in the coordinate system o-xyz at the end of the mechanical arm 31, since the initial pose of the camera 200 directly faces the calibration object 10, movement along the z-axis need not be considered; only movements along the x-axis and the y-axis are considered. For example, images of the calibration object 10 are acquired at 4 changed initial positions (x + Δx, y), (x + Δx, y + Δy), (x, y + Δy) and (x, y - Δy); combined with the initial position (x, y), 5 initial positions are sampled in total. With 5 expected sampling poses at each initial position, the 4 changed positions contribute 4 × 5 = 20 images, and the final image sequence of the calibration object 10 for this sampling position contains 5 × 5 = 25 images.
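The schedule just described can be enumerated as follows; a minimal sketch in which the step values are assumed examples (1/10 of the field-of-view size in this embodiment) and the pose counts follow the text.

```python
def initial_positions(x, y, dx, dy):
    """The five initial camera positions: the original position plus the
    four changed positions listed above."""
    return [(x, y), (x + dx, y), (x + dx, y + dy), (x, y + dy), (x, y - dy)]

positions = initial_positions(0.0, 0.0, 36.7, 27.5)  # assumed steps in mm
poses_per_position = 5  # expected sampling poses: centre plus four corners

total = len(positions) * poses_per_position               # 5 x 5 = 25 images
from_changed = (len(positions) - 1) * poses_per_position  # 4 x 5 = 20 images
```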
Referring to fig. 12, the 0' area (shown by a solid line) is the position of the calibration object 10 within the field of view of the camera 200 when the camera 200 is in the first initial pose. The I area (shown by a dotted line) is the position of the calibration object 10 within the field of view when the camera 200 is in the second initial pose. The II area is the position of the calibration object 10 within the field of view when the camera 200 is in the third initial pose. The III area is the position of the calibration object 10 within the field of view when the camera 200 is in the fourth initial pose. The IV area is the position of the calibration object 10 within the field of view when the camera 200 is in the fifth initial pose.
The movements from the 0' area to the I, II, III and IV areas can follow the method of steps S121 to S122. For example, when the camera 200 is in the second initial pose, the calibration object 10 needs to move from the 0' area to the I area; the expected camera motion vector ΔD is small (smaller than or equal to the movement threshold of the second driving mechanism 30), so the calibration object 10 can be moved from the 0' area to the I area by translation. When the expected camera motion vector ΔD for moving the calibration object 10 from the 0' area to a target area is large (greater than the movement threshold of the second driving mechanism 30) and translation is difficult, the calibration object 10 is moved to that area by rotation, which again solves the problem of the insufficient reach of the mechanical arm 31.
The selection of the preset steps Δx and Δy is related to the size of the field of view of the camera, and in this embodiment the ratio of the preset step to the field-of-view size is less than 1.
In one embodiment, the preset steps Δx and Δy are 1/10 of the camera field-of-view size.
Specifically, after obtaining the current working distance L corresponding to the sampling position of the calibration object, the focal length f of the lens of the camera 200 and the size a of the photosensitive element of the camera 200 are further obtained, and then the size of the field of view of the camera is calculated according to the formula (2) of the pinhole imaging model:
D=L*a/f (2);
the calibration condition parameters of the camera 200 may be directly input by the user, or may be further obtained through information input by the user, for example, when the user inputs the model of the camera 200, the focal length f of the lens and the size a of the photosensitive element of the camera 200 may be known.
Illustratively, after the working distance Lmin corresponding to the current sampling position is determined, other calibration condition parameters of the camera 200 are further acquired, such as the focal length f of the lens of the camera 200 and the size a of the photosensitive element of the camera 200, and the field-of-view size of the camera 200 at this time is then determined according to the formula (2) of the pinhole imaging model as Dmin = Lmin*a/f.
After the field-of-view size Dmin of the camera 200 is obtained, let X and Y be the length and width of the field of view, respectively, determined by applying formula (2) to the length and width of the photosensitive element:
X = Lmin*ax/f, Y = Lmin*ay/f;
then
Δx = X/10, Δy = Y/10.
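A short numeric sketch of the field-of-view and step computation above; the sensor dimensions, focal length and working distance are assumed example values (a 2/3-inch sensor), not parameters fixed by the present application.

```python
def fov_and_steps(L, a_x, a_y, f):
    """Field of view from the pinhole model, formula (2), applied to the
    length and width of the photosensitive element, plus the preset steps
    (1/10 of the field-of-view size in this embodiment)."""
    X = L * a_x / f
    Y = L * a_y / f
    return X, Y, X / 10.0, Y / 10.0

# Assumed example: 8.8 mm x 6.6 mm sensor, 12 mm lens, 500 mm distance
X, Y, dx, dy = fov_and_steps(500.0, 8.8, 6.6, 12.0)
# X ~ 366.7 mm, Y = 275.0 mm, dx ~ 36.7 mm, dy = 27.5 mm
```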
Referring to fig. 13, in an embodiment, the camera calibration method further includes the following steps:
410. controlling the first driving mechanism 20 to change the sampling position of the calibration object 10, wherein the changed sampling position is also located within the working distance of the camera 200;
the step S130 of controlling the camera 200 to capture the calibration object 10 located at the sampling position with a plurality of the desired sampling poses respectively to obtain the image sequence of the calibration object 10 includes the following sub-steps:
131. and controlling the camera 200 to shoot the calibration object 10 at the changed sampling position in a plurality of the expected sampling poses respectively to obtain the corresponding image sequence of the calibration object 10.
In the present embodiment, the working distance range [Lmin, Lmax] of the camera 200 is acquired, the working distances Lmin, (Lmin + Lmax)/2 and Lmax are selected as sampling positions, and the first driving mechanism 20 is controlled to drive the calibration object 10 to move to one of the sampling positions, for example to Lmin. It is understood that in other embodiments, other sampling positions within the working distance range [Lmin, Lmax] may be selected, so that the movement range of the calibration object covers the entire depth space, which is not specifically limited herein.
In this embodiment, the sampling position of the calibration object 10 is changed once or multiple times; according to the camera calibration method in any one of the above embodiments, the camera 200 is controlled to capture images of the calibration object 10 at the changed sampling position in a plurality of expected sampling poses respectively, so as to obtain the corresponding image sequence of the calibration object 10.
Illustratively, 5 images of the calibration object 10 at the expected sampling poses are acquired at each of the sampling positions Lmin, (Lmin + Lmax)/2 and Lmax, giving 3 × 5 = 15 images. Meanwhile, in order to achieve uniform sampling, the initial pose of the camera 200 may be changed 4 times at each sampling position, for example to (x + Δx, y), (x + Δx, y + Δy), (x, y + Δy) and (x, y - Δy); combined with the initial position (x, y), images of the calibration object 10 are acquired at 5 expected sampling poses for each of the 5 initial poses. The image sequence of the calibration object 10 finally used for calibration therefore contains 3 × 5 × 5 = 75 images.
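The complete capture schedule (three sampling positions by five initial poses by five expected sampling poses) can be sketched as follows; the working-distance values are assumed examples, and the pose indices stand in for the actual drive commands.

```python
def sampling_schedule(l_min, l_max):
    """Enumerate (working distance, initial-pose index, expected-pose
    index) for every image in the 3 x 5 x 5 = 75-image sequence above."""
    distances = [l_min, (l_min + l_max) / 2.0, l_max]
    return [(L, i, j) for L in distances for i in range(5) for j in range(5)]

schedule = sampling_schedule(300.0, 700.0)  # assumed range, mm
assert len(schedule) == 75
```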
For the convenience of use in practical production, the camera calibration system 100 further includes a display screen or an external display screen, and the display screen is connected to the processor.
All functions of the present application can be integrated into one interface on the display screen. The user first clicks a refresh button on the interface, which then displays all connectable cameras, and selects the required camera 200; calibration parameters such as the exposure time and the depth-of-field range of the camera are then set (parameters that the software can read automatically from the camera need not be set manually). Finally, the user clicks a start-calibration button on the interface; calibration data is then collected automatically according to the camera calibration method in any embodiment of the present application, the calibration result is calculated after data collection is completed, and the calculated calibration parameters are automatically written into the camera.
Thus, the camera calibration system 100 of the present application has a high degree of automation and can essentially realize one-key calibration; the calibration process is simple, does not rely on manual experience, and is efficient and low-cost, solving the problems that the calibration process of industrial cameras is extremely tedious, relies on manual experience, and has relatively low efficiency and high cost.
In summary, compared with the prior art, the camera calibration method and the camera calibration system 100 in the embodiment of the present application have the following beneficial effects:
the first driving mechanism 20 is controlled to drive the calibration object 10 to move to any sampling position, the sampling position being within the working distance of the camera 200, so that automatic calibration of 2D and 3D industrial cameras with different working distances and different depth ranges can be achieved. Meanwhile, the second driving mechanism 30 is controlled to drive the camera 200 to move to a plurality of expected sampling poses respectively, the plurality of expected sampling poses placing the calibration object 10 at different positions within the field of view of the camera 200; the camera 200 is then controlled to shoot the calibration object 10 at the sampling position at the plurality of expected sampling poses respectively, so as to obtain an image sequence of the calibration object 10, and finally a calibration result of the camera 200 is obtained based on the image sequence. In addition, the camera calibration system 100 of the present application is highly automated, can essentially realize one-key calibration, has a simple calibration process that does not rely on manual experience, and achieves high calibration efficiency at low cost.
Referring to fig. 14, a camera calibration method provided in the embodiment of the present application includes the following steps:
510. controlling the calibration object 10 to move to any sampling position; wherein the sampling location is within a working distance of the camera 200;
520. controlling the cameras 200 to respectively move to a plurality of desired sampling poses such that the calibration objects 10 are respectively at different positions within the visual field of the cameras 200; wherein, a plurality of the expected sampling poses are obtained by the conversion relationship between the motion of the camera 200 and the motion of the pixel point of the calibration object 10, and respectively correspond to different positions in the visual field range of the camera 200 one by one;
530. controlling the camera 200 to shoot the calibration object 10 at the sampling position in a plurality of the expected sampling poses respectively to obtain an image sequence of the calibration object 10;
540. calibration results of the camera 200 are acquired based on the sequence of images.
The camera calibration method according to the embodiment of the present application is applicable to the camera calibration system 100 according to any one of the above embodiments, and is also applicable to other existing camera calibration systems, and achieves the technical effects consistent with the above methods.
It can be understood that, a camera calibration method according to an embodiment of the present application is the same as part of the steps of the camera calibration method according to any one of the embodiments described above, and specific limitations on the steps of the camera calibration method according to the embodiment of the present application may refer to limitations of the camera calibration method according to any one of the embodiments described above, and are not described herein again.
The embodiment of the application also provides the terminal equipment. The terminal device includes a processor and a memory. A memory coupled to the processor for storing one or more programs; when executed by the processor, the one or more programs cause the processor to implement the camera calibration method as described in the embodiment of fig. 14.
In an exemplary embodiment, the present application further provides a computer readable storage medium comprising a computer program, which when executed by a processor, performs the steps of the camera calibration method as described in any one of the above embodiments. For example, the computer readable storage medium may be the above-mentioned memory including a computer program, which can be executed by the processor of the camera calibration system 100 or the terminal device to complete the camera calibration method according to any one of the above-mentioned embodiments, and achieve the technical effects consistent with the above-mentioned methods.
The above description is only for the purpose of illustrating preferred embodiments of the present application and is not intended to limit its scope, which is defined by the appended claims and their equivalents; all changes that do not depart from the spirit and scope of the invention fall within that scope.

Claims (15)

1. The camera calibration method is characterized by being applied to a camera calibration system, wherein the camera calibration system comprises a calibration object, a first driving mechanism and a second driving mechanism, the first driving mechanism is used for driving the calibration object to move, and the second driving mechanism is used for driving a camera to move; the method comprises the following steps:
controlling the first driving mechanism to drive the calibration object to move to any sampling position; wherein the sampling location is within a working distance of the camera;
controlling the second driving mechanism to drive the cameras to move to a plurality of expected sampling poses respectively, so that the calibration objects are located at different positions in the visual field range of the cameras respectively; the expected sampling poses are obtained through conversion relations between the camera motion and the calibration object pixel point motion respectively and correspond to different positions in the visual field range of the camera one by one respectively;
controlling the camera to shoot the calibration object at the sampling position at a plurality of expected sampling poses respectively to obtain an image sequence of the calibration object;
and acquiring a calibration result of the camera based on the image sequence.
2. The camera calibration method according to claim 1, further comprising, before the controlling the first driving mechanism to drive the calibration object to move to any sampling position:
acquiring calibration condition parameters of the camera, wherein the calibration condition parameters comprise the working distance of the camera;
acquiring an initial pose of the camera based on the calibration condition parameters, and controlling the second driving mechanism to drive the camera to move to the initial pose of the camera, so that the calibration object is in the visual field range of the camera;
controlling the camera to shoot the calibration object at the sampling position in an initial pose of the camera to obtain an initial image;
and acquiring the conversion relation between the camera motion and the motion of the pixel points of the calibration object based on the initial image.
3. The camera calibration method according to claim 2, wherein said obtaining a conversion relationship between the camera motion and the motion of the calibration object pixel points based on the initial image comprises:
determining a pixel distance between any two marker points on the calibration object based on the initial image;
acquiring an actual distance between the two mark points, and then determining a conversion ratio of the pixel distance and the actual distance;
and acquiring the conversion relation between the camera motion and the motion of the pixel points of the calibration object based on the conversion ratio between the pixel distance and the actual distance and the conversion relation between the preset camera coordinate system of the camera and the reference coordinate system of the second driving mechanism.
4. The camera calibration method according to claim 2, further comprising:
and respectively acquiring a plurality of expected sampling poses of the camera based on a plurality of expected sampling points in a pixel plane of the camera and the conversion relation.
5. The camera calibration method according to claim 4, wherein the obtaining a plurality of expected sampling poses of the camera based on a plurality of expected sampling points in a pixel plane of the camera and the transformation relation respectively comprises:
determining a corresponding expected pixel point translation vector based on pixel coordinates of a plurality of expected sampling points in a pixel plane of the camera and initial pixel coordinates of the calibration object in the pixel plane;
and determining an expected camera motion vector corresponding to the expected pixel translation vector based on the expected pixel translation vector and the conversion relation, so as to respectively obtain a plurality of expected sampling poses corresponding to the camera.
6. The camera calibration method according to claim 5, wherein the controlling the second driving mechanism to drive the cameras to move to a plurality of desired sampling poses respectively, so that the calibration objects are at different positions within a visual field range of the cameras respectively comprises:
when the expected camera motion vector is smaller than or equal to a preset translation threshold value of the second driving mechanism, controlling the second driving mechanism to drive the camera to translate to the expected sampling pose corresponding to the expected camera motion vector;
when the expected camera motion vector is larger than a preset translation threshold value of the second driving mechanism, controlling the second driving mechanism to drive the camera to rotate to the expected sampling pose corresponding to the expected camera motion vector.
7. The camera calibration method according to claim 6, wherein the controlling the second driving mechanism to drive the camera to rotate to the desired sampling pose corresponding to the desired camera motion vector when the desired camera motion vector is greater than a preset translation threshold of the second driving mechanism comprises:
when the desired camera motion vector is greater than a preset translation threshold of the second drive mechanism, acquiring a desired camera rotation vector corresponding to the desired camera motion vector, the rotation vector comprising a camera rotation angle and a rotation direction;
based on the desired camera rotation vector, controlling the second drive mechanism to drive the camera to rotate to the desired sampling pose corresponding to the desired camera motion vector.
8. The camera calibration method according to any one of claims 4 to 7, further comprising:
determining a field of view of the camera based on the sampled position of the calibration object;
controlling the second driving mechanism to change the initial pose of the camera by a preset step length, wherein the ratio of the preset step length to the visual field range is less than 1;
controlling the camera to shoot the calibration object at the sampling position with the changed initial pose so as to obtain a corresponding initial image;
based on the initial image, acquiring a conversion relation between the changed camera motion and the motion of the pixel point of the calibration object;
then the obtaining a plurality of expected sampling poses of the camera based on a plurality of expected sampling points in a pixel plane of the camera and the transformation relation respectively comprises:
determining a corresponding expected pixel point translation vector based on pixel coordinates of a plurality of expected sampling points in a pixel plane of the camera and initial pixel coordinates of the calibration object in the pixel plane;
and determining an expected camera motion vector corresponding to the expected pixel translation vector based on the expected pixel translation vector and the converted relation between the camera motion and the calibration object pixel motion, so as to respectively obtain a plurality of expected sampling poses corresponding to the camera.
9. The camera calibration method according to any one of claims 1 to 7, further comprising:
controlling the first driving mechanism to change the sampling position of the calibration object, wherein the changed sampling position is also positioned in the working distance of the camera;
the controlling the camera to shoot the calibration object located at the sampling position in a plurality of the expected sampling poses respectively to obtain the image sequence of the calibration object includes:
and controlling the camera to shoot the calibration object at the changed sampling position respectively at a plurality of expected sampling poses so as to obtain a corresponding image sequence of the calibration object.
10. A camera calibration system, comprising:
a calibration object;
the first driving mechanism is used for driving the calibration object to move;
the second driving mechanism is used for driving the camera to move;
the processor is respectively connected with the first driving mechanism and the second driving mechanism; and
a memory coupled to the processor for storing one or more programs;
when executed by the processor, the one or more programs cause the processor to implement the camera calibration method of any one of claims 1-9.
11. The camera calibration system of claim 10, wherein the first drive mechanism comprises a calibration object platform connected to the processor, the calibration object being disposed on the calibration object platform;
the processor is used for acquiring calibration condition parameters of the camera, wherein the calibration condition parameters comprise the working distance of the camera, and then acquiring the initial pose of the camera and the initial pose of the calibration object platform based on the calibration condition parameters;
the calibration object platform is used for being controlled by the processor to move to the initial pose of the calibration object platform, and the second driving mechanism is used for being controlled by the processor to drive the camera to move to the initial pose of the camera, so that the calibration object is in the visual field range of the camera.
12. The camera calibration system of claim 11, wherein the calibration object platform comprises a moving platform and a rotating platform respectively connected to the processor, the rotating platform being disposed on the moving platform, a plurality of different types of the calibration objects being disposed on the rotating platform;
the calibration condition parameters further comprise corresponding calibration object types, and the initial pose of the calibration object platform comprises an initial sampling position of the mobile platform and an initial pose of the rotating platform;
the processor is further used for acquiring a corresponding calibration object type based on the working distance of the camera, and then respectively acquiring an initial sampling position of the mobile platform and an initial pose of the rotating platform based on the working distance of the camera and the corresponding calibration object type;
the rotating platform is controlled by the processor to move to an initial pose of the rotating platform so that a calibration object of a type corresponding to the working distance of the camera is opposite to the camera;
the mobile platform is used for being controlled by the processor to drive the rotary platform and the calibration objects to jointly move to the initial sampling position of the calibration object platform.
13. A camera calibration method is characterized by comprising the following steps:
controlling the calibration object to move to any sampling position; wherein the sampling location is within a working distance of the camera;
controlling the cameras to respectively move to a plurality of expected sampling poses, so that the calibration objects are respectively at different positions in the visual field range of the cameras; the expected sampling poses are obtained through conversion relations between the camera motion and the calibration object pixel point motion respectively and correspond to different positions in the visual field range of the camera one by one respectively;
controlling the camera to shoot the calibration object at the sampling position at a plurality of expected sampling poses respectively to obtain an image sequence of the calibration object;
and acquiring a calibration result of the camera based on the image sequence.
14. A terminal device, comprising:
a processor; and
a memory coupled to the processor for storing one or more programs;
the one or more programs, when executed by the processor, cause the processor to perform the camera calibration method of claim 13.
15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a camera calibration method according to any one of claims 1-9, 13.
CN202210138141.4A 2022-02-15 2022-02-15 Camera calibration method, system, terminal device and computer readable storage medium Pending CN114663518A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210138141.4A CN114663518A (en) 2022-02-15 2022-02-15 Camera calibration method, system, terminal device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210138141.4A CN114663518A (en) 2022-02-15 2022-02-15 Camera calibration method, system, terminal device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114663518A 2022-06-24

Family

ID=82027199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210138141.4A Pending CN114663518A (en) 2022-02-15 2022-02-15 Camera calibration method, system, terminal device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114663518A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115170675A (en) * 2022-07-22 2022-10-11 信利光电股份有限公司 Method for expanding camera view
CN115170675B (en) * 2022-07-22 2023-10-03 信利光电股份有限公司 Method for expanding camera vision

Similar Documents

Publication Publication Date Title
US8619144B1 (en) Automatic camera calibration
US10571254B2 (en) Three-dimensional shape data and texture information generating system, imaging control program, and three-dimensional shape data and texture information generating method
US8662676B1 (en) Automatic projector calibration
CN110246185B (en) Image processing method, device, system, storage medium and calibration system
CN110136208A (en) A kind of the joint automatic calibration method and device of Visual Servoing System
CN108830906B (en) Automatic calibration method for camera parameters based on virtual binocular vision principle
CN111127565B (en) Calibration method, calibration system, and computer-readable storage medium
JP2002325199A (en) Electronic imaging device
WO2022126430A1 (en) Auxiliary focusing method, apparatus and system
CN113330487A (en) Parameter calibration method and device
CN114663518A (en) Camera calibration method, system, terminal device and computer readable storage medium
CN113298886A (en) Calibration method of projector
CN109087360A (en) A kind of scaling method that robot camera is joined outside
CN115760602A (en) Image correction method, laser cutting apparatus, and storage medium
CN113596276B (en) Scanning method and system for portable electronic equipment, electronic equipment and storage medium
CN107527323B (en) Calibration method and device for lens distortion
JPH07174538A (en) Image input camera
CN115239816A (en) Camera calibration method, system, electronic device and storage medium
JPH07174537A (en) Image input camera
CN113079318B (en) System and method for automatically focusing edge defects and computer storage medium
CN113840084A (en) Method for realizing control of panoramic tripod head based on PTZ (Pan/Tilt/zoom) return technology of dome camera
CN114612574A (en) Vehicle-mounted panoramic aerial view camera panoramic aerial view calibration and conversion splicing method based on unmanned aerial vehicle
CN115079727A (en) Method for adjusting cradle head of inspection robot
JPH11101640A (en) Camera and calibration method of camera
CN114066963A (en) Drawing construction method and device and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination