
CN118305790A - Robot vision positioning method, calibration method, device, equipment and medium - Google Patents

Robot vision positioning method, calibration method, device, equipment and medium

Info

Publication number
CN118305790A
CN118305790A
Authority
CN
China
Prior art keywords
mechanical arm
identification point
coordinate system
coordinate
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410448078.3A
Other languages
Chinese (zh)
Inventor
朱松
尹昌顺
皮富涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikrobot Co Ltd
Original Assignee
Hangzhou Hikrobot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikrobot Co Ltd filed Critical Hangzhou Hikrobot Co Ltd
Priority to CN202410448078.3A priority Critical patent/CN118305790A/en
Publication of CN118305790A publication Critical patent/CN118305790A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The embodiment of the application provides a robot vision positioning method, a calibration method, a device, equipment and a medium, wherein the second conversion relation is a relative pose conversion relation from the target workpiece to the identification point. Therefore, after the first pose is determined, the second pose of the target workpiece in the mechanical arm coordinate system can be determined according to the second conversion relation; the mechanical arm can then be controlled to move to that pose, completing workpiece grabbing and placing. In addition, the application can solve the relative pose conversion relation from the target workpiece to the identification point without requiring the mechanical arm to establish a TCP coordinate system. Because all physical coordinates are expressed in the mechanical arm base coordinate system, the teaching operations and the programming complexity of the mechanical arm are significantly reduced, debugging efficiency is effectively improved, and the grabbing and placing precision is higher.

Description

Robot vision positioning method, calibration method, device, equipment and medium
Technical Field
The application relates to the technical field of computer images, in particular to a robot vision positioning method, a calibration method, a device, equipment and a medium.
Background
The compound robot is widely applied in scenes such as warehouse logistics, automated factories, automated supermarkets and security inspection, and can flexibly realize automatic material handling, loading and unloading of articles, and material sorting.
At present, there are two technical schemes for realizing the positioning function: a deviation positioning method based on a two-dimensional plane, and a TCP (Tool Center Point) offset positioning method based on the mechanical arm.
The deviation positioning method based on a two-dimensional plane works as follows: using N images (N ≥ 3) and the corresponding physical coordinates of the mechanical arm, a hand-eye position correspondence is established for all acquisition points and the conversion matrix between the camera and the mechanical arm is solved. The offset of the mechanical arm is then calculated from this conversion matrix and the pixel coordinates of the workpiece to be grabbed in the camera field of view; controlling the mechanical arm to move by this offset accurately locates the workpiece placement position, thereby realizing accurate grabbing and placing.
The TCP offset positioning method based on the mechanical arm works as follows: on the basis of the two-dimensional-plane method, a marking area serving as a guiding reference is attached to the worktable surface, with the relative positional relation between the marking area and the workpiece kept unchanged. A characteristic point in the marking area is selected as a datum point, the mechanical arm establishes a TCP coordinate system at this datum point, the workpiece grab is taught in that coordinate system, and the position and posture of the mechanical arm at that moment are recorded. The offset of the datum point is then calculated from the conversion matrix and the pixel coordinates of the workpiece to be grabbed in the camera field of view; combining this with the taught datum position and the posture of the workpiece in the TCP coordinate system, the offset of the mechanical arm from the calibration origin to the workpiece placement position along the TCP coordinate system is solved, completing the grab-and-place task.
The deviation positioning method based on a two-dimensional plane requires that the workpiece to be grabbed and placed move only in a two-dimensional plane relative to the compound robot; when the pose of the workpiece changes three-dimensionally in space, the calculated placement position deviates from the actual target position, so accurate grabbing and placing cannot be realized. The TCP offset positioning method based on the mechanical arm converts the three-dimensional pose change of the workpiece into an offset of the mechanical arm in the TCP coordinate system of the datum point, and solves the workpiece placement position by correcting the TCP coordinate system.
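The hand-eye step described above can be sketched as a linear least-squares fit from N ≥ 3 pixel/arm correspondences. The snippet below is an illustration under a planar-affine assumption, not the patent's exact algorithm; all function names are ours.

```python
import numpy as np

def solve_affine(pixel_pts, arm_pts):
    """Least-squares 2D affine from N >= 3 pixel/arm point pairs.

    pixel_pts, arm_pts: (N, 2) arrays of corresponding coordinates.
    Returns a 2x3 matrix M such that arm ~= M @ [u, v, 1].
    """
    pixel_pts = np.asarray(pixel_pts, float)
    arm_pts = np.asarray(arm_pts, float)
    n = len(pixel_pts)
    A = np.hstack([pixel_pts, np.ones((n, 1))])  # (N, 3) homogeneous pixels
    # Solve A @ M.T = arm_pts in the least-squares sense.
    M_T, *_ = np.linalg.lstsq(A, arm_pts, rcond=None)
    return M_T.T  # 2x3

def pixel_to_arm(M, uv):
    """Map one pixel coordinate into the mechanical arm coordinate system."""
    u, v = uv
    return M @ np.array([u, v, 1.0])
```

With real data one would collect more than three points and check the residual of the fit before trusting the conversion matrix.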
Disclosure of Invention
The embodiment of the application aims to provide a robot vision positioning method, a calibration method, a device, equipment and a medium, so as to improve the workpiece grabbing and placing precision. The specific technical scheme is as follows:
The embodiment of the application provides a robot vision positioning method, which comprises the following steps:
under the condition that the robot stays at a first target position, acquiring a first image comprising an identification point, wherein the first image is acquired by a camera arranged on the mechanical arm;
Identifying pixel coordinates of the identification point in the first image to obtain running pixel coordinates and a first angle of the identification point; the first angle is an angle of the identification point relative to a preset reference plane;
Determining a first pose of the identification point under a mechanical arm coordinate system of the mechanical arm according to a first conversion relation, a reference pixel coordinate, a reference angle of the identification point and a first mechanical arm coordinate which are calibrated in advance;
The first conversion relation is a conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system; the reference pixel coordinates are pixel coordinates of the identification point in the second image; the reference angle of the identification point is the angle of the identification point in the second image relative to the preset reference plane; the second image is acquired by the camera when the robot stays at a second target position; the first mechanical arm coordinate is the coordinate of the mechanical arm under the robot coordinate system when the robot stays at the second target position and the end teaching tool of the mechanical arm vertically contacts the identification point; the coordinates of the mechanical arm in the robot coordinate system when the camera collects the first image are the same as the coordinates of the mechanical arm in the robot coordinate system when the camera collects the second image;
And determining a second pose of the target workpiece under the mechanical arm coordinate system according to the first pose and a second pre-calibrated conversion relation, wherein the second conversion relation is a relative pose conversion relation from the target workpiece to the identification point.
In an alternative embodiment, the method further comprises:
Acquiring, when the robot stays at the second target position, mechanical arm coordinates of the mechanical arm of the robot at least three different positions, and sample images acquired by a camera mounted on the mechanical arm when the mechanical arm is at each of the at least three mechanical arm coordinates;
According to each sample image and each mechanical arm coordinate, a first conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system of the mechanical arm is established;
Acquiring a second image which is acquired by the camera and comprises an identification point, and identifying the coordinate of the identification point in the second image to obtain a reference pixel coordinate and the angle of the identification point in the second image relative to a preset reference plane to obtain a reference angle of the identification point;
Acquiring a first mechanical arm coordinate of the mechanical arm under the robot coordinate system when the end teaching tool of the mechanical arm vertically contacts the identification point, and a second mechanical arm coordinate of the mechanical arm under the robot coordinate system when the mechanical arm grabs and releases a target workpiece;
and establishing a second conversion relation from the target workpiece to the identification point according to the first mechanical arm coordinates and the second mechanical arm coordinates.
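As a rough sketch of the second conversion relation, assuming planar poses (x, y, θ) in the mechanical arm base frame, the relative pose from the identification point contact pose to the grab pose can be stored as an SE(2) transform and replayed against a newly located marker; the function names and the SE(2) simplification are our assumptions, not taken from the patent:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 3x3 matrix for a planar pose (theta in radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def relative_pose(marker_pose, grab_pose):
    """Second conversion relation: T such that T_grab = T_marker @ T."""
    return np.linalg.inv(se2(*marker_pose)) @ se2(*grab_pose)

def predict_grab(marker_pose, T_rel):
    """Re-apply the stored relation to a newly located marker pose."""
    T = se2(*marker_pose) @ T_rel
    return T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])
```

Composing the transforms (rather than adding raw coordinate differences) keeps the translational part of the offset rotating together with the marker when the marker's orientation changes.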
In an optional implementation manner, the determining, according to the pre-calibrated first conversion relation, the reference pixel coordinate, the reference angle of the identification point, and the first mechanical arm coordinate, the first pose of the identification point under the mechanical arm coordinate system of the mechanical arm includes:
Mapping the reference pixel coordinate to a mechanical arm coordinate system based on a first conversion relation calibrated in advance, and calculating a reference physical coordinate of the identification point in the mechanical arm coordinate system;
Mapping the operation pixel coordinate to the mechanical arm coordinate system based on a first conversion relation calibrated in advance, and calculating an operation physical coordinate of the identification point in the mechanical arm coordinate system;
Calculating the difference value between the running physical coordinates and the reference physical coordinates to obtain the coordinate variation of the identification point;
Determining the angle change amount of the identification point based on the angle of the identification point in the second image relative to a preset reference plane and the angle of the identification point in the first image relative to the preset reference plane;
and summing the first mechanical arm coordinates and the first angle of the identification point with the coordinate variation quantity and the angle variation quantity of the identification point respectively to obtain a first pose of the identification point under the mechanical arm coordinate system of the mechanical arm.
The embodiment of the application also provides a robot vision calibration method, which comprises the following steps:
acquiring, when the robot stays at a second target position, mechanical arm coordinates of the mechanical arm of the robot at least three different positions, and sample images acquired by a camera mounted on the mechanical arm when the mechanical arm is at each of the at least three mechanical arm coordinates;
According to each sample image and each mechanical arm coordinate, a first conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system of the mechanical arm is established;
Acquiring a second image which is acquired by the camera and comprises an identification point, and identifying the coordinate of the identification point in the second image to obtain a reference pixel coordinate and the angle of the identification point in the second image relative to a preset reference plane to obtain a reference angle of the identification point;
Acquiring a first mechanical arm coordinate of the mechanical arm under the robot coordinate system when the end teaching tool of the mechanical arm vertically contacts the identification point, and a second mechanical arm coordinate of the mechanical arm under the robot coordinate system when the mechanical arm grabs and releases a target workpiece;
establishing a second conversion relation from the target workpiece to the identification point according to the first mechanical arm coordinates and the second mechanical arm coordinates;
The robot vision calibration result comprises: the first conversion relation, the reference pixel coordinates, the reference angle of the identification point and the second conversion relation.
The embodiment of the application also provides a robot vision positioning device, which comprises:
The first acquisition module is used for acquiring a first image including an identification point, which is acquired by a camera arranged on the mechanical arm under the condition that the robot stays at a first target position;
The first identification module is used for identifying pixel coordinates of the identification point in the first image to obtain running pixel coordinates and a first angle of the identification point; the first angle is an angle of the identification point relative to a preset reference plane;
the first determining module is used for determining a first pose of the identification point under a mechanical arm coordinate system of the mechanical arm according to a first conversion relation calibrated in advance, a reference pixel coordinate, a reference angle of the identification point and a first mechanical arm coordinate;
The first conversion relation is a conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system; the reference pixel coordinates are pixel coordinates of the identification point in the second image; the reference angle of the identification point is the angle of the identification point in the second image relative to the preset reference plane; the second image is acquired by the camera when the robot stays at a second target position; the first mechanical arm coordinate is the coordinate of the mechanical arm under the robot coordinate system when the robot stays at the second target position and the end teaching tool of the mechanical arm vertically contacts the identification point; the coordinates of the mechanical arm in the robot coordinate system when the camera collects the first image are the same as the coordinates of the mechanical arm in the robot coordinate system when the camera collects the second image;
And the second determining module is used for determining a second pose of the target workpiece under the mechanical arm coordinate system according to the first pose and a pre-calibrated second conversion relation, wherein the second conversion relation is a relative pose conversion relation from the target workpiece to the identification point.
In an alternative embodiment, the apparatus further comprises:
The second acquisition module is used for acquiring the coordinates of the mechanical arm of the robot at least three different positions when the robot stays at the second target position and the sample images acquired by cameras arranged on the mechanical arm when the mechanical arm is at the coordinates of the mechanical arm at least three different positions;
The first establishing module is used for establishing a first conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system of the mechanical arm according to each sample image and each mechanical arm coordinate;
the third acquisition module is used for acquiring a second image which is acquired by the camera and comprises an identification point, identifying the coordinate of the identification point in the second image to obtain a reference pixel coordinate and the angle of the identification point in the second image relative to a preset reference plane, and obtaining the reference angle of the identification point;
A fourth obtaining module, configured to obtain a first mechanical arm coordinate of the mechanical arm in the robot coordinate system when the end teaching tool of the mechanical arm vertically contacts the identification point, and a second mechanical arm coordinate of the mechanical arm in the robot coordinate system when the mechanical arm grabs and releases the target workpiece;
and the second establishing module is used for establishing a second conversion relation from the target workpiece to the identification point according to the first mechanical arm coordinates and the second mechanical arm coordinates.
In an alternative embodiment, the first determining module is specifically configured to:
Mapping the reference pixel coordinate to a mechanical arm coordinate system based on a first conversion relation calibrated in advance, and calculating a reference physical coordinate of the identification point in the mechanical arm coordinate system;
Mapping the operation pixel coordinate to the mechanical arm coordinate system based on a first conversion relation calibrated in advance, and calculating an operation physical coordinate of the identification point in the mechanical arm coordinate system;
Calculating the difference value between the running physical coordinates and the reference physical coordinates to obtain the coordinate variation of the identification point;
Determining the angle change amount of the identification point based on the angle of the identification point in the second image relative to a preset reference plane and the angle of the identification point in the first image relative to the preset reference plane;
and summing the first mechanical arm coordinates and the first angle of the identification point with the coordinate variation quantity and the angle variation quantity of the identification point respectively to obtain a first pose of the identification point under the mechanical arm coordinate system of the mechanical arm.
The embodiment of the application also provides a robot vision calibration device, which comprises:
the second acquisition module is used for acquiring the coordinates of the mechanical arm of the robot at least three different positions when the robot stays at a second target position and the sample images acquired by cameras arranged on the mechanical arm when the mechanical arm is at the coordinates of the mechanical arm at least three different positions;
The first establishing module is used for establishing a first conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system of the mechanical arm according to each sample image and each mechanical arm coordinate;
the third acquisition module is used for acquiring a second image which is acquired by the camera and comprises an identification point, identifying the coordinate of the identification point in the second image to obtain a reference pixel coordinate and the angle of the identification point in the second image relative to a preset reference plane, and obtaining the reference angle of the identification point;
A fourth obtaining module, configured to obtain a first mechanical arm coordinate of the mechanical arm in the robot coordinate system when the end teaching tool of the mechanical arm vertically contacts the identification point, and a second mechanical arm coordinate of the mechanical arm in the robot coordinate system when the mechanical arm grabs and releases the target workpiece;
the second establishing module is used for establishing a second conversion relation from the target workpiece to the identification point according to the first mechanical arm coordinates and the second mechanical arm coordinates;
The robot vision calibration result comprises: the first conversion relation, the reference pixel coordinates, the reference angle of the identification point and the second conversion relation.
The embodiment of the application also provides electronic equipment, which comprises: a processor, a memory;
a memory for storing a computer program;
and the processor is used for realizing any one of the robot vision positioning methods when executing the program stored in the memory.
The embodiment of the application also provides electronic equipment, which comprises: a processor, a memory;
a memory for storing a computer program;
And the processor is used for realizing any one of the robot vision calibration methods when executing the program stored in the memory.
The embodiment of the application also provides a computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and the computer program realizes the robot vision positioning method when being executed by a processor.
The embodiment of the application also provides a computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and the computer program realizes the robot vision calibration method when being executed by a processor.
The embodiments of the present application also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform any of the above-described robot vision positioning methods.
The embodiment of the application also provides a computer program product containing instructions, which when run on a computer, cause the computer to execute the robot vision calibration method.
The embodiment of the application has the beneficial effects that:
The embodiment of the application provides a robot vision positioning method, a calibration method, a device, equipment and a medium, wherein the second conversion relation is a relative pose conversion relation from the target workpiece to the identification point. Therefore, after the first pose is determined, the second pose of the target workpiece in the mechanical arm coordinate system can be determined according to the second conversion relation; the mechanical arm can then be controlled to move to that pose, completing workpiece grabbing and placing. In addition, the application can solve the relative pose conversion relation from the target workpiece to the identification point without requiring the mechanical arm to establish a TCP coordinate system. Because all physical coordinates are expressed in the mechanical arm base coordinate system, the teaching operations and the programming complexity of the mechanical arm are significantly reduced, debugging efficiency is effectively improved, and the grabbing and placing precision is higher.
Of course, it is not necessary for any one product or method of practicing the application to achieve all of the advantages set forth above at the same time.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the application; those skilled in the art may obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a robot vision positioning method according to an embodiment of the present application;
fig. 2 is a first schematic diagram of a robot arm motion according to an embodiment of the present application;
Fig. 3 is a second schematic diagram of a robot arm motion according to an embodiment of the present application;
Fig. 4 is a third schematic diagram of a robotic arm motion provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of a relative pose from a target workpiece to a marking point in a marking area according to an embodiment of the present application;
Fig. 6 is a schematic flow chart of a robot vision calibration method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a robot vision positioning device according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a robot vision calibration device according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
First, terms involved in the embodiments of the present application will be explained:
Compound robot: a novel robot composed of an AGV/AMR/RGV/IGV (automatic navigation robot), a mechanical arm and machine vision. Possessing "hands, feet, eyes and a brain", it is widely applied in warehouse logistics, automated factories, automated supermarkets, security inspection and the like, and can flexibly realize automatic material handling, loading and unloading of articles, and material sorting.
Feeding and discharging: conveying the workpiece to be processed to the processing position on the machine table, and taking the processed workpiece out of the processing position.
Pose estimation: finding corresponding points between the real world and the sensor projection, and then estimating the position and posture of the sensor with an appropriate method according to the type of the point pairs (2D-2D, 2D-3D, 3D-3D).
In order to achieve improvement of workpiece pick-and-place precision, the embodiment of the application provides a robot vision positioning method, a device, electronic equipment, a computer readable storage medium and a computer program product containing instructions.
In the embodiment of the application, the scheme can be applied to any electronic equipment capable of providing the visual positioning of the robot, such as a computer, a mobile phone, a tablet, a console and the like. The robot vision positioning method provided by the embodiment of the application can be realized by at least one of software, a hardware circuit and a logic circuit arranged in the electronic equipment.
As shown in fig. 1, fig. 1 is a schematic flow chart of a robot vision positioning method according to an embodiment of the present application, where the method includes:
s110, under the condition that the robot stays at the first target position, acquiring a first image including the identification point, which is acquired by a camera installed on the mechanical arm.
The robot is used for grabbing and placing a workpiece on the workbench, wherein the workpiece is placed on the workbench, the workbench is provided with a marking area, and the marking area comprises marking points. The identification point may be a point in the identification area or may be an area in the identification area.
The camera is mounted on the mechanical arm of the robot and is used for collecting images. It can be any camera capable of collecting images, for example a 2D industrial camera or a smart camera, or a stereoscopic camera. The end of the mechanical arm is provided with a clamp for grabbing and placing a workpiece. When the camera is mounted on the mechanical arm, the mechanical arm carries the camera along as it moves. The camera can also be fixedly installed (not mounted on the mechanical arm), in which case the mechanical arm of the robot is controlled to carry a calibration reference object (a calibration plate, a material or the like) as it moves.
In actual operation, the robot is controlled to move to the first target position and stay there; the mechanical arm is then controlled to move to the taught photographing position, and the camera acquires an image, yielding the first image, which includes the identification point. The first target position may be a preset stopping position at which a first image including the identification point can be photographed. The taught photographing position is a preset arm position at which the camera can photograph the identification area to obtain an image including the identification point.
S120, recognizing pixel coordinates of the identification point in the first image to obtain the running pixel coordinates and a first angle of the identification point; the first angle is the angle of the identification point relative to a preset reference plane.
After the first image is acquired, it can be analyzed to identify the pixel coordinates of the identification point, yielding the running pixel coordinates, as well as the angle of the identification point relative to a preset reference plane. The preset reference plane may be a plane passing through an edge of the first image and parallel to the identification area; when the identification point is a single point, the preset reference plane may also be a reference line, for example the horizontal center line or the vertical center line of the first image. This may be set based on the actual situation and is not limited here. For the method of identifying the pixel coordinates of the identification point in the first image, reference may be made to methods in the related art for identifying the pixel coordinates of a target object in an image, which are not described here.
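One illustrative way to obtain the running pixel coordinates and the angle, assuming the identification point has already been segmented into a binary mask, is via image moments; this is a sketch of our own, not necessarily the recognition method intended by the patent:

```python
import numpy as np

def marker_center_and_angle(mask):
    """Centroid and principal-axis angle of a binary marker mask.

    mask: 2D array, nonzero where the marker is.
    Returns ((u, v), angle): u is the column (x) pixel coordinate, v the
    row (y) coordinate, and angle is in radians relative to the image's
    horizontal axis (standing in for the preset reference plane/line).
    """
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    # Central second-order moments give the dominant orientation.
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cx, cy), angle
```

In practice the segmentation step (thresholding, template or fiducial detection) would come first; the moments then summarize the blob regardless of its exact shape.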
S130, determining a first pose of the identification point under a mechanical arm coordinate system of the mechanical arm according to a first conversion relation, a reference pixel coordinate, a reference angle of the identification point and a first mechanical arm coordinate which are calibrated in advance.
The first conversion relationship is a conversion relationship between the image coordinate system of the camera and the mechanical arm coordinate system, and can be predetermined. The reference pixel coordinate is the pixel coordinate of the identification point in a second image acquired by the camera when the robot stays at the second target position and the mechanical arm carries the camera to the teaching photographing position. The reference angle of the identification point is the angle of the identification point in the second image relative to the preset reference plane. The first mechanical arm coordinate is the coordinate of the mechanical arm in the robot coordinate system when the robot stays at the second target position and the tail end teaching tool of the mechanical arm vertically contacts the identification point. The coordinates of the mechanical arm in the robot coordinate system when the camera acquires the first image are the same as the coordinates of the mechanical arm in the robot coordinate system when the camera acquires the second image; that is, both when the camera acquires the first image and when it acquires the second image, the mechanical arm is located at the teaching photographing position, and the teaching photographing position defines the position of the mechanical arm in the robot coordinate system. For clarity of solution and clarity of layout, a detailed description is provided below in connection with another embodiment.
After the operation pixel coordinate is determined, the reference pixel coordinate is mapped to the mechanical arm coordinate system based on the first conversion relation calibrated in advance, and the reference physical coordinate of the identification point in the mechanical arm coordinate system is calculated; the operation pixel coordinate is likewise mapped to the mechanical arm coordinate system based on the first conversion relation, and the operation physical coordinate of the identification point in the mechanical arm coordinate system is calculated. The difference between the operation physical coordinate and the reference physical coordinate is then calculated to obtain the coordinate variation of the identification point, and the angle variation of the identification point is determined based on the angle of the identification point in the second image relative to the preset reference plane and the angle of the identification point in the first image relative to the preset reference plane. Finally, the first mechanical arm coordinate and the first angle of the identification point are summed with the coordinate variation and the angle variation of the identification point respectively, to obtain the first pose of the identification point under the mechanical arm coordinate system of the mechanical arm.
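The mapping-and-difference step above can be sketched numerically. The following is a non-limiting illustration assuming the first conversion relation is a planar affine mapping from pixels to millimeters; all numerical values, the matrix M1, and the variable names are illustrative assumptions, not values prescribed by the application.

```python
def apply_M1(M1, uv):
    # map a pixel coordinate (u, v) into the mechanical arm coordinate system
    # via the affine first conversion relation M1 (2x3 matrix)
    u, v = uv
    return (M1[0][0] * u + M1[0][1] * v + M1[0][2],
            M1[1][0] * u + M1[1][1] * v + M1[1][2])

# hypothetical calibrated affine (pixel -> mm) and pixel coordinates, for illustration
M1 = [[0.05, 0.0, 10.0], [0.0, -0.05, 250.0]]
run_px, ref_px = (640.0, 480.0), (600.0, 500.0)

run_xy = apply_M1(M1, run_px)   # operation physical coordinate of the Mark point
ref_xy = apply_M1(M1, ref_px)   # reference physical coordinate of the Mark point
delta = (run_xy[0] - ref_xy[0], run_xy[1] - ref_xy[1])   # coordinate variation

first_arm_xy = (300.0, 120.0)   # first mechanical arm coordinate (X, Y), assumed
pose_xy = (first_arm_xy[0] + delta[0], first_arm_xy[1] + delta[1])
```

With these assumed values, the coordinate variation is (2, 1) mm and the summed position is (302, 121) mm; the angle variation would be handled analogously by subtracting the reference angle from the first angle.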
And S140, determining the second pose of the target workpiece under the mechanical arm coordinate system according to the first pose and a second conversion relation calibrated in advance.
The second conversion relation is the relative pose conversion relation from the target workpiece to the identification point. Therefore, after the first pose is determined, the second pose of the target workpiece under the mechanical arm coordinate system can be determined according to the second conversion relation; the mechanical arm can then be controlled to move to this pose, so as to complete the grabbing and placing of the workpiece. In addition, the application can solve the fixed relative pose conversion relation from the target workpiece to the identification point without the mechanical arm establishing a TCP coordinate system. Because all physical coordinates are in the mechanical arm base coordinate system, the teaching operation and programming complexity of the mechanical arm can be obviously reduced, the debugging efficiency is effectively improved, and the grabbing and placing precision is higher.
The 3D grabbing and placing pose of the target workpiece is solved by using a 2D industrial camera or an intelligent camera to photograph the identification area, without requiring a stereoscopic camera to photograph the workpiece, so the scheme cost can be obviously reduced, and the limitation that a stereoscopic camera has difficulty identifying transparent, reflective or mirror objects can be broken through. The robot control system can automatically communicate with the robot to control the robot to move, without requiring a mechanical arm demonstrator, so the operation is simpler, the data are more accurate, and human input errors can be avoided.
In an alternative embodiment, the method further comprises:
acquiring sample images acquired by a camera arranged on the mechanical arm when the robot stays at the second target position and the mechanical arm coordinates of the mechanical arm of the robot are at least three different positions;
Establishing a first conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system of the mechanical arm according to each sample image and each mechanical arm coordinate;
Acquiring a second image which is acquired by the camera and comprises an identification point, and identifying coordinates of the identification point in the second image to obtain a reference pixel coordinate and an angle of the identification point in the second image relative to a preset reference plane to obtain a reference angle of the identification point;
Acquiring a first mechanical arm coordinate of the mechanical arm under the robot coordinate system when the tail end teaching tool of the mechanical arm vertically contacts the identification point, and a second mechanical arm coordinate of the mechanical arm under the robot coordinate system when the mechanical arm grabs and puts the target workpiece;
and establishing a second conversion relation from the target workpiece to the identification point according to the first mechanical arm coordinates and the second mechanical arm coordinates.
The first target position and the second target position may be the same or different. A camera is installed on a mechanical arm of the robot and used for collecting images, and a clamp is installed at the tail end of the mechanical arm and used for grabbing and placing a workpiece.
When the mechanical arm moves, the mechanical arm can carry the camera to move together.
As shown in fig. 2, Mark is a Mark point of the identification area, and Obj is the target workpiece. In the case where the camera is installed on the mechanical arm, when the robot stays at the second target position, the mechanical arm of the robot is controlled to carry the camera and move at least 3 times; after the mechanical arm moves in place (i.e., stops) each time, the camera on the mechanical arm acquires a sample image, and the sample image acquired by the camera and the coordinates of the mechanical arm at that moment are recorded. Each sample image acquired by the camera comprises calibration feature points.
Alternatively, in the case where the camera is fixedly installed (not installed on the mechanical arm), the mechanical arm of the robot is controlled to carry a calibration reference object (a calibration plate, a material, or the like) and move at least 3 times; after the mechanical arm moves in place (i.e., stops) each time, the camera acquires a sample image, and the sample image acquired by the camera and the coordinates of the mechanical arm at that moment are recorded. Each sample image acquired by the camera comprises calibration feature points.
The feature point pixel coordinates of the sample calibration object can be extracted from the sample image, wherein the method for extracting the feature point pixel coordinates of the sample calibration object from the sample image can refer to the extraction method of the pixel coordinates in the related art, and will not be described herein.
According to the characteristic point pixel coordinates and the mechanical arm coordinates of the plurality of groups of sample calibration objects, a first conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system of the mechanical arm can be established.
The mechanical arm of the robot is controlled to move at least 3 times, the characteristic point pixel coordinates and the mechanical arm coordinates of a plurality of groups of sample calibration objects are collected, the first conversion relation between the image coordinate system of the camera and the mechanical arm coordinate system of the mechanical arm is determined based on the characteristic point pixel coordinates and the mechanical arm coordinates of the plurality of groups of sample calibration objects, and the accuracy of the determined first conversion relation can be improved.
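The application does not prescribe a specific numerical procedure for establishing the first conversion relation; the following is a minimal sketch assuming a planar affine model fitted from exactly three pixel-to-arm correspondences (function names and numerical values are illustrative assumptions).

```python
def solve3(a, b):
    # solve a 3x3 linear system a @ x = b by Gaussian elimination with partial pivoting
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, 3):
            f = m[r][i] / m[i][i]
            for c in range(i, 4):
                m[r][c] -= f * m[i][c]
    x = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        x[i] = (m[i][3] - sum(m[i][c] * x[c] for c in range(i + 1, 3))) / m[i][i]
    return x

def fit_affine(pixel_pts, arm_pts):
    # first conversion relation M1 as an affine map: [x; y] = M1 @ [u; v; 1],
    # solved exactly from three (pixel, arm) correspondences
    A = [[u, v, 1.0] for (u, v) in pixel_pts]
    row_x = solve3(A, [x for (x, _) in arm_pts])
    row_y = solve3(A, [y for (_, y) in arm_pts])
    return [row_x, row_y]

def apply_affine(M1, uv):
    u, v = uv
    return (M1[0][0] * u + M1[0][1] * v + M1[0][2],
            M1[1][0] * u + M1[1][1] * v + M1[1][2])
```

In practice, more than three positions can be collected and a least-squares fit (or a full homography, as discussed later in the application) used instead, which improves the accuracy of the determined first conversion relation.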
The mechanical arm is controlled to move to a teaching shooting position, the camera acquires a second image, the second image comprises identification points of an identification area, pixel coordinates of the identification points of the identification area can be extracted from the second image to obtain reference pixel coordinates, and angles of the identification points in the second image relative to a preset reference plane are obtained to obtain reference angles of the identification points.
The solving step and method of the second conversion relation from the target workpiece to the identification point are as follows:
As shown in fig. 3, the mechanical arm is controlled to move to the identification area, so that the tail end teaching tool makes point contact with the identification point (Mark point) of the identification area perpendicular to the surface of the workbench, and the physical coordinates of the mechanical arm under the robot coordinate system at this moment are recorded to obtain the first mechanical arm coordinate P MWB.
As shown in fig. 4, the mechanical arm is then controlled to move to the position of the target workpiece to be gripped and placed (the workpiece in fig. 4), so that the workpiece can be accurately gripped and placed by the end clamp of the mechanical arm, and the physical coordinates of the mechanical arm under the robot coordinate system at the moment are recorded, so as to obtain the second mechanical arm coordinate P OWB when the mechanical arm grips and places the target workpiece.
The pose of the mechanical arm end effector is typically described using the Euler angle method, i.e., six quantities are used to represent the position and posture of a spatial point: XYZ and the RZ-RY-RX Euler angles. Because describing the relative pose relationship of two points in space using the Euler angle method may encounter the gimbal lock problem, a 4×4 homogeneous coordinate transformation matrix, also known as a pose matrix, is typically used instead to describe the relative pose relationship of two points in space. Here X, Y, Z respectively represent the positions on the three coordinate axes X, Y, Z, and RX, RY, RZ respectively represent the angles of rotation around the three coordinate axes X, Y, Z of the original coordinate system.
The pose matrix is composed of a rotation matrix and a displacement vector, wherein the rotation matrix reflects the pose of the mechanical arm, and the displacement vector reflects the position of the mechanical arm. The rotation matrix is a3×3 orthogonal matrix, and represents a rotation from a coordinate system of the robot arm end reference point to a reference coordinate system. The displacement vector is a relative position vector at the end of the robot arm, typically described by XYZ three components.
Therefore, the reference pose P MWB of the Mark point and the pose P OWB of the target workpiece need to be converted from the Euler angle description to the pose matrix description.
The conversion formula from Euler angle to rotation matrix is as follows:

R = R Z (γ) × R Y (β) × R X (α)

wherein (α, β, γ) is the Euler angle; R Z (γ) represents rotation by γ around the Z axis, R Y (β) represents rotation by β around the Y axis, and R X (α) represents rotation by α around the X axis.
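The Euler-to-rotation-matrix conversion above can be sketched as follows; this is a minimal illustration assuming the Z-Y-X rotation sequence of the application, and the helper names are not from the application itself.

```python
import math

def Rz(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def Ry(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def Rx(t):
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zyx_to_matrix(alpha, beta, gamma):
    # R = Rz(gamma) @ Ry(beta) @ Rx(alpha), the Z-Y-X (intrinsic) convention
    return matmul(matmul(Rz(gamma), Ry(beta)), Rx(alpha))
```

For example, euler_zyx_to_matrix(0, 0, π/2) is a pure 90° rotation about Z, mapping the X axis onto the Y axis.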
Base denotes the robot coordinate system.
BasePM represents the physical coordinates of the identification point (Mark point) in the robot coordinate system.
BasePO represents the physical coordinates of the target workpiece in the robot coordinate system.
MarkPO represents the coordinates of the target workpiece in the coordinate system based on the Mark point, that is, the relative pose of the target workpiece with respect to the Mark point; a schematic diagram is shown in fig. 5, where X', Y', Z' represent the three coordinate axes of this coordinate system.
MarkTBase is an intermediate variable representing the transformation matrix from the robot coordinate system to the coordinate system based on the Mark point.
Then there are the following formulas:

PMWB = BasePM

POWB = BasePO
Note that Euler angles do not always use an intrinsic rotation mode or the Z-Y-X rotation sequence. Unless otherwise specified, the Euler angle type of the mechanical arm defaults to the Z-Y-X rotation sequence in the intrinsic (rotating-axes) mode. Of course, the extrinsic (fixed-axes) X-Y-Z rotation sequence, or other Euler angle types, are also possible.
The reference pose P MWB of the Mark point under the mechanical arm base coordinate system and the pose P OWB of the target workpiece under the mechanical arm base coordinate system are respectively converted into pose matrices:

PMWB = [M 11, M 12, M 13, X M; M 21, M 22, M 23, Y M; M 31, M 32, M 33, Z M; 0, 0, 0, 1]

POWB = [M' 11, M' 12, M' 13, X O; M' 21, M' 22, M' 23, Y O; M' 31, M' 32, M' 33, Z O; 0, 0, 0, 1]
Wherein M ij represents the value of the ith row and the jth column in the pose matrix of the reference pose of the Mark point under the mechanical arm base coordinate system, X M represents the coordinate value of the Mark point on the X axis under the mechanical arm base coordinate system, Y M represents the coordinate value of the Mark point on the Y axis under the mechanical arm base coordinate system, and Z M represents the coordinate value of the Mark point on the Z axis under the mechanical arm base coordinate system.
M' ij represents the value of the ith row and jth column of the pose matrix of the target workpiece in the pose of the mechanical arm base coordinate system. X O represents the coordinate value of the target workpiece on the X axis in the robot arm base coordinate system, Y O represents the coordinate value of the target workpiece on the Y axis in the robot arm base coordinate system, and Z O represents the coordinate value of the target workpiece on the Z axis in the robot arm base coordinate system.
M2 = MarkPO = MarkTBase × BasePO = PMWB^(-1) × POWB

Wherein M2 is the second conversion relation from the target workpiece to the identification point.
According to the first mechanical arm coordinate and the second mechanical arm coordinate, the second conversion relation from the target workpiece to the identification point can thus be established. The parameter calibration of the robot can then be completed; the calibration includes the first conversion relation, the second conversion relation and the reference pixel coordinate. Based on these parameters, the grabbing and placing pose of the target workpiece can be calculated while the robot is in operation, and the movement of the mechanical arm is then controlled based on that pose to complete the grabbing and placing of the target workpiece.
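The composition M2 = PMWB^(-1) × POWB above can be sketched with 4×4 homogeneous pose matrices. The taught coordinate values below are illustrative assumptions; the helper functions are not part of the application.

```python
import math

def euler_zyx_to_R(rx, ry, rz):
    # rotation block for the Z-Y-X (intrinsic) convention: Rz(rz) @ Ry(ry) @ Rx(rx), expanded
    ca, sa = math.cos(rx), math.sin(rx)
    cb, sb = math.cos(ry), math.sin(ry)
    cg, sg = math.cos(rz), math.sin(rz)
    return [[cg * cb, cg * sb * sa - sg * ca, cg * sb * ca + sg * sa],
            [sg * cb, sg * sb * sa + cg * ca, sg * sb * ca - cg * sa],
            [-sb, cb * sa, cb * ca]]

def pose_matrix(x, y, z, rx, ry, rz):
    # 4x4 homogeneous pose matrix: rotation block plus displacement vector
    R = euler_zyx_to_R(rx, ry, rz)
    return [R[0] + [x], R[1] + [y], R[2] + [z], [0.0, 0.0, 0.0, 1.0]]

def inv_pose(T):
    # inverse of a homogeneous pose: transpose the rotation, rotate-and-negate the translation
    R = [[T[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(R[i][j] * T[j][3] for j in range(3)) for i in range(3)]
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

def matmul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# hypothetical taught poses (X, Y, Z, RX, RY, RZ), for illustration only
P_MWB = pose_matrix(300.0, 120.0, 50.0, 0.0, 0.0, 0.4)
P_OWB = pose_matrix(320.0, 110.0, 45.0, 0.0, 0.0, 0.9)
M2 = matmul4(inv_pose(P_MWB), P_OWB)   # M2 = PMWB^(-1) x POWB
```

Because M2 is fixed once calibrated, re-multiplying it onto any later Mark pose recovers the corresponding workpiece pose, which is exactly how it is used during operation.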
According to the application, the fixed relative pose conversion relation from the target workpiece to the identification point can be solved without the mechanical arm establishing a TCP (tool center point) coordinate system. Moreover, since the identification area is associated with the target workpiece, when the pose of the identification area changes only in a two-dimensional plane, the conversion relation between the image coordinate system of the camera and the mechanical arm coordinate system can be simplified into a homography matrix calibration, thereby reducing the operation complexity of the calibration process and improving the calibration efficiency. Meanwhile, the pose of the workpiece is not limited to changes in a two-dimensional plane, but can change arbitrarily in three-dimensional space, so the method is applicable to a wider range of scenes.
In addition, because all physical coordinates are in the mechanical arm base coordinate system, the teaching operation and programming complexity of the mechanical arm can be obviously reduced, the debugging efficiency is effectively improved, and the grabbing and placing precision is higher.
The 3D grabbing and placing pose of the target workpiece is solved by using a 2D industrial camera or an intelligent camera to photograph the identification area, without requiring a stereoscopic camera to photograph the workpiece, so the scheme cost can be obviously reduced, and the limitation that a stereoscopic camera has difficulty identifying transparent, reflective or mirror objects can be broken through. The robot control system can automatically communicate with the robot to control the robot to move, without requiring a mechanical arm demonstrator, so the operation is simpler, the data are more accurate, and human input errors can be avoided.
In an optional embodiment, the determining, according to the current pose and the pre-calibrated first conversion relation and reference pixel coordinate, the first pose of the identification point under the mechanical arm coordinate system of the mechanical arm includes:
Mapping the reference pixel coordinate to a mechanical arm coordinate system based on a first conversion relation calibrated in advance, and calculating a reference physical coordinate of the identification point in the mechanical arm coordinate system;
Mapping the operation pixel coordinate to the mechanical arm coordinate system based on a first conversion relation calibrated in advance, and calculating an operation physical coordinate of the identification point in the mechanical arm coordinate system;
Calculating the difference value between the running physical coordinates and the reference physical coordinates to obtain the coordinate variation of the identification point;
Determining the angle change amount of the identification point based on the angle of the identification point in the second image relative to a preset reference plane and the angle of the identification point in the first image relative to the preset reference plane;
and summing the first mechanical arm coordinates and the first angle of the identification point with the coordinate variation quantity and the angle variation quantity of the identification point respectively to obtain a first pose of the identification point under the mechanical arm coordinate system of the mechanical arm.
After the operation pixel coordinate is determined, the reference pixel coordinate is mapped to the mechanical arm coordinate system based on the first conversion relation calibrated in advance, and the reference physical coordinate of the identification point in the mechanical arm coordinate system is calculated; the operation pixel coordinate is likewise mapped to the mechanical arm coordinate system based on the first conversion relation, and the operation physical coordinate of the identification point in the mechanical arm coordinate system is calculated. The difference between the operation physical coordinate and the reference physical coordinate is then calculated to obtain the coordinate variation of the identification point, and the angle variation of the identification point is determined based on the angle of the identification point in the second image relative to the preset reference plane and the angle of the identification point in the first image relative to the preset reference plane. The coordinate variation of the identification point is summed with the first mechanical arm coordinate to obtain the coordinate position of the identification point under the mechanical arm coordinate system of the mechanical arm, and the first angle is summed with the angle variation of the identification point to obtain the posture of the identification point under the mechanical arm coordinate system; the first pose comprises this coordinate position and posture.
Illustratively, the solving step of the first pose (running pose) P MWC of the Mark point under the mechanical arm coordinate system is as follows:
Knowing the first conversion relation M1 between the image coordinate system of the camera and the mechanical arm coordinate system of the mechanical arm, the Mark point reference pixel coordinate P MB, and the Mark point operation pixel coordinate P MC, there are:
ΔPM=M1×PMC-M1×PMB
Wherein Δp M represents the pose change amount of Mark point from the reference physical coordinate to the running physical coordinate in the mechanical arm coordinate system.
Further, the first pose of the Mark point in the mechanical arm coordinate system of the mechanical arm can be given by the following formula:
PMWC=PMWB+ΔPM
Wherein P MWC represents the first pose of the Mark point under the mechanical arm base coordinate system.
Further, when the Mark point pose of the identification area changes only in the two-dimensional plane, the first conversion relation M1 between the image coordinate system of the camera and the mechanical arm coordinate system of the mechanical arm may degenerate into a homography matrix; at this time, the rotation of the target workpiece is only about the Z axis, and is denoted as R z (Δθ). If the angle of a certain side in the Mark point reference-position image is R MB, and the angle of that side in the Mark point running-position image is R MC, then:
RZ(Δθ)=ΔRM=RMC-RMB
Assuming that the rotation component of the pose matrix P MWB is M RotB and its displacement component is M TransB, and that the rotation component of the pose matrix P MWC is M RotC and its displacement component is M TransC, there are:

MTransC = MTransB + ΔPM
The conversion formula from Euler angle to rotation matrix is:
MRotB=RZ(γ)×RY(β)×RX(α)
MRotC=Rz(γ+Δθ)×RY(β)×RX(α)
Through the formula, the running pose P MWC of the Mark point under the mechanical arm base coordinate system can be solved.
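The planar rotation update above (replacing γ by γ + Δθ in the Z-Y-X Euler decomposition) can be checked numerically: rotations about the same axis compose additively, so Rz(γ + Δθ) × RY(β) × RX(α) equals Rz(Δθ) applied on top of M RotB. The angles below are arbitrary illustrative values.

```python
import math

def Rz(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def Ry(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def Rx(t):
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# reference rotation M_RotB = Rz(gamma) @ Ry(beta) @ Rx(alpha); the running
# rotation M_RotC replaces gamma by gamma + dtheta
alpha, beta, gamma, dtheta = 0.1, -0.2, 0.5, 0.3
M_RotB = mm(mm(Rz(gamma), Ry(beta)), Rx(alpha))
M_RotC = mm(mm(Rz(gamma + dtheta), Ry(beta)), Rx(alpha))
```

This confirms that, in the planar case, the running posture can be obtained either by re-evaluating the Euler formula with γ + Δθ or by left-multiplying the reference rotation by R z (Δθ); the two are identical.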
The following is a solving step and a method of the running pose P OWC of the target workpiece under the mechanical arm coordinate system.
According to the pose transformation relation, there is:

POWC = BasePMark' × Mark'PO'
Since the relative pose relationship between the identification area and the target workpiece remains unchanged, there is:

MarkPO = Mark'PO'
The second conversion relation M2 from the target workpiece to the Mark point and the running pose P MWC of the Mark point under the mechanical arm base coordinate system are known; combining the two formulas above gives:

POWC = BasePO' = BasePMark' × Mark'PO' = BasePMark' × MarkPO = PMWC × M2
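The final chain P OWC = P MWC × M2 can be sketched with homogeneous matrices. The running Mark pose and the Mark-to-workpiece offset below are hypothetical values for illustration, restricted to the planar (rotation about Z only) case for brevity.

```python
import math

def matmul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def pose_rz(x, y, z, theta):
    # homogeneous pose whose rotation is about the Z axis only (the planar case)
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0, x],
            [s, c, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

# hypothetical running Mark pose and fixed Mark -> workpiece relation M2
P_MWC = pose_rz(106.0, 203.0, 50.0, 0.3)
M2 = pose_rz(20.0, 0.0, -5.0, 0.0)
P_OWC = matmul4(P_MWC, M2)   # running pose of the target workpiece
```

Note that the workpiece offset (20, 0, −5) is rotated by the running Mark orientation before being added to the Mark position, which is why the grabbing pose tracks both the translation and the rotation of the identification area.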
As shown in fig. 6, the embodiment of the application further provides a robot vision calibration method, which includes:
S610, acquiring a sample image acquired by a camera arranged on a mechanical arm of the robot when the robot stays at a second target position and the mechanical arm coordinates of the mechanical arm of the robot are at least three different positions;
S620, establishing a first conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system of the mechanical arm according to each sample image and each mechanical arm coordinate;
S630, acquiring a second image which is acquired by the camera and comprises an identification point, and identifying coordinates of the identification point in the second image to obtain a reference pixel coordinate and an angle of the identification point in the second image relative to a preset reference plane to obtain a reference angle of the identification point;
S640, acquiring a first mechanical arm coordinate of the mechanical arm under the robot coordinate system when the tail end teaching tool of the mechanical arm vertically contacts the identification point and a second mechanical arm coordinate of the mechanical arm under the robot coordinate system when the mechanical arm grabs and places the target workpiece;
S650, establishing a second conversion relation from the target workpiece to the identification point according to the first mechanical arm coordinates and the second mechanical arm coordinates;
The robot vision calibration result comprises: the first conversion relation, the reference pixel coordinates, the reference angle of the identification point and the second conversion relation.
Based on the above method embodiment, the embodiment of the present application further provides a robot vision positioning device, as shown in fig. 7, fig. 7 is a schematic structural diagram of the robot vision positioning device provided by the embodiment of the present application, where the device includes:
a first obtaining module 710, configured to obtain, when the robot stays at the first target position, a first image including the identification point acquired by a camera installed on the mechanical arm;
a first identifying module 720, configured to identify pixel coordinates of the identification point in the first image to obtain an operation pixel coordinate and a first angle of the identification point; the first angle is an angle of the identification point relative to a preset reference plane;
a first determining module 730, configured to determine a first pose of the identification point under a mechanical arm coordinate system of the mechanical arm according to a first conversion relationship, a reference pixel coordinate, a reference angle of the identification point, and a first mechanical arm coordinate calibrated in advance;
The first conversion relation is a conversion relation between an image coordinate system of the camera and a coordinate system of the mechanical arm; the reference pixel coordinates are the pixel coordinates of the identification point in the second image; the reference angle of the identification point is the angle of the identification point in the second image relative to the preset reference plane; the second image is acquired by the camera when the robot stays at a second target position; the first mechanical arm coordinates are coordinates of the mechanical arm in the robot coordinate system when the robot stays at the second target position and the tail end teaching tool of the mechanical arm vertically contacts the marking point; wherein the coordinates of the robotic arm in the robot coordinate system when the camera captures the first image are the same as the coordinates of the robotic arm in the robot coordinate system when the camera captures the second image;
And a second determining module 740, configured to determine a second pose of the target workpiece in the robot arm coordinate system according to the first pose and a second pre-calibrated conversion relationship, where the second conversion relationship is a relative pose conversion relationship from the target workpiece to the identification point.
In an alternative embodiment, the apparatus further comprises:
the second acquisition module is used for acquiring, when the robot stays at the second target position, at least three different mechanical arm coordinates of the mechanical arm of the robot and the sample image acquired, at each of these mechanical arm coordinates, by the camera arranged on the mechanical arm;
The first establishing module is used for establishing a first conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system of the mechanical arm according to each sample image and each mechanical arm coordinate;
The third acquisition module is used for acquiring a second image which is acquired by the camera and comprises an identification point, identifying the coordinate of the identification point in the second image to obtain a reference pixel coordinate and the angle of the identification point in the second image relative to a preset reference plane, and obtaining the reference angle of the identification point;
A fourth obtaining module, configured to obtain a first robot arm coordinate of the robot arm in the robot coordinate system when the end teaching tool of the robot arm vertically contacts the identification point, and a second robot arm coordinate of the robot arm in the robot coordinate system when the robot arm grabs and releases the target workpiece;
And the second establishing module is used for establishing a second conversion relation from the target workpiece to the identification point according to the first mechanical arm coordinates and the second mechanical arm coordinates.
In an alternative embodiment, the first determining module is specifically configured to:
Mapping the reference pixel coordinate to a mechanical arm coordinate system based on a first conversion relation calibrated in advance, and calculating a reference physical coordinate of the identification point in the mechanical arm coordinate system;
Mapping the operation pixel coordinate to the mechanical arm coordinate system based on a first conversion relation calibrated in advance, and calculating an operation physical coordinate of the identification point in the mechanical arm coordinate system;
Calculating the difference value between the running physical coordinates and the reference physical coordinates to obtain the coordinate variation of the identification point;
Determining the angle change amount of the identification point based on the angle of the identification point in the second image relative to a preset reference plane and the angle of the identification point in the first image relative to the preset reference plane;
and summing the first mechanical arm coordinates and the first angle of the identification point with the coordinate variation quantity and the angle variation quantity of the identification point respectively to obtain a first pose of the identification point under the mechanical arm coordinate system of the mechanical arm.
Based on the above method embodiment, the embodiment of the present application further provides a robot vision calibration device, as shown in fig. 8, fig. 8 is a schematic structural diagram of the robot vision calibration device provided by the embodiment of the present application, where the device includes:
A second obtaining module 810, configured to obtain, when the robot stays at the second target position, at least three different mechanical arm coordinates of the mechanical arm of the robot and the sample image collected, at each of these mechanical arm coordinates, by a camera mounted on the mechanical arm;
A first establishing module 820, configured to establish a first conversion relationship between an image coordinate system of the camera and a robot coordinate system of the robot according to each sample image and each robot coordinate;
A third obtaining module 830, configured to obtain a second image including a mark point acquired by the camera, and identify coordinates of the mark point in the second image to obtain a reference pixel coordinate and an angle of the mark point in the second image relative to a preset reference plane, so as to obtain a reference angle of the mark point;
a fourth obtaining module 840, configured to obtain a first robot arm coordinate of the robot arm in the robot coordinate system when the end teaching tool of the robot arm vertically contacts the identification point, and a second robot arm coordinate of the robot arm in the robot coordinate system when the robot arm grabs and releases the target workpiece;
A second establishing module 850, configured to establish a second conversion relationship from the target workpiece to the identification point according to the first mechanical arm coordinate and the second mechanical arm coordinate;
The robot vision calibration result comprises: the first conversion relation, the reference pixel coordinates, the reference angle of the identification point and the second conversion relation.
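Purely as an illustration (the patent does not prescribe an implementation), the second conversion relation — the relative pose from the target workpiece to the identification point — can, for a planar (x, y, θ) case, be built from the first and second mechanical arm coordinates and later reapplied at run time. All function names are hypothetical:

```python
import math

def relative_pose(mark_pose, workpiece_pose):
    """Sketch of the "second conversion relation": the workpiece pose expressed
    in the identification point's frame. Poses are (x, y, theta_degrees) in the
    mechanical arm coordinate system."""
    mx, my, mt = mark_pose
    wx, wy, wt = workpiece_pose
    t = math.radians(mt)
    dx, dy = wx - mx, wy - my
    # rotate the translation offset into the identification point's frame
    rx = math.cos(t) * dx + math.sin(t) * dy
    ry = -math.sin(t) * dx + math.cos(t) * dy
    return rx, ry, wt - mt

def apply_relative(mark_pose, rel):
    """Recover the workpiece pose from a (possibly moved) mark pose and the
    stored relation -- the run-time use of the second conversion relation."""
    mx, my, mt = mark_pose
    rx, ry, rt = rel
    t = math.radians(mt)
    return (mx + math.cos(t) * rx - math.sin(t) * ry,
            my + math.sin(t) * rx + math.cos(t) * ry,
            mt + rt)
```

Because the relation is stored relative to the identification point, re-observing the point after the robot moves is enough to recover the workpiece pose without touching the workpiece again.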
The specific manner in which the various modules perform their operations in the apparatus of the above embodiment has been described in detail in the method embodiments and will not be described again here.
The embodiment of the application also provides an electronic device, as shown in fig. 9, including:
a memory 901 for storing a computer program;
The processor 902 is configured to execute the program stored in the memory 901, thereby implementing the following steps:
under the condition that the robot stays at a first target position, acquiring a first image comprising an identification point, wherein the first image is acquired by a camera arranged on the mechanical arm;
Identifying pixel coordinates of the identification point in the first image to obtain a first angle of the identification point and a running pixel coordinate; the first angle is an angle of the identification point relative to a preset reference plane;
determining a first pose of the identification point under a mechanical arm coordinate system of the mechanical arm according to a first conversion relation, a reference pixel coordinate, a reference angle of the identification point and a first mechanical arm coordinate which are calibrated in advance;
The first conversion relation is a conversion relation between an image coordinate system of the camera and a coordinate system of the mechanical arm; the reference pixel coordinates are the pixel coordinates of the identification point in the second image; the reference angle of the identification point is the angle of the identification point in the second image relative to the preset reference plane; the second image is acquired by the camera when the robot stays at a second target position; the first mechanical arm coordinates are coordinates of the mechanical arm in the robot coordinate system when the robot stays at the second target position and the tail end teaching tool of the mechanical arm vertically contacts the marking point; wherein the coordinates of the robotic arm in the robot coordinate system when the camera captures the first image are the same as the coordinates of the robotic arm in the robot coordinate system when the camera captures the second image;
And determining a second pose of the target workpiece under the mechanical arm coordinate system according to the first pose and a pre-calibrated second conversion relation, wherein the second conversion relation is a relative pose conversion relation from the target workpiece to the identification point.
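By way of illustration only, the step above of identifying the identification point's pixel coordinates and its angle relative to a preset reference plane (taken here, hypothetically, as the image x-axis) could be sketched as a centroid-plus-principal-axis computation on a binary image of the point; the segmentation producing the binary image is assumed to happen upstream and is not part of the patent's description:

```python
import numpy as np

def locate_mark(binary):
    """Sketch: centroid (pixel coordinates) and principal-axis angle (degrees,
    measured against the image x-axis as a stand-in reference plane) of a
    binary image of the identification point."""
    ys, xs = np.nonzero(binary)
    cx, cy = xs.mean(), ys.mean()
    # covariance of the point cloud; its dominant eigenvector gives the axis
    cov = np.cov(np.stack([xs - cx, ys - cy]))
    evals, evecs = np.linalg.eigh(cov)
    major = evecs[:, np.argmax(evals)]           # direction of largest variance
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    return (cx, cy), angle
```

For an elongated mark this returns a stable orientation; a rotationally symmetric mark would need an asymmetric feature for the angle to be meaningful.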
The electronic device may further include a communication bus and/or a communication interface, where the processor 902, the communication interface, and the memory 901 communicate with each other via the communication bus.
The embodiment of the application also provides electronic equipment, which comprises:
a memory for storing a computer program;
and the processor is used for realizing the following steps when executing the program stored in the memory:
Acquiring, when the robot stays at a second target position, the mechanical arm coordinates of the robot's mechanical arm at at least three different positions, and the sample images collected by a camera mounted on the mechanical arm when the mechanical arm is at each of the at least three mechanical arm coordinates;
Establishing a first conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system of the mechanical arm according to each sample image and each mechanical arm coordinate;
Acquiring a second image which is acquired by the camera and comprises an identification point, and identifying coordinates of the identification point in the second image to obtain a reference pixel coordinate and an angle of the identification point in the second image relative to a preset reference plane to obtain a reference angle of the identification point;
Acquiring a first mechanical arm coordinate of the mechanical arm under the robot coordinate system when the tail end teaching tool of the mechanical arm vertically contacts the identification point, and a second mechanical arm coordinate of the mechanical arm under the robot coordinate system when the mechanical arm grabs and releases the target workpiece;
establishing a second conversion relation from the target workpiece to the identification point according to the first mechanical arm coordinates and the second mechanical arm coordinates;
The robot vision calibration result comprises: the first conversion relation, the reference pixel coordinates, the reference angle of the identification point and the second conversion relation.
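As a hedged sketch only: assuming the working plane is fixed so that an affine model suffices, the first conversion relation between the image coordinate system and the mechanical arm coordinate system can be fitted from the at-least-three sample pairs by least squares. The affine assumption and all names below are illustrative, not taken from the patent:

```python
import numpy as np

def fit_affine(pixel_pts, arm_pts):
    """Sketch: fit a 2x3 affine matrix A so that A @ [u, v, 1] ~= (x, y).

    pixel_pts -- N >= 3 identification-point pixel coordinates (u, v)
    arm_pts   -- the corresponding mechanical arm coordinates (x, y)
    """
    P = np.hstack([np.asarray(pixel_pts, dtype=float),
                   np.ones((len(pixel_pts), 1))])    # N x 3 homogeneous pixels
    X = np.asarray(arm_pts, dtype=float)             # N x 2 arm coordinates
    A, *_ = np.linalg.lstsq(P, X, rcond=None)        # least-squares solve, 3 x 2
    return A.T                                       # 2 x 3 affine matrix

def pixel_to_arm(A, uv):
    """Map one pixel coordinate into the mechanical arm coordinate system."""
    return A @ np.array([uv[0], uv[1], 1.0])
```

Three non-collinear pairs determine the affine exactly; additional pairs over-determine it and the least-squares solve averages out measurement noise, which is why more than the minimum three samples is generally preferable.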
The electronic device may also include a communication bus and/or a communication interface, and the processor, the communication interface, and the memory may communicate with each other via the communication bus.
The communication bus mentioned for the above electronic device may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, etc. The communication bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in the figures, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include a random access memory (Random Access Memory, RAM), or may include a non-volatile memory (Non-Volatile Memory, NVM), for example at least one magnetic disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present application, there is also provided a computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the steps of any of the robot vision positioning methods described above.
In yet another embodiment of the present application, a computer readable storage medium is provided, in which a computer program is stored, which when executed by a processor, implements the steps of any of the above-mentioned robot vision calibration methods.
In yet another embodiment of the present application, there is also provided a computer program product containing instructions that, when run on a computer, cause the computer to perform the robot vision positioning method of any of the above embodiments.
In yet another embodiment of the present application, there is also provided a computer program product containing instructions that, when run on a computer, cause the computer to perform the robot vision calibration method of any one of the above embodiments.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a Solid State Disk (SSD), etc.
It is noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a correlated manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the descriptions of the apparatus, electronic device, computer-readable storage medium, and computer program product embodiments are relatively brief because they are substantially similar to the method embodiments; for relevant points, refer to the partial description of the method embodiments.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (10)

1. A method of robot vision positioning, the method comprising:
under the condition that the robot stays at a first target position, acquiring a first image comprising an identification point, wherein the first image is acquired by a camera arranged on the mechanical arm;
Identifying pixel coordinates of the identification point in the first image to obtain running pixel coordinates and a first angle of the identification point; the first angle is an angle of the identification point relative to a preset reference plane;
Determining a first pose of the identification point under a mechanical arm coordinate system of the mechanical arm according to a first conversion relation, a reference pixel coordinate, a reference angle of the identification point and a first mechanical arm coordinate which are calibrated in advance;
The first conversion relation is a conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system; the reference pixel coordinates are pixel coordinates of the identification points in the second image; the reference angle of the identification point is the angle of the identification point in the second image relative to the preset reference plane; the second image is acquired by the camera when the robot stays at a second target position; the first mechanical arm coordinate is the coordinate of the mechanical arm under the robot coordinate system when the robot stays at the second target position and the tail end teaching tool of the mechanical arm vertically contacts the identification point; the coordinates of the mechanical arm in the robot coordinate system when the camera collects the first image are the same as the coordinates of the mechanical arm in the robot coordinate system when the camera collects the second image;
And determining a second pose of the target workpiece under the mechanical arm coordinate system according to the first pose and a second pre-calibrated conversion relation, wherein the second conversion relation is a relative pose conversion relation from the target workpiece to the identification point.
2. The method according to claim 1, wherein the method further comprises:
Acquiring, when the robot stays at the second target position, the mechanical arm coordinates of the mechanical arm of the robot at at least three different positions, and the sample images acquired by a camera mounted on the mechanical arm of the robot when the mechanical arm is at each of the at least three mechanical arm coordinates;
According to each sample image and each mechanical arm coordinate, a first conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system of the mechanical arm is established;
Acquiring a second image which is acquired by the camera and comprises an identification point, and identifying the coordinate of the identification point in the second image to obtain a reference pixel coordinate and the angle of the identification point in the second image relative to a preset reference plane to obtain a reference angle of the identification point;
Acquiring a first mechanical arm coordinate of the mechanical arm under the robot coordinate system when the tail end teaching tool of the mechanical arm vertically contacts the identification point, and a second mechanical arm coordinate of the mechanical arm under the robot coordinate system when the mechanical arm grabs and releases a target workpiece;
and establishing a second conversion relation from the target workpiece to the identification point according to the first mechanical arm coordinates and the second mechanical arm coordinates.
3. The method of claim 1, wherein determining the first pose of the identification point in the robot arm coordinate system of the robot arm according to the pre-calibrated first conversion relation, the reference pixel coordinate, the reference angle of the identification point, and the first robot arm coordinate comprises:
Mapping the reference pixel coordinate to a mechanical arm coordinate system based on a first conversion relation calibrated in advance, and calculating a reference physical coordinate of the identification point in the mechanical arm coordinate system;
Mapping the operation pixel coordinate to the mechanical arm coordinate system based on a first conversion relation calibrated in advance, and calculating an operation physical coordinate of the identification point in the mechanical arm coordinate system;
Calculating the difference value between the running physical coordinates and the reference physical coordinates to obtain the coordinate variation of the identification point;
Determining the angle change amount of the identification point based on the angle of the identification point in the second image relative to a preset reference plane and the angle of the identification point in the first image relative to the preset reference plane;
and summing the first mechanical arm coordinates and the first angle of the identification point with the coordinate variation quantity and the angle variation quantity of the identification point respectively to obtain a first pose of the identification point under the mechanical arm coordinate system of the mechanical arm.
4. A method for calibrating robot vision, the method comprising:
acquiring, when the robot stays at a second target position, the mechanical arm coordinates of the mechanical arm of the robot at at least three different positions, and the sample images acquired by a camera mounted on the mechanical arm of the robot when the mechanical arm is at each of the at least three mechanical arm coordinates;
According to each sample image and each mechanical arm coordinate, a first conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system of the mechanical arm is established;
Acquiring a second image which is acquired by the camera and comprises an identification point, and identifying the coordinate of the identification point in the second image to obtain a reference pixel coordinate and the angle of the identification point in the second image relative to a preset reference plane to obtain a reference angle of the identification point;
Acquiring a first mechanical arm coordinate of the mechanical arm under the robot coordinate system when the tail end teaching tool of the mechanical arm vertically contacts the identification point, and a second mechanical arm coordinate of the mechanical arm under the robot coordinate system when the mechanical arm grabs and releases a target workpiece;
establishing a second conversion relation from the target workpiece to the identification point according to the first mechanical arm coordinates and the second mechanical arm coordinates;
The robot vision calibration result comprises: the first conversion relation, the reference pixel coordinates, the reference angle of the identification point and the second conversion relation.
5. A robotic vision positioning device, the device comprising:
The first acquisition module is used for acquiring a first image including an identification point, which is acquired by a camera arranged on the mechanical arm under the condition that the robot stays at a first target position;
The first identification module is used for identifying pixel coordinates of the identification point in the first image to obtain running pixel coordinates and a first angle of the identification point; the first angle is an angle of the identification point relative to a preset reference plane;
the first determining module is used for determining a first pose of the identification point under a mechanical arm coordinate system of the mechanical arm according to a first conversion relation calibrated in advance, a reference pixel coordinate, a reference angle of the identification point and a first mechanical arm coordinate;
The first conversion relation is a conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system; the reference pixel coordinates are pixel coordinates of the identification points in the second image; the reference angle of the identification point is the angle of the identification point in the second image relative to the preset reference plane; the second image is acquired by the camera when the robot stays at a second target position; the first mechanical arm coordinate is the coordinate of the mechanical arm under the robot coordinate system when the robot stays at the second target position and the tail end teaching tool of the mechanical arm vertically contacts the identification point; the coordinates of the mechanical arm in the robot coordinate system when the camera collects the first image are the same as the coordinates of the mechanical arm in the robot coordinate system when the camera collects the second image;
And the second determining module is used for determining a second pose of the target workpiece under the mechanical arm coordinate system according to the first pose and a pre-calibrated second conversion relation, wherein the second conversion relation is a relative pose conversion relation from the target workpiece to the identification point.
6. The apparatus of claim 5, wherein the apparatus further comprises:
The second acquisition module is used for acquiring, when the robot stays at the second target position, the mechanical arm coordinates of the mechanical arm of the robot at at least three different positions, and the sample images acquired by the camera arranged on the mechanical arm when the mechanical arm is at each of the at least three mechanical arm coordinates;
The first establishing module is used for establishing a first conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system of the mechanical arm according to each sample image and each mechanical arm coordinate;
the third acquisition module is used for acquiring a second image which is acquired by the camera and comprises an identification point, identifying the coordinate of the identification point in the second image to obtain a reference pixel coordinate and the angle of the identification point in the second image relative to a preset reference plane, and obtaining the reference angle of the identification point;
A fourth obtaining module, configured to obtain a first mechanical arm coordinate of the mechanical arm in the robot coordinate system when the end teaching tool of the mechanical arm vertically contacts the identification point, and a second mechanical arm coordinate of the mechanical arm in the robot coordinate system when the mechanical arm grabs and releases the target workpiece;
and the second establishing module is used for establishing a second conversion relation from the target workpiece to the identification point according to the first mechanical arm coordinates and the second mechanical arm coordinates.
7. The apparatus of claim 5, wherein the first determining module is specifically configured to:
Mapping the reference pixel coordinate to a mechanical arm coordinate system based on a first conversion relation calibrated in advance, and calculating a reference physical coordinate of the identification point in the mechanical arm coordinate system;
Mapping the operation pixel coordinate to the mechanical arm coordinate system based on a first conversion relation calibrated in advance, and calculating an operation physical coordinate of the identification point in the mechanical arm coordinate system;
Calculating the difference value between the running physical coordinates and the reference physical coordinates to obtain the coordinate variation of the identification point;
Determining the angle change amount of the identification point based on the angle of the identification point in the second image relative to a preset reference plane and the angle of the identification point in the first image relative to the preset reference plane;
and summing the first mechanical arm coordinates and the first angle of the identification point with the coordinate variation quantity and the angle variation quantity of the identification point respectively to obtain a first pose of the identification point under the mechanical arm coordinate system of the mechanical arm.
8. A robot vision calibration device, the device comprising:
the second acquisition module is used for acquiring, when the robot stays at a second target position, the mechanical arm coordinates of the mechanical arm of the robot at at least three different positions, and the sample images acquired by the camera arranged on the mechanical arm when the mechanical arm is at each of the at least three mechanical arm coordinates;
The first establishing module is used for establishing a first conversion relation between an image coordinate system of the camera and a mechanical arm coordinate system of the mechanical arm according to each sample image and each mechanical arm coordinate;
the third acquisition module is used for acquiring a second image which is acquired by the camera and comprises an identification point, identifying the coordinate of the identification point in the second image to obtain a reference pixel coordinate and the angle of the identification point in the second image relative to a preset reference plane, and obtaining the reference angle of the identification point;
A fourth obtaining module, configured to obtain a first mechanical arm coordinate of the mechanical arm in the robot coordinate system when the end teaching tool of the mechanical arm vertically contacts the identification point, and a second mechanical arm coordinate of the mechanical arm in the robot coordinate system when the mechanical arm grabs and releases the target workpiece;
the second establishing module is used for establishing a second conversion relation from the target workpiece to the identification point according to the first mechanical arm coordinates and the second mechanical arm coordinates;
The robot vision calibration result comprises: the first conversion relation, the reference pixel coordinates, the reference angle of the identification point and the second conversion relation.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the method of any one of claims 1-3 or 4 when executing a program stored on a memory.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, implements the method of any of claims 1-3 or 4.
CN202410448078.3A 2024-04-12 2024-04-12 Robot vision positioning method, calibration method, device, equipment and medium Pending CN118305790A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410448078.3A CN118305790A (en) 2024-04-12 2024-04-12 Robot vision positioning method, calibration method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410448078.3A CN118305790A (en) 2024-04-12 2024-04-12 Robot vision positioning method, calibration method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN118305790A true CN118305790A (en) 2024-07-09

Family

ID=91725977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410448078.3A Pending CN118305790A (en) 2024-04-12 2024-04-12 Robot vision positioning method, calibration method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN118305790A (en)

Similar Documents

Publication Publication Date Title
KR102661635B1 (en) System and method for tying together machine vision coordinate spaces in a guided assembly environment
CN110116406A (en) The robot system of scan mechanism with enhancing
CN111369625B (en) Positioning method, positioning device and storage medium
CN108748149B (en) Non-calibration mechanical arm grabbing method based on deep learning in complex environment
CN112276936B (en) Three-dimensional data generating device and robot control system
JP2012030320A (en) Work system, working robot controller, and work program
CN110465946B (en) Method for calibrating relation between pixel coordinate and robot coordinate
CN113379849A (en) Robot autonomous recognition intelligent grabbing method and system based on depth camera
CN117173254A (en) Camera calibration method, system, device and electronic equipment
CN108015770A (en) Position of manipulator scaling method and system
CN110298877A (en) A kind of the determination method, apparatus and electronic equipment of object dimensional pose
CN115070779B (en) Robot grabbing control method and system and electronic equipment
CN118305790A (en) Robot vision positioning method, calibration method, device, equipment and medium
CN114677429B (en) Positioning method and device of manipulator, computer equipment and storage medium
CN111062989A (en) High-precision two-dimensional camera and robot hand-eye calibration method and system
CN109615658B (en) Method and device for taking articles by robot, computer equipment and storage medium
CN112184819A (en) Robot guiding method and device, computer equipment and storage medium
CN115250616A (en) Calibration system, information processing system, robot control system, calibration method, information processing method, robot control method, calibration program, information processing program, calibration device, information processing device, and robot control device
CN116061196B (en) Method and system for calibrating kinematic parameters of multi-axis motion platform
CN112446928B (en) External parameter determining system and method for shooting device
CN118305773A (en) Teaching method and device of robot, electronic equipment and storage medium
CN117784974A (en) Layout method and device of display window, electronic equipment and storage medium
CN115026808A (en) Hand-eye calibration method, hand-eye calibration system, computer equipment and storage device
CN116188597A (en) Automatic calibration method and system based on binocular camera and mechanical arm, and storage medium
CN118700211A (en) Device and method for measuring robot hand-eye calibration precision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination