CN116704035A - Workpiece pose recognition method, electronic equipment, storage medium and grabbing system - Google Patents
- Publication number: CN116704035A
- Application number: CN202310778993.4A
- Authority
- CN
- China
- Prior art keywords: workpiece, point cloud, pose
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention relates to the technical field of point cloud data processing, and provides a workpiece pose recognition method, electronic equipment, a storage medium and a grabbing system. According to the invention, the feature vectors of the original point cloud data are filtered, the filtering being performed along the direction perpendicular to the curved surface of the workpiece, so that the workpiece is accurately separated from the material frame and the detection rate and accuracy of the workpiece pose are improved.
Description
Technical Field
The invention relates to the technical field of point cloud data processing, in particular to a workpiece pose recognition method, electronic equipment, a storage medium and a grabbing system.
Background
In scenarios where a 3D-vision-guided manipulator operates intelligently, the pose of the object to be grabbed must be acquired so that intelligent equipment such as a robot or manipulator can grasp it. However, when conventional pose acquisition methods are used to identify the pose of a workpiece inside a container, the detected pose has low accuracy and struggles to meet user requirements.
Disclosure of Invention
In order to solve at least one of the technical problems, the invention provides a workpiece pose recognition method, electronic equipment, a storage medium and a robot grabbing system.
The first aspect of the present invention provides a method for identifying the pose of a workpiece, the workpiece having a first end and a second end opposite to each other, and a first surface located between the first end and the second end, the first surface being a curved surface. The method comprises: capturing original point cloud data, wherein the original point cloud data comprises point cloud data of contents, the contents comprise the workpiece and a carrier, and the captured point cloud data of the workpiece comprises a partial point cloud of at least part of the first surface of the workpiece; generating corresponding feature vectors from the captured original point cloud data, wherein the feature vectors characterize the point cloud geometric features of the surface of the contents; performing point cloud filtering on the feature vectors along the normal direction of the point cloud of the first surface to obtain a point cloud set of the workpieces; acquiring the respective workpiece point cloud data of each workpiece based on the point cloud set; and determining the actual pose of the workpiece from the workpiece point cloud data.
According to one embodiment of the present invention, the step of performing point cloud filtering on the feature vector along a normal direction of the point cloud of the first surface to obtain the point cloud set of the workpiece includes: determining a first filtering threshold based on curved features of the first surface in a camera view; and performing normal filtering based on the first filtering threshold to obtain a point cloud set of the workpiece.
According to one embodiment of the present invention, the workpiece is a columnar workpiece, and the first end and the second end are two end portions of the columnar workpiece, respectively.
According to one embodiment of the invention, the workpiece is cylindrical.
According to one embodiment of the invention, the workpieces are horizontally stacked in the carrier, the end surfaces of the first end and the second end of the workpieces face the end surface of the carrier respectively, and the first surface faces the shooting end of the camera.
According to one embodiment of the invention, the workpieces are arranged in two stacks, each stack comprising one or more layers of workpieces and each layer comprising one or more workpieces arranged in parallel, with the end faces of the workpieces in the two stacks abutting or adjacent to each other.
According to one embodiment of the invention, the feature vector comprises a point cloud normal vector.
According to one embodiment of the present invention, generating a corresponding feature vector according to original point cloud data obtained by photographing includes: and determining an interest area in the photographed original point cloud data, and generating a corresponding feature vector according to the interest area.
According to one embodiment of the present invention, generating the corresponding feature vector according to the region of interest includes: and downsampling the region of interest, and generating corresponding feature vectors according to the downsampling result.
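The region-of-interest downsampling mentioned above is typically a voxel-grid reduction. A minimal pure-Python sketch (the function name, the 5 mm voxel size, and the averaging rule are illustrative assumptions, not taken from the patent):

```python
def voxel_downsample(points, voxel=0.005):
    """Voxel-grid downsampling: replace the points falling in each voxel
    by their average, reducing the cloud before feature-vector (normal)
    estimation. `voxel` is the cell edge length in metres."""
    cells = {}
    for p in points:
        key = tuple(int(c // voxel) for c in p)   # integer voxel index
        cells.setdefault(key, []).append(p)
    return [tuple(sum(q[i] for q in grp) / len(grp) for i in range(3))
            for grp in cells.values()]

# Two points in the same 5 mm voxel collapse to their average;
# the distant third point survives on its own.
pts = [(0.001, 0.0, 0.0), (0.002, 0.0, 0.0), (0.1, 0.0, 0.0)]
down = voxel_downsample(pts)
```

The voxel size trades speed against detail: larger voxels make segmentation and matching faster but blur small surface features.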
According to one embodiment of the invention, acquiring the respective workpiece point cloud data of each workpiece based on the point cloud set includes: a seed point selection step of selecting seed points from the to-be-divided point set of the point cloud set to form a seed point set, and taking a seed point as the current point; a region segmentation step of determining, for each adjacent point of the current point, the angle between the normal of the adjacent point and the normal of the current point, and adding the adjacent point to the current region if the angle is smaller than a preset angle threshold; determining the curvature of the adjacent point and, if the curvature is smaller than a preset curvature threshold, adding the adjacent point to the seed point set as a seed point, the processed current point being removed from the seed point set; selecting a new seed point from the seed point set as the current point and repeating the region segmentation step until the seed point set is empty, thereby obtaining the point cloud of the current region; and substituting the points of the point cloud set that have not yet been divided into the seed point selection step as a new to-be-divided point set, until all points in the point cloud set are divided.
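The seed-point selection and region segmentation steps above follow the familiar region-growing scheme. The sketch below is a simplified, hypothetical rendering: neighbor lists are precomputed (a real pipeline would query a k-d tree), and all names and thresholds are illustrative:

```python
import math

def angle_deg(n1, n2):
    """Unsigned angle between two normals, in degrees."""
    dot = abs(sum(a * b for a, b in zip(n1, n2)))
    m1 = math.sqrt(sum(a * a for a in n1))
    m2 = math.sqrt(sum(a * a for a in n2))
    return math.degrees(math.acos(min(1.0, dot / (m1 * m2))))

def region_grow(points, normals, curvatures, neighbors,
                angle_thresh=10.0, curv_thresh=0.05):
    """Split a point set into smooth regions (ideally one per workpiece).

    `neighbors[i]` lists the indices adjacent to point i. A neighbor is
    added to the region when its normal is close to the current point's;
    it also becomes a new seed when its curvature is low (smooth area).
    """
    unvisited = set(range(len(points)))
    regions = []
    while unvisited:
        seed = min(unvisited)      # a real pipeline seeds at minimum curvature
        region, seeds = {seed}, [seed]
        unvisited.discard(seed)
        while seeds:
            cur = seeds.pop()      # processed current point leaves the seed set
            for nb in neighbors[cur]:
                if nb not in unvisited:
                    continue
                if angle_deg(normals[cur], normals[nb]) < angle_thresh:
                    region.add(nb)
                    unvisited.discard(nb)
                    if curvatures[nb] < curv_thresh:
                        seeds.append(nb)
        regions.append(sorted(region))
    return regions

# Two well-separated pairs of coplanar points yield two regions.
points = [(0, 0, 0), (0.01, 0, 0), (1, 0, 0), (1.01, 0, 0)]
normals = [(0, 0, 1)] * 4
curvatures = [0.0] * 4
neighbors = [[1], [0], [3], [2]]
regions = region_grow(points, normals, curvatures, neighbors)
```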
According to one embodiment of the invention, after all points in the point cloud set are divided, screening is performed on each piece of point cloud data in the division result according to a preset point cloud quantity interval, and each piece of point cloud data meeting the point cloud quantity interval is used as the respective workpiece point cloud data of a workpiece.
According to one embodiment of the invention, determining the actual pose of the workpiece according to the workpiece point cloud data comprises: performing first point cloud matching on the workpiece point cloud data to obtain candidate workpiece pose of the workpiece; performing second point cloud matching on the workpiece point cloud data, and determining optional workpiece pose from the candidate workpiece pose, wherein the accuracy of the second point cloud matching is higher than that of the first point cloud matching; and determining the actual pose of the workpiece according to the optional workpiece pose.
According to one embodiment of the present invention, the second point cloud matching method includes: a point cloud association step, namely, taking the workpiece point cloud data as scene point cloud, and determining a point closest to each point to be matched in the scene point cloud as an association point to form a point set to be matched and an association point set; determining barycenter coordinates of the point set to be matched and the associated point set; determining a pose transformation matrix and a first point set after corresponding transformation according to the barycentric coordinates; determining an average distance of association points between the first point set and the scene point cloud; substituting the points in the first point set as new points to be matched into the point cloud association step until iteration termination conditions are met, and obtaining the optional workpiece pose through the latest pose transformation matrix.
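The association / barycenter / transform / iterate loop described above is an iterative-closest-point (ICP) style refinement. The sketch below is a deliberately simplified, translation-only version (a full implementation also solves for the rotation, e.g. via SVD of the centered point pairs); it assumes the coarse first matching has already brought the model close to the scene. All names are hypothetical:

```python
def centroid(pts):
    """Barycenter of a list of 3D points."""
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def nearest(p, cloud):
    """Closest point of `cloud` to `p` (the association step)."""
    return min(cloud, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))

def icp_translation(model, scene, iters=20, tol=1e-9):
    """Iteratively align `model` to `scene`, returning the accumulated
    translation and the aligned points. Terminates when the per-iteration
    update becomes negligible."""
    offset = (0.0, 0.0, 0.0)
    cur = [tuple(p) for p in model]
    for _ in range(iters):
        pairs = [(p, nearest(p, scene)) for p in cur]     # point association
        cm = centroid([p for p, _ in pairs])              # barycenters of the
        cs = centroid([q for _, q in pairs])              # two point sets
        step = tuple(cs[i] - cm[i] for i in range(3))     # transform update
        cur = [tuple(p[i] + step[i] for i in range(3)) for p in cur]
        offset = tuple(offset[i] + step[i] for i in range(3))
        if sum(s * s for s in step) < tol:                # termination test
            break
    return offset, cur

# A model offset by 0.1 m from the scene converges back onto it.
scene = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
model = [(0.1, 0, 0), (1.1, 0, 0), (0.1, 1, 0)]
offset, aligned = icp_translation(model, scene)
```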
According to one embodiment of the present invention, when the workpieces are placed in two stacks, determining the actual pose of the workpiece according to the selectable workpiece poses includes: acquiring edge point clouds in the workpiece point cloud data, and determining end point clouds of a first end and a second end of the workpiece according to the edge point clouds; and performing end point cloud matching according to the end point clouds of the first end and the second end, a preset point cloud model and the optional workpiece pose, and determining the actual pose of the workpiece according to an end matching result.
According to one embodiment of the present invention, obtaining an edge point cloud in the workpiece point cloud data includes: and acquiring edge parts of the workpiece point cloud data of each workpiece, and merging the edge parts to obtain an edge point cloud.
According to one embodiment of the invention, determining end point clouds of the first and second ends of the workpiece from the edge point clouds comprises: and carrying out point cloud filtering on the edge point cloud along the axial forward direction and the axial reverse direction of the workpiece to obtain end point clouds of a first end and a second end of the workpiece, wherein the axial direction is determined according to the first end and the second end.
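The forward and reverse axial filtering can be pictured as projecting each edge point onto the workpiece axis and keeping the points near the minimum and maximum projections. A hypothetical sketch (names, band width, and sample data are illustrative):

```python
import math

def end_clouds(edge_points, axis, band=0.01):
    """Split an edge cloud into first-end and second-end point sets by
    projecting onto the (unnormalized) axis direction and keeping points
    within `band` metres of the extreme projections."""
    norm = math.sqrt(sum(a * a for a in axis))
    unit = tuple(a / norm for a in axis)
    proj = [sum(p[i] * unit[i] for i in range(3)) for p in edge_points]
    lo, hi = min(proj), max(proj)
    first = [p for p, t in zip(edge_points, proj) if t - lo <= band]
    second = [p for p, t in zip(edge_points, proj) if hi - t <= band]
    return first, second

# Points along a 0.5 m workpiece axis: the two near x=0 form the first
# end, the one at x=0.5 forms the second end; the mid point is dropped.
edge = [(0.0, 0, 0), (0.005, 0, 0), (0.25, 0, 0), (0.5, 0, 0)]
first_end, second_end = end_clouds(edge, axis=(1, 0, 0))
```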
According to one embodiment of the invention, the end point clouds of the first end and the second end are merged before the end point cloud matching is performed.
According to one embodiment of the invention, the end point clouds of the first end and the second end are statistically filtered before the end point cloud matching is performed.
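Statistical filtering here plausibly refers to the common statistical-outlier-removal step, which discards flying points before matching. A small sketch under that assumption (k, std_ratio, and the sample cloud are illustrative):

```python
import math

def statistical_filter(points, k=3, std_ratio=1.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbors exceeds the global mean of that quantity
    by more than std_ratio standard deviations."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    mean_d = []
    for i, p in enumerate(points):
        ds = sorted(dist(p, q) for j, q in enumerate(points) if j != i)[:k]
        mean_d.append(sum(ds) / len(ds))
    mu = sum(mean_d) / len(mean_d)
    std = math.sqrt(sum((d - mu) ** 2 for d in mean_d) / len(mean_d))
    return [p for p, d in zip(points, mean_d) if d <= mu + std_ratio * std]

# A tight 10 mm cluster of five points survives; the lone flying point
# far from the cluster is removed.
cloud = [(0, 0, 0), (0.01, 0, 0), (0, 0.01, 0), (0.01, 0.01, 0),
         (0, 0, 0.01), (1, 1, 1)]
clean = statistical_filter(cloud)
```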
According to one embodiment of the invention, determining the actual pose of the workpiece based on the end matching result comprises: and carrying out point cloud matching on the matching result of the end point cloud matching and the feature vector, and determining the actual pose of the workpiece according to the matching result.
A second aspect of the present invention proposes an electronic device comprising: a memory storing execution instructions; and a processor executing the execution instructions stored in the memory, so that the processor executes the workpiece pose recognition method according to any one of the above embodiments.
A third aspect of the present invention proposes a readable storage medium having stored therein execution instructions which, when executed by a processor, are to implement the workpiece pose recognition method according to any of the above embodiments.
A fourth aspect of the present invention proposes a robotic grasping system comprising: the readable storage medium set forth in the third aspect above is used for the robot to grasp based on the execution instruction stored in the readable storage medium.
According to one embodiment of the invention, the robot comprises a robot arm with a robot arm.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention.
Fig. 1 is a flow chart of a workpiece pose recognition method according to an embodiment of the present invention.
Fig. 2 is a schematic cross-sectional view of workpieces of different shapes.
FIG. 3 is a flow diagram of generating feature vectors according to one embodiment of the invention.
Fig. 4 is a schematic diagram of a point cloud normal vector of original point cloud data.
FIG. 5 is a flow chart of determining the actual pose of a workpiece according to one embodiment of the invention.
Fig. 6 is an effect diagram of the point cloud after the second point cloud matching is performed on the workpiece point cloud data.
Fig. 7 is a schematic flow chart of determining an actual pose of a workpiece according to another embodiment of the invention.
Fig. 8 is an effect diagram of matching end point clouds of a first end and a second end of a workpiece with a point cloud model.
Fig. 9 is a flow chart of a workpiece pose recognition method according to another embodiment of the present invention.
FIG. 10 is a schematic diagram of an electronic device employing a hardware implementation of a processing system, according to one embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not restrictive of it. It should be further noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
In addition, the embodiments of the present invention and the features of the embodiments may be combined with each other without collision. The technical scheme of the present invention will be described in detail below with reference to the accompanying drawings in combination with embodiments.
Unless otherwise indicated, the exemplary implementations/embodiments shown are to be understood as providing exemplary features of various details of some of the ways in which the technical concepts of the present invention may be practiced. Thus, unless otherwise indicated, the features of the various implementations/embodiments may be additionally combined, separated, interchanged, and/or rearranged without departing from the technical concepts of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when the terms "comprises" and/or "comprising" and variations thereof are used in this specification, they specify the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It is also noted that, as used herein, the terms "substantially," "about," and similar terms are used as terms of approximation rather than of degree, and are intended to account for the inherent deviations in measured, calculated, and/or provided values that would be recognized by one of ordinary skill in the art.
In the production and processing of workpieces, material picking, transfer and discharge can be performed by a manipulator. Workpieces are usually held in a material frame for transport and storage. When a workpiece is needed for processing, the manipulator picks it up from the frame (material picking), moves it near the target position (material transfer), and places it at the target position (material discharge). Before the manipulator picks up a workpiece from the frame, the workpieces in the frame must be identified and their placement poses obtained; the manipulator can then compute the grasping pose from the placement pose of the workpiece (the workpiece pose for short). During pose recognition, a workpiece may be in close contact with the material frame or with a plate-shaped object placed in the frame as a holding aid. For example, a paperboard may be placed in the frame to prevent friction and collision between the workpiece and the frame, the paperboard separating the frame body from the workpieces; workpieces may also be in close contact with one another. Such contact makes it difficult to identify workpiece poses accurately, so that the number and accuracy of the detected poses struggle to meet requirements.
The workpiece pose recognition method, the electronic device, the storage medium and the robot gripping system of the invention are described below by taking a scene of pose recognition of a workpiece in a material frame as an example with reference to the accompanying drawings.
Fig. 1 is a flow chart of a workpiece pose recognition method according to an embodiment of the present invention. Referring to fig. 1, the workpiece pose recognition method M10 of the present embodiment may include the following steps S100, S200, S300, S400, and S500. In the workpiece pose recognition method M10 of the present embodiment, the workpiece to be recognized has a first end and a second end opposite to each other, and further has a first surface located between and connected to the first end and the second end, the first surface being a curved surface protruding outwards.
S100, shooting and acquiring original point cloud data, wherein the original point cloud data comprise point cloud data of contents, the contents comprise the workpiece and a carrier, the workpiece is placed in the carrier, and the placing pose of the workpiece in the carrier enables at least part of the first surface of the workpiece to be observed by a camera at least partially, so that the point cloud data of the workpiece acquired by shooting comprise part of the point cloud of at least part of the first surface of the workpiece.
And S200, generating corresponding feature vectors according to the photographed original point cloud data, wherein the feature vectors represent the point cloud geometric features of the surface of the content.
And S300, performing point cloud filtering on the feature vectors along the normal direction of the point cloud of the first surface, that is, along a direction perpendicular to the first surface, to obtain a point cloud set of the workpieces, wherein the point cloud set corresponds to the workpieces in the content.
S400, acquiring respective workpiece point cloud data of each workpiece based on the point cloud set of the workpiece.
S500, determining the actual pose of the workpiece according to the workpiece point cloud data.
According to the workpiece pose recognition method of this embodiment, the feature vectors of the original point cloud data are filtered along the direction perpendicular to the curved surface of the workpiece, filtering out the material frame and any plate-shaped objects placed in it. The workpiece is thus accurately separated from the frame, avoiding the recognition obstacles caused by close contact between the workpiece and the frame body or plate-shaped objects. The axial boundary of the workpiece can then be accurately determined during subsequent pose recognition, so that the workpiece pose is accurately located and the detection rate and accuracy are improved.
The carrier is a tool for holding workpieces; for example, the carrier may be a material frame. Before the manipulator is controlled to pick up a workpiece from the frame, a camera mounted on the manipulator, or on another structure on site, photographs a preset position to obtain a point cloud file, i.e., the original point cloud data. In the original point cloud data, whatever the camera captures is referred to as the content; the content therefore includes both the frame and the workpieces.
The feature vector is used for characterizing the point cloud geometric features of the surface of the content, and the point cloud geometric features refer to geometric information contained in point cloud data, such as coordinates, normal vectors, curvature and other information, and the information can describe the shape, surface features, spatial distribution and other conditions of the point cloud. The feature vector of the material frame represents the point cloud geometric feature of the material frame surface, and the feature vector of the workpiece represents the point cloud geometric feature of the workpiece surface.
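As a toy illustration of how a surface normal arises from point coordinates, the sketch below computes the normal of the plane through three nearby surface points. Real pipelines instead fit a plane to a k-neighborhood of each point (e.g. by PCA) and orient the normal toward the camera; all names here are hypothetical:

```python
def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three non-collinear 3D points,
    via the cross product of two spanning vectors."""
    u = tuple(p1[i] - p0[i] for i in range(3))
    v = tuple(p2[i] - p0[i] for i in range(3))
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    mag = sum(c * c for c in n) ** 0.5
    return tuple(c / mag for c in n)

# Three points on the plane z = 0 give a normal along the Z axis.
n = plane_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
```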
In the original point cloud data, the position of the material frame is not fixed, and the ends of the workpieces face the inner end faces of the frame. The frame body and the workpieces may be in close contact, so that workpiece and frame merge into one piece in the point cloud and the axial boundary of the workpiece cannot be found. Likewise, if a paperboard is placed inside the frame to isolate the workpieces from the frame and avoid friction and impact, the paperboard may be in close contact with the workpieces, again merging them in the point cloud so that the axial boundary cannot be found. In addition, flying-point interference may exist in the point cloud data, which hinders subsequent recognition of the workpiece pose.
Although the workpiece may be in close contact with the material frame or the paperboard, the normal direction of the first surface of the workpiece and the normal direction of the inner wall surface of the frame/paperboard are nearly perpendicular. The feature vectors of the original point cloud data can therefore be normal-filtered along the Z-axis direction (i.e., the direction perpendicular to the first surface of the workpiece), breaking the point cloud connection between workpiece and frame/paperboard, yielding the axial boundary of the workpiece and the complete point cloud set of the workpieces in the content.
In consideration of retaining more workpiece point cloud data, a first filtering threshold is determined based on the curved-surface features of the first surface in the camera's field of view, and normal filtering is performed based on this threshold, so that as many bar point clouds as possible are retained while the material frame point cloud data are filtered out. For example, when the first surface is a cylindrical surface and the normal directions of the first-surface point cloud acquired at the camera's shooting angle span at most 60 degrees, the normal jump angle between adjacent cylinders is within 60 degrees; the first filtering threshold may then be set to 60 degrees to avoid filtering out the point cloud data of other workpieces. This step not only filters out the point cloud data of the carrier (in this embodiment, a material frame) but also retains the workpiece point cloud data as fully as possible. In other embodiments, the first filtering threshold may be set according to other curved-surface features of the first surface under the camera's view, such as the included angle of the point cloud normal directions or the change of curvature, likewise to avoid filtering out the point cloud data of other workpieces. Those skilled in the art will appreciate that the curved-surface features of the first surface in the camera's field of view may be known from the characteristics of the workpiece itself or acquired from camera shots. It should be noted that the first filtering threshold is set and adjusted case by case: when it would otherwise be too large, it must still be kept below 90 degrees to prevent the carrier's point cloud data from escaping the filter.
For another example, when the first surface is a cylindrical surface and the normal directions of its point cloud acquired at the camera's shooting angle span at most 90 degrees, the normal jump angle between adjacent cylinders is within 90 degrees; the first filtering threshold may then be set to 70 or 80 degrees to preserve the point cloud data of other workpieces with greater likelihood.
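The Z-axis normal filtering with the first filtering threshold can be sketched as follows: points whose normals lie within the threshold angle of the camera Z axis are kept, so frame/paperboard surfaces whose normals are nearly perpendicular to Z are removed. Function name and sample data are illustrative, not the patent's implementation:

```python
import math

def normal_filter(points, normals, max_angle_deg=60.0):
    """Keep points whose normal is within max_angle_deg of the camera
    Z axis. The curved first surface of the workpiece keeps most of its
    points; surfaces seen edge-on (normals nearly perpendicular to Z)
    are filtered out."""
    cos_thresh = math.cos(math.radians(max_angle_deg))
    kept = []
    for p, n in zip(points, normals):
        nz = abs(n[2]) / math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
        if nz >= cos_thresh:   # angle(normal, Z) <= threshold
            kept.append(p)
    return kept

# A point facing the camera (normal close to Z) passes; a point on a
# frame wall whose normal is perpendicular to Z is removed.
workpiece_pt = normal_filter([(0, 0, 1)], [(0.1, 0.0, 1.0)])
frame_pt = normal_filter([(1, 0, 1)], [(1.0, 0.0, 0.0)])
```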
In another embodiment of the disclosure, the method further includes inversely obtaining the point cloud data of the carrier from the normal filtering, and obtaining the point cloud data of the workpieces from the original point cloud data and the carrier point cloud data. Because the captured workpiece point cloud includes at least part of the point cloud of the first surface, the workpiece point cloud obtained after normal filtering is incomplete when the first surface is not exposed to the camera's field of view. Combining the steps of this embodiment with the foregoing scheme therefore prevents loss of workpiece point cloud data. It also removes the requirement that the first surface be exposed to the camera's field of view when the workpiece is placed, greatly reducing the difficulty of placing the workpiece and of operation. On this basis, the method can also be applied to workpieces with uniformly varying curved surfaces, enlarging its application scope to a certain extent.
The point cloud data set contains the point cloud data of N (N ≥ 1) workpieces, i.e., the workpieces visible to the camera. At this point the point clouds of adjacent workpieces are connected, so the set must be segmented to identify the point cloud data of each of the N workpieces, yielding N pieces of workpiece point cloud data, each corresponding to one workpiece.
After the independent workpiece point cloud data of the N workpieces are obtained, the point cloud data of each workpiece are processed, for example by point cloud screening, point cloud matching and point cloud filtering, and finally the actual poses of the N workpieces are calculated. It can be understood that the actual pose obtained is the pose in the camera coordinate system; when the manipulator is controlled to grasp the workpiece, the actual pose must first be transformed through the hand-eye calibration matrix and the grasp-point pose transformation to determine the grasping action, after which the manipulator is controlled to grasp the workpiece accordingly.
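The transformation chain above (camera pose → hand-eye calibration → grasp-point pose) can be sketched as a product of homogeneous matrices. This is a minimal illustration with hypothetical numbers, not the calibration procedure of the patent; all matrix values here are invented for demonstration.

```python
import numpy as np

def to_base_frame(T_base_cam: np.ndarray, T_cam_obj: np.ndarray,
                  T_obj_grasp: np.ndarray) -> np.ndarray:
    """Chain 4x4 homogeneous transforms: workpiece pose in the camera
    frame -> robot base frame -> grasp-point pose for the manipulator."""
    return T_base_cam @ T_cam_obj @ T_obj_grasp

# Hypothetical setup: camera 0.5 m above the robot base, workpiece at
# (0.1, 0.2, 0.3) in the camera frame, grasp point at the workpiece origin.
T_base_cam = np.eye(4); T_base_cam[2, 3] = 0.5
T_cam_obj = np.eye(4); T_cam_obj[:3, 3] = [0.1, 0.2, 0.3]
T_obj_grasp = np.eye(4)

T_base_grasp = to_base_frame(T_base_cam, T_cam_obj, T_obj_grasp)
```

With identity rotations the translations simply add, so the grasp position in the base frame is (0.1, 0.2, 0.8); a real hand-eye matrix would also rotate the pose.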
Illustratively, the workpiece may be a columnar workpiece, in which case the workpiece is also referred to as bar stock. The first end and the second end of the workpiece are the two ends of the columnar workpiece. Fig. 2 is a schematic cross-sectional view of workpieces of different shapes. Referring to fig. 2, the columnar workpiece may have a plurality of side surfaces. For example, in case (a) of fig. 2, the columnar workpiece has two side surfaces: the radial coverage angle of the first surface k is greater than 180 degrees, and the second surface is planar. As another example, in case (b) of fig. 2, the columnar workpiece has three side surfaces: the radial coverage angle of the first surface k is about 180 degrees, and the second and third surfaces are mutually perpendicular planes. The columnar workpiece may also be cylindrical, as in case (c) of fig. 2, where the workpiece is a cylindrical bar and the first surface k is the side surface of the cylinder. Hereinafter, a cylindrical workpiece is taken as an example. Those skilled in the art will appreciate that this merely illustrates that the workpiece of the present embodiment has a first surface with a uniform trend of variation, and is not intended to limit the present embodiment.
The workpieces are horizontally stacked in the material frame, with their two end faces facing the end faces of the material frame. A plurality of workpieces stacked in a material frame form a workpiece stack. Within the same stack, workpieces at adjacent placement positions contact each other through their first surfaces. It will be appreciated that each stack may include one or more layers of workpieces, and each layer one or more workpieces placed side by side.
Fig. 3 is a flow chart illustrating the generation of feature vectors according to another embodiment of the present invention. Referring to fig. 3, step S200 may include step S210.
S210, determining a region of interest in the photographed original point cloud data, and generating a corresponding feature vector according to the region of interest. The region of interest (Region of Interest, ROI) is the region containing the material frame and the workpieces; by determining the ROI, the material frame and the workpiece point clouds within it are identified in the point cloud image characterized by the original point cloud data.
Illustratively, the generating the corresponding feature vector according to the region of interest in step S210 may include: and downsampling the region of interest, and generating corresponding feature vectors according to the downsampling result.
Because the original point cloud data acquired directly by the camera may be of high accuracy and contain many points, the determined region of interest may still contain a huge number of points, for example close to a million. Therefore, after the region of interest is determined, the point cloud data within it are voxel-downsampled, and the downsampling result is then processed to obtain feature vectors characterizing the geometric features of the surfaces of the material frame and the workpieces.
Voxel downsampling (Voxel Downsampling) is a method of point cloud downsampling that assigns points in a point cloud into a three-dimensional grid structure and calculates a representative point attribute (e.g., center of gravity or average position) for each grid cell, thereby reducing the density of the point cloud to a desired level while preserving critical geometric features, reducing overall flow processing time, reducing computation, and increasing processing speed. The voxel size may be set according to the accuracy requirements of the point cloud matching, e.g. the side length of the voxel grid may be set to 3 mm.
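Voxel downsampling as described above can be sketched in a few lines of NumPy: points are binned into a 3 mm grid and each occupied cell is replaced by the average position of its points. This is an illustrative sketch, not the patent's implementation; libraries such as Open3D provide an equivalent built-in.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float = 3.0) -> np.ndarray:
    """Assign each point to a voxel grid cell and keep the centroid of
    every occupied cell (units follow the input, e.g. millimetres)."""
    keys = np.floor(points / voxel).astype(np.int64)
    # Group points by voxel key and average each group.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    n_cells = inverse.max() + 1
    sums = np.zeros((n_cells, 3))
    counts = np.zeros(n_cells)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

# 10,000 points in a 30 mm cube collapse into at most 10x10x10 voxels.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 30.0, size=(10000, 3))
down = voxel_downsample(pts, voxel=3.0)
```

The centroid of each cell preserves the local surface position while reducing the point count by roughly an order of magnitude here.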
Illustratively, the feature vector generated in step S200 may include a point cloud normal vector, which can be obtained by performing normal estimation on the point cloud data. The point cloud of the cylindrical workpiece is of high quality, that is, relatively complete and continuous with few missing details, so the accuracy of the calculated normals is also high. Fig. 4 is a schematic diagram of the point cloud normal vectors of the original point cloud data. Referring to fig. 4, the line segment extending from each point is its normal, and the two near-horizontal lines formed by the points in the upper part of the figure are the identified frame edges.
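Normal estimation of the kind mentioned above is commonly done by PCA over each point's neighbourhood: the eigenvector of the smallest covariance eigenvalue is taken as the normal. The patent does not specify its estimator, so the following is a generic sketch (brute-force neighbour search; a real pipeline would use a KD-tree).

```python
import numpy as np

def estimate_normals(points: np.ndarray, k: int = 6) -> np.ndarray:
    """Estimate a normal per point by PCA over its k nearest neighbours:
    the eigenvector of the smallest covariance eigenvalue."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    normals = np.empty_like(points)
    for i in range(len(points)):
        nbrs = points[np.argsort(d2[i])[:k]]
        eigvals, eigvecs = np.linalg.eigh(np.cov(nbrs.T))
        normals[i] = eigvecs[:, 0]  # direction of least variance
    return normals

# Demo: a planar grid at z = 0 yields normals along +/- z everywhere.
gx, gy = np.meshgrid(np.arange(6.0), np.arange(6.0))
plane = np.stack([gx.ravel(), gy.ravel(), np.zeros(36)], axis=1)
nrm = estimate_normals(plane)
```

The sign of each normal is ambiguous with pure PCA; implementations usually orient normals toward the camera viewpoint afterwards.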
As can be seen from fig. 4, the normal direction of the point cloud on the surface of the cylindrical bar is almost perpendicular to the normal direction of the inner wall of the material frame, so the point cloud normal vectors can be filtered in step S300 by normal filtering. Point cloud normal filtering denoises while maintaining the shape and edge information of the point cloud, achieving the denoising effect by averaging the normal vectors of adjacent points. Normal filtering may be configured with a filtering threshold, a parameter that controls the filtering intensity and effect. Since the normal vector at each point is derived from the positional relationship of the surrounding points, the relation between adjacent normals can be expressed as an included angle, and the filtering threshold can therefore be set as an angle. Specifically, the threshold of normal filtering may be defined as the maximum angle between two normal vectors: two normals are considered similar if the angle between them is less than the threshold.
The filtering threshold of normal filtering can be set as required, so that as many workpiece points as possible are retained while the material frame is filtered out. For example, the filtering threshold in step S300 may be set to 60°, that is, cone filtering of ±60° about the direction perpendicular to the cylindrical surface of the workpiece, as seen from the camera shooting angle.
For example, in step S400, the workpiece point cloud data of each workpiece may be acquired from the point cloud set by point cloud segmentation. Because the normal of the first surface changes abruptly between cylindrical workpieces at adjacent positions, the segmentation may proceed as follows: a seed point selection step, in which seed points are selected from the point set to be divided in the point cloud set to form a seed point set, each seed point serving as a current point; a region segmentation step, in which, for each neighbouring point of the current point, the angle between the neighbour's normal and the current point's normal is determined, and the neighbour is added to the current region if this angle is smaller than a preset angle threshold; a step in which the curvature of each neighbouring point is determined, the neighbour is added to the seed point set as a new seed if its curvature is smaller than a preset curvature threshold, and the processed current point is removed; a step in which a new seed point is selected from the seed point set as the current point and substituted into the region segmentation step until the seed point set is empty, yielding the point cloud of the current region; and a step in which the points of the point cloud set not yet divided are substituted into the seed point selection step as a new point set to be divided, until all points in the point cloud set are divided, the resulting division being taken as the respective workpiece point cloud data of each workpiece.
At the initial moment, the point set to be divided is all the point clouds in the point cloud set. When the seed points are selected from the point set to be divided, the normals and the curvatures of different points can be calculated first, and the point with the minimum curvature is taken as the seed point. In this case, only one seed point, i.e. the point with the smallest curvature, is included in the set of seed points.
The preset angle threshold and the preset curvature threshold are both used for screening seed points. If the angle difference of a neighbouring point is smaller than the preset angle threshold but its curvature is not smaller than the preset curvature threshold, the neighbour cannot serve as a seed point, although it is still classified into the current region. If both the angle difference and the curvature are below their thresholds, the neighbour can serve as a seed point. When the seed point set becomes empty, the current region growing process is complete, and the next region growing process is performed.
In the next region growing process, the point set to be divided consists of the points of the point cloud set whose region growing has not yet been completed. Subsequent region growing processes proceed by analogy until all points in the point cloud set have been traversed, i.e. the region division is complete and a plurality of regions are obtained.
The region-growing-based point cloud segmentation can divide points in the point cloud dataset into different portions or regions such that each region has similar characteristics, such as color or density. The point cloud segmentation method starts from a seed point, gradually expands the area by adding new points adjacent to the current area, thereby generating different areas, and segments the point cloud into different clusters by utilizing the continuity and curvature of the normal angles of the adjacent points.
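The region growing procedure described above can be sketched compactly. This is an illustrative reduction of the patent's steps, assuming precomputed normals, curvatures, and a neighbour adjacency map (names here are invented for the demo); the angle and curvature thresholds play exactly the roles described in the text.

```python
import numpy as np
from collections import deque

def region_grow(points, normals, curvatures, neighbors,
                angle_deg=5.0, curv_thresh=0.05):
    """Region growing: seed at the lowest-curvature unassigned point,
    absorb neighbours whose normal deviates by less than the angle
    threshold, and reuse low-curvature absorbed points as new seeds."""
    cos_t = np.cos(np.radians(angle_deg))
    labels = np.full(len(points), -1)
    unassigned = set(range(len(points)))
    region = 0
    while unassigned:
        seed = min(unassigned, key=lambda i: curvatures[i])
        seeds = deque([seed])
        labels[seed] = region
        unassigned.discard(seed)
        while seeds:
            cur = seeds.popleft()
            for nb in neighbors[cur]:
                if labels[nb] != -1:
                    continue
                # Angle test: similar normals -> same region.
                if abs(np.dot(normals[cur], normals[nb])) >= cos_t:
                    labels[nb] = region
                    unassigned.discard(nb)
                    if curvatures[nb] < curv_thresh:  # smooth -> new seed
                        seeds.append(nb)
        region += 1
    return labels

# Demo: ten collinear points; the first five share normal +z, the last
# five +x, so growth stops at the normal jump and two regions result.
demo_pts = np.arange(10.0)[:, None] * np.array([1.0, 0.0, 0.0])
demo_nrm = np.array([[0, 0, 1.0]] * 5 + [[1.0, 0, 0]] * 5)
demo_crv = np.zeros(10)
demo_nbr = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
labels = region_grow(demo_pts, demo_nrm, demo_crv, demo_nbr)
```

The abrupt normal change between adjacent cylindrical surfaces is precisely what makes the angle test split touching workpieces into separate regions.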
Region-growing point cloud segmentation is applied to the point cloud filtered by normal filtering. Exploiting the abrupt normal change between cylindrical workpieces whose cylindrical surfaces are adjacent, the point cloud of each cylindrical workpiece can be segmented out of the whole, which overcomes the recognition obstacle caused by cylindrical-surface contact between workpieces of the same stack and layer; that is, for one layer of cylindrical workpieces, each workpiece is recognized individually along the arrangement direction.
For example, after all the points in the point cloud set are divided in step S400, that is, after the point cloud set is subjected to point cloud segmentation, each part of point cloud data in the division result (point cloud segmentation result) may be screened according to the preset point cloud number interval, and each part of point cloud data meeting the point cloud number interval is used as the workpiece point cloud data of each workpiece.
Because the point cloud quality of the cylindrical workpieces is high and missing parts are rare, the number of points of each cylindrical workpiece can be considered relatively stable. Therefore the clustering result, i.e. the point cloud segmentation result, can be filtered by cluster point count. The cluster point count is the number of points on the side surface of the cylindrical workpiece.
The point cloud quantity interval can be determined in advance through experiments and statistics; its value represents the interval of the reference or typical number of points of a cylindrical workpiece after filtering with the normal filtering threshold. For example, after point cloud segmentation, M pieces of point cloud data are obtained and screened in turn: if the number of points of a certain piece is greater than the upper limit of the interval or smaller than the lower limit, the content corresponding to that piece is considered not to be a workpiece and is filtered out, so that the point clouds of the cylindrical workpieces are screened out individually. If the number of points of a certain piece falls within the interval, the piece is considered workpiece point cloud data and its content a workpiece.
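The interval screening just described amounts to a one-line filter over the segmentation result. A minimal sketch, with hypothetical bounds:

```python
import numpy as np

def filter_by_count(segments, lo: int, hi: int):
    """Keep only segments whose point count falls inside [lo, hi];
    anything outside the interval is assumed not to be a workpiece."""
    return [seg for seg in segments if lo <= len(seg) <= hi]

# Demo: three segments of 50, 400 and 5000 points; only the middle one
# matches a (hypothetical) workpiece interval of [300, 600] points.
segments = [np.zeros((50, 3)), np.zeros((400, 3)), np.zeros((5000, 3))]
kept = filter_by_count(segments, lo=300, hi=600)
```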
FIG. 5 is a flow chart of determining the actual pose of a workpiece according to one embodiment of the invention. Fig. 6 is an effect diagram of the point cloud after the second point cloud matching is performed on the workpiece point cloud data. Referring to fig. 5 and 6, step S500 may include step S510, step S520, and step S530.
S510, performing first point cloud matching on the workpiece point cloud data to obtain candidate workpiece pose of the workpiece.
S520, performing second point cloud matching on the workpiece point cloud data, and determining an optional workpiece pose from the candidate workpiece poses. The accuracy of the second point cloud matching is higher than that of the first point cloud matching.
S530, determining the actual pose of the workpiece according to the optional workpiece pose.
The first point cloud matching may adopt a rough matching method based on point pair features (Point Pair Feature, PPF): rough matching is performed on the point cloud of each cylindrical workpiece to obtain the candidate workpiece pose of each cylindrical workpiece.
It will be appreciated that the RANSAC algorithm may also be used for the first point cloud matching. RANSAC (Random Sample Consensus) algorithm is an iterative method based on random sampling and can be applied to point cloud matching. Rough matching based on feature point alignment can be achieved by finding point cloud feature points with robustness, extracting descriptors around the feature points, and then utilizing the descriptors to determine correspondence between corresponding feature points in the point cloud. The method may comprise the steps of: extracting feature points, calculating feature descriptors, matching feature points, estimating pose and the like.
Illustratively, the manner in which the second point cloud is matched may include: a point cloud association step, namely, using workpiece point cloud data as scene point cloud, and determining a point closest to each point to be matched in the scene point cloud as an association point to form a point set to be matched and an association point set; determining barycenter coordinates of a point set to be matched and an associated point set; determining a pose transformation matrix and a first point set after corresponding transformation according to the barycentric coordinates; determining an average distance of association points between the first point set and the scene point cloud; substituting the points in the first point set as new points to be matched into the point cloud association step until the iteration termination condition is met, and obtaining the optional workpiece pose through the latest pose transformation matrix.
By forming the point set to be matched and the associated point set, some point clouds without associated points can be removed. The barycentric coordinates are calculated from the coordinate information of the points included in the point set itself. In determining the pose transformation matrix, an error function of the pose transformation matrix is also calculated and the function value is minimized. The first set of points may be obtained by a rigid transformation of the pose transformation matrix. The iteration termination condition may be that the average distance is smaller than a preset distance threshold, or that the iteration number is smaller than a preset iteration number threshold.
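The association-centroid-transform loop described above is the classic point-to-point ICP iteration. The following sketch uses the standard SVD (Kabsch) solution for the rigid transform; it is an illustration of the generic algorithm under simplifying assumptions (brute-force association, no outlier rejection), not the patent's exact implementation.

```python
import numpy as np

def icp(src, scene, iters=20, tol=1e-6):
    """Iterate: associate each point with its closest scene point, solve
    the rigid transform from the two centroids via SVD, apply, repeat."""
    src = src.copy()
    for _ in range(iters):
        # Point cloud association (brute-force nearest neighbour).
        d2 = ((src[:, None, :] - scene[None, :, :]) ** 2).sum(-1)
        matched = scene[d2.argmin(axis=1)]
        # Barycentres of the point set to be matched and its associations.
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        moved = src @ R.T + t
        if np.abs(moved - src).max() < tol:  # iteration termination
            src = moved
            break
        src = moved
    return src

# Demo: a 3x3x3 grid offset by 0.3 along x is recovered exactly.
g = np.arange(3.0)
model = np.array([[x, y, z] for x in g for y in g for z in g])
scene = model + np.array([0.3, 0.0, 0.0])
aligned = icp(model, scene)
```

Because the offset is small relative to the grid spacing, every nearest-neighbour association is already correct, and one SVD step recovers the translation; real scenes need the good initial pose supplied by the rough matching.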
The second point cloud matching uses the candidate workpiece poses provided by the first (rough) matching and the complete scene point cloud represented by the workpiece point cloud data, and performs fine matching based on iterative closest point (Iterative Closest Point, ICP) on the point cloud of each cylindrical workpiece. The resulting data of the optional workpiece pose describe the matrix that transforms the position and pose of the model onto the complete scene point cloud; that is, the matching result of the second point cloud matching is a pose transformation matrix, so the candidate poses are adjusted and the usable poses are screened out from among them.
It can be understood that because point cloud data may include noise and interference points, which matching methods based on feature alignment describe only with difficulty, the nearest point iteration method, which relies on geometric information alone, has stronger anti-interference capability, and a better matching result can be obtained by continually increasing the number of iterations.
Depending on the size of the workpieces and the material frame, or on the on-site placement requirements, the workpieces may form two workpiece stacks in the material frame, the placement mode within each stack being consistent and the end faces of the two stacks facing each other, joined or adjacent. For example, each stack of cylindrical workpieces includes six layers, the two stacks lie adjacent to each other, and the same ends of the cylindrical workpieces point in substantially the same direction, that is, all cylindrical workpieces are substantially parallel in the axial direction. In the first stack, the first ends of at least some workpieces are opposed and joined or adjacent to the second ends of one or more workpieces in the second stack. Thus, when workpieces are placed in the frame, each cylindrical workpiece in the two stacks may be joined or adjacent at one end to a cylindrical workpiece of the other stack, and joined or adjacent at the other end to the frame or to planar consumables used for auxiliary containment within the frame. Therefore, after the adjacent workpieces have been divided, the division of the workpieces joined end to end continues on this basis.
When two stacks of workpieces are placed there is a problem of axial offset of the cylinders, especially when the second point cloud matching uses the fine matching based on nearest point iteration in step S520. Since the features of a cylindrical model may be weak, matching with a complete cylindrical model can go wrong: the "closest point" found for the model is not the cylinder edge, i.e., the corresponding points are not matched correctly. For example, the end contact between cylindrical workpieces of two adjacent rows is relatively tight, so under the point cloud view they are regarded as one long cylinder (refer to fig. 6, in which the upper and lower cylinders on the left and right butt together into a single cylinder). This leads to mismatches and affects matching accuracy. To address this, a step of point cloud matching using the shape features of the ends of the cylindrical workpiece is added, which solves the insufficient matching accuracy caused by axial offset during pose recognition. Fig. 7 is a schematic flow chart of determining the actual pose of a workpiece according to another embodiment of the invention. Referring to fig. 7, step S530 may include step S531 and step S533.
S531, acquiring edge point clouds in the workpiece point cloud data, and determining end point clouds of a first end and a second end of the workpiece according to the edge point clouds.
S533, performing end point cloud matching according to the end point clouds of the first end and the second end, a preset point cloud model and optional workpiece pose, and determining the actual pose of the workpiece according to an end matching result.
The manner of determining the actual workpiece pose provided by this embodiment exploits the fact that the edges at both ends of a cylindrical workpiece carry cylindrical features: by extracting the point clouds at the workpiece ends, the cylindrical features are reinforced, and point cloud matching against a preset model resolves the axial offset of the workpiece. Two adjacent workpieces of different stacks are thereby prevented from being regarded as the same workpiece because of close end contact, the matching precision is further improved, the detection rate and accuracy of the workpiece pose are improved, and more kinds of poses can be obtained.
An edge point cloud refers to a point cloud at the edge of a cylindrical workpiece that is capable of characterizing at least a portion of the contour of the cylindrical workpiece. The end point cloud matching is fine matching, and in the matching process, the end point cloud is used as a scene point cloud, and the workpiece pose can be selected as an initial pose during matching. It is to be understood that the implementation sequence between the step S531 and the step S510 may be arbitrary, and the step S531 may be implemented after the step S510, may be implemented in parallel with the step S510, or may be implemented before the step S510.
Illustratively, step S531 may include: and acquiring edge parts of the workpiece point cloud data of each workpiece, and merging the edge parts to obtain edge point clouds. Step S531 may further include: and carrying out normal filtering on the edge point cloud along the axial forward direction and the axial reverse direction of the workpiece respectively to obtain end point clouds of the first end and the second end of the workpiece. Wherein the axial direction is defined by the first end and the second end.
The edges of the point clouds of the cylindrical workpieces are acquired one by one, and all the edge point clouds are merged into a single point cloud to speed up subsequent computation. Normal filtering is then performed along the axial forward and the axial reverse direction of the cylindrical workpiece respectively, yielding the point clouds at the two ends. For a cylindrical workpiece, the axial direction is the direction of its centre line, and the axial forward and reverse directions are the two opposite directions along that line.
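The axial normal filtering above can be sketched as a cone test about the axis: edge points whose normals lie within a cone around the positive axis belong to the first end, and around the negative axis to the second end. The cone half-angle here is a hypothetical value, not one specified by the patent.

```python
import numpy as np

def end_points(points, normals, axis, cone_deg=30.0):
    """Split edge points into the two ends: keep points whose normal lies
    within a cone around +axis (first end) or -axis (second end)."""
    axis = axis / np.linalg.norm(axis)
    cos_t = np.cos(np.radians(cone_deg))
    proj = normals @ axis              # cosine of angle to the axis
    first = points[proj >= cos_t]      # normals close to +axis
    second = points[proj <= -cos_t]    # normals close to -axis
    return first, second

# Demo: three edge points with normals along +z, -z, and +x; with a
# z axis, the first two are assigned to the two ends, the third to neither.
pts = np.array([[0.0, 0.0, 2.0], [0.0, 0.0, -2.0], [1.0, 0.0, 0.0]])
nrms = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0], [1.0, 0.0, 0.0]])
first, second = end_points(pts, nrms, axis=np.array([0.0, 0.0, 1.0]))
```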
Fig. 8 is an effect diagram of matching end point clouds of a first end and a second end of a workpiece with a point cloud model. Referring to fig. 8, in the end point cloud fine matching process of step S533, inputs include a preset point cloud model and optional workpiece pose. The preset point cloud model is a point cloud model of two ends of the cylindrical workpiece manufactured in advance, and the optional workpiece pose is obtained through second point cloud matching in step S520. The matching process includes transforming the model to a location close to the target and causing the inputs to match an end point cloud scene, which is end point cloud data at both ends of the cylindrical workpiece. The fine matching outputs the adjusted pose, for example, outputs a pose conversion matrix. And then, performing fine matching by using an end point cloud matching result obtained by end point cloud matching and the complete scene point cloud represented by the filtering result of normal filtering in the step S300 to obtain the data of the actual pose of the workpiece. And then the actual pose of the workpiece can be utilized to control the manipulator to grasp the workpiece.
Step S530 utilizes the upper and lower edges of the extracted cylindrical workpiece to match, strengthens the cylindrical characteristics in the matching process, and simultaneously uses the two-end edge models to match, so that the axial position of the cylinder can be adjusted, and the problem of axial deviation of the workpiece is solved.
Illustratively, in step S533, the end point clouds of the first end and the second end may be combined before the end point cloud matching is performed. In addition, before the end point cloud matching, for example, after the end point clouds of the first end and the second end are combined, statistical filtering may be performed on the end point clouds of the first end and the second end first, and then the content of step S533 is executed, that is, the end point clouds of the first end and the second end after the combination and the statistical filtering are matched with the point cloud model, so as to obtain a corresponding matching result. Interference noise points and outliers in the end point clouds of the first and second ends can be filtered by statistical filtering. The statistical filtering can be performed by mean filtering, median filtering, gaussian filtering or other filtering modes.
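Statistical filtering of the merged end point cloud is commonly implemented as statistical outlier removal: a point is discarded when its mean distance to its k nearest neighbours is far above the population average. A minimal sketch, with hypothetical parameter values:

```python
import numpy as np

def statistical_filter(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: discard points whose mean distance to
    their k nearest neighbours exceeds mean + std_ratio * std."""
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)    # skip the zero self-distance
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

# Demo: a tight 3x3x3 cluster plus one far-away interference point;
# the outlier is removed and the 27 cluster points survive.
g = np.arange(3.0)
cluster = np.array([[x, y, z] for x in g for y in g for z in g])
noisy = np.vstack([cluster, [[100.0, 100.0, 100.0]]])
clean = statistical_filter(noisy)
```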
Illustratively, the determining the actual pose of the workpiece according to the end matching result in step S533 may include: and carrying out point cloud matching on the matching result of the end point cloud matching and the feature vector of the original point cloud data, and determining the actual pose of the workpiece according to the matching result. The end point cloud matching result is used as pose data input in the fine matching process, and the original point cloud data is used as scene point cloud input in the fine matching process. And (3) performing point cloud matching based on the closest point iteration on the original point cloud data and calculating a matching score, so that the actual pose of the workpiece is screened out.
Fig. 9 is a flow chart of a workpiece pose recognition method according to another embodiment of the present invention. Referring to fig. 9, the workpiece pose recognition method may include the following steps.
Acquiring an original point cloud: and acquiring original point cloud data, wherein the original point cloud data comprises point cloud data of a workpiece and point cloud data of a material frame (carrier).
Downsampling: and determining an interest area in the original point cloud data, and downsampling the interest area.
Normal estimation: and determining a point cloud normal vector of the downsampling result, so as to obtain a feature vector corresponding to the original point cloud data.
Normal filtering 1: and carrying out normal filtering on the point cloud normal vector along the direction vertical to the surface of the cylindrical workpiece.
And (3) point cloud segmentation: and performing point cloud segmentation based on region growing on the normal filtering result.
And (3) screening point clouds: and screening the point cloud segmentation results according to a preset quantity threshold value, so as to obtain the respective workpiece point cloud data of each workpiece.
And (3) rough matching of point clouds: and performing rough matching on the workpiece point cloud data based on characteristic point alignment to obtain candidate workpiece pose of each workpiece in all the workpieces.
Point cloud exact match 1: performing precise matching on the workpiece point cloud data based on closest point iteration, so as to determine the optional workpiece pose from the candidate workpiece poses, the precision of this second point cloud matching being higher than that of the first point cloud matching.
And (3) extracting a point cloud: and acquiring edge parts of the workpiece point cloud data of each workpiece in all the workpieces.
Point cloud merger 1: and merging the edge parts to obtain an edge point cloud.
Positive direction normal filtering: and carrying out normal filtering on the edge point cloud along the axial positive direction of the workpiece to obtain an end point cloud of the first end of the workpiece. Wherein the axial direction is defined by the first end and the second end.
Reverse direction normal filtering: and carrying out normal filtering on the edge point cloud along the axial opposite direction of the workpiece to obtain an end point cloud of the second end of the workpiece. Wherein the axial direction is defined by the first end and the second end.
Point cloud merging 2: the end point clouds of the first end and the second end are merged.
And (3) statistical filtering: and statistically filtering the end point clouds of the first end and the second end.
And (3) point cloud fine matching 2: and performing fine matching on the end point cloud and the point cloud model to obtain an end point cloud matching result.
And 3, point cloud fine matching: and carrying out point cloud fine matching on the end point cloud matching result and the point cloud normal vector, and determining the actual pose of the workpiece according to the matching result.
FIG. 10 is a schematic diagram of an electronic device employing a hardware implementation of a processing system, according to one embodiment of the invention. Referring to fig. 10, the present invention also provides an electronic device 1000, the electronic device 1000 may include a memory 1300 and a processor 1200. The memory 1300 stores execution instructions that the processor 1200 executes, causing the processor 1200 to execute the workpiece pose recognition method of any of the above embodiments. The workpiece is provided with a first end and a second end which are opposite, and a first surface which is positioned between the first end and the second end and is a curved surface.
The electronic device 1000 may include corresponding modules that perform the various steps or steps of the flowcharts described above. Thus, each step or several steps in the flowcharts described above may be performed by respective modules, and the apparatus may include one or more of these modules. A module may be one or more hardware modules specifically configured to perform the respective steps, or be implemented by a processor configured to perform the respective steps, or be stored within a computer-readable medium for implementation by a processor, or be implemented by some combination.
For example, the electronic device 1000 may include an original point cloud acquisition module 1002, a feature vector generation module 1004, a point cloud filtering module 1006, a workpiece point cloud acquisition module 1008, and an actual pose determination module 1010.
The original point cloud acquisition module 1002 is configured to capture original point cloud data, where the original point cloud data includes point cloud data of contents, the contents include the workpiece and a carrier, and the captured point cloud data of the workpiece includes a point cloud of at least part of the first surface of the workpiece.
The feature vector generation module 1004 is configured to generate corresponding feature vectors from the captured original point cloud data, where the feature vectors characterize the geometric features of the point cloud of the surface of the contents.
The point cloud filtering module 1006 is configured to perform point cloud filtering on the feature vector along a normal direction of the point cloud of the first surface to obtain a point cloud set of the workpiece.
The workpiece point cloud acquisition module 1008 is configured to acquire respective workpiece point cloud data of each of the workpieces based on the point cloud set.
The actual pose determining module 1010 is configured to determine an actual pose of a workpiece according to the workpiece point cloud data.
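A minimal sketch of how the five modules 1002 through 1010 could be chained is shown below. The stage implementations are injected as callables, since the disclosure does not fix concrete implementations; the class and parameter names are illustrative assumptions.

```python
class WorkpiecePoseRecognizer:
    """Illustrative chain of the five modules described above."""

    def __init__(self, capture, make_features, filter_points,
                 split_workpieces, match_pose):
        self.capture = capture                    # module 1002: original point cloud acquisition
        self.make_features = make_features        # module 1004: feature vector generation
        self.filter_points = filter_points        # module 1006: normal-direction point cloud filtering
        self.split_workpieces = split_workpieces  # module 1008: per-workpiece point clouds
        self.match_pose = match_pose              # module 1010: actual pose determination

    def run(self):
        raw = self.capture()
        features = self.make_features(raw)
        workpiece_set = self.filter_points(raw, features)
        clouds = self.split_workpieces(workpiece_set)
        return [self.match_pose(c) for c in clouds]
```

Each stage consumes the previous stage's output, mirroring the data flow from raw capture to the actual pose of each workpiece.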
According to the electronic device provided by this embodiment of the invention, the feature vectors of the original point cloud data are filtered along the direction perpendicular to the curved surface of the workpiece, filtering out the material frame in the point cloud and the plate-shaped objects arranged in it. The workpiece is thereby accurately separated from the material frame, and recognition obstacles caused by the workpiece being in tight contact with the frame body or with plate-shaped objects inside the frame are avoided. As a result, the axial boundary of the workpiece can be determined accurately during subsequent pose recognition, the pose of the workpiece can be located precisely, and both the detection rate and the accuracy of workpiece pose recognition are improved.
Illustratively, the workpiece may be a cylindrical workpiece, the first and second ends being the two ends of the cylinder. The workpieces may be stacked horizontally in the carrier, with the end faces of the first and second ends facing the end walls of the carrier and the first surface facing the shooting end of the camera. The workpieces may be placed in two stacks with the same placement orientation within each stack; each stack comprises one or more layers, each layer comprises one or more workpieces placed in parallel, and the end faces of the workpieces in the two stacks may abut or be adjacent to each other.
It should be noted that, details not disclosed in the electronic device 1000 of the present embodiment may refer to details disclosed in the workpiece pose recognition method M10 of the foregoing embodiment according to the present invention, and are not described herein again.
The hardware architecture may be implemented using a bus architecture. The bus architecture may include any number of interconnecting buses and bridges depending on the specific application of the hardware and the overall design constraints. Bus 1100 connects together various circuits including one or more processors 1200, memory 1300, and/or hardware modules. Bus 1100 may also connect various other circuits 1400, such as peripherals, voltage regulators, power management circuits, external antennas, and the like.
Bus 1100 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. A bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one connection line is shown in the figure, but this does not mean there is only one bus or one type of bus.
The invention also provides a readable storage medium storing execution instructions which, when executed by a processor, implement the workpiece pose recognition method M10 of any of the above embodiments.
The invention also provides a robot grasping system comprising the readable storage medium of the above embodiment, so that the robot can grasp based on the execution instructions stored therein. Illustratively, the robot may include a robotic arm having a manipulator.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present invention also includes implementations in which functions are executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art. The processor performs the various methods and processes described above. For example, method embodiments of the present invention may be implemented as a software program tangibly embodied on a machine-readable medium, such as a memory. In some embodiments, part or all of the software program may be loaded and/or installed via the memory and/or a communication interface. When the software program is loaded into the memory and executed by the processor, one or more of the steps of the methods described above may be performed. Alternatively, in other embodiments, the processor may be configured to perform one of the methods described above in any other suitable manner (e.g., by means of firmware).
Logic and/or steps represented in the flowcharts or otherwise described herein may be embodied in any readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It should be understood that portions of the present invention may be implemented in hardware, software, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one of, or a combination of, the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, Programmable Gate Arrays (PGAs), Field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps implementing the method of the above embodiment may be implemented by a program to instruct related hardware, and the program may be stored in a readable storage medium, where the program when executed includes one or a combination of the steps of the method embodiment.
In addition, each functional unit in each embodiment of the present invention may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. The storage medium may be a read-only memory, a magnetic disk or optical disk, etc.
In the description of the present specification, descriptions referring to the terms "one embodiment/mode," "some embodiments/modes," "specific examples," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment/mode or example is included in at least one embodiment/mode or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment/mode or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments/modes or examples. In addition, the various embodiments/modes or examples described in this specification, and the features thereof, may be combined by persons skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
It will be appreciated by persons skilled in the art that the above embodiments are provided for clarity of illustration only and are not intended to limit the scope of the invention. Other variations or modifications will be apparent to persons skilled in the art from the foregoing disclosure, and such variations or modifications are intended to be within the scope of the present invention.
Claims (10)
1. A method of workpiece pose recognition, the workpiece having opposed first and second ends, and a first surface between the first and second ends, the first surface being curved, the method comprising:
shooting and acquiring original point cloud data, wherein the original point cloud data comprises point cloud data of contents, the contents comprise the workpiece and a carrier, and the acquired point cloud data of the workpiece comprises a point cloud of at least part of the first surface of the workpiece;
generating corresponding feature vectors according to the photographed original point cloud data, wherein the feature vectors characterize the point cloud geometric features of the surface of the contents;
performing point cloud filtering on the feature vector along the normal direction of the point cloud of the first surface to obtain a point cloud set of the workpiece;
acquiring respective workpiece point cloud data of each workpiece based on the point cloud set; and
and determining the actual pose of the workpiece according to the workpiece point cloud data.
2. The method according to claim 1, wherein the step of performing point cloud filtering on the feature vector along a normal direction of the point cloud of the first surface to obtain the point cloud set of the workpiece includes:
determining a first filtering threshold based on curved features of the first surface in a camera view; and
performing normal filtering based on the first filtering threshold to obtain the point cloud set of the workpiece.
3. The workpiece pose recognition method according to claim 1, wherein acquiring respective workpiece point cloud data for each of the workpieces based on the point cloud set comprises:
a seed point selection step: selecting seed points from a point set to be segmented of the point cloud set to form a seed point set, and taking a seed point as the current point;
a region segmentation step: for each adjacent point of the current point, determining the angle difference between the normal of the adjacent point and the normal of the current point, and adding the adjacent point to the current region if the angle difference is smaller than a preset angle threshold;
determining the curvature value of the adjacent point, adding the adjacent point to the seed point set as a seed point if the curvature value is smaller than a preset curvature threshold, and deleting the corresponding current point from the seed point set;
selecting a new seed point from the seed point set as the current point and substituting it into the region segmentation step until the seed point set is empty, so as to obtain the point cloud of the current region; and
substituting the unsegmented points of the point cloud set, as a new point set to be segmented, into the seed point selection step until all points in the point cloud set have been segmented.
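The region-growing segmentation recited in claim 3 can be sketched as follows. This is a non-normative numpy illustration: the k-nearest-neighbour adjacency, the use of brute-force distances, and the threshold values are all assumptions of the sketch.

```python
import numpy as np

def region_grow(points, normals, curvatures, angle_thresh, curv_thresh, k=5):
    """Segment a point cloud into smooth regions by growing from seed points."""
    # Brute-force pairwise distances give a simple k-nearest-neighbour adjacency.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn = np.argsort(d, axis=1)[:, :k]
    unassigned = set(range(len(points)))
    regions = []
    while unassigned:                       # new point set to be segmented
        first = min(unassigned)             # seed point selection step
        unassigned.discard(first)
        seed_set, region = [first], {first}
        while seed_set:                     # region segmentation step
            cur = seed_set.pop()
            for nb in knn[cur]:
                if nb not in unassigned:
                    continue
                cos = abs(float(normals[cur] @ normals[nb]))
                angle = np.arccos(np.clip(cos, -1.0, 1.0))
                if angle < angle_thresh:                 # normals agree: join region
                    region.add(nb)
                    unassigned.discard(nb)
                    if curvatures[nb] < curv_thresh:     # smooth point becomes a new seed
                        seed_set.append(nb)
        regions.append(sorted(region))
    return regions
```

Growing stops for a region when its seed set is empty, and segmentation stops when every point has been assigned, matching the two termination conditions of the claim.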
4. The method of claim 1, wherein determining the actual pose of the workpiece from the workpiece point cloud data comprises:
performing first point cloud matching on the workpiece point cloud data to obtain candidate workpiece poses of the workpiece;
performing second point cloud matching on the workpiece point cloud data, and determining an optional workpiece pose from the candidate workpiece poses, wherein the accuracy of the second point cloud matching is higher than that of the first point cloud matching; and
determining the actual pose of the workpiece according to the optional workpiece pose.
5. The method for recognizing the pose of the workpiece according to claim 4, wherein the second point cloud matching method comprises:
a point cloud association step: taking the workpiece point cloud data as a scene point cloud, and determining, for each point to be matched, the closest point in the scene point cloud as its association point, so as to form a point set to be matched and an association point set;
determining barycenter coordinates of the point set to be matched and the associated point set;
determining a pose transformation matrix and a first point set after corresponding transformation according to the barycentric coordinates;
determining an average distance of association points between the first point set and the scene point cloud; and
substituting the points of the first point set, as new points to be matched, into the point cloud association step until an iteration termination condition is met, and obtaining the optional workpiece pose from the latest pose transformation matrix.
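The iterative matching of claim 5 is essentially an ICP-style loop: associate closest points, solve a rigid transform from the barycentres, apply it, and repeat. The sketch below is illustrative only; the SVD-based transform solve and the termination tolerance are assumptions, not part of the claim.

```python
import numpy as np

def icp_step(src, scene):
    """One iteration: closest-point association, then the rigid transform
    computed from the barycentres of the two point sets via SVD."""
    d = np.linalg.norm(src[:, None, :] - scene[None, :, :], axis=-1)
    assoc = scene[np.argmin(d, axis=1)]       # point cloud association step
    mu_s, mu_a = src.mean(0), assoc.mean(0)   # barycentre coordinates
    H = (src - mu_s).T @ (assoc - mu_a)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_a - R @ mu_s
    moved = src @ R.T + t                     # first point set after transformation
    mean_dist = np.linalg.norm(moved - assoc, axis=1).mean()
    return moved, R, t, mean_dist

def fine_match(src, scene, max_iter=30, tol=1e-6):
    """Iterate until the mean association distance stops improving."""
    prev = np.inf
    R_total, t_total = np.eye(src.shape[1]), np.zeros(src.shape[1])
    for _ in range(max_iter):
        src, R, t, dist = icp_step(src, scene)
        R_total = R @ R_total                 # accumulate the pose transformation matrix
        t_total = R @ t_total + t
        if abs(prev - dist) < tol:            # iteration termination condition (assumed)
            break
        prev = dist
    return R_total, t_total, dist
```

The accumulated `(R_total, t_total)` plays the role of the latest pose transformation matrix from which the optional workpiece pose is read off.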
6. The method of claim 5, wherein, when the workpieces are placed in two stacks, determining the actual pose of the workpiece from the optional workpiece pose comprises:
acquiring edge point clouds in the workpiece point cloud data, and determining end point clouds of a first end and a second end of the workpiece according to the edge point clouds; and
performing end point cloud matching according to the end point clouds of the first end and the second end, a preset point cloud model, and the optional workpiece pose, and determining the actual pose of the workpiece according to the end matching result.
7. The method of claim 6, wherein determining end point clouds of the first and second ends of the workpiece from the edge point clouds comprises:
performing point cloud filtering on the edge point cloud along the axial forward and reverse directions of the workpiece to obtain the end point clouds of the first end and the second end of the workpiece, wherein the axial direction is determined according to the first end and the second end.
8. An electronic device, comprising:
a memory storing execution instructions; and
a processor that executes the execution instructions stored in the memory, so that the processor executes the workpiece pose recognition method according to any one of claims 1 to 7.
9. A readable storage medium having stored therein execution instructions which, when executed by a processor, are to implement the workpiece pose recognition method according to any of claims 1 to 7.
10. A robotic grasping system, comprising:
the readable storage medium of claim 9, for grasping by a robot based on execution instructions stored in the readable storage medium.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310778993.4A CN116704035B (en) | 2023-06-28 | 2023-06-28 | Workpiece pose recognition method, electronic equipment, storage medium and grabbing system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116704035A true CN116704035A (en) | 2023-09-05 |
CN116704035B CN116704035B (en) | 2023-11-07 |
Family
ID=87825627
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310778993.4A Active CN116704035B (en) | 2023-06-28 | 2023-06-28 | Workpiece pose recognition method, electronic equipment, storage medium and grabbing system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116704035B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112233181A (en) * | 2020-10-29 | 2021-01-15 | 深圳市广宁股份有限公司 | 6D pose recognition method and device and computer storage medium |
CN112669385A (en) * | 2020-12-31 | 2021-04-16 | 华南理工大学 | Industrial robot workpiece identification and pose estimation method based on three-dimensional point cloud characteristics |
KR20220043847A (en) * | 2020-09-29 | 2022-04-05 | 삼성전자주식회사 | Method, apparatus, electronic device and storage medium for estimating object pose |
WO2022110473A1 (en) * | 2020-11-24 | 2022-06-02 | 深圳市优必选科技股份有限公司 | Robot mapping method and device, computer readable storage medium, and robot |
CN116188540A (en) * | 2023-02-15 | 2023-05-30 | 重庆邮电大学 | Target identification and pose estimation method based on point cloud information |
Non-Patent Citations (1)
Title |
---|
Liang Xue; Huang Zuguang; Zhang Chengrui; Yu Yibiao: "Machine vision-based grasping system for scattered cylindrical parts", Manufacturing Technology & Machine Tool, no. 05, pages 80-84 *
Also Published As
Publication number | Publication date |
---|---|
CN116704035B (en) | 2023-11-07 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||