CN113822946B - Mechanical arm grabbing method based on computer vision - Google Patents
- Publication number: CN113822946B (application CN202111173816.0A)
- Authority: CN (China)
- Prior art keywords: mechanical arm, computer vision, matching
- Prior art date: 2021-10-09
- Legal status: Active
Classifications
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- B25J9/1664 — Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
- G06T5/10 — Image enhancement or restoration using non-spatial domain filtering
- G06T5/30 — Erosion or dilatation, e.g. thinning
- G06T5/40 — Image enhancement or restoration using histogram techniques
- G06T7/13 — Edge detection
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06T7/77 — Determining position or orientation of objects or cameras using statistical methods
- G06T2207/20032 — Median filtering
- G06T2207/20036 — Morphological image processing
- G06T2207/30244 — Camera pose
- Y02P90/02 — Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The invention relates to a mechanical arm grabbing method based on computer vision. The method is divided into two stages: the first stage identifies and locates the object using feature extraction and stereo matching techniques, and the second stage establishes a motion trajectory for the mechanical arm from the space coordinates acquired in the first stage to grab the object. Experiments show that the recognition rate of the method reaches 99.55% and the grasping rate reaches 99%, giving the method good application prospects in engineering applications and logistics warehouse management.
Description
Technical Field
The invention relates to a mechanical arm grabbing method, and in particular to a mechanical arm grabbing method based on computer vision that is simple to operate and offers a high positioning rate and a good recognition rate.
Background
Robots can replace human beings in dangerous, complex and repetitive work, such as sorting express parcels and returning goods to warehouses, as well as in hazardous and harmful work such as high-altitude operation, mine sweeping and paint spraying. The mechanical arm reduces labor cost, ensures production safety and greatly improves production efficiency, and therefore has broad application prospects. With the development of computer vision, it has become possible for robots to complete a wider variety of industrial tasks. Conventional industrial robots typically perform repetitive, single actions according to a preset program, and the program must be recompiled whenever the environment or the work task changes. Endowing the robot with visual perception is therefore of great significance: the robot can adjust its motion behavior according to visual perception and thus better meet more task requirements.
Disclosure of Invention
In view of the above problems, the main purpose of the invention is to provide a mechanical arm grabbing method based on computer vision that is simple to operate and offers a high positioning rate and a good recognition rate.
The invention solves the technical problem through the following technical scheme. The mechanical arm grabbing method based on computer vision comprises the following steps:
step 1: acquiring the internal and external parameters of the depth binocular sensing camera according to the imaging principle, using Zhang's calibration method;
step 2: calibrating a hand and an eye of the mechanical arm, and determining a conversion mode of a base coordinate system and a world coordinate system by adopting an eye-in-hand mode;
step 3: image acquisition is completed through a depth binocular sensing camera, and left and right views are acquired;
step 4: sequentially performing median filtering, gaussian filtering and mean filtering on the left view and the right view;
step 5: matching is carried out by utilizing a scale invariant feature transformation image matching algorithm, and the method specifically comprises the following steps: firstly, constructing a scale space for a reference object and an object to be grabbed, then carrying out scale invariant feature transformation algorithm feature point and edge detection, deleting edge points of the edge detection, realizing feature point matching by determining a main direction of the feature points, and finally deleting outlier matching pairs by adopting a random sampling consistency algorithm to realize matching of the object to be grabbed, so as to finish object identification;
step 6: determining centroid coordinates of an object to be grabbed, specifically comprising: firstly, carrying out image enhancement on left and right views, then carrying out edge detection and morphological processing, then determining an object area to be grabbed, and finally calculating the centroid position of the object to be grabbed;
step 7: realizing space three-dimensional reconstruction, specifically comprising: sequentially utilizing the camera internal and external parameters obtained in the step 1 and the coordinate conversion formula in the step 2 to realize space coordinate conversion;
step 8: calling the configured mechanical arm model through the initialization toolkit under the robot operating system, setting up the kinematic solver and the motion planning library, and completing the modeling of the mechanical arm;
step 9: reading space coordinates based on a mechanical arm base;
step 10: planning a mechanical arm path by utilizing a mechanical arm control module under a robot operating system according to the space coordinates determined in the step 9, and issuing a mechanical arm motion sequence to grasp and place objects;
step 11: the robot arm is initialized again and the end effector is reset.
In a specific embodiment of the present invention, the specific steps of step 1 are as follows:
According to the imaging principle of the camera, a directly acquired image is affected by distortion of the camera lens; a chessboard calibration plate is therefore adopted for calibration to obtain the internal and external parameters of the left and right cameras.
In a specific embodiment of the present invention, the specific steps of step 2 are as follows:
The conversion relation between the camera coordinate system and the mechanical arm coordinate system is obtained through hand-eye calibration: the chessboard calibration plate used in step 1 is fixed at the tail end of the mechanical arm; while keeping the relative pose of the plate and the arm end unchanged, the mechanical arm is moved continuously to obtain photos of the calibration plate at different positions, and the arm parameters and the camera external parameters under the pose corresponding to each photo are recorded; the transformation matrix of the calibration plate with respect to the mechanical arm is thereby determined.
In a specific embodiment of the present invention, the specific steps of step 4 are as follows:
Median filtering, Gaussian filtering and mean filtering are performed in sequence on the obtained left and right views.
The median filtering replaces the gray value of a pixel with the median of the gray values in the neighborhood of the pixel point (x, y), and its kernel function is:
g(x,y) = Mid_(a,b)∈S {f(x−a, y−b)}
where (a, b) ranges over the neighborhood window S, and m and n are the length and width of the rectangular window established around the pixel point (x, y). The Gaussian filtering sets a neighborhood region and a convolution kernel for the pixel point and computes a weighted sum of the neighborhood pixel values with the corresponding values of the convolution kernel, the convolution kernel function being:
G(x,y) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²))
where σ is the parameter controlling the width of the Gaussian function. The mean filtering replaces the center point with the gray mean of the pixel points in the neighborhood, and its kernel function is:
g(x,y) = (1/(mn)) · Σ_(a,b)∈S f(x−a, y−b)
In a specific embodiment of the present invention, the specific steps of step 5 are as follows:
First, the scale space is established using the Gaussian convolution kernel function:
L(x,y,σ) = G(x,y,σ) * H(x,y)
where H(x,y) is the input image and G(x,y,σ) is the Gaussian kernel;
then the extreme points are detected: a Gaussian pyramid is constructed through the scale space, and the difference-of-Gaussian model is:
D(x,y,σ) = L(x,y,kσ) − L(x,y,σ)
where k is a proportionality coefficient;
then the feature points are located and their main directions assigned: low-contrast points are eliminated by solving the extremum of the difference-of-Gaussian model to locate the feature points, and, to further ensure the accuracy of the algorithm, the feature points are counted through a gradient histogram; the gradient of a pixel point is:
∇L(x,y) = (L(x+1,y) − L(x−1,y), L(x,y+1) − L(x,y−1))
the pixel point amplitude is:
m(x,y) = √((L(x+1,y) − L(x−1,y))² + (L(x,y+1) − L(x,y−1))²)
and the gradient direction is:
θ(x,y) = arctan((L(x,y+1) − L(x,y−1)) / (L(x+1,y) − L(x−1,y)));
finally, descriptors are established and the feature points are matched: the amplitude and gradient direction of each point in the neighborhood of a feature point are calculated and gradient histogram statistics are performed; in the feature point matching process, similarity matching with the Euclidean distance as the criterion is adopted.
In a specific embodiment of the present invention, the specific steps of step 6 are as follows:
The centroid of the matched object is determined: image enhancement, edge detection, morphological processing and target area determination are performed in sequence, and the centroid pixel coordinates are determined.
in a specific embodiment of the present invention: the specific steps of the step 8 are as follows:
because the two-dimensional plane coordinates obtained in the step 7 cannot be directly captured, the two-dimensional plane coordinates need to be converted into the space coordinates based on the mechanical arm, and the space three-dimensional coordinates are obtained by three-dimensional reconstruction through the conversion matrix obtained in the step 2.
In a specific embodiment of the present invention, the specific steps of step 10 are as follows:
Forward and inverse kinematics analysis is carried out on the mechanical arm. First, the transformation matrix of each joint is calculated; assuming that the mechanical arm is composed of n joints, the end pose is:
T = A₁A₂…Aₙ
where each matrix Aᵢ in the product is solved from the link parameters of the actual mechanical arm; similarly, inverse kinematics analysis is performed on the mechanical arm and compared with the forward kinematics solution; using the obtained pose solution, the three-dimensional coordinates obtained in the preceding steps are input to carry out motion planning of the mechanical arm and realize grabbing.
The invention has the following positive effects. The mechanical arm grabbing method based on computer vision establishes a real-time grabbing pipeline using scale invariant feature transformation, edge detection, morphology and related algorithms, which improves the recognition rate; the method is simple to operate, offers a high positioning rate and a good recognition rate, and greatly improves working efficiency.
Drawings
FIG. 1 is a schematic diagram of the overall structure of the present invention;
Detailed Description
The preferred embodiments of the present invention are described below with reference to the accompanying drawings to explain the technical scheme of the invention in detail.
Fig. 1 is a schematic diagram of the overall structure of the present invention. As shown in fig. 1: the invention provides a mechanical arm grabbing method based on computer vision, which comprises the following steps:
step 1: acquiring the internal and external parameters of the depth binocular sensing camera according to the imaging principle, using Zhang's calibration method;
step 2: calibrating a hand and an eye of the mechanical arm, and determining a conversion mode of a base coordinate system and a world coordinate system by adopting an eye-in-hand mode;
step 3: image acquisition is completed through a depth binocular sensing camera, and left and right views are acquired;
step 4: sequentially performing median filtering, gaussian filtering and mean filtering on the left view and the right view;
step 5: matching by using a scale-invariant feature transformation image matching algorithm, specifically, firstly constructing a scale space for a reference object and an object to be grabbed, then carrying out scale-invariant feature transformation algorithm feature point and edge detection, deleting edge points of the edge detection, realizing feature point matching by determining a main direction of the feature points, and finally deleting outlier matching pairs by adopting a random sampling consistency algorithm to realize matching of the object to be grabbed, thereby completing object identification;
step 6: determining the centroid coordinates of an object to be grabbed, specifically, performing image enhancement on a left view and a right view, performing edge detection and morphological processing, determining the area of the object to be grabbed, and finally calculating the centroid position of the object to be grabbed;
step 7: realizing space three-dimensional reconstruction, specifically, sequentially utilizing the camera internal and external parameters obtained in the step 1 and the coordinate conversion formula of the step 2 to realize space coordinate conversion;
step 8: calling the configured mechanical arm model through the initialization toolkit under the robot operating system, setting up the kinematic solver and the motion planning library, and completing the modeling of the mechanical arm;
step 9: reading space coordinates based on a mechanical arm base;
step 10: planning a mechanical arm path by utilizing a mechanical arm control module under a robot operating system according to the space coordinates determined in the step 9, and issuing a mechanical arm motion sequence to grasp and place objects;
step 11: the robot arm is initialized again and the end effector is reset.
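Steps 8 to 11 are carried out under the robot operating system. The patent does not name the toolkit or the planning library; purely as an illustrative sketch, assuming ROS with the MoveIt motion planning library, a planning group named "manipulator" and a "home" state defined in the SRDF (none of these names come from the patent itself), the planning and grasping sequence of steps 9 to 11 could look like this:

```python
#!/usr/bin/env python
import sys

import moveit_commander
import rospy

# Assumed setup: ROS with MoveIt and a planning group named "manipulator".
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("grasp_demo", anonymous=True)
arm = moveit_commander.MoveGroupCommander("manipulator")

# Step 9: target space coordinates in the arm-base frame (illustrative values).
x, y, z = 0.4, 0.1, 0.2

# Step 10: plan a path to the target pose and execute the motion sequence.
pose = arm.get_current_pose().pose
pose.position.x, pose.position.y, pose.position.z = x, y, z
arm.set_pose_target(pose)
arm.go(wait=True)
arm.stop()
arm.clear_pose_targets()

# Step 11: re-initialize the arm and reset the end effector.
arm.set_named_target("home")
arm.go(wait=True)
```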
The specific steps of step 1 are as follows:
According to the imaging principle of the camera, a directly acquired image is affected by distortion of the camera lens. Therefore, a chessboard calibration plate is adopted for calibration to obtain the internal parameters and external parameters of the left and right cameras. A specific implementation is detailed in the examples.
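As an illustrative sketch only (the patent does not disclose an implementation), Zhang's calibration with a chessboard plate can be run in OpenCV as follows; the board geometry and image paths are assumed values:

```python
import glob

import cv2
import numpy as np

# Assumed 9x6 inner-corner chessboard with 25 mm squares (illustrative values).
CORNERS = (9, 6)
SQUARE_MM = 25.0

# 3D corner positions in the plate's own coordinate system (Z = 0 plane).
objp = np.zeros((CORNERS[0] * CORNERS[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:CORNERS[0], 0:CORNERS[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calib/left_*.png"):      # hypothetical image paths
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, CORNERS)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Internal parameters (camera matrix K, distortion coefficients) and external
# parameters (one rotation/translation per view); repeat for the right camera.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None)
print("reprojection RMS:", rms)
```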
The specific steps of step 2 are as follows:
The conversion relation between the camera coordinate system and the mechanical arm coordinate system is obtained through hand-eye calibration: the chessboard calibration plate used in step 1 is fixed at the tail end of the mechanical arm. While keeping the relative pose of the plate and the arm end unchanged, the mechanical arm is moved continuously to obtain photos of the calibration plate at different positions, and the arm parameters and the camera external parameters under the pose corresponding to each photo are recorded. The transformation matrix of the calibration plate with respect to the mechanical arm is thereby determined.
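A minimal sketch of solving the hand-eye transformation with OpenCV's calibrateHandEye, assuming the arm poses and the per-photo camera extrinsics described above have already been collected (the patent does not prescribe a solver; the variable names are illustrative):

```python
import cv2
import numpy as np

def hand_eye_transform(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Solve the hand-eye transformation from the pose pairs collected in step 2.

    R_gripper2base/t_gripper2base: end-effector poses read from the arm controller;
    R_target2cam/t_target2cam: calibration-plate poses from the per-photo camera
    extrinsics of step 1 (lists of 3x3 rotations and 3x1 translations).
    """
    R, t = cv2.calibrateHandEye(R_gripper2base, t_gripper2base,
                                R_target2cam, t_target2cam,
                                method=cv2.CALIB_HAND_EYE_TSAI)
    T = np.eye(4)                      # assemble a 4x4 homogeneous matrix
    T[:3, :3], T[:3, 3] = R, t.ravel()
    return T
```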
The specific steps of step 4 are as follows:
Median filtering, Gaussian filtering and mean filtering are performed in sequence on the obtained left and right views.
The median filtering replaces the gray value of a pixel with the median of the gray values in the neighborhood of the pixel point (x, y), and its kernel function is:
g(x,y) = Mid_(a,b)∈S {f(x−a, y−b)}
where (a, b) ranges over the neighborhood window S, and m and n are the length and width of the rectangular window established around the pixel point (x, y). The Gaussian filtering sets a neighborhood region and a convolution kernel for the pixel point and computes a weighted sum of the neighborhood pixel values with the corresponding values of the convolution kernel, the convolution kernel function being:
G(x,y) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²))
where σ is the parameter controlling the width of the Gaussian function. The mean filtering replaces the center point with the gray mean of the pixel points in the neighborhood, and its kernel function is:
g(x,y) = (1/(mn)) · Σ_(a,b)∈S f(x−a, y−b)
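A minimal sketch of the three-stage filtering with OpenCV, assuming 5×5 windows and σ = 1.5 (the patent does not fix the window sizes or σ):

```python
import cv2

left = cv2.imread("left.png")     # hypothetical captured views
right = cv2.imread("right.png")

def denoise(img):
    """Median -> Gaussian -> mean filtering, in the order given in step 4."""
    img = cv2.medianBlur(img, 5)              # median of a 5x5 neighborhood
    img = cv2.GaussianBlur(img, (5, 5), 1.5)  # sigma sets the Gaussian width
    return cv2.blur(img, (5, 5))              # mean (box) filter

left, right = denoise(left), denoise(right)
```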
the specific steps of the step 5 are as follows:
firstly, establishing a scale space by using Gaussian convolution sum functions:
where H (x, y) is the input image.
Then detecting extreme points, constructing a Gaussian pyramid through a scale space, wherein the Gaussian model is as follows:
D(x,y,σ)=L(x,y,kσ)-L(x,y,σ)
where k is a scaling factor.
Then locating the characteristic points and distributing the characteristic point directions, eliminating the locating characteristic points of the low contrast points by solving the extremum of the Gaussian model, and counting the characteristic points through an accuracy gradient histogram for further ensuring the algorithm, wherein the gradient of the pixel points is as follows:
the pixel point amplitude is:
the gradient direction is:
finally, establishing matching between the descriptors and the feature points, calculating the amplitude value and the gradient direction of each feature point in the neighborhood of the feature points, and carrying out gradient histogram statistics; and in the characteristic point matching process, similarity matching with Euclidean distance as a criterion is adopted.
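The matching pipeline of step 5 can be sketched with OpenCV as follows; Lowe's ratio test is a conventional addition not stated in the patent, and findHomography with RANSAC plays the role of the random sampling consistency algorithm that deletes the outlier matching pairs:

```python
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)  # reference object
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)    # view containing the object to grab

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(scene, None)

# Euclidean-distance (NORM_L2) matching, thinned with Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# Random sampling consistency (RANSAC) deletes the outlier matching pairs.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
print(len(inliers), "inlier matches")
```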
The specific steps of step 6 are as follows:
The centroid of the matched object is determined: image enhancement, edge detection, morphological processing and target area determination are performed in sequence, and the centroid pixel coordinates are determined.
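A sketch of the step 6 pipeline with OpenCV, assuming histogram equalization as the image enhancement and the largest contour as the target area (the patent leaves both choices unspecified):

```python
import cv2

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# Image enhancement (histogram equalization assumed), edge detection,
# then morphological closing to join broken edges.
enhanced = cv2.equalizeHist(gray)
edges = cv2.Canny(enhanced, 50, 150)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# Take the largest contour as the object area and compute its centroid
# from the image moments.
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
target = max(contours, key=cv2.contourArea)
m = cv2.moments(target)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
print("centroid pixel coordinates:", (cx, cy))
```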
The specific steps of step 8 are as follows:
Because the two-dimensional plane coordinates obtained in step 7 cannot be used directly for grabbing, they need to be converted into space coordinates based on the mechanical arm; the space three-dimensional coordinates are obtained through three-dimensional reconstruction using the transformation matrix obtained in step 2.
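Illustratively, with the intrinsics obtained in step 1, a depth value from the binocular camera and the hand-eye transformation from step 2, the conversion from the centroid pixel to a space coordinate in the arm-base frame can be sketched as follows (the helper name and the pinhole back-projection are assumptions, not the patent's disclosed formula):

```python
import numpy as np

def pixel_to_base(u, v, depth, K, T_cam2base):
    """Back-project pixel (u, v) at the given depth into the camera frame
    using the intrinsic matrix K from step 1, then map the point into the
    mechanical-arm base frame with the 4x4 transformation from step 2."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    p_cam = np.array([(u - cx) * depth / fx,   # pinhole back-projection
                      (v - cy) * depth / fy,
                      depth,
                      1.0])
    return (T_cam2base @ p_cam)[:3]
```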
The specific steps of step 10 are as follows:
Forward and inverse kinematics analysis is carried out on the mechanical arm. First, the transformation matrix of each joint is calculated; assuming that the mechanical arm is composed of n joints, the end pose is:
T = A₁A₂…Aₙ
where each matrix Aᵢ in the product is solved from the link parameters of the actual mechanical arm. Similarly, inverse kinematics analysis is performed on the mechanical arm and compared with the forward kinematics solution. Using the obtained pose solution, the three-dimensional coordinates obtained in the preceding steps are input to carry out motion planning of the mechanical arm and realize grabbing.
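A sketch of the forward kinematics product T = A₁A₂…Aₙ, assuming standard Denavit-Hartenberg link parameters (the patent states only that each Aᵢ is solved from the link parameters of the actual arm):

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Homogeneous transformation A_i of one joint from standard
    Denavit-Hartenberg parameters (an assumed convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_params):
    """End pose T = A1 A2 ... An for an n-joint arm."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_matrix(theta, d, a, alpha)
    return T
```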
The following are specific examples:
example 1
In example 1, the operating system is Ubuntu 14.04 LTS, the computer CPU is an Intel Core™ i5-6500 @ 3.20 GHz, and the RAM is 4 GB. To verify the reliability of the invention, TP (the number of positive samples correctly predicted as positive) and FP (the number of negative samples incorrectly predicted as positive) are used to evaluate the recognition performance. The ZED camera is started to photograph the object in the field of view, the category and coordinates of the object are identified, the mechanical arm is started to grasp, and the arm returns to the initial position after grasping is finished. Grasping is performed 500 times in sequence; the average recognition accuracy of the method reaches 99.55%, and the grasping rate reaches 99%.
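For clarity, the recognition rate implied by the TP/FP definitions is the precision TP/(TP+FP); a trivial sketch with hypothetical counts:

```python
def recognition_rate(tp, fp):
    """Share of predictions that are correct: TP / (TP + FP)."""
    return tp / (tp + fp)

# Hypothetical counts, for illustration only; the patent reports 99.55%
# average recognition accuracy over its own 500 grasping trials.
print(f"{recognition_rate(498, 2):.2%}")  # -> 99.60%
```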
The foregoing has shown and described the basic principles, main features and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the foregoing embodiments; the embodiments and description merely illustrate the principles of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims and their equivalents.
Claims (8)
1. A mechanical arm grabbing method based on computer vision, characterized in that the method comprises the following steps:
step 1: acquiring the internal and external parameters of the depth binocular sensing camera according to the imaging principle, using Zhang's calibration method;
step 2: calibrating a hand and an eye of the mechanical arm, and determining a conversion mode of a base coordinate system and a world coordinate system by adopting an eye-in-hand mode;
step 3: image acquisition is completed through a depth binocular sensing camera, and left and right views are acquired;
step 4: sequentially performing median filtering, gaussian filtering and mean filtering on the left view and the right view;
step 5: matching is carried out by utilizing a scale invariant feature transformation image matching algorithm, and the method specifically comprises the following steps: firstly, constructing a scale space for a reference object and an object to be grabbed, then carrying out scale invariant feature transformation algorithm feature point and edge detection, deleting edge points of the edge detection, realizing feature point matching by determining a main direction of the feature points, and finally deleting outlier matching pairs by adopting a random sampling consistency algorithm to realize matching of the object to be grabbed, so as to finish object identification;
step 6: determining centroid coordinates of an object to be grabbed, specifically comprising: firstly, carrying out image enhancement on left and right views, then carrying out edge detection and morphological processing, then determining an object area to be grabbed, and finally calculating the centroid position of the object to be grabbed;
step 7: realizing space three-dimensional reconstruction, specifically comprising: sequentially utilizing the camera internal and external parameters obtained in the step 1 and the coordinate conversion formula in the step 2 to realize space coordinate conversion;
step 8: calling the configured mechanical arm model through the initialization toolkit under the robot operating system, setting up the kinematic solver and the motion planning library, and completing the modeling of the mechanical arm;
step 9: reading space coordinates based on a mechanical arm base;
step 10: planning a mechanical arm path by utilizing a mechanical arm control module under a robot operating system according to the space coordinates determined in the step 9, and issuing a mechanical arm motion sequence to grasp and place objects;
step 11: the robot arm is initialized again and the end effector is reset.
2. The computer vision-based mechanical arm grabbing method of claim 1, characterized in that the specific steps of step 1 are as follows:
according to the imaging principle of the camera, a directly acquired image is affected by distortion of the camera lens; a chessboard calibration plate is therefore adopted for calibration to obtain the internal and external parameters of the left and right cameras.
3. The computer vision-based mechanical arm grabbing method of claim 1, characterized in that the specific steps of step 2 are as follows:
the conversion relation between the camera coordinate system and the mechanical arm coordinate system is obtained through hand-eye calibration: the chessboard calibration plate used in step 1 is fixed at the tail end of the mechanical arm; while keeping the relative pose of the plate and the arm end unchanged, the mechanical arm is moved continuously to obtain photos of the calibration plate at different positions, and the arm parameters and the camera external parameters under the pose corresponding to each photo are recorded; the transformation matrix of the calibration plate with respect to the mechanical arm is thereby determined.
4. The computer vision-based mechanical arm grabbing method of claim 1, characterized in that the specific steps of step 4 are as follows:
median filtering, Gaussian filtering and mean filtering are performed in sequence on the obtained left and right views;
the median filtering replaces the gray value of a pixel with the median of the gray values in the neighborhood of the pixel point (x, y), and its kernel function is:
g(x,y) = Mid_(a,b)∈S {f(x−a, y−b)}
where (a, b) ranges over the neighborhood window S, and m and n are the length and width of the rectangular window established around the pixel point (x, y); the Gaussian filtering sets a neighborhood region and a convolution kernel for the pixel point and computes a weighted sum of the neighborhood pixel values with the corresponding values of the convolution kernel, the convolution kernel function being:
G(x,y) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²))
where σ is the parameter controlling the width of the Gaussian function; the mean filtering replaces the center point with the gray mean of the pixel points in the neighborhood, and its kernel function is:
g(x,y) = (1/(mn)) · Σ_(a,b)∈S f(x−a, y−b).
5. The computer vision-based mechanical arm grabbing method of claim 1, characterized in that the specific steps of step 5 are as follows:
first, the scale space is established using the Gaussian convolution kernel function:
L(x,y,σ) = G(x,y,σ) * H(x,y)
where H(x,y) is the input image and G(x,y,σ) is the Gaussian kernel;
then the extreme points are detected: a Gaussian pyramid is constructed through the scale space, and the difference-of-Gaussian model is:
D(x,y,σ) = L(x,y,kσ) − L(x,y,σ)
where k is a proportionality coefficient;
then the feature points are located and their main directions assigned: low-contrast points are eliminated by solving the extremum of the difference-of-Gaussian model to locate the feature points, and, to further ensure the accuracy of the algorithm, the feature points are counted through a gradient histogram; the gradient of a pixel point is:
∇L(x,y) = (L(x+1,y) − L(x−1,y), L(x,y+1) − L(x,y−1))
the pixel point amplitude is:
m(x,y) = √((L(x+1,y) − L(x−1,y))² + (L(x,y+1) − L(x,y−1))²)
and the gradient direction is:
θ(x,y) = arctan((L(x,y+1) − L(x,y−1)) / (L(x+1,y) − L(x−1,y)));
finally, descriptors are established and the feature points are matched: the amplitude and gradient direction of each point in the neighborhood of a feature point are calculated and gradient histogram statistics are performed; in the feature point matching process, similarity matching with the Euclidean distance as the criterion is adopted.
6. The computer vision-based mechanical arm grabbing method of claim 1, characterized in that the specific steps of step 6 are as follows:
the centroid of the matched object is determined: image enhancement, edge detection, morphological processing and target area determination are performed in sequence, and the centroid pixel coordinates are determined.
7. The computer vision-based mechanical arm grabbing method of claim 1, characterized in that the specific steps of step 8 are as follows:
because the two-dimensional plane coordinates obtained in step 7 cannot be used directly for grabbing, they need to be converted into space coordinates based on the mechanical arm; the space three-dimensional coordinates are obtained through three-dimensional reconstruction using the transformation matrix obtained in step 2.
8. The computer vision-based mechanical arm grabbing method of claim 1, characterized in that the specific steps of step 10 are as follows:
forward and inverse kinematics analysis is carried out on the mechanical arm; first, the transformation matrix of each joint is calculated, and assuming that the mechanical arm is composed of n joints, the end pose is:
T = A₁A₂…Aₙ
where each matrix Aᵢ in the product is solved from the link parameters of the actual mechanical arm; similarly, inverse kinematics analysis is performed on the mechanical arm and compared with the forward kinematics solution; using the obtained pose solution, the three-dimensional coordinates obtained in the preceding steps are input to carry out motion planning of the mechanical arm and realize grabbing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202111173816.0A | 2021-10-09 | 2021-10-09 | Mechanical arm grabbing method based on computer vision
Publications (2)
Publication Number | Publication Date
---|---
CN113822946A | 2021-12-21
CN113822946B | 2023-10-20
Family (ID=78919955)
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202111173816.0A | Mechanical arm grabbing method based on computer vision (granted as CN113822946B, active) | 2021-10-09 | 2021-10-09
Country Status (1)
Country | Link
---|---
CN | CN113822946B (en)
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN114332249B | 2022-03-17 | 2022-05-24 | 常州铭赛机器人科技股份有限公司 | Camera vision internal segmentation type hand-eye calibration method
CN115383740A | 2022-07-21 | 2022-11-25 | 江苏航鼎智能装备有限公司 | Mechanical arm target object grabbing method based on binocular vision
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CA3026538C | 2016-08-01 | 2021-01-05 | Novartis Ag | Integrated ophthalmic surgical system
US20210030483A1 | 2019-07-29 | 2021-02-04 | Verily Life Sciences Llc | Surgery tool segmentation with robot kinematics
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
WO2015024407A1 | 2013-08-19 | 2015-02-26 | 国家电网公司 | Binocular vision navigation system and method based on power robot
CN105740899A | 2016-01-29 | 2016-07-06 | | Machine vision image feature point detection and matching combination optimization method
CN107300100A | 2017-05-22 | 2017-10-27 | 浙江大学 | Online CAD model-driven vision guidance approach method for a tandem mechanical arm
WO2021023315A1 | 2019-08-06 | 2021-02-11 | 华中科技大学 | Hand-eye-coordinated grasping method based on the fixation point of a person's eye
CN112132894A | 2020-09-08 | 2020-12-25 | 大连理工大学 | Mechanical arm real-time tracking method based on binocular vision guidance
CN112894815A | 2021-01-25 | 2021-06-04 | 西安工业大学 | Method for detecting optimal position and posture for article grabbing by a visual servo mechanical arm
Non-Patent Citations (2)
Title
---
Li Yi, Jin Shoufeng, Yin Jiajie, Tong Mengyuan, Chen Yang. Visual positioning method of a cheese-yarn handling robot based on Kinect V2. Basic Sciences Journal of Textile Universities (纺织高校基础科学学报), No. 03.
Liu Jinghua, Zhong Peisi, Liu Mei. Research on workpiece recognition and grasping method based on an improved SURF_FREAK algorithm. Machine Tool & Hydraulics (机床与液压), No. 23.
Also Published As
Publication number | Publication date
---|---
CN113822946A | 2021-12-21
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant