CN116408790B - Robot control method, device, system and storage medium - Google Patents
- Publication number
- CN116408790B (application CN202111673300.2A)
- Authority
- CN
- China
- Prior art keywords
- target
- positioning information
- robot
- control period
- tool
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- B25J9/1669: Programme controls characterised by programming, planning systems for manipulators; characterised by special application, e.g. multi-arm co-operation, assembly, grasping
- B25J13/00: Controls for manipulators
- B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J9/16: Programme controls
- Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
- Numerical Control (AREA)
Abstract
A robot control method, device, system and storage medium. The method includes: according to the positioning information of the tool tip and the positioning information of the target obtained in the current control period, predicting the positioning information of the tool tip and the positioning information of the target in the next control period, generating a control signal according to the prediction result, and sending the control signal to the robot so that the tool tip tracks the target. The disclosure also provides a device, a system and a storage medium for implementing the robot control method. According to the embodiments of the disclosure, the control precision of the robot can be improved even when the frame rate of the image acquisition device is low.
Description
Technical Field
The present disclosure relates to the field of artificial intelligence, and more particularly, to a robot control method, apparatus, system, and storage medium.
Background
Since their appearance, industrial robots have greatly advanced the automation of industrial production lines. The introduction of sensors, such as force sensors and laser sensors, gives the robot more capabilities, enabling it to acquire some information about the target, such as torque or a rough shape, so that the robot can perform more complex work.
However, as industrial scenarios evolve, application scenarios become more and more complex. In particular, on some flexible production lines the robot is required not only to operate along a preset trajectory, but also to recognize a dynamic target and the environment it is located in online, so as to complete the operation on that target. Existing robots are not yet well adapted to this requirement.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
An embodiment of the disclosure provides a robot control method, wherein the robot uses a tool tip to perform a predetermined operation on a target; the robot control method comprises the following steps:
Acquiring positioning information of the tool tip and positioning information of the target in the current control period;
Predicting the positioning information of the tool tip and the positioning information of the target in the next control period according to the positioning information of the tool tip and the positioning information of the target in the current control period;
and generating a control signal according to the positioning information of the tool tip and the positioning information of the target in the next control period, and sending the control signal to the robot so that the tool tip tracks the target.
The embodiment of the disclosure also provides a robot control device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the robot control method according to any embodiment of the disclosure.
The embodiment of the disclosure also provides a robot control system, which comprises:
a target detection device, arranged to obtain a detection result of the target; and
an upper computer, configured to execute the robot control method according to any embodiment of the disclosure, and to acquire the positioning information of the target in the current control period according to the detection result of the target detection device in the current control period.
The embodiment of the disclosure also provides a computer readable storage medium for storing a program for controlling a robot; when the program for controlling the robot is read and executed by a processor, the robot control method according to any embodiment of the disclosure is implemented.
The embodiment of the disclosure can improve the control precision of the robot under the condition that the frame rate of the image acquisition equipment is low.
Other aspects will become apparent upon reading and understanding the accompanying drawings and detailed description.
Drawings
The accompanying drawings are included to provide an understanding of embodiments of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain, without limitation, the embodiments.
Fig. 1 is a flowchart of a robot control method in embodiment 1 of the present disclosure;
Fig. 2 is a flowchart of a robot control method in embodiment 2 of the present disclosure;
Fig. 3 is a flowchart of a robot control method in embodiment 3 of the present disclosure;
Fig. 4 is a schematic view of the process in embodiment 4 of the present disclosure;
Fig. 5 is a schematic view of a robot control device in embodiment 6 of the present disclosure;
Fig. 6 is a schematic diagram of a robot control system in embodiment 7 of the present disclosure.
Detailed Description
The present disclosure describes several embodiments, but the description is illustrative and not limiting, and it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible within the scope of the embodiments described in the present disclosure.
In the description of the present disclosure, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment described as "exemplary" or "e.g." in this disclosure should not be taken as preferred or advantageous over other embodiments. "and/or" herein is a description of an association relationship of an associated object, meaning that there may be three relationships, e.g., a and/or B, which may represent: a exists alone, A and B exist together, and B exists alone. "plurality" means two or more than two. In addition, for the purpose of clearly describing the technical solutions of the embodiments of the present disclosure, words such as "first", "second", etc. are used to distinguish the same item or similar items having substantially the same function and effect. It will be appreciated by those of skill in the art that the words "first," "second," and the like do not limit the amount and order of execution, and that the words "first," "second," and the like do not necessarily differ.
In describing representative exemplary embodiments, the specification may have presented the method and/or process as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. Other sequences of steps are possible as will be appreciated by those of ordinary skill in the art. Accordingly, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. Furthermore, the claims directed to the method and/or process should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the embodiments of the present disclosure.
In order to solve the application problems of industrial robots on flexible production lines, one solution is to control the robot by converting the dynamic scene into a static one. For example, many production lines operate in a start-stop rhythm: after a target enters the work area, the production line is paused, the robot then operates, and after the operation is completed the production line is restarted. However, temporarily suspending a dynamic production line reduces the working efficiency of the whole line.
In order to avoid suspending the production line, another related approach is to provide a temporary workbench in the working area: after the target enters the working area, the target workpiece is pushed onto the temporary workbench, the robot completes its operation in this static scene on the workbench, and the target is then pushed back onto the production line. Although suspension of the production line can be avoided by providing a temporary workbench, the complexity and cost of the production line are increased.
In order to dynamically assemble various parts, a follow-up device may be adopted: for example, after the target enters the working area, it is loaded onto a tray equipped with an encoder and continues to move forward; the encoder provides the position of the target in real time, the robot linearly tracks the target using this position information, and the operation on the target is performed while the robot and the target are relatively stationary. However, the encoder only provides one-dimensional position information and position changes in other directions are not monitored, so this approach does not generalize well to broader dynamic production lines; in addition, slipping and similar effects can cause the encoder's feedback position to drift, which affects the assembly success rate.
In addition, there is a high-speed-camera grasping method: the camera obtains two-dimensional image information of the target in a short time, rough two-dimensional position information of the target (such as a parcel) is obtained through simple processing, and a dedicated robot arm then performs high-speed grasping. However, the accuracy of the obtained two-dimensional position information is not high, and because the motion of the production line introduces a delay error into this information, this mode is not suitable for operations such as high-accuracy assembly.
The above methods enable robots to be applied in certain specific scenarios by decomposing the task or handling the scenario in a special way, but they are not suitable for large-scale application in dynamic, flexible scenarios.
To solve the above problem, one may consider rapidly obtaining three-dimensional information of the target with multi-line laser scanning cameras to guide the movement of the robot. However, laser scanning cameras are usually very expensive, and processing the three-dimensional information in real time puts pressure on the computing speed of the industrial PC, requiring very powerful computing resources and specially optimized processing algorithms; the software and hardware costs therefore increase greatly, and this scheme cannot yet be widely adopted.
In some dynamic scenarios, high-frame-rate closed-loop control may be considered to reduce errors and achieve high-accuracy tracking. However, to realize high-frame-rate (high-control-frequency) closed-loop control, every link of the loop must be accelerated to improve the response speed. First, high-speed visual positioning must be achieved, which places high demands on the image acquisition speed of the camera and the processing speed of the vision algorithm; second, the control algorithm must be fast enough and must communicate with the robot at high speed, which increases the development difficulty of the control algorithm and requires a real-time system at the hardware level, increasing the development cost. Generally, a control frequency (frame rate) of about 200 Hz is required to achieve an accuracy of 0.5%, but raising the control frequency (frame rate) sharply increases the camera hardware cost and the software development cost.
Embodiments of the present disclosure address the difficulty of achieving high-precision tracking of a dynamic target under low-frame-rate visual feedback and a low control rate, making high-precision robot operation in dynamic scenes on industrial production lines possible. The present embodiment provides a robot control method in which a tool tip of the robot (such as, but not limited to, a gripper jaw) is used to perform a predetermined operation on a target, such as gripping; the target may be, but is not limited to, a part to be processed on a production line, which may move with the line, i.e. the position of the target changes dynamically.
As shown in fig. 1, a robot control method provided in an embodiment of the present disclosure includes:
Step 110, acquiring positioning information of the target in the current control period and positioning information of the tool tip of the robot in the current control period;
Step 120, predicting the positioning information of the tool tip and the positioning information of the target in the next control period according to the positioning information of the tool tip and the positioning information of the target in the current control period;
Step 130, generating a control signal according to the positioning information of the tool tip and the positioning information of the target in the next control period, and sending the control signal to the robot so that the tool tip tracks the target.
In this embodiment, because motion prediction is performed, the poor real-time accuracy of a moving target's position at a low frame rate can be overcome to a certain extent. In addition, the control command for the robot is generated from the predicted positions of the target and of the robot tool tip in the next control period, which amounts to pre-position compensation, so the tracking accuracy of the robot tool tip with respect to the target at a low frame rate can be improved.
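For illustration only, the predict-and-compensate loop described above can be sketched as follows in Python; the objects and method names (camera.get_target_pose, robot.get_tool_pose, predictor.predict, controller.compute, robot.send_control) are hypothetical placeholders rather than interfaces defined by this disclosure, and a 20 Hz control frequency is assumed as in the text.

```python
import time

CONTROL_FREQ_HZ = 20.0          # low frame rate assumed in the description
PERIOD = 1.0 / CONTROL_FREQ_HZ  # duration of one control period

def control_loop(robot, camera, predictor, controller):
    while True:
        t0 = time.time()
        target_pose = camera.get_target_pose()  # positioning info of the target
        tool_pose = robot.get_tool_pose()       # positioning info of the tool tip
        # Predict both poses one control period ahead (plus system delay).
        target_next, tool_next = predictor.predict(target_pose, tool_pose)
        # Generate a control signal from the predicted poses and send it.
        signal = controller.compute(tool_next, target_next)
        robot.send_control(signal)
        # Keep the loop running at the fixed control frequency.
        time.sleep(max(0.0, PERIOD - (time.time() - t0)))
```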
In the present embodiment, the length of the control period is the reciprocal of the control frequency f, i.e. 1/f; the control frequency is the same as the frame rate. In the low-frame-rate case the control frequency can be, but is not limited to, 20 Hz, so the response speed does not need to be greatly increased and the software and hardware costs do not rise significantly.
In the present embodiment, the robot control method described above may be performed by, but is not limited to, an upper computer. The positioning information of the robot tool tip can be obtained through communication between the upper computer and the robot. When the positioning information consists of coordinates, the coordinates given by the robot and the coordinates contained in the target's positioning information can be converted into the same coordinate system through coordinate-system calibration completed in advance.
In this embodiment, the target whose positioning information is obtained may be a part or another object to be operated on by the controlled robot. The robot may perform a corresponding action in the next control period according to the control signal, such as moving the tool tip according to the movement parameters indicated by the control signal and performing a specified action (such as, but not limited to, gripping) after the movement is completed.
In an exemplary embodiment of the present disclosure, the control signal includes a movement parameter generated according to the positioning information of the tool tip and the positioning information of the target in a next control cycle to control the robot to move the tool tip to a designated position for operating the target according to the movement parameter. Wherein the movement parameters may include one or more of the following: trajectory, distance, direction of rotation, angle of rotation, etc.
In an exemplary embodiment of the present disclosure, the obtaining the positioning information of the target in the current control period includes:
Obtaining an image of the target captured by an image capturing device in a current control period;
according to the image of the target, adopting a trained deep neural network to initially position the target, and obtaining an initial positioning result;
and extracting feature points according to the initial positioning result and matching them with a preset feature-point template, so as to obtain the positioning information of the target in the current control period.
In this embodiment, the image acquisition device first captures an image of the target, and the image is then input into the deep neural network for rough initial positioning. The deep neural network can be trained in advance on a large number of sample images, which improves the robustness of target recognition. From the output of the deep neural network, a point or a region indicating the approximate location of the target is obtained. More accurate positioning is then performed on this basis: in the vicinity of the point or region given by the deep neural network, precise matching against a preset template is performed on the target's features, yielding high-accuracy positioning information of the target. To improve the matching success rate, the feature points are chosen as far as possible at corner points or edges of the target, or at other areas of the target with distinct features.
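As an illustration of this coarse-to-fine positioning, a minimal sketch is given below. It assumes an OpenCV-style toolchain, a pretrained detector object returning one bounding box, and a pre-stored descriptor template; these interfaces, the 20-match cutoff and the centroid output are assumptions for illustration, not part of this disclosure.

```python
import cv2
import numpy as np

def locate_target(image, detector, template_desc):
    # Coarse positioning: a trained deep network proposes a rough region.
    # `detector.detect` is assumed to return one (x, y, w, h) box for the target.
    x, y, w, h = detector.detect(image)
    roi = image[y:y + h, x:x + w]

    # Fine positioning: extract feature points in that region and match them
    # against the pre-stored feature-point template of the target.
    sift = cv2.SIFT_create()
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    keypoints, desc = sift.detectAndCompute(gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).match(desc, template_desc)
    matches = sorted(matches, key=lambda m: m.distance)[:20]

    # Shift matched points back to full-image coordinates and use them as the
    # high-precision tracking reference for this control period.
    pts = np.float32([keypoints[m.queryIdx].pt for m in matches]) + np.float32([x, y])
    return pts.mean(axis=0)
```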
In this embodiment, the positioning information of the target may be, but is not limited to, coordinates; the coordinates may be three-dimensional position coordinates along the x, y and z axes, or three-dimensional position coordinates together with Euler angle coordinates around the three axes. When the Euler angle coordinates are included, it is necessary to acquire the three-dimensional position coordinates of three points of the target that do not lie in the same plane.
In an exemplary embodiment of the present disclosure, the predicting the positioning information of the tool end and the positioning information of the target in the next control period according to the positioning information of the tool end and the positioning information of the target in the current control period includes:
Obtaining the state X_k at the current time according to the positioning information of the tool tip and the positioning information of the target in the current control period;
Predicting the state X_{k+1} at the next time using a Kalman filter model, where the state at the next time comprises the positioning information of the tool tip and the positioning information of the target in the next control period;
wherein the duration dt_k between the next time and the current time is equal to the duration Δt of one control period plus a delay Δt_ke; the delay Δt_ke is a preset value, or can be calculated according to a preset delay model.
In this embodiment, a Kalman filter model is used for prediction, yielding the respective positioning information of the target and of the tool tip in the next control period.
Assuming the positioning information consists of coordinates, the coordinates of the target are (x_t, y_t, z_t, a_t, b_t, c_t), where x_t, y_t, z_t are the three-dimensional position coordinates and a_t, b_t, c_t are the Euler angles around the three axes; the coordinates of the robot tool tip are (x_r, y_r, z_r, a_r, b_r, c_r), where x_r, y_r, z_r are the three-dimensional position coordinates of the robot tool tip and a_r, b_r, c_r are its Euler angles around the three axes.
In order to control the robot effectively, the real-time accuracy of the coordinates of the robot and of the target must be ensured; however, owing to the sensors (in particular the target-tracking sensor) and the image processing speed, the obtained coordinates of the target tend to lag behind the true values at low frame rates. In this embodiment, the positions of the target and of the robot tool tip are therefore pre-compensated through the Kalman filter, and the error between the obtained coordinate values and the true values is corrected by prediction. According to the Kalman filter algorithm, the state equation and the measurement equation can be expressed as:
X_{k+1} = A·X_k + W_k
Z_k = H·X_k + Q_k
where the noise terms are set to Q_k = 0.1·I and W_k = 0.1·I, with I a 6×6 identity matrix; the state X_k comprises the coordinates, velocities and accelerations of the target and of the robot tool tip at time k, i.e. in the current control period; X_{k+1} is the state at time k+1, i.e. the coordinates, velocities and accelerations of the target and of the robot tool tip at the next time (the next control period).
In this embodiment, the time increment in the state transition matrix A, i.e. the duration between time k+1 and time k, is dt_k = Δt + Δt_ke, where Δt is the time interval at a fixed control frequency (fixed frame rate), which can be written as Δt = 1/f with f the control frequency (frame rate), and Δt_ke is the delay of the system; adding Δt_ke can be regarded as pre-time compensation.
In this embodiment, the delay of the system may be a preset value, such as an empirical value or a simulated value; alternatively, the delay can be calculated from a preset delay model.
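A minimal sketch of the prediction step with pre-time compensation is shown below. The constant-acceleration, per-coordinate layout of the state vector and the use of a scaled identity for the process noise are illustrative assumptions; only the scalar value 0.1 follows the text, and the measurement-update half of the filter is omitted.

```python
import numpy as np

def predict_next_state(x_k, P_k, dt, delay, q=0.1):
    # Kalman prediction with pre-time compensation: the effective step length
    # is one control period plus the estimated system delay.
    dt_k = dt + delay
    n = x_k.size // 3                       # number of tracked coordinates
    # Constant-acceleration transition for each coordinate block [p, v, a].
    f = np.array([[1.0, dt_k, 0.5 * dt_k ** 2],
                  [0.0, 1.0,  dt_k],
                  [0.0, 0.0,  1.0]])
    A = np.kron(np.eye(n), f)               # block-diagonal transition matrix
    Q = q * np.eye(x_k.size)                # process noise, 0.1 * identity
    x_next = A @ x_k                        # predicted state (tool tip and target)
    P_next = A @ P_k @ A.T + Q              # predicted covariance
    return x_next, P_next
```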
In an exemplary embodiment of the present disclosure, the positioning information may include position coordinates and angle coordinates; the generating a control signal based on the positioning information of the tool tip and the positioning information of the target in a next control cycle comprises:
calculating a position coordinate difference and an angle coordinate difference between the tool tip and the target according to the positioning information of the tool tip and the positioning information of the target in the next control period;
and taking the position coordinate difference and the angle coordinate difference as inputs of a PID control algorithm, and generating the control signal through the PID control algorithm.
In this embodiment, if the positioning information only includes three-dimensional position information and does not include angle information, a PID control algorithm may still be used to generate a control signal for the robot according to a difference in position coordinates between the end of the robot tool and the target.
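A sketch of a PID step driven by the predicted difference is given below; the gains, the sampling time and treating the difference as a plain vector are assumptions made only for illustration.

```python
import numpy as np

class SimplePID:
    def __init__(self, kp=1.0, ki=0.0, kd=0.1, dt=0.05):
        # dt would be one control period, e.g. 1/20 s at a 20 Hz control frequency.
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, error):
        # `error` is the predicted tool-tip/target difference (position and/or angle).
        error = np.asarray(error, dtype=float)
        self.integral = self.integral + error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```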
An embodiment of the present disclosure further provides a robot control method, as shown in fig. 2, including:
Step 210, acquiring positioning information of the tool tip and positioning information of the target in the current control period, and calculating the delay of the current control period according to a preset delay model;
Step 220, predicting the positioning information of the tool tip and the positioning information of the target after a first duration according to the positioning information of the tool tip and the positioning information of the target in the current control period;
wherein the first duration is equal to the duration of one control period plus the delay of the current control period calculated in step 210.
Step 230, generating a control signal according to the positioning information of the tool tip and the positioning information of the target in the next control period, and sending the control signal to the robot so that the tool tip tracks the target.
In this embodiment, the first duration can be regarded as the interval between adjacent control periods with pre-delay compensation applied. By predicting the positioning information after the first duration, rather than after only one control period, the delay introduced by the system itself is taken into account, making the prediction more accurate and thereby improving the accuracy of target tracking.
In this embodiment, if the prediction in step 220 is performed by Kalman filtering, the first duration may be used as the time increment dt_k in the state transition matrix A. That is, the time between the moments of the predicted state X_{k+1} and the current state X_k is the first duration.
In an exemplary embodiment of the present disclosure, the preset delay model includes: a robot delay model and a detection delay model;
The robot delay model is used to obtain a first delay in the current control period; specifically, the moment at which the robot receives the control signal is taken as the start, the moment at which the robot tool tip completes the action indicated by the control signal is taken as the end, and the duration between these two moments is the first delay;
The detection delay model is used to obtain a second delay in the current control period: the moment at which detection of the target begins is taken as the start, the moment at which the positioning result of the target is obtained is taken as the end, and the duration between these two moments is the second delay;
delay of the current control period = first delay + second delay.
In this embodiment, the delay required for pre-compensation includes two parts, one is the delay caused by the execution of the robot itself and the other is the delay caused by the target positioning.
In this embodiment, the parameters of the robot delay model may be fitted in an input/output data-driven manner: a series of control signals instructing the robot to perform certain actions is designed, the position coordinates of the tool tip at different moments are recorded while the robot actually moves according to these control signals, and the resulting time series is aligned with the actions indicated by the control signals, so that the delay between receiving a control signal and the robot completing the corresponding action can be obtained. In this embodiment, the parameters of the robot delay model may be obtained using an EM algorithm. In each control period, the first delay of the current control period can then be calculated through the robot delay model.
In this embodiment, the detection delay model mainly captures the delay of the target positioning process: multiple groups of delays between detection and output of the target positioning information are measured through repeated experiments, and the parameters of the detection delay model are calculated from them. In each control period, the second delay of the current control period can then be calculated through the detection delay model.
In this embodiment, if a camera is used to capture the target in order to detect and then locate it, the detection delay model is a visual delay model. Image capture by the camera is externally triggered; in each experiment, the time at which capture of the target is completed and the image is output, the time at which the positioning information of the target is obtained by the target positioning algorithm, and the time at which the positioning information is output (for example to the next algorithm, such as the prediction algorithm) are all recorded in detail. From these records the parameters of the visual delay model are obtained, so that in each control period the second delay of the current control period can be calculated through the visual delay model.
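A rough sketch of how the two delays could be identified offline and combined per control period is shown below; the averaging and cross-correlation fits are simple stand-ins for the EM-based identification and the experimental procedure described above, and all function names are hypothetical.

```python
import numpy as np

def fit_detection_delay(capture_times, output_times):
    # Detection (vision) delay: mean time from triggered image capture to the
    # moment the target positioning information is output, over repeated runs.
    return float(np.mean(np.asarray(output_times) - np.asarray(capture_times)))

def fit_robot_delay(commanded, measured, dt):
    # Robot execution delay: lag that best aligns the commanded and measured
    # tool-tip trajectories (a simple stand-in for EM-based identification).
    c = np.asarray(commanded, float) - np.mean(commanded)
    m = np.asarray(measured, float) - np.mean(measured)
    lag = int(np.argmax(np.correlate(m, c, mode="full"))) - (len(c) - 1)
    return max(lag, 0) * dt

def current_period_delay(first_delay, second_delay):
    # Total delay used for pre-time compensation in the current control period.
    return first_delay + second_delay
```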
An embodiment of the present disclosure further provides a robot control method, as shown in fig. 3, including steps 310 to 340. Steps 310, 330 and 340 are the same as steps 110, 120 and 130 of the previous embodiment, respectively. The difference from the foregoing embodiment 1 is that, before step 330, the method further includes:
Step 320, adjusting the algorithm parameters used by the prediction according to the difference between the actual positioning information of the tool end and/or the target in the current control period and the positioning information of the tool end and/or the target predicted in the previous control period.
In this embodiment, the parameters used in prediction are optimized according to the deviation between the actual positioning information of the tool tip or the target in the current control period and the predicted value; for example, when Kalman filter prediction is used, the parameters of the state equation and/or the measurement equation can be optimized, so that the prediction precision in the next control period is improved and the robot can track the target more accurately.
In this embodiment, the positioning information of the target in the current control period has already been acquired, so its difference can be obtained by comparing it directly with the result predicted in the previous period. From the information returned by the robot, the upper computer can directly obtain the positioning information of the tool tip after it has finished the action indicated by the control signal in the current control period, and compare it with the positioning information predicted in the previous control period to obtain the difference between the two, which may be, but is not limited to, a coordinate difference. Additionally, positioning information of the target and/or of the simulated robot tool tip may also be obtained from a simulation system, such as a digital twin model.
In this embodiment, the adjustment may be performed according to the difference and a preset rule; for example, the magnitude of the parameter adjustment may be determined from the magnitude of the difference, and the parameters to be adjusted may be determined from the type of difference (such as position difference or angle difference, positive or negative difference), and so on.
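One possible rule-based adjustment is sketched below; scaling the process-noise covariance with the size of the previous period's prediction error is an assumption made for illustration, not the specific rule of this disclosure.

```python
import numpy as np

def adapt_process_noise(Q, predicted_pose, actual_pose, threshold=1e-3,
                        scale_up=1.1, scale_down=0.95):
    # If the last prediction missed by more than the threshold, enlarge the
    # process noise so the filter trusts new measurements more; otherwise
    # shrink it slightly so the prediction stays smooth.
    err = np.linalg.norm(np.asarray(actual_pose) - np.asarray(predicted_pose))
    return Q * (scale_up if err > threshold else scale_down)
```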
An embodiment of the present disclosure provides a robot control method that may be used, but is not limited to, for vision-based dynamic servo control of KUKA robotic arms.
The accuracy of robot control is improved in two respects: the target positioning algorithm and the control algorithm.
In terms of target positioning, the target positioning module of this embodiment combines deep-learning coarse positioning with template-based precise positioning and positions the target based on the image captured by the camera, which increases the processing speed of the target positioning algorithm while meeting the corresponding precision requirements. Coarse positioning based on deep learning can improve positioning robustness through training on a large amount of data; however, because the convolutional layers of the deep neural network used for deep learning are generic, the identified target position (a point or a region) is not accurate enough, so the recognition result of the deep neural network cannot be used directly as a high-precision tracking reference.
In this embodiment, near the rough positioning point or region obtained by deep learning, feature points of the target are extracted for template-based precise matching, yielding high-precision coordinates and thus high-precision positioning of the target. The obtained high-precision coordinates are used as the input for visual closed-loop control.
In this embodiment, an RGB-D (depth) camera may be used as the image capture device to photograph the target, and the obtained RGB image is input to a DNN (deep neural network) for coarse positioning of the target.
In this embodiment, the deep-learning positioning method adopts the yolov algorithm to implement coarse positioning of the target, giving a rough point or region of the target; the coarse positioning result is then used as input to the template-based precise matching algorithm for fine positioning.
In this embodiment, a combination of Harris corner detection and SIFT (Scale-Invariant Feature Transform) search may be used to extract feature points from the coarse positioning result, and the extracted feature points are then matched against a predetermined feature-point template to determine the final tracking reference points on the target. Next, the depth map is aligned with the RGB image using the intrinsic calibration of the RGB-D camera, the mapping of the tracking reference points onto the depth map is determined, and finally the three-dimensional coordinates of the tracking reference points are computed from the depth-map pixel coordinates and the camera calibration parameters.
To achieve six-degree-of-freedom tracking, three-dimensional points on the target that do not lie in the same plane need to be determined, and the features of these points must be clearly defined in the feature-point template. The feature points can be selected at corner points of the target, or at positions where features such as the target's edges are distinct.
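The back-projection of a tracking reference point from the aligned depth map to a three-dimensional coordinate can be sketched as follows, assuming a standard pinhole model with intrinsics fx, fy, cx, cy and a depth value already expressed in metres; these assumptions are for illustration only.

```python
import numpy as np

def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    # Back-project a tracking reference point from aligned depth/RGB pixel
    # coordinates (u, v) to a 3-D point in the camera frame.
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```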
At the control-algorithm level, the control algorithm of this embodiment comprises three closed-loop links, which handle target motion estimation, system delay prediction, and pre-feedback of the fused signal, respectively.
As shown in fig. 4, the first closed-loop link 410 performs online prediction of the motion coordinates of the target 12 and of the tool tip of the robot 13 from the images captured by the camera 11; the prediction result is fed back to the high-level controller module 14 for pre-position compensation, so as to eliminate the tracking error caused by the motion of the target.
In this embodiment, the target positioning module 411 obtains the coordinates of the target 12 at the current moment (the current control period) from the image captured by the camera 11, obtains the coordinates returned by the robot 13, and passes these coordinates to the Kalman filter module 412, which predicts the coordinates of the target 12 and of the robot 13 at the next moment (the next control period). The position difference calculation module 413 can then compute the difference between the tool tip of the robot 13 and the target 12 from the predicted coordinates and transmit it to the high-level controller module 14 for pre-position compensation, i.e. pre-position feedback control. Details of the prediction performed by the Kalman filter module 412 can be found in the foregoing embodiments.
As shown in fig. 4, the second closed-loop link 420 deals with the vision-robot nonlinear system. In this embodiment, the system delay consists of two parts: on one hand, the delay for the robot, after receiving a control signal, to execute the operation the signal requires; on the other hand, the delay of target positioning (including the time taken by the camera to capture an image, the time taken to run the target positioning algorithm on the captured image, the time taken to transmit the positioning information, and so on). The robot delay model 421 and the vision delay model 422 of the system can be obtained through system identification, and prediction of the system delay is realized through these two delay models, forming the closed-loop delay compensation control of the second closed-loop link 420.
In this embodiment, to determine the parameters of the robot delay model 421, an input/output data-driven approach is adopted: a series of robot movements is designed, and during the movements the coordinates of the robot tool tip and their timestamps are acquired in real time and aligned with the planned robot movements; on this basis, the motion of the robot 13 can be identified using an Expectation-Maximization (EM) algorithm to obtain the parameters of the robot delay model 421.
In this embodiment, to obtain the delay involved in tracking the target, given that the images captured by the camera are positioned by the target positioning module 411, the time required for the camera to capture an image, the time required to execute the target positioning algorithm, and the time required to send out the positioning information are measured, and the parameters of the visual delay model 422 are calculated from multiple experiments. To ensure reliable camera acquisition, image capture by the camera is externally triggered, and for each time node the time of capturing and outputting the data, the time at which the target positioning algorithm obtains the target coordinates, and the time at which the coordinates are sent are recorded in detail.
In the second closed-loop link 420, the predictions of the robot delay model 421 and the visual delay model 422 are added to obtain a more accurate system delay Δt_ke at a fixed control frequency (fixed frame rate). In this embodiment, the system delay of the current control period may be recalculated in every control period.
In this embodiment, the delay output by the second closed-loop link 420 may be sent to the high-level controller module 14 for fusion with the first closed-loop link 410; specifically, the high-level controller module 14 may send the delay output by the second closed-loop link 420 to the Kalman filter module 412 for use in prediction.
As shown in fig. 4, the third closed-loop link 430 is used for pre-feedback before the outputs of the first closed-loop link 410 and the second closed-loop link 420 are fused. In this embodiment, the outputs of the first closed-loop link 410 and the second closed-loop link 420 are fused by the high-level controller module 14 into a pre-compensation control signal for position and time, and the signal output by the third closed-loop link 430 can pre-compensate the signals before fusion, for example by adjusting parameters in a model or algorithm. In addition, the third closed-loop link 430 may also be used to provide closed-loop feedback on the position of the target relative to the camera.
Based on the prediction result, the coordinate difference between the robot tool tip and the target 12 in the next control period can be calculated and sent to the PID control module 15, and the PID control module 15 generates a control signal containing the control coordinates to be output to the robot. The control signals may be sent to the robot by the ROS (Robot Operating System) module 16, which forms the final control instructions; the MoveIt and real-time arm servoing units in ROS can realize automatic trajectory planning based on the control coordinates and tracking control based on the target's position points, and the frame rate can be controlled at about 20 Hz.
Specifically, after the second closed-loop link 420 obtains the system delay, the coordinate difference e_{r->t} between the robot tool tip and the target in the current control period is obtained through the position difference calculation module 413 in the first closed-loop link 410, where the position coordinates and the angle coordinates are differenced separately. Note that the angle coordinates here are expressed as Euler angles, which are discontinuous in the Cartesian representation space. Therefore, in order to realize feedback control based on the angle difference between the target 12 and the robot tool tip, in this embodiment the Euler angles are converted into quaternions and then into polar angles, and the difference is then taken.
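A minimal sketch of obtaining a continuous angle difference via quaternions is given below. It uses SciPy's Rotation class; the 'xyz' Euler convention and the axis-angle (rotation-vector) output are assumptions, since the exact conventions are not stated here.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def angle_error(euler_tool, euler_target, order="xyz"):
    # Avoid Euler-angle discontinuities: convert both orientations to rotations
    # (internally quaternions) and take the relative rotation between them.
    r_tool = R.from_euler(order, euler_tool)
    r_target = R.from_euler(order, euler_target)
    rel = r_target * r_tool.inv()   # rotation carrying the tool tip onto the target
    return rel.as_rotvec()          # axis-angle error vector fed to the controller
```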
After receiving the coordinate difference e_{r->t} between the robot tool tip and the target, the high-level controller module may further process it, for example to suppress the interference of transient errors; in this embodiment a fuzzy control algorithm is used for this processing.
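As a simplified stand-in for that fuzzy-control post-processing (the actual fuzzy rules are not given here), transient spikes in the coordinate difference could be suppressed by limiting its change per control period:

```python
import numpy as np

def smooth_error(error, prev_error, max_jump=0.01):
    # Clamp the per-period change of the error so that a transient spike in the
    # measured/predicted difference does not pass straight into the PID input.
    error = np.asarray(error, dtype=float)
    if prev_error is None:
        return error
    return prev_error + np.clip(error - prev_error, -max_jump, max_jump)
```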
In this embodiment, through the target positioning algorithm adopted by the target positioning module 411 described above and the adjustment provided by the three closed-loop links, the pre-position and delay compensation enables the robot 13 to track the target 12 accurately at a lower control frequency (lower frame rate).
It can be seen that this embodiment enables the robot 13 to track a moving target 12 accurately at a low frame rate (such as, but not limited to, 20 Hz). According to experimental tests, the tracking accuracy of the robot 13 can reach 0.5% of the moving speed of the target 12, whereas a frame rate of about 200 Hz is usually required to reach the same tracking accuracy. A lower frame rate reduces the requirements on sensors and computing speed, and thereby the hardware cost of sensors such as vision cameras. Because accurate tracking can be achieved with a low-speed industrial camera, cost can be greatly reduced; and because the low frame rate relaxes the speed requirements on the vision algorithm, the development cost of the vision algorithm is also reduced. The robot can therefore enter more production lines at lower software and hardware cost, in particular production lines with flexible dynamic scenes, such as precision assembly in a dynamic scene. In this way, labor-intensive high-precision dynamic assembly lines can smoothly be taken over by robots, improving production efficiency and quality.
The scheme of this embodiment can also be applied at other frame rates: if the software and hardware respond faster, the method can be applied at higher frame rates (for example, 100 Hz ≥ frame rate > 20 Hz); alternatively, larger-amplitude position and delay pre-compensation can be applied at lower frame rates (for example, 20 Hz > frame rate ≥ 10 Hz).
In this embodiment, the algorithms executed by the modules and models in the three closed-loop links may be implemented on an upper computer. To achieve fast communication between the upper computer and the physical robot and realize real-time control of the robot, the RSI (Robot Sensor Interface) communication module of the KUKA robot can be used for communication between the upper computer and the robot.
In this embodiment, to visualize the upper-computer control, a digital twin simulation model of the robot's vision-based dynamic control may be developed, so that the end user can observe the running state of the upper computer's algorithms in real time and manually tune the control system of the whole robot. The user can observe the running control algorithm and the action the controlled robot would perform, and by communicating with the robot can further control the actions the robot actually executes.
In a variant of this embodiment, the second closed-loop link 420 may be omitted and only the first closed-loop link 410 and the third closed-loop link 430 used; a reasonably good tracking accuracy, about 1.5% to 2% of the target's movement speed, can still be obtained.
To obtain six-dimensional coordinates of the target during positioning, this embodiment can use an RGB-D (color plus depth) camera to visually track three points that do not lie in the same plane. If six-dimensional coordinates are not needed and only the XYZ three-dimensional coordinates are to be tracked, only one feature point of the target needs to be tracked; if only the coordinate in one direction is tracked, a two-dimensional RGB camera can be used. In addition, other sensors may be used to track the target, such as positioning it by laser tracking or from the feedback of a force sensor.
An embodiment of the present disclosure further provides a robot control method which, on the basis of the foregoing embodiments, further includes: generating a digital twin model, and simulating with the digital twin model the process in which the robot acts according to the control signal. In this embodiment, the whole process of target positioning, prediction and control-command generation in the upper computer, as well as the operations the robot performs after receiving the control command, can be mapped into the digital twin model, so the actions of the simulated robot can be seen directly in the digital twin model; visual feedback is thus obtained directly, which makes it convenient for the user to tune parameters of the target positioning algorithm, the prediction algorithm, the PID control algorithm, and so on. Moreover, by establishing communication between the digital twin system and the real robot, the actions of the robot observed in the digital twin model can be handed directly to the real robot for execution.
An embodiment of the present disclosure further provides a robot control device, as shown in fig. 5, including: a memory 51 and a processor 52; wherein: the memory 51 is used for storing a program for controlling the robot; the processor 52 is configured to read and execute a program for controlling the robot, and perform the robot control method according to any one of the embodiments.
An embodiment of the present disclosure further provides a robot control system, as shown in fig. 6, including:
a target detection device 61, arranged to obtain a detection result of the target;
an upper computer 62, configured to perform the robot control method according to any one of the embodiments of the present disclosure; the upper computer obtains the positioning information of the target in the current control period according to the detection result of the target in the current control period.
In this embodiment, the target detection device 61 may be, but is not limited to, a camera, in which case the detection result is a captured image, either a photograph or a video. The upper computer 62 may run a target positioning algorithm on the captured image, for example using a pre-trained deep neural network for coarse positioning of the target, and then, on the basis of the coarse positioning, extracting feature points and matching them with a preset template to finish the fine positioning and obtain the positioning information of the target. The target detection device 61 may also be another sensor or detection device, such as a laser or a force sensor; the upper computer 62 may then obtain the positioning information of the target using a positioning algorithm adapted to the particular target detection device 61.
An embodiment of the present disclosure also provides a computer-readable storage medium storing a program for controlling a robot; the program for controlling the robot, when executed by the processor, may implement the robot control method according to any of the embodiments of the present disclosure.
In any one or more of the above-described exemplary embodiments of the present disclosure, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium, and executed by a hardware-based processing unit. The computer-readable medium may comprise a computer-readable storage medium corresponding to a tangible medium, such as a data storage medium, or a communication medium that facilitates transfer of a computer program from one place to another, such as according to a communication protocol. In this manner, a computer-readable medium may generally correspond to a non-transitory tangible computer-readable storage medium or a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described in this disclosure. The computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Moreover, any connection may also be termed a computer-readable medium, for example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be appreciated, however, that computer-readable storage media and data storage media do not include connection, carrier wave, signal, or other transitory (transient) media, but are instead directed to non-transitory tangible storage media. Disk and disc, as used herein, includes Compact Disc (CD), laser disc, optical disc, digital Versatile Disc (DVD), floppy disk or blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The instructions may be executed by one or more processors, such as one or more Digital Signal Processors (DSPs), general purpose microprocessors, Application Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Thus, the term "processor" as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. Additionally, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques may be fully implemented in one or more circuits or logic elements.
The technical solutions of the embodiments of the present disclosure may be implemented in a wide variety of devices or apparatuses, including wireless handsets, Integrated Circuits (ICs), or a set of ICs (e.g., a chipset). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the described techniques, but they do not necessarily require realization by different hardware units. Rather, as described above, the various units may be combined in a codec hardware unit or provided by a collection of interoperable hardware units (including one or more processors as described above) in combination with suitable software and/or firmware.
Claims (12)
1. A robot control method for performing a predetermined operation on a target using a tool tip; the robot control method is characterized by comprising the following steps:
acquiring positioning information of the tool tip and positioning information of the target in a current control period;
predicting positioning information of the tool tip and positioning information of the target in a next control period according to the positioning information of the tool tip and the positioning information of the target in the current control period; wherein the positioning information of the tool tip and the positioning information of the target in the next control period are the positioning information of the tool tip and the positioning information of the target after a first duration, and the first duration is the duration of one control period plus a delay of the current control period;
and generating a control signal according to the positioning information of the tool tip and the positioning information of the target in the next control period and sending the control signal to the robot, so that the tool tip tracks the target.
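For illustration only (not part of the claims), the sketch below shows how one control period of the claimed method could be organized: acquire the two poses, predict them one control period plus the current delay ahead, then generate and send the control signal. All interfaces (get_tool_pose, get_target_pose, predict, estimate, compute, send) and the cycle time are hypothetical placeholders, not APIs disclosed in this patent.

```python
CONTROL_PERIOD = 0.008  # seconds; assumed cycle time, not given in the claims


def control_step(robot, camera, predictor, controller, delay_model):
    """One control period of the claimed loop (all interfaces are hypothetical)."""
    # 1. Acquire positioning information of the tool tip and of the target.
    tool = robot.get_tool_pose()
    target = camera.get_target_pose()

    # 2. Predict both poses after the first duration
    #    (one control period plus the delay of the current period).
    horizon = CONTROL_PERIOD + delay_model.estimate()
    tool_next, target_next = predictor.predict(tool, target, horizon)

    # 3. Generate a control signal from the predicted poses and send it
    #    to the robot so that the tool tip tracks the target.
    signal = controller.compute(tool_next, target_next)
    robot.send(signal)
```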
2. The robot control method according to claim 1, wherein:
The method further comprises the step of calculating the delay of the current control period according to a preset delay model.
3. The robot control method according to claim 2, wherein:
The preset delay model comprises: a robot delay model and a detection delay model;
the robot delay model is used for obtaining a first delay from the moment the robot receives a control signal to the moment the tool tip completes the action indicated by the control signal in the current control period;
the detection delay model is used for obtaining a second delay from the moment detection of the target starts to the moment a positioning result of the target is obtained in the current control period;
and the delay of the current control period is equal to the first delay plus the second delay.
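For illustration only, a minimal sketch of one possible form of the preset delay model of claims 2 and 3: the first and second delays are tracked separately and their sum is reported as the delay of the current control period. Modeling each component as a moving average of measured delays is an assumption; the claims do not prescribe a model form.

```python
from collections import deque


class DelayModel:
    """Hypothetical delay model: delay = first (robot) delay + second (detection) delay."""

    def __init__(self, window: int = 50):
        self._robot_delays = deque(maxlen=window)      # samples of the first delay
        self._detection_delays = deque(maxlen=window)  # samples of the second delay

    def record_robot_delay(self, t_signal_received: float, t_action_done: float):
        # First delay: from the robot receiving the control signal to the
        # tool tip completing the action indicated by that signal.
        self._robot_delays.append(t_action_done - t_signal_received)

    def record_detection_delay(self, t_detect_start: float, t_result_ready: float):
        # Second delay: from starting to detect the target to obtaining
        # its positioning result.
        self._detection_delays.append(t_result_ready - t_detect_start)

    def estimate(self) -> float:
        # Delay of the current control period = first delay + second delay.
        first = sum(self._robot_delays) / max(len(self._robot_delays), 1)
        second = sum(self._detection_delays) / max(len(self._detection_delays), 1)
        return first + second
```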
4. The robot control method according to claim 3, wherein:
before said predicting the positioning information of the tool tip and the positioning information of the target in the next control period, the method further comprises:
adjusting algorithm parameters used for prediction according to the difference between the actual positioning information of the tool tip and/or the target in the current control period and the positioning information of the tool tip and/or the target predicted in the previous control period.
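For illustration only, one way the parameter adjustment of claim 4 could be realized is to rescale a prediction-model noise parameter from the error between the pose actually measured in the current period and the pose predicted for it in the previous period. The multiplicative update rule, reference error, and bounds below are assumptions, not the patent's prescribed scheme.

```python
import numpy as np


def adjust_process_noise(q_scale: float, actual, predicted,
                         ref_error: float = 1e-3, gain: float = 0.1,
                         bounds=(0.1, 10.0)) -> float:
    """Return an updated process-noise scale from the latest prediction error."""
    error = float(np.linalg.norm(np.asarray(actual) - np.asarray(predicted)))
    # Larger-than-expected error -> inflate the process noise (trust the motion
    # model less); smaller error -> tighten it. Multiplicative update.
    q_scale *= 1.0 + gain * (error / ref_error - 1.0)
    return float(np.clip(q_scale, *bounds))
```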
5. The robot control method according to claim 1, wherein:
The obtaining of the positioning information of the target in the current control period comprises the following steps:
obtaining an image of the target captured by an image capturing device in the current control period;
initially positioning the target by using a trained deep neural network according to the image of the target, to obtain an initial positioning result;
and extracting feature points according to the initial positioning result and matching them with a preset feature point template, to obtain the positioning information of the target in the current control period.
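For illustration only, a sketch of the two-stage positioning of claim 5: a trained deep neural network proposes a coarse target region, then feature points extracted in that region are matched against a preset template and the pose is solved from the 2D-3D correspondences. The detector interface and template format are assumptions; OpenCV's ORB and solvePnP are used here only as stand-ins for whatever the actual implementation employs.

```python
import cv2
import numpy as np


def locate_target(image, detector, template_kp_3d, template_desc, K, dist):
    """Hypothetical two-stage target positioning (coarse DNN + feature matching)."""
    # Stage 1: coarse localization with a trained deep neural network.
    # detector.detect is a placeholder returning one bounding box (x, y, w, h).
    x, y, w, h = detector.detect(image)
    roi = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)

    # Stage 2: extract feature points in the region and match the preset template.
    orb = cv2.ORB_create()
    kp, desc = orb.detectAndCompute(roi, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(template_desc, desc)  # query = template, train = image

    # Build 2D-3D correspondences (assumes at least four good matches).
    obj_pts = np.float32([template_kp_3d[m.queryIdx] for m in matches])
    img_pts = np.float32([(kp[m.trainIdx].pt[0] + x, kp[m.trainIdx].pt[1] + y)
                          for m in matches])

    # Solve for the target pose (rotation + translation) in the camera frame.
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    return (rvec, tvec) if ok else None
```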
6. The robot control method according to claim 1, wherein:
the predicting the positioning information of the tool tip and the positioning information of the target in the next control period according to the positioning information of the tool tip and the positioning information of the target in the current control period comprises the following steps:
obtaining a state at the current moment according to the positioning information of the tool tip and the positioning information of the target in the current control period;
predicting a state at the next moment by using a Kalman filtering model, wherein the state at the next moment comprises the positioning information of the tool tip and the positioning information of the target in the next control period;
and the duration between the next moment and the current moment is equal to the duration of one control period plus a delay, the delay being a preset value or being calculated according to a preset delay model.
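For illustration only, a sketch of the prediction step of claim 6 using a constant-velocity Kalman filter over a 6-DoF pose. One such filter could be kept for the tool tip and one for the target; the state layout and noise values are assumptions. The predict horizon is one control period plus the delay, as the claim specifies.

```python
import numpy as np


class PosePredictor:
    """Constant-velocity Kalman filter over a 6-DoF pose [x y z rx ry rz] (hypothetical)."""

    def __init__(self, q: float = 1e-3, r: float = 1e-2):
        self.x = np.zeros(12)                              # [pose(6), velocity(6)]
        self.P = np.eye(12)
        self.Q = q * np.eye(12)                            # process noise (assumed)
        self.R = r * np.eye(6)                             # measurement noise (assumed)
        self.H = np.hstack([np.eye(6), np.zeros((6, 6))])  # pose is observed directly

    @staticmethod
    def _F(dt: float) -> np.ndarray:
        F = np.eye(12)
        F[:6, 6:] = dt * np.eye(6)  # pose += velocity * dt
        return F

    def step(self, dt: float, measured_pose: np.ndarray):
        """Advance by one control period and fuse the pose measured in that period."""
        F = self._F(dt)
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        y = measured_pose - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(12) - K @ self.H) @ self.P

    def predict(self, horizon: float) -> np.ndarray:
        """Pose expected after horizon = one control period + delay (claim 6)."""
        return (self._F(horizon) @ self.x)[:6]
```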
7. The robot control method according to claim 1, wherein:
The positioning information comprises position coordinates and angle coordinates; the generating a control signal according to the positioning information of the tool tip and the positioning information of the target in the next control period comprises:
calculating a position coordinate difference and an angle coordinate difference between the positioning information of the tool tip and the positioning information of the target according to the positioning information of the tool tip and the positioning information of the target in the next control period;
and taking the position coordinate difference and the angle coordinate difference as inputs of a PID control algorithm, and generating the control signal through the PID control algorithm.
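For illustration only, a sketch of the PID step of claim 7: the position and angle coordinate differences between the predicted tool-tip pose and the predicted target pose are fed to a PID control algorithm whose output forms the control signal. The gains and sample time are placeholders.

```python
import numpy as np


class PidController:
    """Hypothetical 6-axis PID acting on [position error(3), angle error(3)]."""

    def __init__(self, kp=1.0, ki=0.0, kd=0.1, dt=0.008):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self._integral = np.zeros(6)
        self._prev_error = np.zeros(6)

    def compute(self, tool_pose_next, target_pose_next) -> np.ndarray:
        # Error = predicted target pose minus predicted tool-tip pose
        # (position coordinate difference and angle coordinate difference).
        error = np.asarray(target_pose_next) - np.asarray(tool_pose_next)
        self._integral += error * self.dt
        derivative = (error - self._prev_error) / self.dt
        self._prev_error = error
        # The PID output is used as the motion command carried by the control signal.
        return self.kp * error + self.ki * self._integral + self.kd * derivative
```

In the loop sketched after claim 1, an instance of this class would play the role of the hypothetical `controller` argument.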
8. The robot control method according to claim 1, wherein:
The control signal includes a movement parameter generated from the positioning information of the tool tip and the positioning information of the target in the next control period, so as to control the robot to move the tool tip, according to the movement parameter, to a specified position for operating on the target.
9. The robot control method according to claim 1, wherein:
after the generating of the control signal according to the positioning information of the tool tip and the positioning information of the target in the next control period, the method further comprises: generating a digital twin model; and simulating, by means of the digital twin model, the process of the robot acting according to the control signal.
10. A robot control device comprising a memory and a processor, characterized in that the memory stores a computer program, and the processor, when executing the computer program, implements the robot control method according to any one of claims 1 to 9.
11. A robot control system, comprising:
a target detection device configured to obtain a detection result of a target; and
a host computer that executes the robot control method according to any one of claims 1 to 9; wherein the host computer obtains the positioning information of the target in the current control period according to the detection result of the target detection device in the current control period.
12. A computer-readable storage medium storing a program for controlling a robot, characterized in that the program for controlling the robot, when executed by a processor, implements the robot control method according to any one of claims 1 to 9.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111673300.2A CN116408790B (en) | 2021-12-31 | 2021-12-31 | Robot control method, device, system and storage medium |
PCT/CN2022/135707 WO2023124735A1 (en) | 2021-12-31 | 2022-11-30 | Robot control method, apparatus and system and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111673300.2A CN116408790B (en) | 2021-12-31 | 2021-12-31 | Robot control method, device, system and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116408790A CN116408790A (en) | 2023-07-11 |
CN116408790B true CN116408790B (en) | 2024-07-16 |
Family
ID=86997575
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111673300.2A Active CN116408790B (en) | 2021-12-31 | 2021-12-31 | Robot control method, device, system and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116408790B (en) |
WO (1) | WO2023124735A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117283571B (en) * | 2023-11-24 | 2024-02-20 | 法奥意威(苏州)机器人系统有限公司 | Robot real-time control method and device, electronic equipment and readable storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110065064A (en) * | 2018-01-24 | 2019-07-30 | 南京机器人研究院有限公司 | A kind of robot sorting control method |
CN111496776A (en) * | 2019-01-30 | 2020-08-07 | 株式会社安川电机 | Robot system, robot control method, robot controller, and recording medium |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4257230B2 (en) * | 2004-02-26 | 2009-04-22 | 株式会社東芝 | Mobile robot |
US10780581B1 (en) * | 2018-06-14 | 2020-09-22 | X Development Llc | Generation and application of reachability maps to operate robots |
CN110660104A (en) * | 2019-09-29 | 2020-01-07 | 珠海格力电器股份有限公司 | Industrial robot visual identification positioning grabbing method, computer device and computer readable storage medium |
CN111026115A (en) * | 2019-12-13 | 2020-04-17 | 华南智能机器人创新研究院 | Robot obstacle avoidance control method and device based on deep learning |
CN111230860B (en) * | 2020-01-02 | 2022-03-01 | 腾讯科技(深圳)有限公司 | Robot control method, robot control device, computer device, and storage medium |
CN111890365B (en) * | 2020-07-31 | 2022-07-12 | 平安科技(深圳)有限公司 | Target tracking method and device, computer equipment and storage medium |
CN112847334B (en) * | 2020-12-16 | 2022-09-23 | 北京无线电测量研究所 | Mechanical arm target tracking method based on visual servo |
CN112859853B (en) * | 2021-01-08 | 2022-07-12 | 东南大学 | Intelligent harvesting robot path control method considering time delay and environmental constraints |
- 2021-12-31: CN application CN202111673300.2A filed (published as CN116408790B; status: Active)
- 2022-11-30: WO application PCT/CN2022/135707 filed (published as WO2023124735A1; status: unknown)
Also Published As
Publication number | Publication date |
---|---|
CN116408790A (en) | 2023-07-11 |
WO2023124735A1 (en) | 2023-07-06 |
Similar Documents
Publication | Title |
---|---|
CN106774345B (en) | Method and equipment for multi-robot cooperation | |
JP7326911B2 (en) | Control system and control method | |
CN110216649B (en) | Robot working system and control method for robot working system | |
RU2700246C1 (en) | Method and system for capturing an object using a robot device | |
EP3733355A1 (en) | Robot motion optimization system and method | |
JP2020135623A (en) | Object detection device, control device and object detection computer program | |
CN113379849A (en) | Robot autonomous recognition intelligent grabbing method and system based on depth camera | |
JP2013180380A (en) | Control device, control method, and robot apparatus | |
Taryudi et al. | Eye to hand calibration using ANFIS for stereo vision-based object manipulation system | |
CN116408790B (en) | Robot control method, device, system and storage medium | |
CN112633187B (en) | Automatic robot carrying method, system and storage medium based on image analysis | |
Zhang et al. | Deep learning-based robot vision: High-end tools for smart manufacturing | |
Xie et al. | Visual tracking control of SCARA robot system based on deep learning and Kalman prediction method | |
CN111275758B (en) | Hybrid 3D visual positioning method, device, computer equipment and storage medium | |
Zhou et al. | 3d pose estimation of robot arm with rgb images based on deep learning | |
CN210361314U (en) | Robot teaching device based on augmented reality technology | |
JP2020142323A (en) | Robot control device, robot control method and robot control program | |
JP2778430B2 (en) | Three-dimensional position and posture recognition method based on vision and three-dimensional position and posture recognition device based on vision | |
Lagamtzis et al. | CoAx: Collaborative Action Dataset for Human Motion Forecasting in an Industrial Workspace. | |
CN116197918B (en) | Manipulator control system based on action record analysis | |
Medeiros et al. | UAV target-selection: 3D pointing interface system for large-scale environment | |
Xu et al. | A fast and straightforward hand-eye calibration method using stereo camera | |
KR20220065232A (en) | Apparatus and method for controlling robot based on reinforcement learning | |
WO2023100282A1 (en) | Data generation system, model generation system, estimation system, trained model production method, robot control system, data generation method, and data generation program | |
CN112917457A (en) | Industrial robot rapid and accurate teaching system and method based on augmented reality technology |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |