
CN116700156A - Visual servo control method, control system, control device and storage medium - Google Patents

Visual servo control method, control system, control device and storage medium

Info

Publication number
CN116700156A
CN116700156A (application CN202210372619.XA)
Authority
CN
China
Prior art keywords
information
motion
visual
module
control instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210372619.XA
Other languages
Chinese (zh)
Inventor
赖禹昊
刘英策
耿仕能
邱谭平
刘德顺
李晓华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pai Turner Foshan Robot Technology Co ltd
Original Assignee
Guangdong Bozhilin Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Bozhilin Robot Co Ltd filed Critical Guangdong Bozhilin Robot Co Ltd
Priority to CN202210372619.XA priority Critical patent/CN116700156A/en
Publication of CN116700156A publication Critical patent/CN116700156A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/418Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G05B19/41865Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM] characterised by job scheduling, process planning, material flow
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/32Operator till task planning
    • G05B2219/32252Scheduling production, machining, job shop
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Numerical Control (AREA)

Abstract

The invention discloses a visual servo control method, a control system, a control device and a storage medium. The visual servo control method comprises the following steps: generating, by a pose perception module, perception information corresponding to a target construction object; observing the target construction object by a state observer and calling an observation model to generate motion state prediction information corresponding to the target construction object; generating, by a motion control module, a motion control instruction according to the perception information and the motion state prediction information; executing, by an execution driving module, motion according to the motion control instruction; receiving, by the motion control module, real-time motion state information fed back by the execution driving module; generating, by the motion control module, a dynamic compensation control instruction according to the motion control instruction and the real-time motion state information; and executing, by the execution driving module, a correction motion according to the dynamic compensation control instruction. The visual servo control method provided by the embodiments of the invention can improve the visual servo control precision of a construction robot.

Description

Visual servo control method, control system, control device and storage medium
Technical Field
The present invention relates to the field of robots, and in particular, to a visual servo control method, a control system, a control device, and a storage medium.
Background
Visual servo control governs the movement of a robot mainly by combining visual perception with motion control.
At present, when the movement of a robot is controlled by a visual servo control method, control precision is low when the robot faces a complex dynamic environment and is disturbed in real time.
Disclosure of Invention
The invention provides a visual servo control method, a control system, a control device and a storage medium, which can improve the visual servo control precision of a construction robot.
In a first aspect, an embodiment of the present invention provides a visual servo control method for a construction robot, including: generating, by a pose perception module, perception information corresponding to a target construction object; observing the target construction object by a state observer and calling an observation model to generate motion state prediction information corresponding to the target construction object; generating, by a motion control module, a motion control instruction according to the perception information and the motion state prediction information; executing, by an execution driving module, motion according to the motion control instruction; receiving, by the motion control module, real-time motion state information fed back by the execution driving module; generating, by the motion control module, a dynamic compensation control instruction according to the motion control instruction and the real-time motion state information; and executing, by the execution driving module, a correction motion according to the dynamic compensation control instruction.
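The claimed step sequence forms a closed control loop. A minimal sketch of that loop follows; the module interfaces, method names and return values are illustrative assumptions, not the patented implementation:

```python
# Hypothetical sketch of the claimed visual servo loop.
# All module objects and method names are assumed for illustration.
def visual_servo_cycle(pose_module, observer, controller, drive):
    perception = pose_module.perceive()           # pose perception module
    prediction = observer.predict()               # state observer + observation model
    command = controller.command(perception, prediction)  # motion control instruction
    drive.execute(command)                        # execution driving module moves
    feedback = drive.feedback()                   # real-time motion state feedback
    compensation = controller.compensate(command, feedback)
    drive.execute(compensation)                   # correction motion
    return compensation
```

In a real system this cycle would repeat every control period until the precision convergence standard is met, as described later in the specification.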
According to the foregoing embodiment of the first aspect of the present invention, before the perception information corresponding to the target construction object is generated by the pose perception module, the visual servo control method further includes: performing basic parameter configuration, wherein performing the basic parameter configuration comprises: determining a visual servo working mode; and invoking process standard data in the product and process database to obtain the precision convergence standard of the visual servo work.
According to any of the foregoing embodiments of the first aspect of the present invention, the visual servo working mode includes a fixed-point tracking mode and a tracking mode; if the visual servo working mode is determined to be the fixed-point tracking mode, the perception information is target feature point information; and if the visual servo working mode is determined to be the tracking mode, the perception information is expected trajectory information.
According to any of the foregoing embodiments of the first aspect of the present invention, generating, by the pose perception module, the perception information corresponding to the target construction object includes: performing parameter calibration; and performing target recognition and pose calculation to obtain the perception information.
According to any of the foregoing embodiments of the first aspect of the present invention, performing parameter calibration includes: calling a standard visual calibration plate and completing calibration of intrinsic and extrinsic parameters through the corresponding standard calibration flow, wherein the intrinsic parameters are pixel coordinate parameters of the visual sensor, the extrinsic parameters are relative installation position parameters between the visual sensor and the body of the construction robot, the intrinsic parameters are used to realize pose calculation of the target construction object under the camera coordinate system, and the extrinsic parameters are used by the motion control module to realize conversion of coordinate position and motion state between the workpiece coordinate system and the base coordinate system.
According to any of the foregoing embodiments of the first aspect of the present invention, performing target recognition and pose calculation to obtain the perception information includes: acquiring an actual image of the target construction object; comparing the actual image with samples in the product and process database; if the actual image matches an existing sample in the product and process database, invoking the target recognition and pose resolving scheme corresponding to the matched sample to obtain the perception information; and if the actual image matches no existing sample in the product and process database, performing graphic processing and processing scheme extraction on the actual image to form a new sample and a corresponding target recognition and pose resolving scheme, and adding them to the product and process database.
According to any of the foregoing embodiments of the first aspect of the present invention, the perception information includes a flag bit indicating whether the target construction object is within the effective field of view, and a description of the position and/or posture of the target construction object in three-dimensional space under the camera coordinate system.
According to any of the foregoing embodiments of the first aspect of the present invention, generating, by the motion control module, the motion control instruction according to the perception information and the motion state prediction information comprises: performing data preprocessing, wherein performing data preprocessing comprises: performing data verification on the perception information; and performing coordinate conversion on the perception information; and generating the motion control instruction according to the preprocessed perception information and the motion state prediction information.
According to any of the foregoing embodiments of the first aspect of the present invention, performing data verification on the perception information comprises: logic verification, namely verifying whether the target construction object in the perception information is within the effective field of view, so as to judge whether the perception information is available; process verification, namely invoking the process standard data in the product and process database to verify whether the perception information is within the corresponding allowable error range, the process verification being qualified if it is, and a prompt being sent if it is not; and safety verification, namely verifying whether the perception information exceeds a motion capability threshold of the construction robot, the perception information being judged abnormal and discarded if it exceeds the threshold, and the safety verification being qualified if it does not.
According to any of the foregoing embodiments of the first aspect of the present invention, performing coordinate conversion on the perception information includes: converting the perception information described based on the camera coordinate system or the tool coordinate system into a description based on the base coordinate system.
According to any of the foregoing embodiments of the first aspect of the present invention, generating the motion control instruction according to the preprocessed perception information and the motion state prediction information comprises: calculating an error equation between the current state and the target state of the construction robot according to the preprocessed perception information and the motion state prediction information; and generating the motion control instruction according to the error equation and a corresponding control law.
According to any of the foregoing embodiments of the first aspect of the present invention, the visual servo control method further includes: in each control period of the motion control module, judging whether the pose of the construction robot in the task space meets the precision convergence standard; if it does, the visual servo work ends; if it does not, the steps of generating a motion control instruction, executing motion according to the motion control instruction, receiving the fed-back real-time motion state information, generating a dynamic compensation control instruction, and executing the correction motion according to the dynamic compensation control instruction are repeated.
In a second aspect, an embodiment of the present invention provides a visual servoing control system for a construction robot, the visual servoing control system including: the pose sensing module is configured to generate sensing information corresponding to the target construction object; the motion control module comprises a state observer, wherein the state observer is configured to observe a target construction object and call an observation model to generate motion state prediction information corresponding to the target construction object, and the motion control module is configured to generate motion control instructions according to the perception information and the motion state prediction information; and the execution driving module can execute motion according to the motion control instruction and can feed back real-time motion state information, wherein the motion control module can generate a dynamic compensation control instruction according to the motion control instruction and the real-time motion state information, and the execution driving module can execute correction motion according to the dynamic compensation control instruction.
According to the foregoing embodiment of the second aspect of the present invention, the visual servoing control system further includes: the product and process database stores process standard data and sample information of material products; and a main control module for performing basic parameter configuration, wherein the performing basic parameter configuration includes: determining a visual servo working mode; and invoking process standard data in the product and process databases to obtain the precision convergence standard of the visual servo work.
In a third aspect, an embodiment of the present invention provides a visual servoing control apparatus, including: a memory having instructions stored therein and at least one processor invoking the instructions in the memory to cause the visual servo control device to perform the visual servo control method according to any of the preceding embodiments of the first aspect of the invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium having stored thereon instructions that, when executed by a processor, implement a visual servoing control method according to any of the foregoing embodiments of the first aspect of the invention.
According to the visual servo control method provided by the embodiments of the invention, which is used for a construction robot, the execution driving module feeds back real-time motion state information while executing motion according to the motion control instruction, and the motion control module generates a dynamic compensation control instruction according to the motion control instruction and the real-time motion state information, so that the execution driving module executes a correction motion according to the dynamic compensation control instruction. The motion state of the construction robot is thus continuously corrected, the influence of a complex dynamic environment and real-time disturbances on the construction robot can be reduced, and the accuracy of its motion control is improved. In addition, the state observer observes the target construction object and calls the observation model to generate motion state prediction information corresponding to the target construction object, and the motion control instruction is generated according to the perception information and the motion state prediction information. When the generation frequency of the perception information is lower than the frequency required for motion control instructions, the motion state prediction information can be used for interpolation, resolving the contradiction between the high-frequency demand for motion control instructions and the low-frequency perception information. Even when perception information is generated at a low frequency, a smoother set of motion control instructions can be produced, which lowers the required processing speed of the pose perception module, reduces the dependence on a high-speed pose perception module, and helps reduce cost.
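As a minimal illustration of filling high-frequency control ticks between two low-frequency perception updates, assuming linear interpolation (the patent states only that prediction is used for interpolation, not the scheme):

```python
def interpolate_targets(last_pose, predicted_pose, n_ticks):
    """Generate n_ticks intermediate control targets between the last
    perceived pose and the predicted pose, so motion control instructions
    can be issued at a higher rate than perception updates arrive.
    Linear interpolation is an illustrative assumption."""
    step = (predicted_pose - last_pose) / n_ticks
    return [last_pose + step * (i + 1) for i in range(n_ticks)]
```

For example, with one perception update per four control periods, `interpolate_targets(0.0, 1.0, 4)` yields four evenly spaced targets ending at the predicted pose.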
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention or in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present invention; other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a block flow diagram of a visual servoing control method according to an embodiment of the invention;
FIG. 2 is a block diagram illustrating an embodiment of a visual servoing control system in accordance with the present invention;
FIG. 3 is a schematic diagram of a visual servo control apparatus according to an embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present invention is made clearly and fully with reference to the accompanying drawings. It is evident that the described embodiments are only some, not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
It should be noted that all directional indicators (such as up, down, left, right, front, and rear) in the embodiments of the present invention are merely used to explain the relative positional relationship, movement, etc. between the components in a particular posture (as shown in the drawings); if the particular posture changes, the directional indicator changes accordingly.
Furthermore, the descriptions of "first," "second," etc. in this disclosure are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, provided that the combination can be realized by those skilled in the art; when the technical solutions are contradictory or cannot be realized, the combination should be considered absent and outside the scope of protection claimed in the present invention.
The embodiment of the invention provides a visual servo control method which is used for a construction robot.
FIG. 1 is a block flow diagram of a visual servoing control method according to an embodiment of the invention. The visual servoing control method includes steps S110 to S190.
Optionally, step S110 may be performed before step S120. In step S110, basic parameter configuration is performed. In some embodiments, performing the basic parameter configuration includes: determining a visual servo working mode; and invoking process standard data in the product and process database to obtain the precision convergence standard of the visual servo work.
In some embodiments, the basic parameter configuration is performed by a main control module of the visual servo control system. In addition to the above examples, the basic parameter configuration may include configuring the construction robot's job task, system parameters, construction process parameters, and the like. When the basic parameter configuration is performed, a main control program for the construction process of the construction robot can be run. The main control module can perform overall scheduling control over the complete construction job task of the construction robot.
Optionally, the main control module may invoke the product and process database. The product and process database stores process standard data, sample information for material products, and the like. The process standard data includes, for example, standard process control flows and program templates, visual servo working mode models, and process acceptance criteria. From the process acceptance conditions, the precision convergence standard, the task ending condition, and the like of the visual servo work can be determined. The main control module can set, coordinate, schedule, adjust and control the other modules according to the standard process control flow and the program templates, in combination with the current actual construction task.
In step S120, perception information corresponding to the target construction object is generated by the pose perception module.
In some embodiments, the step S120 of generating, by the pose perception module, the perception information corresponding to the target construction object includes: performing parameter calibration; and performing target recognition and pose calculation to obtain the perception information.
In some embodiments, the pose perception module performs parameter calibration according to the construction system configuration, construction object material product images, construction tasks and the like configured and sent by the main control module, and then performs target recognition and pose calculation to obtain the perception information.
In some embodiments, performing parameter calibration includes: calling a standard visual calibration plate and completing calibration of intrinsic and extrinsic parameters through the corresponding standard calibration flow, wherein the intrinsic parameters are pixel coordinate parameters of the visual sensor, the extrinsic parameters are relative installation position parameters between the visual sensor and the body of the construction robot, the intrinsic parameters are used to realize pose calculation of the target construction object under the camera coordinate system, and the extrinsic parameters are used by the motion control module to realize conversion of coordinate position and motion state between the workpiece coordinate system and the base coordinate system. The workpiece coordinate system is also referred to as the user coordinate system. The base coordinate system is also referred to as the motion control coordinate system, i.e. the coordinate system suitable for the motion control module.
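A minimal sketch of how the intrinsic parameters enable pose calculation under the camera coordinate system, using the standard pinhole model (the model choice and the parameter values are assumptions for illustration, not taken from the patent):

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with known depth into the camera
    coordinate frame using the intrinsic parameters: focal lengths
    (fx, fy) and principal point (cx, cy) in pixels."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

A pixel at the principal point maps onto the optical axis; the extrinsic parameters would then carry such camera-frame points into the base coordinate system.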
In some embodiments, performing target recognition and pose calculation to obtain the perception information comprises: acquiring an actual image of the target construction object; comparing the actual image with samples in the product and process database; if the actual image matches an existing sample in the product and process database, invoking the target recognition and pose resolving scheme corresponding to the matched sample to obtain the perception information; and if the actual image matches no existing sample in the product and process database, performing graphic processing and processing scheme extraction on the actual image to form a new sample and a corresponding target recognition and pose resolving scheme, and adding them to the product and process database. Specifically, if the actual image fails to match any existing sample in the product and process database, the actual image can be passed through graphics processing, contour and step point extraction, deep learning, model training and other procedures, and the extracted processing scheme is added to the product and process database.
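The match-or-extend flow above can be sketched as follows; the `match` and `build_scheme` hooks are hypothetical placeholders for the sample comparison and the graphics-processing/scheme-extraction procedures:

```python
def recognize(actual_image, database, match, build_scheme):
    """Match the captured image against stored samples; if no sample
    matches, derive a new sample and recognition/pose-resolving scheme
    and add it to the database so future images can reuse it."""
    for sample, scheme in database.items():
        if match(actual_image, sample):
            return scheme          # reuse the matched sample's scheme
    scheme = build_scheme(actual_image)  # graphics processing + scheme extraction
    database[actual_image] = scheme      # grow the product/process database
    return scheme
```

The design point is that the database is self-extending: an unmatched image costs one scheme-extraction pass but enriches the database for subsequent jobs.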
In some embodiments, the visual servo working mode includes a fixed-point tracking mode and a tracking mode. If it is determined in step S110 that the visual servo working mode is the fixed-point tracking mode, the perception information in step S120 is target feature point information. If it is determined in step S110 that the visual servo working mode is the tracking mode, the perception information in step S120 is expected trajectory information.
When the visual servo working mode is the fixed-point tracking mode, building construction task scenes suited to fixed-point tracking are matched, including but not limited to wall or floor tile leveling, wall or floor tile paving, bricklaying, screw hole plugging, and reinforcing steel bar binding tasks. When matching a wall or floor tile leveling task, the task reference object is the building surface to be worked, and the characteristic information is relative attitude information perceived by an inertial measurement unit (IMU) or an inclinometer. When matching a wall or floor tile paving task, the task reference object is a corner point of a laid tile, and the characteristic information is the relative pose between the tile to be laid and the reference tile. When matching a bricklaying task, the task reference object is a corner point of a laid brick, and the characteristic information is the relative pose between the brick to be laid and the reference brick. When matching a screw hole plugging task, the task reference object is a screw hole, and the characteristic information is the relative pose between the end of the executing mechanism and the hole. When matching a reinforcement binding task, the task reference object is a reinforcement intersection point, and the characteristic information is the relative pose between the end of the executing mechanism and the intersection point.
When the visual servo working mode is the tracking mode, building construction task scenes suited to trajectory tracking are matched; the common matched scenes mainly fall into uniform linear motion tracking, uniform circular motion tracking, and irregular spline curve tracking.
Through step S120, perception information is generated. In some embodiments, the perception information includes whether the target construction object is in the effective field of view and a description of the position and/or posture of the target construction object in three-dimensional space under the camera coordinate system. This may be a description of the 6D pose information of the target construction object in three-dimensional space under the camera coordinate system, and may include position description information and/or posture description information. The position description information refers to the positions x, y and z along the coordinate axes of the camera coordinate system, and the posture description information refers to the rotation angles rx, ry and rz about those axes. Depending on the type, function and installation position of the sensors configured in the pose perception module, the three-dimensional pose information of the target construction object that the module can accurately identify differs, but it includes at least one or more dimensions of the 6D pose information.
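The perception record described above (an in-view flag plus up to six pose dimensions) might be represented as follows; the field names are illustrative, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class Perception:
    """Perception record: an in-view flag plus a 6D pose under the
    camera coordinate system (x, y, z positions along the axes;
    rx, ry, rz rotation angles about them). Sensors that recover
    fewer dimensions simply leave the rest at their defaults."""
    in_view: bool
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    rx: float = 0.0
    ry: float = 0.0
    rz: float = 0.0
```

Defaulting unmeasured dimensions to zero reflects the note that a given sensor configuration may identify only a subset of the 6D pose.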
In some embodiments, when the pose perception module performs target recognition and pose resolving on the target construction object, the feature recognition precision for the target construction object is not lower than the error precision required by the process standards in the product and process database.
In step S130, the target construction object is observed by the state observer and the observation model is called to generate motion state prediction information corresponding to the target construction object.
In step S140, a motion control command is generated by the motion control module according to the perception information and the motion state prediction information.
In some embodiments, the step S140 of generating, by the motion control module, motion control instructions according to the perception information and the motion state prediction information includes: carrying out data preprocessing; and generating a motion control instruction according to the preprocessed perception information and the motion state prediction information.
The data preprocessing comprises the following steps: performing data verification on the perception information; and performing coordinate transformation on the perception information.
In some embodiments, performing data verification on the perception information comprises logic verification, process verification, and safety verification.
In the logic verification, whether the target construction object in the perception information is within the effective field of view is verified, so as to judge whether the perception information is available.
If the logic verification is qualified, process verification is performed. In the process verification, the process standard data in the product and process database are invoked to verify whether the perception information is within the corresponding allowable error range; if it is, the process verification is qualified, and if it is not, a prompt is sent. Specifically, the normal execution of a subsequent operation in the construction process often depends on the qualified execution of the preceding operation. In the process verification, the error tolerances that the process standard data require for the preceding process link are invoked, and the current perception information is checked against them. If the perception information is within the allowable error range specified by the process standard of the preceding process link, the process verification is judged qualified; otherwise, the construction robot does not respond to the perception information and issues a prompt.
If the process verification passes, safety verification is performed. In the safety verification, it is verified whether the perception information exceeds a motion capability threshold of the construction robot; if it exceeds the threshold, the perception information is judged to be abnormal information and discarded, and if it does not, the safety verification passes. In some embodiments, when the perception information exceeds the motion capability threshold of the construction robot and is discarded as abnormal, the construction robot maintains its current motion state and makes no servo response, so as to prevent the construction robot from running away or causing other safety problems. If consecutive perception information within a certain time range or a certain number of samples is all abnormal, a prompt is issued indicating that the current sensor recognition is abnormal.
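As an illustration of the three-stage data verification described above, the following sketch chains the logic, process, and safety checks; the `Perception` structure, the tolerance values, and the five-sample anomaly window are hypothetical placeholders, not part of the patent's specification.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    in_view: bool    # flag: target construction object within effective field of view
    pose: tuple      # target position (x, y, z), already in a common frame

def verify(p, process_tol, motion_limit, history):
    """Three-stage check: logic -> process -> safety.
    Returns True only if the perception sample may drive the servo loop."""
    # Logic check: the target must lie inside the effective field of view.
    if not p.in_view:
        return False
    # Process check: the pose must fall inside the allowable error range
    # guaranteed by the preceding process link (product/process database).
    if any(abs(v) > process_tol for v in p.pose):
        print("prompt: perception outside process tolerance")
        return False
    # Safety check: discard samples exceeding the robot's motion capability;
    # the robot then holds its current motion state (no servo response).
    if max(abs(v) for v in p.pose) > motion_limit:
        history.append("abnormal")
        if history[-5:] == ["abnormal"] * 5:  # repeated anomalies in a row
            print("prompt: sensor recognition abnormal")
        return False
    history.append("ok")
    return True
```

Running the checks in this fixed order mirrors the text: a sample is only safety-checked after it has already passed the logic and process stages.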
The coordinate conversion of the perception information comprises converting perception information described in the camera coordinate system or the tool coordinate system into a description in the base coordinate system. Optionally, in this step, in combination with the foregoing parameter calibration step, the perception information may be converted into a description in the base coordinate system through a preset matrix operation.
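A minimal sketch of this conversion, assuming the extrinsic calibration yields a 4x4 homogeneous transform `T_base_cam` from the camera frame to the base frame (the matrix values below are illustrative only):

```python
import numpy as np

def to_base_frame(T_base_cam, p_cam):
    """Convert a point described in the camera (or tool) coordinate system
    into the base coordinate system via a preset matrix operation."""
    p_h = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous coords
    return (T_base_cam @ p_h)[:3]

# Example: camera mounted 0.5 m above the base origin, axes aligned.
T_base_cam = np.eye(4)
T_base_cam[2, 3] = 0.5
p_base = to_base_frame(T_base_cam, [0.1, 0.0, 0.2])  # -> [0.1, 0.0, 0.7]
```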
In some embodiments, generating the motion control instruction from the preprocessed perception information and the motion state prediction information includes: calculating an error equation between the current state and the target state of the construction robot according to the preprocessed perception information and the motion state prediction information; and generating the motion control instruction according to the error equation and a corresponding control law. The error equation includes, but is not limited to, the position error, attitude error, speed error, and acceleration error between the current state and the target state of the construction robot.
The motion control instruction generated according to the error equation and the corresponding control law may be a position control instruction, a speed control instruction, or a torque control instruction; the mode is selected during the basic parameter configuration in step S110. The generated motion control instruction further needs to satisfy the motion parameter constraints set by the user and the execution capability constraints of the execution driving module, while minimizing the tracking-error convergence time.
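For instance, with a simple proportional control law on the position error (the gain and speed limit are illustrative assumptions, not values from the patent), a speed control instruction respecting an execution-capability constraint could be formed as follows:

```python
import numpy as np

def velocity_command(x_cur, x_goal, kp=1.5, v_max=0.2):
    """Speed control instruction from the position-error equation, saturated
    to the execution driving module's capability limit (here v_max, in m/s)."""
    e = np.asarray(x_goal, float) - np.asarray(x_cur, float)  # error equation
    v = kp * e                                                # control law
    n = np.linalg.norm(v)
    if n > v_max:                 # clamp to the drive's execution capability
        v = v * (v_max / n)
    return v
```

The same pattern extends to attitude, speed, or acceleration errors by stacking them into the error vector.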
In step S150, motion is executed by the execution driving module according to the motion control instruction. At the same time, the execution driving module can feed back real-time motion state information.
In step S160, the real-time motion state information fed back by the execution driving module is received by the motion control module.
In step S170, a dynamic compensation control instruction is generated by the motion control module according to the motion control instruction and the real-time motion state information.
In step S180, the corrective motion is performed according to the dynamic compensation control instruction by the execution driving module.
In some embodiments, the method further includes step S190. In step S190, in each control cycle of the motion control module, it is determined whether the pose of the construction robot in the task space meets the precision convergence criterion. If so, the visual servoing work ends. If not, the steps of generating a motion control instruction, executing motion according to the motion control instruction, receiving the fed-back real-time motion state information, generating a dynamic compensation control instruction, and executing corrective motion according to the dynamic compensation control instruction are repeated; that is, steps S140 through S180 are repeated. The motion state of the construction robot when the visual servoing work ends may hold the current position, the current speed, or the current torque, according to the holding mode set during the basic parameter configuration in step S110.
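The control cycle of steps S140 through S190 can be caricatured in one dimension as follows; the gains, time step, and first-order response model are assumptions for illustration, not the patent's actual dynamics:

```python
def servo_loop(x0, x_goal, tol=1e-3, dt=0.05, kp=2.0, max_cycles=1000):
    """One-axis sketch of steps S140-S190: generate a command, execute it,
    read back the state, compensate, and stop once the pose error converges."""
    x = float(x0)
    for _ in range(max_cycles):
        if abs(x_goal - x) < tol:         # S190: precision convergence check
            return x                      # end visual servoing, hold state
        v_cmd = kp * (x_goal - x)         # S140: motion control instruction
        x_meas = x + v_cmd * dt           # S150/S160: execute and feed back
        v_comp = kp * (x_goal - x_meas)   # S170: dynamic compensation
        x = x_meas + v_comp * dt          # S180: corrective motion
    return x
```

Each cycle corrects the motion twice, once from the planned command and once from the fed-back state, which is the essence of the dynamic compensation described above.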
According to the visual servo control method provided by the embodiments of the invention, for a construction robot, the execution driving module can feed back real-time motion state information while executing motion according to the motion control instruction, and the motion control module generates a dynamic compensation control instruction according to the motion control instruction and the real-time motion state information, so that the execution driving module executes corrective motion according to the dynamic compensation control instruction. The motion state of the construction robot is thus continuously corrected, the influence of a complex dynamic environment and real-time disturbances on the construction robot can be reduced, and the accuracy of its motion control is improved. In addition, the state observer observes the target construction object and invokes the observation model to generate motion state prediction information corresponding to the target construction object, and the motion control instruction is generated from the perception information together with the motion state prediction information. When the generation frequency of the perception information is lower than the frequency required by the motion control instructions, the motion state prediction information can be used for interpolation, resolving the conflict between the high-frequency motion control instruction demand and the low-frequency perception information. Even when the perception information is generated at a low frequency, a smoother set of motion control instructions can be produced, which lowers the processing-speed requirement on the pose perception module, i.e., reduces the dependence on a high-speed pose perception module, and facilitates cost reduction.
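The interpolation idea can be sketched with a constant-velocity observation model (the model choice and the 10 Hz/100 Hz rates are assumptions; the patent does not fix a particular observer):

```python
def interpolated_target(pos_obs, vel_obs, t_obs, t_now):
    """Between low-frequency perception samples, the state observer
    extrapolates the target pose so that every high-frequency control
    cycle has a reference to track."""
    return pos_obs + vel_obs * (t_now - t_obs)

# Perception arrives at 10 Hz while control runs at 100 Hz; between two
# camera frames the controller queries the observer every 10 ms:
refs = [interpolated_target(1.0, 0.5, 0.0, k * 0.01) for k in range(10)]
```

Each new perception sample then resets `pos_obs`, `vel_obs`, and `t_obs`, keeping the extrapolation error bounded by one perception period.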
Optionally, the visual servo working modes include a fixed-point tracking mode and a tracking mode. If the visual servo working mode determined in step S110 is the fixed-point tracking mode, the perception information in step S120 is target feature point information. If the visual servo working mode determined in step S110 is the tracking mode, the perception information in step S120 is expected trajectory information. Therefore, the visual servo control method provided by the embodiments of the invention is compatible with the common target-sensing and dynamic-tracking control tasks in construction-robot brick paving, bricklaying, screw-hole plugging, and gluing and caulking work; it realizes modular packaging and invocation of a dynamic tracking algorithm sample library, can openly accept new algorithm sample cases, and improves applicability to different tasks and targets.
Optionally, performing data verification on the perception information includes logic verification, process verification, and safety verification. The visual servo control method thus adds protection against sensor data anomalies caused by occlusion or loss of the target construction object during construction, independent fine-grained control of data in different dimensions, and multiple stop modes for the visual servoing work, improving safety and flexibility.
The embodiment of the invention also provides a visual servo control system for the construction robot. The visual servo control system can realize the visual servo control method.
FIG. 2 is a block diagram of a visual servoing control system according to an embodiment of the invention. The visual servoing control system includes a pose sensing module 130, a motion control module 140, and an execution driving module 150. In this embodiment, the visual servoing control system further includes a product and process database 110 and a main control module 120. The product and process database 110, the main control module 120, the pose sensing module 130, the motion control module 140, and the execution driving module 150 are communicatively connected to one another.
The product and process database 110 stores process standard data and sample information of material products. The process standard data includes, for example, standard process control flows and program templates, visual servo working mode models, and process acceptance criteria. From the process acceptance criteria, the precision convergence criterion, the task ending conditions, and the like of the visual servoing work can be determined.
The main control module 120 is configured to perform basic parameter configuration, where performing basic parameter configuration includes:
determining a visual servo working mode; and invoking process standard data in the product and process database 110 to obtain the precision convergence criterion of the visual servoing work. The main control module 120 may set, coordinate, schedule, adjust, and control the other modules according to the standard process control flows and program templates in the product and process database 110, in combination with the current actual construction task. Besides the above examples, the basic parameter configuration may also cover the construction robot's job tasks, system parameters, construction process parameters, and the like. During the basic parameter configuration, a main control program for the construction process of the construction robot can be run. The main control module can perform overall scheduling control over the complete construction task of the construction robot.
The pose awareness module 130 is configured to generate awareness information corresponding to the target construction object. In some embodiments, the pose sensing module 130 performs parameter calibration according to the configuration of the construction system, the image of the construction object material product, the construction task, etc. configured and sent by the main control module 120, and performs target recognition and pose calculation to obtain the sensing information. The pose awareness module 130 sends awareness information to the motion control module 140.
The pose sensing module 130 may include visual sensors, media for running visual recognition, image processing, and pose resolving programs, other connection and auxiliary devices, and sensors with pose information sensing and processing capability, such as IMUs, inclinometers, and laser guidance and receiving devices. The pose sensing module 130 is used for performing parameter calibration, and for performing target recognition and pose calculation to obtain the perception information. Performing target recognition and pose calculation to obtain the perception information includes: acquiring an actual image of the target construction object; comparing the actual image with samples in the product and process database; if the actual image matches an existing sample in the product and process database, invoking the target recognition and pose resolving scheme corresponding to the matched sample to obtain the perception information; and if the actual image matches no existing sample in the product and process database, performing graphics processing and processing-scheme extraction on the actual image to form a new sample and a corresponding target recognition and pose resolving scheme, and adding them to the product and process database. Specifically, if the actual image fails to match any existing sample, it can be run through graphics processing, contour and step-point extraction, deep learning, model training, and other procedures, and the extracted processing scheme added to the product and process database.
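The match-or-register branch of this sample-database lookup might look like the following sketch, where the image key and scheme strings stand in for the real image-matching and scheme-extraction pipeline, which the patent describes only at a high level:

```python
def recognize(image_key, sample_db):
    """Match the captured image against the product/process sample database;
    on a miss, register a new sample so later runs can reuse its scheme."""
    if image_key in sample_db:
        return sample_db[image_key]        # reuse the matched sample's scheme
    scheme = f"scheme_for_{image_key}"     # placeholder for graphics processing,
    sample_db[image_key] = scheme          # contour extraction, model training
    return scheme
```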
The motion control module 140 includes a state observer 141, the state observer 141 being configured to observe a target construction object and to invoke an observation model to generate motion state prediction information corresponding to the target construction object, the motion control module 140 being configured to be capable of generating motion control instructions based on the perception information and the motion state prediction information. Generating, by the motion control module, motion control instructions based on the perceptual information and the motion state prediction information comprises: carrying out data preprocessing; and generating a motion control instruction according to the preprocessed perception information and the motion state prediction information.
The data preprocessing includes: performing data verification on the perception information; and performing coordinate conversion on the perception information. In some embodiments, performing data verification on the perception information includes logic verification, process verification, and safety verification. The coordinate conversion of the perception information comprises converting perception information described in the camera coordinate system or the tool coordinate system into a description in the base coordinate system. Optionally, in this step, in combination with the foregoing parameter calibration, the perception information may be converted into a description in the base coordinate system through a preset matrix operation.
In some embodiments, generating motion control instructions from the preprocessed perceptual information and the motion state prediction information comprises: calculating an error equation between the current state and the target state of the construction robot according to the preprocessed perception information and the motion state prediction information; and generating a motion control instruction according to the error equation and a corresponding control law.
The execution driving module 150 can execute motion according to the motion control instruction and feed back real-time motion state information. The execution driving module 150 may be a serial, parallel, or hybrid mechanical structure of a given configuration, including a preset number of motors and other auxiliary driving devices; it can execute motion control instructions in the corresponding work-task space, causing the construction robot to move in a preset motion state, and feed back the real-time motion state information of the construction robot.
In this embodiment, the motion control module 140 can generate a dynamic compensation control instruction according to the motion control instruction and the real-time motion state information, and the execution driving module 150 can execute corrective motion according to the dynamic compensation control instruction.
According to the visual servo control system of the embodiments of the invention, the execution driving module 150 can feed back real-time motion state information while executing motion according to the motion control instruction, and the motion control module 140 generates a dynamic compensation control instruction according to the motion control instruction and the real-time motion state information, so that the execution driving module 150 executes corrective motion according to the dynamic compensation control instruction. The motion state of the construction robot is thus continuously corrected, the influence of a complex dynamic environment and real-time disturbances on the construction robot can be reduced, and the accuracy of its motion control is improved. In addition, the state observer 141 observes the target construction object and invokes the observation model to generate motion state prediction information corresponding to the target construction object, and the motion control instruction is generated from the perception information together with the motion state prediction information. When the generation frequency of the perception information is lower than the frequency required by the motion control instructions, the motion state prediction information can be used for interpolation, resolving the conflict between the high-frequency motion control instruction demand and the low-frequency perception information. Even when the perception information is generated at a low frequency, a smoother set of motion control instructions can be produced, which lowers the processing-speed requirement on the pose perception module, i.e., reduces the dependence on a high-speed pose perception module, and facilitates cost reduction.
The present invention also provides a visual servo control apparatus including a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the visual servo control method in the above embodiments.
Fig. 3 is a schematic diagram of an embodiment of a visual servo control device 500 according to the present invention. The device may vary widely in configuration or performance and may include at least one processor (central processing unit, CPU) 510, a memory 520, and at least one storage medium 530 (e.g., at least one mass storage device) storing applications 533 and/or data 532. The memory 520 and the storage medium 530 may be transitory or persistent storage. The program stored in the storage medium 530 may include at least one module (not shown), each of which may include a series of instruction operations on the visual servo control device 500. Further, the processor 510 may be arranged to communicate with the storage medium 530 and to execute, on the visual servo control device 500, the series of instruction operations stored in the storage medium 530.
The visual servoing control device 500 may also include at least one power supply 540, at least one wired or wireless network interface 550, at least one input/output interface 560, and/or at least one operating system 531, such as Windows Server, Mac OS X, Unix, Linux, or FreeBSD. Those skilled in the art will appreciate that the configuration shown in Fig. 3 does not limit the visual servoing control device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, and which may also be a volatile computer readable storage medium, having stored therein instructions that, when executed on a computer, cause the computer to perform the steps of the aforementioned visual servo control method.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, but rather, the equivalent structural changes made by the description and drawings of the present invention or the direct/indirect application in other related technical fields are included in the scope of the present invention.

Claims (16)

1. A visual servoing control method for a construction robot, the visual servoing control method comprising:
generating perception information corresponding to the target construction object through a pose perception module;
observing the target construction object through a state observer and calling an observation model to generate motion state prediction information corresponding to the target construction object;
generating a motion control instruction by a motion control module according to the perception information and the motion state prediction information;
executing the motion according to the motion control instruction by an execution driving module;
receiving real-time motion state information fed back by the execution driving module through the motion control module;
generating a dynamic compensation control instruction by the motion control module according to the motion control instruction and the real-time motion state information; and
and executing the correction motion according to the dynamic compensation control instruction through the execution driving module.
2. The visual servoing control method of claim 1, wherein prior to said generating, by said pose-aware module, of awareness information corresponding to a target construction object, said visual servoing control method further comprises:
Performing basic parameter configuration, wherein the performing basic parameter configuration comprises:
determining a visual servo working mode; and
and calling process standard data in the product and process database to obtain the precision convergence standard of the visual servo work.
3. The visual servo control method of claim 2 wherein the visual servo operating modes comprise a fixed point tracking mode and a tracking mode,
if the visual servo working mode is determined to be the fixed-point tracking mode, the perception information is target characteristic point information;
and if the visual servo working mode is determined to be the tracking mode, the perception information is the expected track information.
4. The visual servoing control method of claim 2, wherein said generating, by said pose-aware module, awareness information corresponding to a target construction object comprises:
calibrating parameters;
and carrying out target recognition and pose calculation to obtain the perception information.
5. The visual servoing control method of claim 4, wherein said performing parameter calibration comprises:
calling a standard visual calibration plate, and completing calibration of intrinsic parameters and extrinsic parameters through a corresponding standard calibration flow, wherein the intrinsic parameters are pixel coordinate parameters of a visual sensor, the extrinsic parameters are relative installation position parameters between the visual sensor and a body of the construction robot, the intrinsic parameters are used for realizing pose calculation of the target construction object in a camera coordinate system, and the extrinsic parameters are used for enabling the motion control module to realize conversion of coordinate position and motion state between a workpiece coordinate system and a base coordinate system.
6. The visual servoing control method of claim 4, wherein said performing object recognition and pose resolution to obtain said perceptual information comprises:
acquiring an actual image of the target construction object;
comparing the actual image with samples in the product and process database;
if the actual image can be matched with the existing samples in the product and process database, a target identification and pose resolving scheme corresponding to the matched samples is called to obtain the perception information;
if the actual image is not matched with the existing samples in the product and process database, performing graphic processing and processing scheme extraction on the actual image to form a new sample and a corresponding target recognition and pose resolving scheme, and adding the new sample and the corresponding target recognition and pose resolving scheme into the product and process database.
7. The visual servoing control method of claim 1, wherein said perception information includes a flag bit indicating whether said target construction object is within an effective field of view, and a description of a position and/or an attitude of said target construction object in three-dimensional space in a camera coordinate system.
8. The visual servoing control method of claim 1, wherein said generating, by a motion control module, motion control instructions based on said perceptual information and said motion state prediction information comprises:
Performing data preprocessing, wherein the performing data preprocessing comprises:
performing data verification on the perception information; and
performing coordinate conversion on the perception information;
and generating the motion control instruction according to the preprocessed perception information and the motion state prediction information.
9. The visual servoing control method of claim 8, wherein said performing data verification on said perception information comprises:
logic verification, in which it is verified whether said target construction object in said perception information is within an effective field of view, so as to judge whether said perception information is available;
process verification, in which process standard data in a product and process database are invoked to verify whether said perception information is within a corresponding allowable error range; if said perception information is within the allowable error range, the process verification is qualified, and if not, a prompt is issued; and
safety verification, in which it is verified whether said perception information exceeds a motion capability threshold of the construction robot; if it exceeds the threshold, said perception information is judged to be abnormal information and discarded, and if it does not, the safety verification is judged to be qualified.
10. The visual servoing control method of claim 8, wherein said performing coordinate conversion on said perception information comprises:
converting perception information described in a camera coordinate system or a tool coordinate system into a description in a base coordinate system.
11. The visual servoing control method of claim 8, wherein said generating said motion control instructions from said preprocessed perceptual information and said motion state prediction information comprises:
calculating an error equation between the current state and the target state of the construction robot according to the preprocessed perception information and the motion state prediction information;
and generating the motion control instruction according to the error equation and the corresponding control law.
12. The visual servoing control method of claim 2, further comprising:
and in each control period of the motion control module, judging whether the pose of the construction robot in a task space meets the precision convergence standard, if so, ending the visual servo work, and if not, repeating the steps of generating a motion control instruction, executing motion according to the motion control instruction, receiving feedback real-time motion state information, generating a dynamic compensation control instruction and executing correction motion according to the dynamic compensation control instruction.
13. A vision servo control system for a construction robot, the vision servo control system comprising:
the pose sensing module is configured to generate sensing information corresponding to the target construction object;
a motion control module including a state observer configured to observe the target construction object and invoke an observation model to generate motion state prediction information corresponding to the target construction object, the motion control module being configured to be capable of generating motion control instructions based on the perception information and the motion state prediction information; and
the execution driving module can execute motion according to the motion control instruction and feed back real-time motion state information,
the motion control module can generate a dynamic compensation control instruction according to the motion control instruction and the real-time motion state information, and the execution driving module can execute correction motion according to the dynamic compensation control instruction.
14. The visual servoing control system of claim 13, further comprising:
the product and process database stores process standard data and sample information of material products; and
The main control module is used for carrying out basic parameter configuration, wherein the basic parameter configuration comprises the following steps: determining a visual servo working mode; and invoking process standard data in the product and process database to obtain the precision convergence standard of the visual servo work.
15. A visual servoing control device, comprising: a memory and at least one processor, the memory having instructions stored therein,
the at least one processor invokes the instructions in the memory to cause the visual servo control device to perform the visual servo control method of any one of claims 1 to 12.
16. A computer readable storage medium having instructions stored thereon, which when executed by a processor, implement the visual servoing control method of any of claims 1 to 12.
CN202210372619.XA 2022-04-11 2022-04-11 Visual servo control method, control system, control device and storage medium Pending CN116700156A (en)


Publications (1)

Publication Number Publication Date
CN116700156A (en) 2023-09-05

Family

ID=87836231


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118655825A (en) * 2024-08-21 2024-09-17 四川汇达未来科技有限公司 Motion control vision calibration system based on image data processing


Similar Documents

Publication Publication Date Title
CN108568814B (en) Robot and robot control method
US10300600B2 (en) Control system having learning control function and control method
EP2381325B1 (en) Method for robot offline programming
US8255078B2 (en) Numerical controller for multi-axis machine tool
EP0519081B1 (en) Method of correcting deflection of robot
US9477216B2 (en) Numerical control device including display part for displaying information for evaluation of machining process
EP1769890A2 (en) Robot simulation device
CN108127661B (en) Robot controller
US20130060373A1 (en) Numerical controller with workpiece setting error compensation unit for multi-axis machine tool
CN103442858A (en) Robotic work object cell calibration device, system, and method
US20110010008A1 (en) Method And Device For Controlling A Manipulator
US5341458A (en) Method of and system for generating teaching data for robots
US20210362338A1 (en) Method of improving safety of robot and method of evaluating safety of robot
US10507581B2 (en) Robot system
JP2021088024A (en) Numerical control device and control method
JP2774939B2 (en) Robot tool parameter derivation method and calibration method
CN116700156A (en) Visual servo control method, control system, control device and storage medium
CN115674208A (en) Robot vibration suppression device, control method and robot thereof
US11919171B2 (en) Robot controller
KR101976358B1 (en) Safety improve method of robot
CN114800523A (en) Mechanical arm track correction method, system, computer and readable storage medium
US11154947B2 (en) Laser processing system
JP7534386B2 (en) Judging System
KR102262235B1 (en) Method for calibrating real robot operation program made by off line programming
KR102048066B1 (en) Method of added calibration for tool validation of robot for medicine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240418

Address after: 528000, Room 204-2, 2nd Floor, Office Building, Zone A, No. 40 Boai Middle Road, Shishan Town, Nanhai District, Foshan City, Guangdong Province (Residence Declaration)

Applicant after: Pai Turner (Foshan) Robot Technology Co.,Ltd.

Country or region after: China

Address before: 528000 a2-05, 2nd floor, building A1, 1 Panpu Road, Biguiyuan community, Beijiao Town, Shunde District, Foshan City, Guangdong Province (for office use only) (residence declaration)

Applicant before: GUANGDONG BOZHILIN ROBOT Co.,Ltd.

Country or region before: China