Disclosure of Invention
The invention provides a visual servo control method, a control system, a control device, and a storage medium, which can improve the visual servo control precision of a construction robot.
In a first aspect, an embodiment of the present invention provides a visual servo control method for a construction robot, including: generating perception information corresponding to a target construction object through a pose perception module; observing the target construction object through a state observer and calling an observation model to generate motion state prediction information corresponding to the target construction object; generating a motion control instruction according to the perception information and the motion state prediction information through a motion control module; executing motion according to the motion control instruction by an execution driving module; receiving real-time motion state information fed back by the execution driving module through the motion control module; generating a dynamic compensation control instruction according to the motion control instruction and the real-time motion state information through the motion control module; and executing a correction motion according to the dynamic compensation control instruction by the execution driving module.
According to the foregoing embodiment of the first aspect of the present invention, before the generation of the perception information corresponding to the target construction object by the pose perception module, the visual servo control method further includes: performing basic parameter configuration, wherein the performing basic parameter configuration comprises: determining a visual servo working mode; and invoking process standard data in the product and process database to obtain the precision convergence standard of the visual servo work.
According to any of the foregoing embodiments of the first aspect of the present invention, the visual servo working mode includes a fixed-point tracking mode and a tracking mode; if the visual servo working mode is determined to be the fixed-point tracking mode, the perception information is target feature point information, and if the visual servo working mode is determined to be the tracking mode, the perception information is desired trajectory information.
According to any of the foregoing embodiments of the first aspect of the present invention, generating, by the pose perception module, perception information corresponding to the target construction object includes: calibrating parameters; and performing target recognition and pose calculation to obtain the perception information.
According to any of the foregoing embodiments of the first aspect of the present invention, performing parameter calibration includes: invoking a standard visual calibration plate and completing calibration of intrinsic and extrinsic parameters through the corresponding standard calibration flow, wherein the intrinsic parameters are pixel coordinate parameters of the visual sensor, the extrinsic parameters are relative installation position parameters between the visual sensor and the body of the construction robot, the intrinsic parameters are used to realize pose calculation of the target construction object in the camera coordinate system, and the extrinsic parameters are used by the motion control module to realize conversion of coordinate position and motion state between the workpiece coordinate system and the base coordinate system.
According to any of the foregoing embodiments of the first aspect of the present invention, performing target recognition and pose calculation to obtain the perception information includes: acquiring an actual image of the target construction object; comparing the actual image with samples in the product and process database; if the actual image matches an existing sample in the product and process database, invoking the target recognition and pose resolving scheme corresponding to the matched sample to obtain the perception information; and if the actual image does not match any existing sample in the product and process database, performing image processing and processing-scheme extraction on the actual image to form a new sample and a corresponding target recognition and pose resolving scheme, and adding them to the product and process database.
According to any of the foregoing embodiments of the first aspect of the present invention, the perception information includes a flag bit indicating whether the target construction object is within the effective field of view and a description of a position and/or attitude of the target construction object in three-dimensional space in the camera coordinate system.
According to any of the foregoing embodiments of the first aspect of the present invention, generating, by the motion control module, motion control instructions from the perception information and the motion state prediction information comprises: performing data preprocessing, the performing data preprocessing comprising: performing data verification on the perception information; performing coordinate conversion on the perception information; and generating a motion control instruction according to the preprocessed perception information and the motion state prediction information.
According to any of the foregoing embodiments of the first aspect of the present invention, performing data verification on the perception information comprises: logic verification, namely checking whether the target construction object in the perception information is within the effective field of view, to judge whether the perception information is available; process verification, namely invoking the process standard data in the product and process database to verify whether the perception information is within the corresponding allowable error range, the process verification being qualified if so, and a prompt being issued if not; and safety verification, namely verifying whether the perception information exceeds a motion capability threshold of the construction robot, the perception information being judged to be abnormal and discarded if it exceeds the threshold, and the safety verification being qualified if it does not.
According to any of the foregoing embodiments of the first aspect of the present invention, performing coordinate conversion on the perception information includes: converting perception information described in the camera coordinate system or the tool coordinate system into a description in the base coordinate system.
According to any of the foregoing embodiments of the first aspect of the present invention, generating motion control instructions from the preprocessed perception information and the motion state prediction information comprises: calculating an error equation between the current state and the target state of the construction robot according to the preprocessed perception information and the motion state prediction information; and generating a motion control instruction according to the error equation and a corresponding control law.
According to any of the foregoing embodiments of the first aspect of the present invention, the visual servoing control method further includes: in each control period of the motion control module, judging whether the pose of the construction robot in the task space meets the precision convergence standard, if so, ending the visual servo work, and if not, repeating the steps of generating a motion control instruction, executing motion according to the motion control instruction, receiving feedback real-time motion state information, generating a dynamic compensation control instruction and executing correction motion according to the dynamic compensation control instruction.
In a second aspect, an embodiment of the present invention provides a visual servoing control system for a construction robot, the visual servoing control system including: the pose sensing module is configured to generate sensing information corresponding to the target construction object; the motion control module comprises a state observer, wherein the state observer is configured to observe a target construction object and call an observation model to generate motion state prediction information corresponding to the target construction object, and the motion control module is configured to generate motion control instructions according to the perception information and the motion state prediction information; and the execution driving module can execute motion according to the motion control instruction and can feed back real-time motion state information, wherein the motion control module can generate a dynamic compensation control instruction according to the motion control instruction and the real-time motion state information, and the execution driving module can execute correction motion according to the dynamic compensation control instruction.
According to the foregoing embodiment of the second aspect of the present invention, the visual servo control system further includes: a product and process database storing process standard data and sample information of material products; and a main control module for performing basic parameter configuration, wherein the performing basic parameter configuration includes: determining a visual servo working mode; and invoking process standard data in the product and process database to obtain the precision convergence standard of the visual servo work.
In a third aspect, an embodiment of the present invention provides a visual servoing control apparatus, including: a memory having instructions stored therein and at least one processor invoking the instructions in the memory to cause the visual servo control device to perform the visual servo control method according to any of the preceding embodiments of the first aspect of the invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium having stored thereon instructions that, when executed by a processor, implement a visual servoing control method according to any of the foregoing embodiments of the first aspect of the invention.
According to the visual servo control method provided by the embodiment of the invention, for a construction robot, the execution driving module feeds back real-time motion state information while executing motion according to the motion control instruction, and the motion control module generates a dynamic compensation control instruction according to the motion control instruction and the real-time motion state information, so that the execution driving module executes a correction motion according to the dynamic compensation control instruction. The motion state of the construction robot is thereby continuously corrected, which reduces the influence of a complex dynamic environment and real-time disturbances on the construction robot and improves the accuracy of its motion control. In addition, the state observer observes the target construction object and calls the observation model to generate motion state prediction information corresponding to the target construction object, and the motion control instruction is generated from the perception information together with the motion state prediction information. When the generation frequency of the perception information is lower than the frequency required for motion control instructions, the motion state prediction information can be used for interpolation to generate the motion control instructions, resolving the contradiction between the high-frequency demand for motion control instructions and the low-frequency perception information. Even when the perception information is generated at a low frequency, a smoother set of motion control instructions can be generated, which lowers the required processing speed of the pose perception module, reduces the dependence on a high-speed pose perception module, and helps reduce cost.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that all directional indicators (such as up, down, left, right, front, rear, etc.) in the embodiments of the present invention are merely used to explain the relative positional relationship, movement, etc. between the components in a particular posture (as shown in the drawings); if the particular posture changes, the directional indicator changes accordingly.
Furthermore, the descriptions of "first," "second," etc. in this disclosure are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, provided that the combination can be realized by those skilled in the art; when the technical solutions contradict each other or cannot be realized, the combination should be considered absent and not within the scope of protection claimed in the present invention.
The embodiment of the invention provides a visual servo control method which is used for a construction robot.
FIG. 1 is a block flow diagram of a visual servoing control method according to an embodiment of the invention. The visual servoing control method includes steps S110 to S190.
Alternatively, step S110 may be performed before step S120. In step S110, basic parameter configuration is performed. In some embodiments, performing the base parameter configuration includes: determining a visual servo working mode; and invoking process standard data in the product and process databases to obtain the precision convergence standard of the visual servo work.
In some embodiments, the basic parameter configuration is performed by a main control module of the visual servo control system. In addition to the above examples, the basic parameter configuration may include configuring the construction robot job task, system parameters, construction process parameters, and the like. When the basic parameter configuration is performed, the main control program for the construction process of the construction robot may be run. The main control module can perform overall scheduling control of the complete construction operation task of the construction robot.
Alternatively, the main control module may invoke the product and process databases. The product and process databases store process standard data, sample information for material products, and the like. The process standard data includes, for example, standard process control flows and program templates, visual servo operating mode models, process acceptance criteria, and the like. According to the process acceptance qualification condition, the precision convergence standard, the task ending condition and the like of the visual servo work can be determined. The main control module can set, coordinate, schedule, adjust and control other modules according to the standard process control flow and the program template and in combination with the current actual construction task situation.
In step S120, perception information corresponding to the target construction object is generated by the pose perception module.
In some embodiments, the step S120 of generating, by the pose perception module, the perception information corresponding to the target construction object includes: calibrating parameters; and performing target recognition and pose calculation to obtain the perception information.
In some embodiments, the pose sensing module performs parameter calibration according to the configuration of the construction system, the construction object material product image, the construction task and the like configured and sent by the main control module, and performs target identification and pose calculation to obtain sensing information.
In some embodiments, performing parameter calibration includes: invoking a standard visual calibration plate and completing calibration of intrinsic and extrinsic parameters through the corresponding standard calibration flow, wherein the intrinsic parameters are pixel coordinate parameters of the visual sensor, the extrinsic parameters are relative installation position parameters between the visual sensor and the body of the construction robot, the intrinsic parameters are used to realize pose calculation of the target construction object in the camera coordinate system, and the extrinsic parameters are used by the motion control module to realize conversion of coordinate position and motion state between the workpiece coordinate system and the base coordinate system. The workpiece coordinate system is also referred to as the user coordinate system. The base coordinate system is also referred to as the motion control coordinate system, that is, the coordinate system suitable for the motion control module.
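As an illustrative sketch (not part of the claimed embodiments), the role of the intrinsic parameters can be shown with a minimal pinhole-camera back-projection in Python. The focal lengths and principal point below are placeholder values, not calibration results from the patent.

```python
# Hypothetical pinhole-camera intrinsics: focal lengths (FX, FY) and
# principal point (CX, CY), all in pixels. Values are illustrative only.
FX, FY, CX, CY = 800.0, 800.0, 320.0, 240.0

def pixel_to_camera(u, v, depth):
    """Back-project a pixel (u, v) with a known depth (in meters) into a
    3-D point in the camera coordinate system using the intrinsics."""
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return (x, y, depth)
```

A point at the principal point maps onto the camera's optical axis; this is the kind of camera-frame pose calculation the intrinsic parameters enable, after which the extrinsic parameters carry the result into the robot's base coordinate system.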
In some embodiments, performing target recognition and pose resolution to obtain the perception information comprises: acquiring an actual image of the target construction object; comparing the actual image with samples in the product and process database; if the actual image matches an existing sample in the product and process database, invoking the target recognition and pose resolving scheme corresponding to the matched sample to obtain the perception information; and if the actual image does not match any existing sample in the product and process database, performing image processing and processing-scheme extraction on the actual image to form a new sample and a corresponding target recognition and pose resolving scheme, and adding them to the product and process database. Specifically, if the actual image fails to match the existing samples, the actual image can be run through image processing, contour and step point extraction, deep learning, model training, and other procedures, and the extracted processing scheme is added to the product and process database.
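The match-or-register flow above can be sketched as follows. This is a purely illustrative stand-in: `similarity` here is a toy comparison of feature tags, whereas a real system would use the image-matching method configured in the process database, and the "derive-new-scheme" placeholder stands for the image-processing and scheme-extraction pipeline.

```python
def match_or_register(image_signature, database, threshold=0.9):
    """Return the recognition/pose-solving scheme of the best-matching
    sample, or register the image as a new sample when nothing matches.
    `image_signature` is a tuple of feature tags (an assumption made for
    this sketch, not the patent's actual image representation)."""
    def similarity(a, b):
        # Toy score: fraction of shared feature tags (Jaccard index).
        shared = len(set(a) & set(b))
        return shared / max(len(set(a) | set(b)), 1)

    best = max(database,
               key=lambda s: similarity(image_signature, s["signature"]),
               default=None)
    if best and similarity(image_signature, best["signature"]) >= threshold:
        return best["scheme"]                      # existing sample matched
    new_sample = {"signature": image_signature,
                  "scheme": "derive-new-scheme"}   # placeholder pipeline
    database.append(new_sample)                    # grow the database
    return new_sample["scheme"]
```

The design point is that the database grows open-endedly: unmatched images become new samples, so later jobs on the same product can reuse the derived scheme.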
In some embodiments, the visual servo working mode includes a fixed-point tracking mode and a tracking mode. If it is determined in step S110 that the visual servo working mode is the fixed-point tracking mode, the perception information in step S120 is target feature point information. If it is determined in step S110 that the visual servo working mode is the tracking mode, the perception information in step S120 is desired trajectory information.
When the visual servo working mode is the fixed-point tracking mode, building construction task scenarios suited to fixed-point tracking are matched, including, but not limited to, a wall or floor tile leveling task, a wall or floor tile paving task, a bricklaying task, a screw hole plugging task, a reinforcing steel bar binding task, and the like. When a wall or floor tile leveling task is matched, the task reference object is the building surface to be operated on, and the characteristic information is relative attitude information perceived by an inertial measurement unit (IMU) or an inclinometer; when a wall or floor tile paving task is matched, the task reference object is a corner point of a paved tile, and the characteristic information is the relative pose between the tile to be paved and the reference tile; when a bricklaying task is matched, the task reference object is a corner point of a laid brick, and the characteristic information is the relative pose between the brick to be laid and the reference brick; when a screw hole plugging task is matched, the task reference object is a screw hole, and the characteristic information is the relative pose between the end of the execution mechanism and the hole; when a reinforcement binding task is matched, the task reference object is a reinforcement intersection point, and the characteristic information is the relative pose between the end of the execution mechanism and the intersection point.
When the visual servo working mode is the tracking mode, building construction task scenarios suited to continuous tracking are matched; the common matched scenarios mainly fall into uniform linear motion tracking, uniform circular motion tracking, and irregular spline curve tracking types.
Through step S120, perception information is generated. In some embodiments, the perception information includes a flag indicating whether the target construction object is within the effective field of view and a description of the position and/or attitude of the target construction object in three-dimensional space in the camera coordinate system. This description may be the 6D pose information of the target construction object in three-dimensional space in the camera coordinate system, and may include position description information and/or attitude description information. The position description information refers to the positions x, y, and z along the coordinate axes of the camera coordinate system, and the attitude description information refers to the rotation angles rx, ry, and rz about those axes. Depending on the type, function, and installation position of the sensors configured in the pose perception module, the three-dimensional spatial pose information of the target construction object accurately identified by the module differs, but it includes at least one or more dimensions of the 6D pose information.
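As an illustrative sketch of the perception information described above, the visibility flag and the 6D pose (x, y, z, rx, ry, rz) can be modeled as a simple record. The field names are assumptions for illustration, not terms from the patent.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    """Perception information: a flag for whether the target construction
    object is in the effective field of view, plus a 6D pose in the camera
    coordinate system (positions x, y, z and rotations rx, ry, rz about
    the camera axes). Defaults of 0.0 model sensors that only report a
    subset of the 6D dimensions."""
    in_field_of_view: bool
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    rx: float = 0.0
    ry: float = 0.0
    rz: float = 0.0
```

For example, a depth-only sensor might populate just `z`, leaving the other dimensions at their defaults, consistent with the statement that the pose information includes at least one dimension of the 6D pose.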
In some embodiments, when the pose sensing module is used for carrying out target recognition and pose resolving on the target construction object, the feature recognition precision of the target construction object is not lower than the error precision required by the process standards in the product and process databases.
In step S130, the target construction object is observed by the state observer and the observation model is called to generate motion state prediction information corresponding to the target construction object.
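The patent does not specify the observation model, so as an illustrative sketch only, a minimal constant-velocity observer shows how motion state prediction information can interpolate between low-frequency perception samples for the high-frequency control loop (the benefit described later for step S140). One-dimensional state is used for brevity.

```python
class ConstantVelocityObserver:
    """Minimal constant-velocity state observer (an assumed stand-in for
    the patent's observation model). It ingests sparse perception samples
    and extrapolates the target's position at arbitrary control-loop
    times, enabling interpolation between perception updates."""

    def __init__(self):
        self.pos = None   # last estimated position (1-D for brevity)
        self.vel = 0.0    # estimated velocity
        self.t = None     # timestamp of the last perception sample

    def update(self, pos, t):
        """Fuse a new perception sample taken at time t."""
        if self.pos is not None and t > self.t:
            self.vel = (pos - self.pos) / (t - self.t)
        self.pos, self.t = pos, t

    def predict(self, t):
        """Predict the target position at control-loop time t."""
        return self.pos + self.vel * (t - self.t)
```

With perception arriving at, say, 10 Hz and the control loop running at 1 kHz, `predict` supplies the intermediate states the controller needs between camera frames.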
In step S140, a motion control command is generated by the motion control module according to the perception information and the motion state prediction information.
In some embodiments, the step S140 of generating, by the motion control module, motion control instructions according to the perception information and the motion state prediction information includes: carrying out data preprocessing; and generating a motion control instruction according to the preprocessed perception information and the motion state prediction information.
The data preprocessing comprises the following steps: performing data verification on the perception information; and performing coordinate transformation on the perception information.
In some embodiments, performing data verification on the perception information comprises logic verification, process verification, and safety verification.
In the logic verification, it is checked whether the target construction object in the perception information is within the effective field of view, to judge whether the perception information is available.
If the logic verification is qualified, process verification is performed. In the process verification, the process standard data in the product and process database is invoked to verify whether the perception information is within the corresponding allowable error range; if so, the process verification is qualified, and if not, a prompt is issued. Specifically, the normal execution of a subsequent process operation in the construction process of the construction robot often depends on the qualified execution of the preceding operation process. In the process verification, the error tolerances required by the process standard data for the preceding process link are invoked from the product and process database, and the current perception information is checked against them. If the perception information is within the allowable error range specified by the process standard of the preceding process link, the process verification is judged to be qualified; otherwise, the construction robot does not respond to the perception information and issues a prompt.
If the process verification is qualified, safety verification is performed. In the safety verification, whether the perception information exceeds a motion capability threshold of the construction robot is verified: if it exceeds the threshold, the perception information is judged to be abnormal and discarded; if it does not, the safety verification is qualified. In some embodiments, when the perception information is judged abnormal and discarded, the construction robot maintains its current motion state and makes no servo response, so as to prevent runaway motion or other safety problems. If consecutive perception information is abnormal within a certain time range or count range, a prompt is issued indicating that the current sensor recognition is abnormal.
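The three-stage verification (logic, then process, then safety) can be sketched as follows. The dictionary keys and thresholds are assumptions made for illustration; a real implementation would draw the tolerance from the product and process database and the motion limit from the robot's configuration.

```python
def verify_perception(info, process_tolerance, motion_limit):
    """Run the three checks in order and return (ok, reason).
    info: dict with 'in_view' (target visible flag), 'process_error'
    (deviation from the preceding process link's standard), and
    'displacement' (motion implied by the perception information).
    All field names and units are illustrative assumptions."""
    if not info["in_view"]:
        return False, "logic check failed: target not in effective field of view"
    if abs(info["process_error"]) > process_tolerance:
        return False, "process check failed: outside allowable error (prompt)"
    if abs(info["displacement"]) > motion_limit:
        return False, "safety check failed: exceeds motion capability (discard)"
    return True, "all checks passed"
```

Ordering matters: a sample that is invisible or out of process tolerance is rejected before the safety stage, and a safety rejection discards the sample so the robot holds its current motion state.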
The coordinate conversion of the perception information comprises: converting perception information described in the camera coordinate system or the tool coordinate system into a description in the base coordinate system. Optionally, in this step, in combination with the foregoing parameter calibration step, the perception information can be converted into the base-coordinate-system description through a preset matrix operation.
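The preset matrix operation mentioned above is conventionally a 4x4 homogeneous transform obtained from the extrinsic calibration. The sketch below illustrates this with a made-up camera mounting (0.5 m above the base origin, axes aligned); the matrix is not a real calibration result.

```python
def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T (camera-to-base, from the
    extrinsic calibration) to a 3-D point p described in the camera
    coordinate system, yielding its base-coordinate-system description."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

# Illustrative extrinsics: camera 0.5 m above the base origin, no rotation.
T_cam_to_base = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.5],
    [0.0, 0.0, 0.0, 1.0],
]
```

With a real rotation in the upper-left 3x3 block, the same multiplication converts both positions and, by composition, motion states between frames.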
In some embodiments, generating motion control instructions from the preprocessed perception information and the motion state prediction information comprises: calculating an error equation between the current state and the target state of the construction robot according to the preprocessed perception information and the motion state prediction information; and generating a motion control instruction according to the error equation and a corresponding control law. The error equation includes, but is not limited to, the position error, attitude error, speed error, and acceleration error between the current state and the target state of the construction robot.
The motion control instruction generated according to the error equation and the corresponding control law may be a position control instruction, a speed control instruction, or a moment control instruction; the mode is selected during the basic parameter configuration in step S110. The generated motion control instruction further needs to satisfy the motion parameter constraints set by the user and the execution capability constraints of the execution driving module, while minimizing the tracking error convergence time.
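The patent leaves the control law unspecified, so as an illustrative sketch only, a saturated proportional law on a one-dimensional error shows how an error equation yields a speed control instruction that respects an execution capability limit. The gain and limit values are placeholders.

```python
def velocity_command(current, target, kp=1.5, v_max=0.2):
    """Illustrative proportional control law: the error equation is the
    difference between the target and current pose (1-D for brevity);
    the speed command is the error scaled by gain kp, then saturated at
    the drive module's capability limit v_max (placeholder values)."""
    error = target - current
    v = kp * error
    return max(-v_max, min(v_max, v))   # respect execution capability
```

The saturation term is what enforces "the execution capability constraints of the execution driving module": large errors command the maximum speed rather than an unachievable one, while small errors converge proportionally.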
In step S150, a motion is performed according to a motion control instruction by the execution driving module. The execution driving module can feed back real-time motion state information at the same time.
In step S160, the real-time motion state information fed back by the execution driving module is received by the motion control module.
In step S170, a dynamic compensation control instruction is generated by the motion control module according to the motion control instruction and the real-time motion state information.
In step S180, the corrective motion is performed according to the dynamic compensation control instruction by the execution driving module.
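Steps S160 to S180 can be sketched as a single corrective computation: the commanded state is compared with the real-time state fed back by the execution driving module, and a compensation increment proportional to the deviation is issued. This is an assumed minimal form of the dynamic compensation, with a placeholder gain.

```python
def compensation_command(commanded, actual, kc=0.5):
    """Illustrative dynamic compensation: compare the issued motion
    command with the real-time motion state fed back by the execution
    driving module and return a corrective increment proportional to
    the deviation. The gain kc is a placeholder value."""
    deviation = commanded - actual
    return kc * deviation
```

When the drive tracks the command exactly the compensation is zero; disturbances that pull the actual state away from the command produce a proportional correction, which is how the motion state is "continuously corrected" against real-time disturbance.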
In some embodiments, step S190 is further included. In step S190, in each control cycle of the motion control module, it is determined whether the pose of the construction robot in the task space meets the precision convergence criterion. If so, the visual servo work ends. If not, the steps of generating a motion control instruction, executing motion according to the motion control instruction, receiving the fed-back real-time motion state information, generating a dynamic compensation control instruction, and executing the correction motion according to the dynamic compensation control instruction are repeated; that is, steps S140, S150, S160, S170, and S180 are repeated. When the visual servo work ends, the motion state of the construction robot may maintain the current position, the current speed, or the current moment, according to the maintaining mode set during the basic parameter configuration in step S110.
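The per-cycle convergence check of step S190 can be sketched as the outer loop below. The tolerance, cycle budget, and the single `step_fn` standing in for steps S140 to S180 are illustrative assumptions.

```python
def servo_loop(pose, target, step_fn, tolerance=1e-3, max_cycles=100):
    """Sketch of step S190: run control cycles until the pose error meets
    the precision convergence criterion, or until an illustrative cycle
    budget is exhausted. step_fn bundles steps S140-S180 into one call
    that returns the updated pose (1-D for brevity)."""
    for _ in range(max_cycles):
        if abs(target - pose) <= tolerance:
            return pose, True          # criterion met: end visual servo work
        pose = step_fn(pose, target)   # generate, execute, compensate
    return pose, False                 # budget exhausted without converging
```

With any contractive step (e.g. one that halves the remaining error each cycle), the loop terminates as soon as the pose is within the precision convergence criterion.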
According to the visual servo control method provided by the embodiment of the invention, for a construction robot, the execution driving module feeds back real-time motion state information while executing motion according to the motion control instruction, and the motion control module generates a dynamic compensation control instruction according to the motion control instruction and the real-time motion state information, so that the execution driving module executes a correction motion according to the dynamic compensation control instruction. The motion state of the construction robot is thereby continuously corrected, which reduces the influence of a complex dynamic environment and real-time disturbances on the construction robot and improves the accuracy of its motion control. In addition, the state observer observes the target construction object and calls the observation model to generate motion state prediction information corresponding to the target construction object, and the motion control instruction is generated from the perception information together with the motion state prediction information. When the generation frequency of the perception information is lower than the frequency required for motion control instructions, the motion state prediction information can be used for interpolation to generate the motion control instructions, resolving the contradiction between the high-frequency demand for motion control instructions and the low-frequency perception information. Even when the perception information is generated at a low frequency, a smoother set of motion control instructions can be generated, which lowers the required processing speed of the pose perception module, reduces the dependence on a high-speed pose perception module, and helps reduce cost.
Optionally, the visual servo working mode includes a fixed-point tracking mode and a tracking mode. If it is determined in step S110 that the visual servo working mode is the fixed-point tracking mode, the perception information in step S120 is target feature point information. If it is determined in step S110 that the visual servo working mode is the tracking mode, the perception information in step S120 is expected trajectory information. Therefore, the visual servo control method provided by the embodiment of the invention is compatible with the target sensing and dynamic tracking control tasks commonly found in construction robot work such as brick paving, brick laying, screw-hole plugging, gluing, and caulking; it realizes modular packaging and invocation of a dynamic tracking algorithm sample library, can openly receive new algorithm sample cases, and improves the applicability to different tasks and targets.
Optionally, performing data verification on the perception information includes logic verification, process verification, and safety verification. The visual servo control method thereby adds protection against sensor data anomalies caused by occlusion or loss of the target construction object during construction, independent fine-grained control of data in different dimensions, and multiple stop modes for the visual servo work, improving safety and flexibility.
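The three-way verification can be sketched as below. The text only names the three checks; the concrete rules here (finiteness, inter-frame jump limit, workspace bounds) are illustrative assumptions about what each check might test.

```python
import math

def verify_perception(pose, workspace_min, workspace_max,
                      max_step=0.05, last_pose=None):
    """Sketch of the data verification step on perception information.

    The three checks are named in the text; the concrete rules below
    are illustrative assumptions, not the claimed implementation.
    """
    # Logic verification: values present and finite
    # (guards against occluded or lost targets producing bad sensor data)
    if pose is None or any(not math.isfinite(v) for v in pose):
        return False
    # Process verification: the pose jump between frames stays within
    # a process tolerance (illustrative rule)
    if last_pose is not None:
        if max(abs(a - b) for a, b in zip(pose, last_pose)) > max_step:
            return False
    # Safety verification: pose lies inside the allowed workspace
    return all(lo <= v <= hi
               for v, lo, hi in zip(pose, workspace_min, workspace_max))
```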
The embodiment of the invention also provides a visual servo control system for the construction robot. The visual servo control system can realize the visual servo control method.
FIG. 2 is a block diagram of a visual servoing control system according to an embodiment of the invention. The visual servoing control system includes a pose sensing module 130, a motion control module 140, and an execution driving module 150. In this embodiment, the visual servoing control system further includes a product and process database 110 and a master control module 120. The product and process database 110, the master control module 120, the pose sensing module 130, the motion control module 140, and the execution driving module 150 are communicatively connected to each other.
The product and process database 110 stores process standard data and sample information for material products. The process standard data includes, for example, standard process control flows and program templates, visual servo working mode models, process acceptance criteria, and the like. According to the process acceptance qualification conditions, the precision convergence standard, the task ending conditions, and the like of the visual servo work can be determined.
The main control module 120 is configured to perform basic parameter configuration, which includes: determining a visual servo working mode; and invoking process standard data in the product and process database 110 to obtain the precision convergence standard of the visual servo work. The main control module 120 may set, coordinate, schedule, adjust, and control the other modules according to the standard process control flows and program templates in the product and process database 110, in combination with the current actual construction task. Beyond the above examples, the basic parameter configuration may also cover the construction robot's job task, system parameters, construction process parameters, and the like. During basic parameter configuration, a main control program for the construction process of the construction robot may be run. The main control module can perform overall scheduling control over the complete construction task of the construction robot.
The pose awareness module 130 is configured to generate awareness information corresponding to the target construction object. In some embodiments, the pose sensing module 130 performs parameter calibration according to the configuration of the construction system, the image of the construction object material product, the construction task, etc. configured and sent by the main control module 120, and performs target recognition and pose calculation to obtain the sensing information. The pose awareness module 130 sends awareness information to the motion control module 140.
The pose sensing module 130 may include visual sensors, media for running visual recognition, image processing, and pose resolving programs, other connection and auxiliary devices, and sensors with pose information sensing and processing capability, such as IMUs, inclinometers, and laser guidance and receiving devices. The pose sensing module 130 performs parameter calibration, then performs target recognition and pose calculation to obtain the perception information. Performing target recognition and pose calculation to obtain the perception information comprises: acquiring an actual image of the target construction object; comparing the actual image with the samples in the product and process database; if the actual image matches an existing sample in the product and process database, invoking the target recognition and pose resolving scheme corresponding to the matched sample to obtain the perception information; and if the actual image does not match any existing sample in the product and process database, performing graphics processing and processing-scheme extraction on the actual image to form a new sample and a corresponding target recognition and pose resolving scheme, and adding them to the product and process database. Specifically, if the actual image fails to match any existing sample, the actual image can be put through graphics processing, contour and step-point extraction, deep learning, model training, and similar procedures, and the extracted processing scheme is added to the product and process database.
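The sample-matching branch described above can be sketched as follows. The similarity metric (normalized cross-correlation) and the database layout are illustrative assumptions; the placeholder scheme stands in for the graphics-processing and scheme-extraction pipeline, which is not reproduced here.

```python
import numpy as np

def recognize_target(actual_image, sample_db, match_threshold=0.9):
    """Sketch of the sample-matching flow for target recognition.

    `sample_db` is an assumed list of (sample_image, scheme) pairs;
    normalized cross-correlation is an illustrative stand-in for the
    matching actually used.
    """
    def normalize(img):
        v = img.ravel().astype(float)
        return (v - v.mean()) / (v.std() + 1e-12)

    a = normalize(actual_image)
    best_score, best_scheme = 0.0, None
    for sample, scheme in sample_db:
        score = float(np.dot(a, normalize(sample))) / a.size
        if score > best_score:
            best_score, best_scheme = score, scheme
    if best_score >= match_threshold:
        return best_scheme                        # matched: call the stored scheme
    # No match: form a new sample + scheme and add it to the database
    new_scheme = {"source": "auto-extracted"}     # placeholder for the extraction pipeline
    sample_db.append((actual_image, new_scheme))
    return new_scheme
```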
The motion control module 140 includes a state observer 141. The state observer 141 is configured to observe the target construction object and to invoke an observation model to generate motion state prediction information corresponding to the target construction object, and the motion control module 140 is configured to generate motion control instructions based on the perception information and the motion state prediction information. Generating motion control instructions by the motion control module based on the perception information and the motion state prediction information comprises: performing data preprocessing; and generating a motion control instruction according to the preprocessed perception information and the motion state prediction information.
The data preprocessing comprises: performing data verification on the perception information; and performing coordinate transformation on the perception information. In some embodiments, performing data verification on the perception information comprises logic verification, process verification, and safety verification. The coordinate transformation of the perception information comprises converting perception information described in the camera coordinate system or the tool coordinate system into a description in the base coordinate system. Optionally, in this step, in combination with the foregoing parameter calibration step, the perception information may be converted into a description in the base coordinate system through a preset matrix operation.
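The preset matrix operation for the coordinate transformation can be sketched as a homogeneous transform. The name `T_base_cam` is an illustrative assumption for the 4x4 matrix obtained from the parameter calibration step (e.g. a hand-eye calibration result).

```python
import numpy as np

def camera_to_base(point_cam, T_base_cam):
    """Sketch of the coordinate transformation step: express a point
    perceived in the camera coordinate system in the base coordinate
    system via a preset homogeneous transform (name assumed).
    """
    p = np.append(np.asarray(point_cam, dtype=float), 1.0)  # homogeneous coordinates
    return (T_base_cam @ p)[:3]                             # back to a 3-D point
```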
In some embodiments, generating motion control instructions from the preprocessed perceptual information and the motion state prediction information comprises: calculating an error equation between the current state and the target state of the construction robot according to the preprocessed perception information and the motion state prediction information; and generating a motion control instruction according to the error equation and a corresponding control law.
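The "error equation plus control law" pairing can be sketched with the simplest case, a proportional law on the task-space pose error. This is an illustrative assumption: the embodiment only states that a control law corresponding to the error equation is applied, without fixing its form.

```python
import numpy as np

def proportional_control_law(current_pose, target_pose, gain=0.5):
    """Sketch of 'error equation + control law' using a simple
    proportional law on the pose error. The gain and the law itself
    are illustrative; the actual law would be chosen per the visual
    servo working mode.
    """
    error = np.asarray(target_pose, float) - np.asarray(current_pose, float)
    velocity_command = gain * error      # motion control instruction (velocity form)
    return error, velocity_command
```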
The execution driving module 150 can execute motion according to the motion control instruction and can feed back real-time motion state information. The execution driving module 150 may be a serial, parallel, or hybrid mechanical structure of a given configuration, including a preset number of motors and other auxiliary driving devices, capable of executing motion control instructions in the corresponding work task space so that the construction robot moves in a preset motion state, and of feeding back the real-time motion state information of the construction robot.
In this embodiment, the motion control module 140 can generate a dynamic compensation control instruction according to the motion control instruction and the real-time motion state information, and the execution driving module 150 can execute the correction motion according to the dynamic compensation control instruction.
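The compensation step can be sketched as feeding the deviation between the commanded and the measured motion state back as a correction. The proportional form and the gain are illustrative assumptions; the text does not fix the compensation law.

```python
import numpy as np

def dynamic_compensation(commanded_state, measured_state, comp_gain=1.0):
    """Sketch of generating a dynamic compensation control instruction
    from the original motion control instruction and the fed-back
    real-time motion state. The proportional correction and its gain
    are illustrative assumptions.
    """
    deviation = np.asarray(commanded_state, float) - np.asarray(measured_state, float)
    return comp_gain * deviation         # correction-motion instruction
```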
According to the visual servo control system of the embodiment of the invention, the execution driving module 150 feeds back real-time motion state information while executing motion according to the motion control instruction, and the motion control module 140 generates a dynamic compensation control instruction according to the motion control instruction and the real-time motion state information, so that the execution driving module 150 executes a correction motion according to the dynamic compensation control instruction. The motion state of the construction robot is thus continuously corrected, the influence of a complex dynamic environment and real-time disturbances on the construction robot is reduced, and the accuracy of motion control is improved. In addition, the state observer 141 observes the target construction object and invokes the observation model to generate motion state prediction information corresponding to the target construction object, and the motion control instruction is generated from both the perception information and the motion state prediction information. When the generation frequency of the perception information is lower than the frequency required by the motion control instructions, the motion state prediction information can be used for interpolation, resolving the conflict between the high-frequency demand for motion control instructions and the low-frequency perception information. Even when the perception information is generated at a low frequency, a smoother set of motion control instructions can be produced, which lowers the required processing speed of the pose perception module, reduces the dependence on a high-speed pose perception module, and helps reduce cost.
The present invention also provides a visual servo control apparatus including a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the visual servo control method in the above embodiments.
Fig. 3 is a schematic diagram of an embodiment of a visual servo control device 500 according to the present invention. The device may vary widely in configuration or performance and may include at least one processor (central processing unit, CPU) 510, a memory 520, and at least one storage medium 530 (e.g., at least one mass storage device) storing applications 533 and/or data 532. The memory 520 and the storage medium 530 may be transitory or persistent storage. The program stored in the storage medium 530 may include at least one module (not shown), and each module may include a series of instruction operations on the visual servo control device 500. Still further, the processor 510 may be arranged to communicate with the storage medium 530 and to execute, on the visual servo control device 500, the series of instruction operations stored in the storage medium 530.
The visual servoing control device 500 may also include at least one power supply 540, at least one wired or wireless network interface 550, at least one input/output interface 560, and/or at least one operating system 531, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. It will be appreciated by those skilled in the art that the configuration shown in Fig. 3 does not limit the visual servoing control device, which may include more or fewer components than shown, may combine certain components, or may arrange the components differently.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, and which may also be a volatile computer readable storage medium, having stored therein instructions that, when executed on a computer, cause the computer to perform the steps of the aforementioned visual servo control method.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, but rather, the equivalent structural changes made by the description and drawings of the present invention or the direct/indirect application in other related technical fields are included in the scope of the present invention.