
CN116382296A - Path generation method, path generation device, robot and computer readable storage medium - Google Patents

Path generation method, path generation device, robot and computer readable storage medium

Info

Publication number
CN116382296A
CN116382296A
Authority
CN
China
Prior art keywords
node
expansion
state information
robot
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310502675.5A
Other languages
Chinese (zh)
Inventor
朱吉林 (Zhu Jilin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Pudu Robot Co ltd
Shenzhen Pudu Technology Co Ltd
Original Assignee
Chengdu Pudu Robot Co ltd
Shenzhen Pudu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Pudu Robot Co ltd, Shenzhen Pudu Technology Co Ltd filed Critical Chengdu Pudu Robot Co ltd
Priority to CN202310502675.5A
Publication of CN116382296A
Legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0219 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory ensuring the processing of the whole working surface
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Numerical Control (AREA)

Abstract

The application relates to a path generation method, a path generation device, a robot and a computer readable storage medium. The method comprises the following steps: acquiring a motion control instruction set and a current position node and a target position node of the robot in a grid map, wherein the motion control instruction set comprises at least two sub-instructions; fusing each sub-instruction with the state information corresponding to the current position node to obtain corresponding reference state information, and determining a corresponding expansion node according to the reference state information, wherein the state information represents the motion state of the robot at the corresponding node; determining a reference expansion node from the expansion nodes according to the first distance between each expansion node and the current position node and the second distance between each expansion node and the target position node; and, when the reference expansion node is within a preset range corresponding to the target position node, determining a target path according to the target position node, the reference expansion node and the current position node. The method can improve the adaptability of the generated path.

Description

Path generation method, path generation device, robot and computer readable storage medium
Technical Field
The present disclosure relates to the field of robots, and in particular, to a path generating method, a path generating device, a robot, and a computer readable storage medium.
Background
With the development of robotics, robots for cleaning, patrolling, demining, mowing, autonomous navigation and similar tasks have been widely studied and applied. Generating robot paths effectively and accurately is therefore important for robots to carry out their work efficiently and reliably.
In the prior art, the paths generated by traditional path-search algorithms often fail to satisfy the kinematic constraints of the robot.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a path generation method, a path generation device, a robot, and a computer-readable storage medium that can improve the adaptability of the generated path and satisfy the kinematic constraints of the robot.
In a first aspect, the present application provides a robot comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the computer readable instructions when executed by the processor implement the steps of:
acquiring a motion control instruction set, and a current position node and a target position node of the robot in a grid map, wherein the motion control instruction set comprises at least two sub-instructions;
Fusing each sub-instruction with the state information corresponding to the current position node respectively to obtain corresponding reference state information, and determining a corresponding expansion node according to the reference state information, wherein the state information is used for representing the motion state of the robot at the corresponding node;
determining a reference expansion node from the expansion nodes according to the first distance between each expansion node and the current position node and the second distance between each expansion node and the target position node;
when the reference expansion node is in a preset range corresponding to the target position node, determining a target path according to the target position node, the reference expansion node and the current position node.
In one embodiment, obtaining a set of motion control instructions includes:
acquiring linear acceleration range data and linear acceleration sampling intervals of a robot;
acquiring angular acceleration range data and angular acceleration sampling intervals of a robot;
sampling linear acceleration range data according to the linear acceleration sampling interval to obtain a linear acceleration set;
sampling angular acceleration range data according to an angular acceleration sampling interval to obtain an angular acceleration set;
based on the combination of each element in the linear acceleration set and each element in the angular acceleration set, each motion control sub-instruction is obtained, and a motion control instruction set is obtained according to each motion control sub-instruction.
In one embodiment, the state information includes position information, heading angle, linear velocity, and angular velocity of the robot at the corresponding node in the world coordinate system; the sub-instruction comprises linear acceleration, angular acceleration and control duration of the robot at a corresponding node under a world coordinate system;
fusing each sub-instruction with the state information corresponding to the current position node respectively to obtain corresponding reference state information, wherein the method comprises the following steps:
obtaining a reference linear velocity according to the fusion of the linear velocity, the linear acceleration and the control duration;
obtaining a reference angular velocity according to fusion of the angular velocity, the angular acceleration and the control duration;
obtaining a reference course angle according to the fusion of the course angle, the angular speed, the control duration and the angular acceleration;
obtaining reference position information according to fusion of position information, linear speed, control duration, linear acceleration and reference course angle;
and obtaining the reference state information based on the reference linear velocity, the reference angular velocity, the reference heading angle and the reference position information.
In one embodiment, determining the corresponding expansion node from the reference state information includes: acquiring the origin position information, in the grid map, of the origin of the world coordinate system;
Obtaining index information corresponding to the extended node in the grid map, which corresponds to the reference state information, based on fusion of the reference position information, the origin position information and the map resolution of the grid map in the reference state information;
and determining the expansion node corresponding to the reference state information according to the index information.
In one embodiment, determining the corresponding expansion node from the reference state information includes: determining corresponding candidate expansion nodes according to each piece of reference state information respectively;
and determining expansion nodes from the candidate expansion nodes according to the linear speed and the obstacle state information in the reference state information corresponding to the candidate expansion nodes, wherein the obstacle state information is used for representing the existence state of the obstacle at the corresponding candidate expansion nodes.
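The velocity-and-obstacle filtering described above can be sketched as follows; the function name, the occupancy representation and the velocity limit are illustrative assumptions, not details from the patent:

```python
def filter_expansion_nodes(candidates, occupancy, v_max=1.0):
    """Keep the candidate expansion nodes whose reference linear velocity is
    within the robot's limit and whose grid cell holds no obstacle.

    candidates: list of (grid_index, reference_linear_velocity) pairs
    occupancy:  dict mapping grid_index -> True when an obstacle is present
    """
    return [(idx, v) for idx, v in candidates
            if abs(v) <= v_max and not occupancy.get(idx, False)]

candidates = [((3, 4), 0.5), ((3, 5), 1.4), ((4, 4), 0.3)]
occupancy = {(4, 4): True}  # obstacle at cell (4, 4)
print(filter_expansion_nodes(candidates, occupancy))  # [((3, 4), 0.5)]
```

Here the node at (3, 5) is dropped for exceeding the velocity limit and the node at (4, 4) for holding an obstacle.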
In one embodiment, determining a reference expansion node from the expansion nodes according to a first distance between each expansion node and the current position node and a second distance between each expansion node and the target position node comprises:
calculating a first distance between each expansion node and the current position node;
calculating a second distance between each expansion node and the target position node;
fusing the first distance and the second distance of each expansion node to obtain a corresponding total distance;
and determining a reference expansion node from the expansion nodes according to the total distance of each expansion node.
In one embodiment, the processor, when executing computer-readable instructions, performs the steps of:
when the reference expansion node is not in the preset range, taking the reference expansion node as the next current position node;
and returning to the step of fusing each sub-instruction with the state information corresponding to the current position node respectively to obtain corresponding reference state information, and determining the corresponding expansion node according to the reference state information.
In a second aspect, the present application provides a path generating method, including:
acquiring a motion control instruction set, and a current position node and a target position node of the robot in a grid map, wherein the motion control instruction set comprises at least two sub-instructions;
fusing each sub-instruction with the state information corresponding to the current position node respectively to obtain corresponding reference state information, and determining a corresponding expansion node according to the reference state information, wherein the state information is used for representing the motion state of the robot at the corresponding node;
determining a reference expansion node from the expansion nodes according to the first distance between each expansion node and the current position node and the second distance between each expansion node and the target position node;
When the reference expansion node is in a preset range corresponding to the target position node, determining a target path according to the target position node, the reference expansion node and the current position node.
In a third aspect, the present application provides a path generating apparatus, including:
the acquisition module is used for acquiring a motion control instruction set and a current position node and a target position node of the robot in the grid map, wherein the motion control instruction set comprises at least two sub-instructions;
the calculation module is used for respectively fusing each sub-instruction with the state information corresponding to the current position node to obtain corresponding reference state information, determining a corresponding expansion node according to the reference state information, and the state information is used for representing the motion state of the robot at the corresponding node;
the determining module is used for determining a reference expansion node from the expansion nodes according to the first distance between each expansion node and the current position node and the second distance between each expansion node and the target position node;
and the generation module is used for determining a target path according to the target position node, the reference expansion node and the current position node when the reference expansion node is in a preset range corresponding to the target position node.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the steps of the path generation method described above.
According to the path generation method, the path generation device, the robot and the computer readable storage medium, the state information corresponding to the current position node of the robot is fused with each sub-instruction in the acquired motion control instruction set to generate the corresponding expansion nodes, and a reference expansion node is determined from these expansion nodes based on the distances between each expansion node and the current position node and between each expansion node and the target position node. In this way the kinematic constraints of the robot are taken into account; for example, a two-wheel differential model cannot support motion states such as lateral movement, in-situ turning or large-curvature turning. When the next reference expansion node is determined from the current position node, the corresponding reference expansion node is therefore determined with the kinematic constraints of the robot fully considered. Finally, the target path is determined based on the target position node, the reference expansion node and the current position node, which effectively improves the adaptability of the generated path and satisfies the kinematic constraints.
Drawings
FIG. 1 is an application environment diagram of a path generation method in one embodiment;
FIG. 2 is a flow diagram of a method of path generation in one embodiment;
FIG. 3 is a flow diagram of generating a set of motion control instructions in one embodiment;
FIG. 4 is a flow diagram of determining reference state information in one embodiment;
FIG. 5 is a flow diagram of determining corresponding expansion nodes according to index information in one embodiment;
FIG. 6 is a flow diagram of determining an expansion node among candidate expansion nodes in one embodiment;
FIG. 7 is a flow diagram of determining a reference extension node in one embodiment;
FIG. 8 is a flow diagram of a reference expansion node as a next current location node in one embodiment;
FIG. 9 is a process diagram of node expansion in one embodiment;
FIG. 10 is a flow diagram of a method of robot path search in one embodiment;
FIG. 11 is a schematic diagram of a path search process in one embodiment;
FIG. 12 is a block diagram of a path generation apparatus in one embodiment;
FIG. 13 is an internal structural view of the robot in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The path generation method provided by the embodiments of the application can be applied to the application environment shown in fig. 1, in which the robot 102 communicates with a server via a network. The data storage system may store data that the server needs to process, and may be integrated on the server or placed on a cloud or other network server. The robot 102 acquires a motion control instruction set and the current position node and target position node of the robot in the grid map; fuses each sub-instruction with the state information corresponding to the current position node to obtain corresponding reference state information, and determines the corresponding expansion nodes according to the reference state information; determines a reference expansion node from the expansion nodes according to the first distance between each expansion node and the current position node and the second distance between each expansion node and the target position node; and, when the reference expansion node is within the preset range corresponding to the target position node, determines a target path according to the target position node, the reference expansion node and the current position node.
In one embodiment, a robot is provided that includes a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, as shown in fig. 2, which when executed by the processor implement the steps of:
Step S202, a motion control instruction set and a current position node and a target position node of the robot in the grid map are obtained.
Wherein the set of motion control instructions comprises at least two sub-instructions.
Specifically, the robot control device collects environmental data of the surrounding area using a positioning sensor on the robot and, by analyzing the collected environmental data, obtains the coordinate information of the robot in the world coordinate system, the current position node corresponding to the robot in the grid map, and the target position node corresponding to the target position in the grid map. The coordinate information of the robot in the world coordinate system and the current position node of the robot in the grid map are in one-to-one correspondence; that is, once the coordinate position of the robot in the world coordinate system is determined, the node position information corresponding to that coordinate position in the grid map can be uniquely determined. Alternatively, the robot may determine the position information of itself and of the target position in the world coordinate system, or the corresponding node position information in the grid map, by receiving positioning information sent by a server or other computer device.
Step S204, each sub-instruction is respectively fused with the state information corresponding to the current position node to obtain the corresponding reference state information, and the corresponding expansion node is determined by the reference state information.
The state information represents the motion state of the robot at the corresponding node. The motion state may include physical quantities reflecting the robot's motion, such as the linear velocity, angular velocity, linear acceleration, angular acceleration, heading angle, and the position information in the world coordinate system or the grid map. The dimensions of the parameters contained in each sub-instruction correspond one-to-one to the dimensions of the state information at each node.
Specifically, the robot respectively fuses the parameters of each dimension in each sub-instruction with the parameters of each dimension in the state information corresponding to the current position node to respectively obtain the reference state parameters corresponding to each dimension, then the reference state parameters of each dimension are combined to obtain the corresponding reference state information, and the corresponding expansion node is determined by the reference state information.
Step S206, determining a reference expansion node from the expansion nodes according to the first distance between the expansion nodes and the current position node and the second distance between the expansion nodes and the target position node.
The first distance and the second distance may be distances calculated in the same manner, or may be distances calculated in different manners, for example, the first distance and the second distance may be both euclidean distances, or the first distance may be a euclidean distance, and the second distance may be a distance calculated in other manners.
Specifically, the robot acquires a first distance between each expansion node and a current position node, acquires a second distance between each expansion node and a target position node, fuses the first distance and the second distance to obtain a total distance corresponding to each expansion node, and determines a reference expansion node from each expansion node according to the total distance.
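A minimal sketch of this selection step, assuming both distances are Euclidean (the patent allows other metrics for the second distance) and fusing them by simple addition; the names are illustrative:

```python
import math

def select_reference_node(expansions, current, target):
    """Choose the expansion node minimizing the total distance: the first
    distance (to the current position node) plus the second distance (to
    the target position node); both are Euclidean in this sketch.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(expansions, key=lambda n: dist(n, current) + dist(n, target))

current, target = (0.0, 0.0), (10.0, 0.0)
expansions = [(1.0, 1.0), (1.0, 0.0), (0.5, -1.0)]
print(select_reference_node(expansions, current, target))  # (1.0, 0.0)
```

The node (1.0, 0.0) wins because its total distance of 1 + 9 = 10 is the smallest of the three.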
Step S208, when the reference expansion node is in the preset range corresponding to the target position node, determining a target path according to the target position node, the reference expansion node and the current position node.
Specifically, the robot compares the coordinate position of the reference expansion node in the world coordinate system with the preset range corresponding to the target position node, and when the reference expansion node is in the preset range corresponding to the target position node, the target path is determined according to the target position node, the reference expansion node and the current position node.
In this embodiment, each sub-instruction in the acquired motion control instruction set is fused with the state information corresponding to the current position node of the robot to generate the corresponding expansion nodes, and the reference expansion node is determined from these expansion nodes based on the distance between each expansion node and the current position node and the distance between each expansion node and the target position node. Because the expansion nodes are generated from the motion control instructions, the kinematic constraints of the robot are respected even for models such as a two-wheel differential model, which cannot support motion states such as lateral movement, in-situ turning or large-curvature turning. Thus, when the next reference expansion node is determined from the current position node, it is determined with the kinematic constraints of the robot fully considered, and the target path is finally determined based on the target position node, the reference expansion node and the current position node, effectively improving the adaptability of the generated path.
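The overall flow of steps S202 to S208 can be sketched as a loop. This toy version replaces the sub-instruction fusion with eight fixed unit moves and omits the grid map and kinematics entirely, so every name and constant here is an illustrative assumption:

```python
import math

def generate_path(start, target, step=1.0, goal_radius=0.5, max_iters=100):
    """Simplified sketch of steps S202-S208: from the current node, generate
    expansion nodes (here eight fixed unit moves stand in for fusing the
    motion control sub-instructions), select the reference expansion node
    minimizing first distance plus second distance, and stop once it falls
    within the preset range of the target position node.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    path = [start]
    current = start
    for _ in range(max_iters):
        # stand-in for sub-instruction fusion: eight headings at 45 degrees
        expansions = [(current[0] + step * math.cos(k * math.pi / 4),
                       current[1] + step * math.sin(k * math.pi / 4))
                      for k in range(8)]
        # reference expansion node: minimal first distance + second distance
        current = min(expansions,
                      key=lambda n: dist(n, current) + dist(n, target))
        path.append(current)
        if dist(current, target) <= goal_radius:  # within the preset range
            return path
    return path  # search budget exhausted

path = generate_path((0.0, 0.0), (4.0, 0.0))
print(len(path), path[-1])  # 5 (4.0, 0.0)
```

Each iteration mirrors one pass of the method: expand, select the reference node, test it against the preset range, and otherwise treat it as the next current position node.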
In one embodiment, as shown in FIG. 3, acquiring a set of motion control instructions includes:
step S302, acquiring the linear acceleration range data and the linear acceleration sampling interval of the robot.
The linear acceleration range data of the robot can be flexibly set by a technician according to the motion characteristics of the robot. For example, if the robot risks losing control of its motion once its linear acceleration exceeds a maximum limit, the upper bound of the linear acceleration range in the linear acceleration range data is set below that maximum limit.
Step S304, acquiring angular acceleration range data and angular acceleration sampling intervals of the robot.
The angular acceleration range data of the robot can likewise be flexibly set by a technician according to the motion characteristics of the robot. For example, if the robot risks rollover or other loss of control once its angular acceleration exceeds a maximum limit, the upper bound of the angular acceleration range in the angular acceleration range data is set below that maximum limit.
And step S306, sampling the linear acceleration range data according to the linear acceleration sampling interval to obtain a linear acceleration set.
Step S308, sampling the angular acceleration range data according to the angular acceleration sampling interval to obtain an angular acceleration set.
Step S310, based on the combination of each element in the linear acceleration set and each element in the angular acceleration set, each motion control sub-instruction is obtained, and a motion control instruction set is obtained according to each motion control sub-instruction.
Specifically, the robot samples the linear acceleration range and the angular acceleration range at the sampling intervals from the foregoing steps to obtain a linear acceleration set and an angular acceleration set, and then combines each element of the linear acceleration set with each element of the angular acceleration set to obtain the motion control sub-instructions; the combination may be arbitrary. For example, when the linear acceleration range is [-0.2, 0.2] with a sampling interval of 0.1, the sampled linear acceleration set is {-0.2, -0.1, 0, 0.1, 0.2}; when the angular acceleration range is [-pi, pi] with a sampling interval of pi/2, the sampled angular acceleration set is {-pi, -pi/2, 0, pi/2, pi}. Combining every element of the linear acceleration set with every element of the angular acceleration set therefore yields 25 combinations, and the motion control instruction set is obtained from these 25 motion control sub-instructions.
In this embodiment, according to the motion characteristics of the robot, the linear acceleration range, the angular acceleration range and the sampling interval flexibly set by a technician are sampled to obtain the linear acceleration set and the angular acceleration set, and then each motion control sub-instruction is obtained according to the combination of each element in the linear acceleration set and each element in the angular acceleration set, so that the motion control instruction set obtained on this basis can be effectively applied to the motion characteristics of the robot, and the applicability and reliability of the motion control instruction set are improved.
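The sampling and combination procedure, using the worked numbers above, can be sketched as follows; the function names and the control duration attached to each sub-instruction are illustrative assumptions:

```python
import math

def sample_range(lo, hi, step):
    """Inclusively sample the closed interval [lo, hi] at the given interval."""
    n = round((hi - lo) / step)
    return [lo + k * step for k in range(n + 1)]

def build_motion_control_instruction_set(t=0.5):
    """Steps S302-S310 with the worked numbers from the text: the linear
    acceleration range [-0.2, 0.2] sampled at 0.1 and the angular
    acceleration range [-pi, pi] sampled at pi/2. The control duration t
    per sub-instruction is an assumption, not a value from the patent.
    """
    lin = sample_range(-0.2, 0.2, 0.1)                 # 5 linear accelerations
    ang = sample_range(-math.pi, math.pi, math.pi / 2)  # 5 angular accelerations
    # one sub-instruction per (linear acceleration, angular acceleration) pair
    return [(a_v, a_w, t) for a_v in lin for a_w in ang]

print(len(build_motion_control_instruction_set()))  # 25
```

Pairing the 5 linear samples with the 5 angular samples reproduces the 25 sub-instructions of the worked example.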
In one embodiment, as shown in fig. 4, fusing each sub-instruction with the state information corresponding to the current location node to obtain corresponding reference state information, including:
the state information comprises position information, course angle, linear speed and angular speed of the robot at a corresponding node under a world coordinate system; the sub-instructions comprise linear acceleration, angular acceleration and control duration of the robot at corresponding nodes in a world coordinate system.
Step S402, obtaining a reference linear velocity according to fusion of the linear velocity, the linear acceleration and the control duration.
Specifically, the robot fuses the linear acceleration and the control time to obtain a linear velocity variation, and fuses the linear velocity variation and the linear velocity to obtain a reference linear velocity.
For example, the robot may calculate the reference linear velocity v_i according to the following Equation 1:

v_i = v_c + a_v·t    (Equation 1)

where v_c is the linear velocity, a_v is the linear acceleration, and t is the control duration.
And step S404, obtaining the reference angular velocity according to the fusion of the angular velocity, the angular acceleration and the control duration.
Specifically, the robot fuses the angular acceleration and the control duration to obtain an angular velocity variation, and fuses the angular velocity variation with the angular velocity to obtain the reference angular velocity. For example, the robot may calculate the reference angular velocity w_i according to the following Equation 2:

w_i = w_c + a_w·t    (Equation 2)

where w_c is the angular velocity, a_w is the angular acceleration, and t is the control duration.
Step S406, obtaining a reference course angle according to the fusion of the course angle, the angular speed, the control duration and the angular acceleration.
Specifically, the angular velocity and the control duration of the robot are fused to obtain a course angle variation, the angular acceleration and the control duration are fused to obtain an angular velocity variation, and finally the robot is fused to obtain a reference course angle based on the course angle, the angular velocity variation and the course angle variation of the current position node.
For example, the robot may calculate the reference heading angle theta_i according to the following Equation 3:

theta_i = theta_c + w_c·t + 0.5·a_w·t²    (Equation 3)

where theta_c is the heading angle corresponding to the current position node, w_c is the angular velocity, t is the control duration, and a_w is the angular acceleration.
Step S408, obtaining reference position information according to fusion of the position information, the linear speed, the control duration, the linear acceleration and the reference course angle.
Specifically, the robot obtains a position information variation based on the fusion of the linear speed, the control duration, the linear acceleration and the reference course angle corresponding to the current position node, and obtains the reference position information based on the fusion of the position information and the position information variation.
For example, the robot may calculate the reference position information according to the following Equations 4 and 5:

x_i = x_c + (v_c·t + 0.5·a_v·t²)·cos(theta_i)    (Equation 4)
y_i = y_c + (v_c·t + 0.5·a_v·t²)·sin(theta_i)    (Equation 5)

where (x_i, y_i) is the reference position information, (x_c, y_c) is the position information corresponding to the current position node, v_c is the linear velocity corresponding to the current position node, t is the control duration, a_v is the linear acceleration, and theta_i is the reference heading angle.
Step S410, obtaining reference state information based on the reference linear velocity, the reference angular velocity, the reference heading angle, and the reference position information.
Specifically, the robot combines the reference linear velocity, the reference angular velocity, the reference heading angle, and the reference position information to obtain the reference state information.
In this embodiment, the reference linear velocity is obtained by fusing the linear velocity, the linear acceleration and the control duration, the reference angular velocity is obtained by fusing the angular velocity, the angular acceleration and the control duration, the reference heading angle is obtained by fusing the heading angle, the angular velocity, the control duration and the angular acceleration, the reference position information is obtained by fusing the position information, the linear velocity, the control duration, the linear acceleration and the reference heading angle, and finally the reference state information is obtained based on the reference linear velocity, the reference angular velocity, the reference heading angle and the reference position information, so that the state information corresponding to the current position node is expanded according to the linear acceleration and the angular acceleration in the sub-instruction, the process of determining the expansion node according to the reference state information is realized, and the accuracy of determining the expansion node is effectively improved.
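Equations 1 to 5 can be collected into a single state-expansion function. A minimal sketch, with illustrative names:

```python
import math

def expand_state(state, sub_instruction):
    """Fuse a sub-instruction (a_v, a_w, t) with the state (x, y, theta, v, w)
    of the current position node to obtain the reference state of an
    expansion node, following Equations 1 to 5.
    """
    x, y, theta, v, w = state
    a_v, a_w, t = sub_instruction
    v_i = v + a_v * t                            # Equation 1
    w_i = w + a_w * t                            # Equation 2
    theta_i = theta + w * t + 0.5 * a_w * t**2   # Equation 3
    ds = v * t + 0.5 * a_v * t**2                # displacement along the heading
    x_i = x + ds * math.cos(theta_i)             # Equation 4
    y_i = y + ds * math.sin(theta_i)             # Equation 5
    return (x_i, y_i, theta_i, v_i, w_i)

# Robot at the origin heading along +x at 1 m/s with no rotation, commanded
# to accelerate at 0.2 m/s^2 for 1 s with zero angular acceleration:
print(expand_state((0.0, 0.0, 0.0, 1.0, 0.0), (0.2, 0.0, 1.0)))
```

Applying this function once per sub-instruction in the motion control instruction set yields one candidate reference state per sub-instruction.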
In one embodiment, as shown in fig. 5, determining the corresponding expansion node from the reference state information includes: step S502, obtaining origin position information of the origin of the world coordinate system in the grid map.
Step 504, obtaining index information corresponding to the extended node in the grid map and corresponding to the reference state information based on the fusion of the reference position information, the origin position information and the map resolution of the grid map in the reference state information.
Specifically, the robot performs difference calculation on the reference position information and the origin position information in the reference state information to obtain a position difference value, and then calculates based on the ratio of the position difference value to the map resolution of the grid map to obtain index information of the expansion node corresponding to the reference state information in the grid map.
For example, the robot may calculate the index information according to the following formula 6 and formula 7:
i = round((x - x_origin) / scale_map)    Equation 6
j = round((y - y_origin) / scale_map)    Equation 7
Wherein (i, j) is the index information, in the grid map, of the expansion node corresponding to the reference state information; the round() function is a rounding function, i.e. a fractional part smaller than 0.5 is truncated and one larger than 0.5 is rounded up; (x, y) is the reference position information in the reference state information; (x_origin, y_origin) is the origin position information of the origin of the world coordinate system in the grid map; and scale_map is the map resolution.
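A minimal sketch of formulas 6 and 7; `floor(value + 0.5)` is used to round half up, matching the "smaller than 0.5 is truncated, larger than 0.5 is rounded up" rule, and the function name is an assumption:

```python
import math

def world_to_index(x, y, x_origin, y_origin, scale_map):
    """Equations 6 and 7: convert a world coordinate (x, y) into a grid
    index (i, j) by subtracting the map origin and dividing by the map
    resolution, then rounding to the nearest cell."""
    i = math.floor((x - x_origin) / scale_map + 0.5)
    j = math.floor((y - y_origin) / scale_map + 0.5)
    return i, j
```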
Step 506, determining the expansion node corresponding to the reference state information according to the index information.
Specifically, the robot determines row and column information corresponding to the extension node in the grid map according to the index information, and further determines the extension node corresponding to the reference state information.
In this embodiment, the index information of the extended node corresponding to the reference state information in the grid map is obtained by fusing the reference position information, the origin position information and the map resolution of the grid map in the reference state information, and then the corresponding extended node in the grid map is determined by the index information, so that coordinate conversion between the grid map and the position information of the world coordinate system is effectively realized, and the accuracy of the determined extended node can be effectively improved.
In one embodiment, as shown in fig. 6, determining the corresponding extension node from the reference state information includes: step S602, corresponding candidate expansion nodes are respectively determined according to each piece of reference state information.
Specifically, the robot determines the corresponding node in the grid map according to the conversion between position information and index information described in the previous step, so as to determine the corresponding candidate expansion node for each piece of reference state information, wherein each piece of reference state information uniquely corresponds to one node in the grid map.
Step S604, determining expansion nodes from the candidate expansion nodes according to the linear velocity and the obstacle state information in the reference state information corresponding to the candidate expansion nodes.
The obstacle state information is used for representing the existence state of the obstacle at the corresponding candidate expansion node.
Specifically, the robot may analyze the reference state information corresponding to each candidate expansion node: it may analyze the linear velocity in the reference state information and reject candidate expansion nodes with a negative linear velocity, reject candidate expansion nodes whose obstacle state information indicates an obstacle, and use the remaining candidate expansion nodes as expansion nodes.
In this embodiment, corresponding candidate expansion nodes are respectively determined according to each piece of reference state information, and expansion nodes are determined from the candidate expansion nodes according to the linear velocity and the obstacle state information in the reference state information corresponding to each candidate expansion node, so that the screening conditions of the expansion nodes can be flexibly set according to the kinematic constraint of the robot, candidate expansion nodes with the linear velocity being negative or having obstacles and other non-conforming to the kinematic constraint of the robot can be effectively removed, and the reliability of the expansion nodes is effectively improved.
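The screening step above can be sketched as a simple filter; the dict keys `v` and `obstacle` are assumed names for the linear velocity and obstacle state in each candidate's reference state information:

```python
def filter_candidates(candidates):
    """Keep only candidates that satisfy the kinematic constraints:
    drop nodes reached with a negative linear velocity and nodes whose
    grid cell is occupied by an obstacle."""
    return [c for c in candidates if c["v"] >= 0 and not c["obstacle"]]
```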
In one embodiment, as shown in fig. 7, determining a reference extension node from the respective extension nodes according to a first distance of the respective extension nodes from the current location node and a second distance from the target location node includes: step S702, a first distance between each expansion node and the current position node is calculated.
Specifically, the robot may acquire an image of the environment and analyze a depth map of the acquired image, so as to obtain a first distance between each expansion node and the current position node, where the first distance may be a Euclidean distance.
Step S704, calculating a second distance between each expansion node and the target position node.
Specifically, the robot may acquire an image of the environment and analyze a depth map of the acquired image, so as to obtain a second distance between each expansion node and the target position node, where the second distance may be a Euclidean distance.
Step S706, fusing the first distance and the second distance of each expansion node to obtain a corresponding total distance.
Specifically, the robot performs weighted fusion on the first distance and the second distance to obtain a corresponding total distance.
Step S708, determining a reference expansion node from the expansion nodes according to the total distance of the expansion nodes.
Specifically, the robot arranges the total distance of each expansion node from small to large, and takes the expansion node with the smallest total distance as a reference expansion node.
In this embodiment, by calculating the first distance between each expansion node and the current position node, calculating the second distance between each expansion node and the target position node, fusing the first distance and the second distance of each expansion node to obtain a corresponding total distance, and determining the expansion node with the smallest total distance as the reference expansion node, the reference expansion node is determined according to the total distance between each expansion node and the current position node and the total distance between each expansion node and the target position node, and the reliability and the practicability of the target path generated by the reference expansion node are improved.
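The selection rule above can be sketched as follows. The text only says the two distances are "weighted fused", so the weights w1 and w2 and the function names are assumptions:

```python
import math

def pick_reference_node(expansion_nodes, current, target, w1=1.0, w2=1.0):
    """Fuse the first distance (to the current position node) and the
    second distance (to the target position node) as a weighted sum and
    return the expansion node with the smallest total."""
    def euclid(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(expansion_nodes,
               key=lambda n: w1 * euclid(n, current) + w2 * euclid(n, target))
```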
In one embodiment, as shown in FIG. 8, the processor when executing the computer readable instructions performs the steps of:
step S802, when the reference expansion node is not in the preset range, the reference expansion node is taken as the next current position node.
Specifically, the robot analyzes the position information in the reference state information corresponding to the reference expansion node, judges whether the position of the reference expansion node is within a preset range corresponding to the target position node, and takes the reference expansion node as the next current position node when the reference expansion node is not within the preset range.
Step S804, returning to the step of fusing each sub-instruction with the state information corresponding to the current position node respectively to obtain the corresponding reference state information, and determining the corresponding expansion node according to the reference state information.
In this embodiment, whether the reference expansion node is within the preset range corresponding to the target position node is determined in real time, so as to determine the stopping condition of the path search algorithm, when the reference expansion node is not within the preset range, the reference expansion node is used as the next current position node, each sub-instruction is returned to be fused with the state information corresponding to the current position node respectively, so as to obtain the corresponding reference state information, and the reference state information is used to determine the corresponding expansion node, so that each reference expansion node corresponding to the target path is circularly generated, the final generation of the target path is realized, and the reliability of the generated target path is improved.
The path generation method described above can be applied to a scene in which a robot generates a target path. Specifically, the application of the path generation method in this scene is as follows:
(1) Description of coordinate System conversion relation
The robot builds a world coordinate system and a grid map of the target scene, takes each grid in the grid map as a corresponding node, and uses the position of each grid in the grid map as the index of that node. The conversion relation between the index (i, j) of a node and the coordinates (x, y) of that node in the world coordinate system is shown in the following formulas 8 to 11:
x = x_origin + i·scale_map    Equation 8
y = y_origin + j·scale_map    Equation 9
Conversely, the index (i, j) of a node in the map can also be obtained from its coordinates (x, y):
i = round((x - x_origin) / scale_map)    Equation 10
j = round((y - y_origin) / scale_map)    Equation 11
Wherein the robot sets the lower left corner of the map as the origin of coordinates, whose index in the grid map is (0, 0) and whose coordinates in the world coordinate system are (x_origin, y_origin); the map resolution is scale_map.
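The conversion in formulas 8 to 11 forms a round trip between grid indices and world coordinates; a minimal sketch with illustrative function names:

```python
import math

def index_to_world(i, j, x_origin, y_origin, scale_map):
    """Equations 8 and 9: grid index (i, j) to world coordinates."""
    return x_origin + i * scale_map, y_origin + j * scale_map

def world_to_index(x, y, x_origin, y_origin, scale_map):
    """Equations 10 and 11: world coordinates back to a grid index,
    rounding half up to the nearest cell."""
    return (math.floor((x - x_origin) / scale_map + 0.5),
            math.floor((y - y_origin) / scale_map + 0.5))
```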
(2) Description of algorithm dependent variable relationships
In this embodiment, the path search algorithm includes a search graph (graph), edges (edge), and nodes (node). The nodes are basic elements in the graph, and each node contains state information (x, y, theta, v and w), wherein (x, y) is the position information of the node corresponding to the world coordinate system, theta is the course angle of the robot at the node, and v and w are the linear speed and the angular speed of the robot at the node. A schematic diagram of the start node and the extension node is shown in fig. 9.
In addition, the edges are the connection relations between nodes; for example, edges exist between the expansion nodes N0, N1, N2 and the start node Nc, and the relation is that Nc is the parent node of N0, N1 and N2.
g is the expansion cost, calculated as the Euclidean distance from the current node to the expansion node. For example, the expansion cost from Nc(1, 1) to N0(0, 2) is:
g(N0) = sqrt((1 - 0)² + (1 - 2)²) = sqrt(2)
f is the total cost, f = g + h, where h represents the heuristic cost value, i.e. the Euclidean distance from the expansion node to the target node (endpoint); for N0(0, 2) with a target node (x_goal, y_goal) it is:
h(N0) = sqrt((x_goal - 0)² + (y_goal - 2)²)
The total cost of N0 is then expressed as:
f(N0) = g(N0) + h(N0) = sqrt(2) + h(N0)
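The worked cost example for Nc(1, 1) and N0(0, 2) can be checked numerically. The target node coordinates are not given in the original text, so (x_goal, y_goal) below is a hypothetical placeholder:

```python
import math

def euclid(a, b):
    """Euclidean distance between two grid points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Expansion cost g from the current node Nc(1, 1) to N0(0, 2): sqrt(2)
g_n0 = euclid((1, 1), (0, 2))

# Heuristic h and total cost f = g + h for an assumed target node
x_goal, y_goal = 4, 5  # hypothetical endpoint, not from the original text
h_n0 = euclid((0, 2), (x_goal, y_goal))
f_n0 = g_n0 + h_n0
```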
it can be understood that, in the path search algorithm of this embodiment, the graph is not built once, but only includes the initial node (node), and as the node (node) is expanded, the node (node) is gradually added to the graph and represented by open_list, and the open_list is a priority queue for storing the nodes to be expanded, and according to the arrangement from small to large of the total cost value F, the node with the smallest total cost is always preferentially fetched for expansion in the node expansion process, so that the first node in the open_list is fetched for expansion update every time. The close_list is used for storing the expanded node, avoiding the node from being expanded for a plurality of times, and determining the expanded node by the current node until the algorithm flow for generating the target path is shown in fig. 10.
(3) Node extension mode description
Define the acceleration control quantity U_t of the two-wheel differential model as the linear acceleration a_v and the angular acceleration a_w, with control period t. Set the current state to state_c = {x_c, y_c, theta_c, v_c, w_c}; then the state state_i of the node obtained by applying the acceleration control quantity U_t to state_c is given by state transition formulas 12 to 16:
v_i = v_c + a_v·t    Equation 12
w_i = w_c + a_w·t    Equation 13
theta_i = theta_c + w_c·t + 0.5·a_w·t²    Equation 14
x_i = x_c + (v_c·t + 0.5·a_v·t²)·cos(theta_i)    Equation 15
y_i = y_c + (v_c·t + 0.5·a_v·t²)·sin(theta_i)    Equation 16
According to the kinematic model of the actual machine, the range of achievable linear acceleration values [a_v_min, a_v_max] and the range of achievable angular acceleration values [a_w_min, a_w_max] are obtained, and the sampling intervals Δa_v and Δa_w are set. Multiple groups of acceleration control values U_t = {u_0, u_1, ..., u_n} can then be obtained, wherein each u_k is a pair (a_v, a_w) sampled from the two ranges. A plurality of groups of state transition values can then be obtained according to the state transition formulas, after which the index in the map can be calculated from the (x, y) in each state value according to the coordinate conversion formulas and mapped to a plurality of nodes.
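The sampling of the control set and the application of state transition formulas 12 to 16 can be sketched as follows; function names are illustrative:

```python
import math

def sample_controls(a_v_min, a_v_max, a_w_min, a_w_max, d_av, d_aw):
    """Sample the admissible acceleration ranges at the given intervals
    to build the control set U_t = {u_0, ..., u_n}; each u is a pair
    (a_v, a_w)."""
    n_v = int(round((a_v_max - a_v_min) / d_av)) + 1
    n_w = int(round((a_w_max - a_w_min) / d_aw)) + 1
    lin = [a_v_min + k * d_av for k in range(n_v)]
    ang = [a_w_min + k * d_aw for k in range(n_w)]
    return [(a_v, a_w) for a_v in lin for a_w in ang]

def transition(state, a_v, a_w, t):
    """Equations 12-16: apply one control (a_v, a_w) for duration t to
    state = (x, y, theta, v, w)."""
    x, y, theta, v, w = state
    v_i = v + a_v * t
    w_i = w + a_w * t
    theta_i = theta + w * t + 0.5 * a_w * t ** 2
    d = v * t + 0.5 * a_v * t ** 2  # distance travelled during t
    return (x + d * math.cos(theta_i), y + d * math.sin(theta_i),
            theta_i, v_i, w_i)
```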
(4) Node screening
Each candidate expansion node corresponding to each piece of state information is determined according to the steps above. Some candidate expansion nodes may exceed the map range; these are eliminated. Likewise, if an obstacle exists at the map index, the candidate expansion node with the obstacle is eliminated, and such state values are not used for node expansion. Secondly, a machine with limited perception is not allowed to turn around in place or to turn with large curvature, so candidate expansion nodes whose calculated linear velocity is negative are removed, and the remaining candidate expansion nodes are used as expansion nodes.
(5) Determination of target path
When a node has been expanded from the initial node into the predetermined range around the target node, the search can be considered successful, as shown in fig. 11. During node expansion, the relationship between nodes is described by the edges, i.e. the parent node of each current node is known. Therefore, after the target node is found, the connection relationship from the target node back to the initial node can be traced through the parent nodes, and reversing this chain yields the path from the initial node to the target node.
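The backtracking step can be sketched as a short walk over recorded parent links; the `parents` mapping (node to parent, with None for the start node) is an assumed representation:

```python
def trace_path(parents, target_node):
    """Walk the recorded parent links from the target node back to the
    start node (whose parent is None), then reverse to obtain the path
    from the initial node to the target node."""
    path, node = [], target_node
    while node is not None:
        path.append(node)
        node = parents[node]
    return path[::-1]
```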
In this embodiment, each sub-instruction in the obtained acceleration control quantity is fused with the state information corresponding to the current node of the robot to generate the corresponding expansion nodes, and the reference expansion node is determined from the expansion nodes on the basis of the total cost between each expansion node and the current node and between each expansion node and the target node. In this way, for a robot whose motion constraints cannot support motion states such as lateral movement, in-place turning and large-curvature turning, for example a two-wheel differential model, the next reference expansion node is determined from the current node with the motion constraints of the robot fully considered. Finally, the target path is determined based on the target node, the reference expansion nodes and the current node, so that the adaptability of the generated path can be effectively improved.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the steps are not strictly limited in their order of execution and may be performed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; the order of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or stages.
In one embodiment, as shown in fig. 12, a path generating apparatus is provided, which may use a software module or a hardware module, or a combination of both, as a part of a robot, and the apparatus specifically includes: an acquisition module 1202, a calculation module 1204, a determination module 1206, a generation module 1208, wherein:
An obtaining module 1202, configured to obtain a motion control instruction set, and a current position node and a target position node of the robot in the grid map, where the motion control instruction set includes at least two sub-instructions;
the computing module 1204 is configured to fuse each sub-instruction with state information corresponding to a node at the current position, to obtain corresponding reference state information, and determine a corresponding extended node according to the reference state information, where the state information is used to characterize a motion state of the robot at the corresponding node;
a determining module 1206, configured to determine a reference extension node from the extension nodes according to a first distance between the extension nodes and the current location node and a second distance between the extension nodes and the target location node;
the generating module 1208 is configured to determine, when the reference extension node is within a preset range corresponding to the target location node, a target path according to the target location node, the reference extension node, and the current location node.
In one embodiment, the acquiring module 1202 is further configured to acquire the robot linear acceleration range data and the linear acceleration sampling interval; acquiring angular acceleration range data and angular acceleration sampling intervals of a robot; sampling linear acceleration range data according to the linear acceleration sampling interval to obtain a linear acceleration set; sampling angular acceleration range data according to an angular acceleration sampling interval to obtain an angular acceleration set; based on the combination of each element in the linear acceleration set and each element in the angular acceleration set, each motion control sub-instruction is obtained, and a motion control instruction set is obtained according to each motion control sub-instruction.
In one embodiment, the calculating module 1204 is further configured to obtain a reference linear velocity according to the fusion of the linear velocity, the linear acceleration, and the control duration; obtaining a reference angular velocity according to fusion of the angular velocity, the angular acceleration and the control duration; obtaining a reference course angle according to the fusion of the course angle, the angular speed, the control duration and the angular acceleration; obtaining reference position information according to fusion of position information, linear speed, control duration, linear acceleration and reference course angle; obtaining reference state information based on the reference linear velocity, the reference angular velocity, the reference course angle and the reference position information, wherein the state information comprises position information, course angle, linear velocity and angular velocity of the robot at a corresponding node under a world coordinate system; the sub-instructions comprise the linear acceleration, the angular acceleration and the control duration of the robot at the corresponding nodes in the world coordinate system.
In one embodiment, the calculating module 1204 is further configured to obtain origin position information of the origin point in the grid map under the world coordinate system; obtaining index information corresponding to the extended node in the grid map, which corresponds to the reference state information, based on fusion of the reference position information, the origin position information and the map resolution of the grid map in the reference state information; and determining the expansion node corresponding to the reference state information according to the index information.
In one embodiment, the computing module 1204 is further configured to determine a corresponding candidate expansion node according to each reference state information; and determining the expansion nodes from the candidate expansion nodes according to the linear speed and the obstacle state information in the reference state information corresponding to the candidate expansion nodes, wherein the obstacle state information is used for representing the existence state of the obstacle at the corresponding candidate expansion nodes.
In one embodiment, the determining module 1206 is further configured to calculate a first distance between each expansion node and the current location node; calculating a second distance between each expansion node and the target position node; fusing the first distance and the second distance of each expansion node to obtain a corresponding total distance; and determining a reference extension node from the extension nodes according to the total distance of the extension nodes.
In one embodiment, the generating module 1208 is further configured to take the reference extension node as the next current location node when the reference extension node is not within the preset range; and returning to the step of fusing each sub-instruction with the state information corresponding to the current position node respectively to obtain corresponding reference state information, and determining the corresponding expansion node according to the reference state information.
According to the path generation device, each sub-instruction in the acquired motion control instruction set is fused with the state information corresponding to the current position node of the robot to generate the corresponding expansion nodes, and the reference expansion node is determined from the expansion nodes on the basis of the distances between each expansion node and the current position node and between each expansion node and the target position node. In this way, for a robot whose motion constraints cannot support motion states such as lateral movement, in-place turning and large-curvature turning, for example a two-wheel differential model, the next reference expansion node is determined from the current position node with the motion constraints of the robot fully considered. Finally, the target path is determined based on the target position node, the reference expansion node and the current position node, so that the adaptability of the generated path can be effectively improved.
The specific definition of the path generating means may be referred to the definition of the path generating method hereinabove, and will not be described herein. Each of the modules in the path generating apparatus described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in the processor in the robot or independent of the processor in the robot in a hardware mode, and can also be stored in a memory in the robot in a software mode, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a robot is provided, which may be a terminal, and an internal structure thereof may be as shown in fig. 13. The robot includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the robot is adapted to provide computing and control capabilities. The memory of the robot comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the robot is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a path generation method. The display screen of the robot can be a liquid crystal display screen or an electronic ink display screen, the input device of the robot can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on a robot shell, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 13 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the robots to which the present application is applied, and that a particular robot may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a robot including a memory and a processor, the memory storing a computer program, the processor implementing the steps of the method embodiments described above when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the robot reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the robot performs the steps in the above-described method embodiments.
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples merely represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the invention. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.

Claims (10)

1. A robot comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor when executing the computer readable instructions performs the steps of:
acquiring a motion control instruction set, and a current position node and a target position node of the robot in a grid map, wherein the motion control instruction set comprises at least two sub-instructions;
fusing each sub-instruction with state information corresponding to a current position node respectively to obtain corresponding reference state information, and determining a corresponding expansion node according to the reference state information, wherein the state information is used for representing the motion state of the robot at the corresponding node;
Determining a reference expansion node from the expansion nodes according to a first distance between each expansion node and a current position node and a second distance between each expansion node and the target position node;
and when the reference expansion node is in a preset range corresponding to the target position node, determining a target path according to the target position node, the reference expansion node and the current position node.
2. The robot of claim 1, wherein the acquiring a set of motion control instructions comprises:
acquiring linear acceleration range data and linear acceleration sampling intervals of the robot;
acquiring angular acceleration range data and an angular acceleration sampling interval of the robot;
sampling the linear acceleration range data according to the linear acceleration sampling interval to obtain a linear acceleration set;
sampling the angular acceleration range data according to the angular acceleration sampling interval to obtain an angular acceleration set;
and based on the combination of each element in the linear acceleration set and each element in the angular acceleration set, obtaining each motion control sub-instruction, and obtaining a motion control instruction set according to each motion control sub-instruction.
3. The robot of claim 1, wherein the fusing each sub-instruction with the state information corresponding to the current position node to obtain the corresponding reference state information includes:
the state information comprises position information, a course angle, a linear velocity and an angular velocity of the robot at the corresponding node in a world coordinate system;
the sub-instruction comprises a linear acceleration, an angular acceleration and a control duration of the robot at the corresponding node in the world coordinate system;
obtaining a reference linear velocity by fusing the linear velocity, the linear acceleration and the control duration;
obtaining a reference angular velocity by fusing the angular velocity, the angular acceleration and the control duration;
obtaining a reference course angle by fusing the course angle, the angular velocity, the angular acceleration and the control duration;
obtaining reference position information by fusing the position information, the linear velocity, the linear acceleration, the control duration and the reference course angle;
and obtaining the reference state information based on the reference linear velocity, the reference angular velocity, the reference course angle and the reference position information.
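Read as a constant-acceleration update over one control duration, the fusion steps of claim 3 can be sketched as below. The exact discretisation (for instance, whether the end-point or mid-point course angle is used for the position update) is an assumption, since the claim only names the fused quantities:

```python
import math
from dataclasses import dataclass

@dataclass
class State:
    x: float      # position in the world frame
    y: float
    theta: float  # course angle
    v: float      # linear velocity
    w: float      # angular velocity

def propagate(s, lin_acc, ang_acc, T):
    v_ref = s.v + lin_acc * T                             # reference linear velocity
    w_ref = s.w + ang_acc * T                             # reference angular velocity
    theta_ref = s.theta + s.w * T + 0.5 * ang_acc * T**2  # reference course angle
    d = s.v * T + 0.5 * lin_acc * T**2                    # distance covered in T
    return State(s.x + d * math.cos(theta_ref),           # reference position
                 s.y + d * math.sin(theta_ref),
                 theta_ref, v_ref, w_ref)
```

Each sub-instruction of the motion control instruction set yields one such reference state, and hence one candidate expansion node.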
4. The robot of claim 1, wherein the determining the corresponding expansion node according to the reference state information comprises:
acquiring origin position information of an origin point in the grid map under a world coordinate system;
obtaining index information of the expansion node corresponding to the reference state information in the grid map by fusing the reference position information in the reference state information, the origin position information and the map resolution of the grid map;
and determining the expansion node corresponding to the reference state information according to the index information.
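Under the usual occupancy-grid convention, the index computation of claim 4 amounts to subtracting the map origin and dividing by the resolution; the truncating rounding rule below is an assumption, as the claim only says the three quantities are fused:

```python
def world_to_index(px, py, origin_x, origin_y, resolution):
    # Translate the world-frame position into the map frame, then quantise
    # by the map resolution to obtain the grid cell index.
    ix = int((px - origin_x) / resolution)
    iy = int((py - origin_y) / resolution)
    return ix, iy
```

The resulting `(ix, iy)` pair identifies the grid cell that serves as the expansion node for the reference state.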
5. The robot of claim 1, wherein the determining the corresponding expansion node according to the reference state information comprises:
respectively determining corresponding candidate expansion nodes according to each piece of reference state information;
and determining the expansion node from the candidate expansion nodes according to the linear velocity in the reference state information corresponding to each candidate expansion node and the obstacle state information, wherein the obstacle state information represents whether an obstacle is present at the corresponding candidate expansion node.
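A minimal sketch of the claim-5 filtering, assuming each candidate carries an occupancy flag and its reference linear velocity. The field names and the velocity-limit check are illustrative assumptions, not taken from the claim:

```python
def filter_expansions(candidates, v_max=1.0):
    # Keep only candidates that are free of obstacles and whose reference
    # linear velocity is feasible for the robot.
    return [c for c in candidates
            if not c["occupied"] and abs(c["v_ref"]) <= v_max]

nodes = [{"occupied": False, "v_ref": 0.4},
         {"occupied": True,  "v_ref": 0.4},   # blocked by an obstacle
         {"occupied": False, "v_ref": 2.5}]   # velocity out of range
```

Filtering at expansion time keeps infeasible or colliding states out of the search frontier entirely, rather than pruning them later.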
6. The robot of claim 1, wherein the determining a reference expansion node from each expansion node according to a first distance between the expansion node and the current position node and a second distance between the expansion node and the target position node comprises:
calculating a first distance between each expansion node and the current position node;
calculating a second distance between each expansion node and the target position node;
fusing the first distance and the second distance of each expansion node to obtain a corresponding total distance;
and determining the reference expansion node from the expansion nodes according to the total distance of each expansion node.
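Claim 6's selection reduces to scoring each expansion node and taking the best one. Summing the two distances is one natural reading of "fusing" them (much like the g + h cost of an A*-style search), but the claim does not fix the combination rule:

```python
import math

def pick_reference_node(expansions, current, target):
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # total distance = first distance (to current) + second distance (to target);
    # the node with the smallest total becomes the reference expansion node.
    return min(expansions, key=lambda n: dist(n, current) + dist(n, target))
```

The second distance biases the search toward the target, while the first penalises expansions that stray far from the current position in a single step.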
7. The robot of claim 1, wherein the processor, when executing the computer readable instructions, further performs the steps of:
when the reference expansion node is not in the preset range, the reference expansion node is used as a next current position node;
and returning to the step of fusing each sub-instruction with the state information corresponding to the current position node respectively to obtain corresponding reference state information, and determining the corresponding expansion node according to the reference state information.
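The claim-7 iteration — promote the reference node to the new current node and repeat until a reference node lands within the target's preset range — can be sketched as a loop. Here `expand_fn` and `within_range_fn` are hypothetical callables standing in for the expansion and selection of claims 2–6 and for the preset-range test:

```python
def generate_path(start, target, expand_fn, within_range_fn, max_iters=1000):
    path = [start]
    current = start
    for _ in range(max_iters):
        reference = expand_fn(current)          # expand + select (claims 2-6)
        path.append(reference)
        if within_range_fn(reference, target):  # inside the preset range?
            path.append(target)
            return path
        current = reference                     # reference -> next current node
    return None  # iteration budget exhausted without reaching the target
```

A usage sketch on a one-dimensional toy problem, where each expansion moves halfway toward the target:

```python
halve = lambda c: (c + 1.0) / 2.0
near = lambda c, t: abs(c - t) < 0.1
p = generate_path(0.0, 1.0, halve, near)
```

The `max_iters` bound is an added safeguard, not part of the claim; without it the loop would not terminate when the target is unreachable.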
8. A method of path generation, the method comprising:
acquiring a motion control instruction set, and a current position node and a target position node of the robot in a grid map, wherein the motion control instruction set comprises at least two sub-instructions;
fusing each sub-instruction with state information corresponding to the current position node respectively to obtain corresponding reference state information, and determining a corresponding expansion node according to the reference state information, wherein the state information is used for representing the motion state of the robot at the corresponding node;
determining a reference expansion node from the expansion nodes according to a first distance between each expansion node and a current position node and a second distance between each expansion node and the target position node;
and when the reference expansion node is within a preset range corresponding to the target position node, determining a target path according to the target position node, the reference expansion node and the current position node.
9. A path generating apparatus, the apparatus comprising:
an acquisition module, configured to acquire a motion control instruction set, and a current position node and a target position node of a robot in a grid map, wherein the motion control instruction set comprises at least two sub-instructions;
a calculation module, configured to fuse each sub-instruction with the state information corresponding to the current position node respectively to obtain corresponding reference state information, and to determine a corresponding expansion node according to the reference state information, wherein the state information is used for representing the motion state of the robot at the corresponding node;
a determining module, configured to determine a reference expansion node from the expansion nodes according to a first distance between each expansion node and the current position node and a second distance between each expansion node and the target position node;
and a generation module, configured to determine a target path according to the target position node, the reference expansion node and the current position node when the reference expansion node is within the preset range corresponding to the target position node.
10. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the path generation method of claim 8.
CN202310502675.5A 2023-05-06 2023-05-06 Path generation method, path generation device, robot and computer readable storage medium Pending CN116382296A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310502675.5A CN116382296A (en) 2023-05-06 2023-05-06 Path generation method, path generation device, robot and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310502675.5A CN116382296A (en) 2023-05-06 2023-05-06 Path generation method, path generation device, robot and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116382296A true CN116382296A (en) 2023-07-04

Family

ID=86971115

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310502675.5A Pending CN116382296A (en) 2023-05-06 2023-05-06 Path generation method, path generation device, robot and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116382296A (en)

Similar Documents

Publication Publication Date Title
US11300964B2 (en) Method and system for updating occupancy map for a robotic system
CN110221600B (en) Path planning method and device, computer equipment and storage medium
CN108332758B (en) Corridor identification method and device for mobile robot
CN111680747B (en) Method and apparatus for closed loop detection of occupancy grid subgraphs
CN112699765B (en) Method, device, electronic equipment and storage medium for evaluating visual positioning algorithm
CN111707294A (en) Pedestrian navigation zero-speed interval detection method and device based on optimal interval estimation
CN111678513A (en) Ultra-wideband/inertial navigation tight coupling indoor positioning device and system
CN116448111A (en) Pedestrian indoor navigation method, device and medium based on multi-source information fusion
CN111882494B (en) Pose graph processing method and device, computer equipment and storage medium
CN116382296A (en) Path generation method, path generation device, robot and computer readable storage medium
CN114088104B (en) Map generation method under automatic driving scene
CN115900697B (en) Object motion trail information processing method, electronic equipment and automatic driving vehicle
CN113503883B (en) Method for collecting data for constructing map, storage medium and electronic equipment
CN114383621B (en) Track deviation rectifying method based on grid map, electronic equipment and storage medium
CN113203424B (en) Multi-sensor data fusion method and device and related equipment
CN111060127B (en) Vehicle starting point positioning method and device, computer equipment and storage medium
CN115393816A (en) Lane line generation method, lane line generation device, computer device, and storage medium
CN115619954A (en) Sparse semantic map construction method, device, equipment and storage medium
CN114964204A (en) Map construction method, map using method, map constructing device, map using equipment and storage medium
CN112669196A (en) Method and equipment for optimizing data by factor graph in hardware acceleration engine
CN117709000B (en) Unmanned underwater vehicle simulation method, unmanned underwater vehicle simulation device, computer equipment and medium
CN113156968B (en) Path planning method and system for mobile robot
CN111007553B (en) Navigation method and device of measured object, computer equipment and storage medium
CN116105731B (en) Navigation method and device under sparse ranging condition, computer equipment and medium
CN118149796B (en) Map construction method and device under degradation environment, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination