CN114295138A - Robot, map extension method, map extension device and readable storage medium
- Publication number: CN114295138A
- Application number: CN202111679399.7A
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
A map extension method comprises the following steps: when the robot moves, acquiring multi-frame image data of an identification code collected by an image acquisition device and multi-frame sensing data collected by a sensing device; acquiring a marking map of the identification code; obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map; obtaining, based on the multi-frame sensing data, the pose nodes and the local maps, a first relative pose relationship between every two adjacent pose nodes and a second relative pose relationship between each pose node and the local maps; associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values between the pose nodes and the identification codes; optimizing the pose nodes and the local maps by taking the first relative pose relationship, the second relative pose relationship and the observed values as constraints; and constructing a global map by using the optimized pose nodes, the local maps and the multi-frame sensing data.
Description
Technical Field
The invention relates to the field of robots, in particular to a robot, a map extension method, a map extension device and a readable storage medium.
Background
At present, indoor delivery robots generally adopt either laser positioning or visual marker positioning, and different robots may employ different positioning means. If robots that position themselves with visual markers are already deployed at a site and a new robot that positions itself with laser needs to be added, the coordinate systems of the maps used by the two kinds of robots must be consistent to ensure that they can operate normally without colliding with each other. If a newly built fusion map is adopted instead, all the existing service points, such as dining tables, kitchen pickup points and charging points, need to be reset, which increases deployment time.
Disclosure of Invention
The application provides a robot, a map extension method, a map extension device and a readable storage medium, which can improve map construction efficiency while ensuring that the map coordinate system of a laser-positioned robot is consistent with that of a visual-marker-positioned robot.
In one aspect, the present application provides a robot, comprising:
the device comprises a memory, a processor, a sensing device and an image acquisition device;
the memory stores executable program code;
the processor coupled with the memory is used for realizing the following steps when calling the executable program code stored in the memory:
when the robot moves, acquiring multi-frame image data of the identification code collected by the image acquisition device and multi-frame sensing data collected by the sensing device;
acquiring a marking map of the identification code;
obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map;
based on the multi-frame sensing data, the pose nodes and the local maps, obtaining a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the local maps;
associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values between the pose nodes and the identification codes;
optimizing the pose nodes and the local maps by using the first relative pose relationship, the second relative pose relationship and the observed value as constraints;
and constructing a global map by using the plurality of optimized pose nodes, the plurality of local maps and the multi-frame sensing data.
In another aspect, the present application provides a map construction apparatus applied to a robot including a sensing device and an image capturing device, the apparatus including:
the first acquisition module is used for acquiring multiframe sensing data acquired by a sensing device when the robot moves;
the second acquisition module is used for acquiring multi-frame image data acquired by the image acquisition device for the identification code;
the third acquisition module is used for acquiring a marking map of the identification code;
the fourth acquisition module is used for acquiring a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map;
a fifth obtaining module, configured to obtain, based on the multiple frames of sensing data, the multiple pose nodes, and the multiple local maps, a first relative pose relationship between every two adjacent pose nodes and a second relative pose relationship between each pose node and the multiple local maps;
the association module is used for associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values between the pose nodes and the identification codes;
the optimization module is used for optimizing the pose nodes and the local maps by taking the first relative pose relationship, the second relative pose relationship and the observed value as constraints;
and the map construction module is used for constructing a positioning map by utilizing the plurality of optimized pose nodes, the plurality of local maps and the multi-frame sensing data.
In a third aspect, the present application provides a map extension method, including:
acquiring multi-frame image data of the identification code collected by the image acquisition device and multi-frame sensing data collected by the sensing device when the robot moves;
acquiring a marking map of the identification code;
obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map;
based on the multi-frame sensing data, the pose nodes and the local maps, obtaining a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the local maps;
associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values between the pose nodes and the identification codes;
optimizing the pose nodes and the local maps by using the first relative pose relationship, the second relative pose relationship and the observed value as constraints;
and constructing a global map by using the plurality of optimized pose nodes, the plurality of local maps and the multi-frame sensing data.
In a fourth aspect, the present application provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the map extension method according to the third aspect.
With the above technical scheme, the laser-visual-marker fusion map is directly extended from the prior visual marking map and adopts its coordinate system, which ensures that the positions of the prior service points remain unchanged; only a small number of newly added service points need to be set, saving deployment time and improving map construction efficiency.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a robot provided in an embodiment of the present application;
fig. 2 is a flowchart of a map extension method provided in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a mapping apparatus provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In this specification, adjectives such as first and second may only be used to distinguish one element or action from another, without necessarily requiring or implying any actual such relationship or order. References to an element or component or step (etc.) should not be construed as limited to only one of the element, component, or step, but rather to one or more of the element, component, or step, etc., where the context permits.
In this specification, for convenience of description, the portions shown in the drawings are not drawn to scale.
Referring to fig. 1, a schematic structural diagram of a robot according to an embodiment of the present application is shown. For convenience of explanation, only portions related to the embodiments of the present application are shown. The robot may include:
the robot comprises a memory 10, a processor 20, a sensing device and an image acquisition device (not shown in the figure). The processor 20 is the operation and control core of the robot and the final execution unit for information processing and program execution. The memory 10 is, for example, a hard disk drive memory, a non-volatile memory (e.g., a flash memory or other electrically programmable and erasable memory used to form a solid-state drive, etc.), a volatile memory (e.g., a static or dynamic random access memory, etc.), and the like; the embodiments of the present application are not limited thereto.
The memory 10 has stored therein executable program code; the processor 20, coupled to the memory 10, calls said executable program code stored in the memory 10, performing the following map extension method:
acquiring multi-frame image data of the identification code collected by the image acquisition device and multi-frame sensing data collected by the sensing device when the robot moves; acquiring a marking map of the identification code; obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map; obtaining, based on the multi-frame sensing data, the pose nodes and the local maps, a first relative pose relationship between every two adjacent pose nodes and a second relative pose relationship between each pose node and the local maps; associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values between the pose nodes and the identification codes; optimizing the pose nodes and the local maps by taking the first relative pose relationship, the second relative pose relationship and the observed values as constraints; and constructing a global map by using the optimized pose nodes, the local maps and the multi-frame sensing data.
Referring to fig. 2, a map extension method provided in the embodiment of the present application mainly includes steps S201 to S207, which are described as follows:
step S201: and acquiring multiframe image data acquired by the identification code through the image acquisition device and multiframe sensing data acquired by the sensing device when the robot moves.
In the embodiment of the application, this step refers to acquiring multi-frame image data of the identification code collected by the image acquisition device and multi-frame sensing data collected by the sensing device while the robot moves along a mapping path, where the mapping path is the route the robot follows when it moves to build the map. In some embodiments the sensing device comprises at least a two-dimensional lidar; in other embodiments it may additionally comprise a wheel odometer or the like. In this embodiment, the sensing device includes a two-dimensional lidar and a wheel odometer, so the multi-frame sensing data collected by the sensing device includes laser point cloud data and wheel odometer data, that is, information such as position and attitude.
In an embodiment of the present application, the identification code may be a marker affixed in the environment, for example on a wall or ceiling. The image acquisition device carried by the robot may be a monocular, binocular or trinocular camera; by photographing the identification code with such a camera, multi-frame image data can be collected.
Step S202: and acquiring a marking map of the identification code.
In the embodiment of the present application, the marking map is a visual marker map. The marking map is built beforehand, i.e., it is known information. In some embodiments, when a laser-positioned robot is newly added to a site where robots positioned by visual markers are already deployed, the marking map is the positioning map of the deployed visually positioned robots; in other embodiments, when the same robot can be positioned both by visual markers and by laser, the marking map may be a previously established visual marker map. The marking map records the ID and the coordinate information wTmi of each identification code; here, the coordinate system of the marking map is defined as the world coordinate system, mi denotes the i-th identification code, and w denotes the world coordinate system.
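For illustration only, a minimal marking-map representation consistent with this description could map each identification-code ID to its coordinate information wTmi; the IDs and poses below are hypothetical:

```python
# Hypothetical marking-map content: identification-code ID -> coordinate
# information wTmi, i.e. the marker pose (x, y, theta) in the world frame.
marking_map = {
    7: (2.0, 3.0, 1.5708),
    12: (5.5, -0.5, 0.0),
}

def lookup_marker_pose(marker_id):
    """Return wTmi for a detected identification code ID."""
    return marking_map[marker_id]
```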
Step S203: and obtaining a plurality of pose nodes and a plurality of local maps of the robot through the multi-frame image data, the multi-frame sensing data and the marking map.
In the embodiment of the application, as mentioned above, the sensing device carried by the robot includes the wheel odometer and the two-dimensional lidar, and a plurality of accurate pose nodes of the robot can be obtained from the multi-frame sensing data collected by these two sensors. It should be noted that the plurality of pose nodes of the robot include historical pose nodes and a latest pose node, where the latest pose node is also called the current pose node: because a pose node carries a time variable, i.e., pose nodes are obtained at different times, the pose node obtained at the current time is the latest (current) pose node, and pose nodes obtained before the current time are historical pose nodes. In the present embodiment, the local map is a laser grid map. Furthermore, "local" is relative to the global map, meaning that the laser grid map is limited to a certain range. As an embodiment of the present application, obtaining a plurality of pose nodes and a plurality of local maps of the robot from the multi-frame image data, the multi-frame sensing data and the marking map may be implemented through steps S2031 to S2034 as follows:
step S2031: and creating an initial pose node and an initial local map based on the image data and the marked map. Step S2031 specifically includes the steps of:
step S20311: ID of identification code and pose of relative identification code of robot are obtained through image datarTmi。
Step S20312: obtaining coordinate information of the identification code according to the ID of the identification code and the marked mapwTmi。
Step S20313: calculate the pose wTro of the robot using the pose rTmi of the robot relative to the identification code and the coordinate information wTmi of the identification code. Here wTro = wTmi * rTmi, where i refers to the same identification code in both terms.
Step S20314: and establishing a pose node and a local map based on the pose of the robot, and inserting corresponding sensing data into the local map.
An initial pose node is obtained from the pose wTro of the robot, where the initial pose node is the pose node (t0, xo, yo, θo), xo and yo are the values of x and y in wTro, and θo is the orientation angle in wTro; accordingly, the local map has coordinates (xo, yo, 0).
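The following is a minimal sketch of steps S20311 to S20314 in the plane, assuming 3x3 homogeneous SE(2) transforms and a detector that reports the marker pose in the camera/robot frame (an assumption about the detector's convention); all numeric values are hypothetical:

```python
import numpy as np

def se2(x, y, theta):
    """3x3 homogeneous transform for a planar pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def xytheta(T):
    """Recover (x, y, theta) from a 3x3 homogeneous transform."""
    return T[0, 2], T[1, 2], float(np.arctan2(T[1, 0], T[0, 0]))

# wTmi: marker pose in the world frame, looked up from the marking map
# (hypothetical values).
w_T_mi = se2(2.0, 3.0, np.pi / 2)

# rTmi: pose of the robot relative to the identification code. A typical
# detector reports the marker pose in the camera/robot frame; inverting
# that observation yields this quantity.
cam_T_marker = se2(0.5, 0.0, 0.1)
r_T_mi = np.linalg.inv(cam_T_marker)

# Step S20313: wTro = wTmi * rTmi
w_T_ro = w_T_mi @ r_T_mi

t0 = 0.0
x_o, y_o, theta_o = xytheta(w_T_ro)  # initial pose node (t0, xo, yo, θo)
```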
Step S2032: a new node is created based on the odometry data and/or the laser point cloud data.
Generally, creating more nodes means higher accuracy in the subsequent graph building. However, creating more nodes costs computational effort or sacrifices real-time performance, so a trade-off is needed when creating new nodes. Specifically, creating a new node based on the odometry data and/or the laser point cloud data may be: if the odometry data and/or the laser point cloud data indicate that the translation distance of the robot relative to the previous pose node is greater than or equal to a first translation threshold and/or that the rotation of the robot is greater than or equal to a first rotation threshold, create a new node.
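A sketch of this node-creation test is given below; the patent does not specify numeric thresholds, so the values used here are hypothetical:

```python
import numpy as np

FIRST_TRANS_THRESHOLD = 0.2              # metres, hypothetical value
FIRST_ROT_THRESHOLD = np.deg2rad(10.0)   # radians, hypothetical value

def should_create_node(prev_node_pose, cur_pose):
    """Decide whether to create a new pose node; poses are (x, y, theta)
    estimated from odometry and/or laser point cloud data."""
    dx = cur_pose[0] - prev_node_pose[0]
    dy = cur_pose[1] - prev_node_pose[1]
    dtheta = np.arctan2(np.sin(cur_pose[2] - prev_node_pose[2]),
                        np.cos(cur_pose[2] - prev_node_pose[2]))
    return (np.hypot(dx, dy) >= FIRST_TRANS_THRESHOLD
            or abs(dtheta) >= FIRST_ROT_THRESHOLD)
```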
Step S2033: and acquiring a pose predicted value of the new node based on the odometer data corresponding to the new node and the previous pose node, and correcting the pose predicted value by using the stress light spot cloud data and the local map of the new node to obtain the latest pose node.
In the embodiment of the present application, the previous pose node is the pose node obtained in the previous iteration of steps S2031 to S2033.
Because the pose predicted value of the new node, obtained from the odometry data corresponding to the new node and the previous pose node, carries a certain error or uncertainty, the latest pose node is obtained by correcting the pose predicted value with the laser point cloud data corresponding to the new node and the local map.
Step S2034: and creating a new local map based on the latest pose node and the last local map.
Specifically, creating a new local map based on the latest pose node and the previous local map may be: if the translation distance between the latest pose node and the previous local map is greater than or equal to a second translation threshold, create a new local map and take the pose value of the latest pose node as the pose value of the new local map. That is, the pose value of the first pose node in a local map is used as the pose value of that local map. For example, for the initial local probability grid map, the first pose node is the initial pose node, so the pose value of that local map is the same as the pose value of the initial pose node.
After step S2034, the procedure returns to the step of creating a new node and/or a new local map based on the odometry data and/or the laser point cloud data so as to obtain a plurality of pose nodes and a plurality of local maps; that is, steps S2032 to S2034 are performed repeatedly.
Step S204: based on multi-frame sensing data, a plurality of pose nodes and a plurality of local maps, a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the local maps are obtained.
The second relative pose relationship in step S204 includes a map relative relationship and a closed-loop constraint relationship. Specifically, step S204 can be implemented through steps S2041 to S2043, described as follows:
step S2041: and acquiring a first relative pose relation of every two adjacent pose nodes based on the plurality of pose nodes.
Specifically, for every two adjacent pose nodes among the plurality of pose nodes, the first relative pose relationship between them is calculated, so that a plurality of first relative pose relationships can be obtained. Optionally, the first relative pose relationship between two adjacent pose nodes specifically refers to the relative pose relationship between the pose values of the two adjacent pose nodes.
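In transform terms, this amounts to expressing the pose of one node in the frame of its neighbour, as in the following sketch (reusing the se2() helper shown above; 3x3 homogeneous transforms):

```python
import numpy as np

def first_relative_pose(T_i, T_j):
    """First relative pose relationship between adjacent pose nodes i and j:
    the pose of node j expressed in the frame of node i."""
    return np.linalg.inv(T_i) @ T_j
```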
Step S2042: and obtaining a map relative relation based on each pose node and the corresponding local map.
Specifically, the laser point cloud data associated with the latest pose node among the plurality of pose nodes of the robot is matched against the corresponding local map. If the matching succeeds, the relative pose relationship Rij between the latest pose node of the robot and the successfully matched local map is obtained and used as the map relative relationship between the pose node and its corresponding local map. As pose nodes are continuously created, each latest pose node is matched against the local map it belongs to (its corresponding local map), so the map relative relationship between each pose node and its local map is obtained, yielding a plurality of map relative relationships. Similarly, the map relative relationship between each pose node and its local map is also the relative positional relationship of their pose values.
Step S2043: obtaining a closed-loop constraint relation C based on the laser point cloud data corresponding to the latest pose node and other local maps through loop detectionijAnd the other local map refers to a local map which does not correspond to the latest pose node. In the process of continuously creating the pose, closed-loop detection is carried out on each latest pose node and other local maps, and therefore whether a closed-loop constraint relation exists or not is determined. If yes, obtaining the closed-loop constraint relation, and if not, regarding as none.
Loop detection is also called closed-loop detection; since it belongs to the prior art, it is not described in detail here. Once a closed loop is detected, a closed-loop constraint relationship, i.e., an edge between two vertices having a loop relationship in the graph optimization, can be obtained based on the laser point cloud data corresponding to the latest pose node and the other local maps.
Step S205: and associating the multi-frame image data with the corresponding pose nodes to obtain an observed value.
In the embodiment of the present application, the identification code may be a label pasted in the environment. By capturing the identification codes in the environment, the image acquisition device carried by the robot can determine their poses, and these poses constitute a plurality of observed values between the robot's pose nodes and the identification codes in the environment. Further, the multi-frame image data can be associated with the pose nodes of the robot. As for the specific association method, time association may be used, associating observations of the same identification code made at different times with the corresponding pose nodes of the robot; or ID association may be used, grouping the observations made from different pose nodes by the ID of the observed identification code.
Optionally, multiple identification codes may be observed at the same pose node, and likewise the same identification code may be observed from multiple pose nodes; therefore, associating the multi-frame image data collected by the image acquisition device with the corresponding pose nodes yields a plurality of observed values of the identification codes.
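A minimal sketch of the time-association scheme, assuming every pose node and every observation carries a timestamp:

```python
import bisect

def associate_by_time(node_times, obs_time):
    """Index of the pose node whose timestamp is closest to the observation
    timestamp (time association); node_times must be sorted ascending."""
    k = bisect.bisect_left(node_times, obs_time)
    candidates = [i for i in (k - 1, k) if 0 <= i < len(node_times)]
    return min(candidates, key=lambda i: abs(node_times[i] - obs_time))

node_times = [0.0, 0.5, 1.0, 1.5]          # hypothetical node timestamps
print(associate_by_time(node_times, 0.7))  # -> 1 (node at t = 0.5)
```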
Step S206: and optimizing the pose nodes and the local maps by using the first relative pose relationship, the second relative pose relationship and the observed value as constraints.
Specifically, step S206 may be implemented as follows: construct a nonlinear least-squares problem by taking the first relative pose relationships, the second relative pose relationships and the observed values as constraint conditions and taking the plurality of pose nodes and the plurality of local maps as optimization targets; then solve the nonlinear least-squares problem with a graph-optimization solving algorithm, and take the optimal solution of the nonlinear least-squares problem as the optimized pose nodes and local maps. Here, the graph-optimization solving algorithm may be the Levenberg-Marquardt algorithm, the Gauss-Newton method, or the like, which is not limited in this application. In an optional embodiment, several or all of the first relative pose relationships, several or all of the second relative pose relationships, and several or all of the observed values are taken as constraints and optimized jointly, thereby obtaining the plurality of pose nodes and the plurality of local probability grid maps.
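The following toy example illustrates this construction under simplifying assumptions: 2D poses, two pose nodes, one odometry constraint (a first relative pose relationship) and two marker observations, solved with scipy's Levenberg-Marquardt implementation. The marker pose from the marking map is kept out of the parameter vector, so it acts purely as a constraint, never as an optimization target; all measurements and initial values are fabricated for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle (or array of angles) to (-pi, pi]."""
    return np.arctan2(np.sin(a), np.cos(a))

def rel(a, b):
    """Pose of b expressed in the frame of a; poses are (x, y, theta)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    c, s = np.cos(a[2]), np.sin(a[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(b[2] - a[2])])

marker = np.array([2.0, 1.0, 0.0])   # fixed pose from the marking map
odom_01 = np.array([1.0, 0.0, 0.0])  # relative pose measurement: node0 -> node1
obs_0 = np.array([2.0, 1.0, 0.0])    # marker observed from node0
obs_1 = np.array([1.0, 1.0, 0.0])    # marker observed from node1

def residuals(x):
    n0, n1 = x[:3], x[3:]
    r = np.concatenate([
        rel(n0, n1) - odom_01,       # odometry constraint
        rel(n0, marker) - obs_0,     # marker observation constraints
        rel(n1, marker) - obs_1,
    ])
    r[2::3] = wrap(r[2::3])          # re-wrap all angle residuals
    return r

x0 = np.array([0.05, -0.03, 0.02, 1.10, 0.05, -0.04])  # perturbed guess
sol = least_squares(residuals, x0, method="lm")         # Levenberg-Marquardt
print(sol.x.round(3))  # ~ [0 0 0 1 0 0]: nodes pulled onto the marker frame
```

Because the marker poses never enter the parameter vector, the optimized nodes settle into the marking map's coordinate system, which is exactly the property the text relies on to keep the prior service points unchanged.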
Step S207: and constructing a positioning map by using the plurality of optimized pose nodes, the plurality of local maps and the multi-frame sensing data.
Specifically, step S207 may be implemented as follows: build a global map from the optimized pose nodes, the multi-frame sensing data associated with the pose nodes, and the local maps. When the map building work is finished, a grid map sharing the same coordinate system as the marking map is output.
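As a simplified stand-in for probabilistic grid fusion, the sketch below transforms each scan by its optimized node pose and rasterizes it into a single grid that shares the marking map's coordinate system; the resolution and grid size are hypothetical:

```python
import numpy as np

def build_global_map(opt_nodes, scans, resolution=0.05, size=400):
    """Rasterize laser scans into one occupancy grid using the optimized
    pose nodes; a simplified stand-in for grid-map fusion."""
    grid = np.zeros((size, size), dtype=np.uint8)
    origin = size // 2                       # world (0, 0) at grid centre
    for (x, y, theta), scan in zip(opt_nodes, scans):
        c, s = np.cos(theta), np.sin(theta)
        for px, py in scan:                  # scan points in the robot frame
            wx, wy = x + c * px - s * py, y + s * px + c * py
            j, i = int(origin + wx / resolution), int(origin + wy / resolution)
            if 0 <= i < size and 0 <= j < size:
                grid[i, j] = 255             # mark the cell occupied
    return grid
```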
In the embodiment of the application, the existing marking map is used to initialize the position of the laser map, providing an initial laser pose in the consistent coordinate system; the existing marker poses are used only as optimization constraints, not as optimization targets, which ensures that the marker poses in the global map are the same as those in the previous marking map. With this technical scheme, the laser-visual-marker fusion map is directly extended from the previous marking map and adopts the coordinate system of the previous visual marking map, so the positions of the previous service points are guaranteed to remain unchanged and only a small number of newly added service points need to be set, which saves deployment time and improves map construction efficiency.
Referring to fig. 3, a map building apparatus provided in this embodiment of the present application, which may be a central processing unit of a robot or a functional module thereof, may include a first obtaining module 301, a second obtaining module 302, a third obtaining module 303, a fourth obtaining module 304, a fifth obtaining module 305, an association module 306, an optimization module 307, and a map building module 308, which are detailed as follows:
the first acquisition module 301 is configured to acquire multiframe sensing data acquired by a sensing device when the robot moves;
the second obtaining module 302 is configured to obtain multi-frame image data acquired by the image acquisition device for the identification code;
a third obtaining module 303, configured to obtain a label map of the identification code;
the fourth obtaining module 304 is configured to obtain a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data, and the marker map;
a fifth obtaining module 305, configured to obtain, based on multiple frames of sensing data, multiple pose nodes, and multiple local maps, a first relative pose relationship between every two adjacent pose nodes and a second relative pose relationship between each pose node and the multiple local maps;
the association module 306 is configured to associate the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values between the pose nodes and the identification codes;
the optimization module 307 is configured to optimize the pose nodes and the local maps by using the first relative pose relationship, the second relative pose relationship, and the observed value as constraints;
and the map building module 308 is configured to build a positioning map by using the optimized pose nodes, the local maps and the multi-frame sensing data.
In an embodiment of the present application, the fourth obtaining module 304 illustrated in fig. 3 may include an initializing unit, a first creating unit, a predicted value obtaining unit, a correcting unit, and a second creating unit, where:
the initialization unit is used for establishing an initial pose node and an initial local map based on multi-frame image data and a marking map;
a first creating unit for creating a new node based on the odometer data and/or the laser point cloud data;
a predicted value obtaining unit, configured to obtain a pose predicted value of the new node based on the odometer data corresponding to the new node and the previous pose node;
the correction unit is used for correcting the pose predicted value by using the laser point cloud data corresponding to the new node and the local map to obtain the latest pose node;
and the second creating unit is used for creating a new local map based on the latest pose node and the previous local map.
Optionally, in an embodiment of the present application, the initialization unit includes: the system comprises an identification code analysis unit, a coordinate information acquisition unit, a robot pose calculation unit, an initial node creation unit and an initial local map creation unit, wherein:
the identification code analysis unit is used for acquiring the ID of the identification code and the pose of the robot relative to the identification code based on the multi-frame image data;
the coordinate information acquisition unit is used for acquiring coordinate information of the identification code based on the ID of the identification code in combination with the marking map;
the robot pose calculation unit is used for calculating the pose of the robot based on the pose of the robot relative to the identification code and the coordinate information of the identification code;
the initial node creating unit is used for creating pose nodes based on the pose of the robot;
and the initial local map creating unit is used for creating a local map based on the position and posture of the robot.
Optionally, in another embodiment of the present application, the first creating unit may include a new node creating unit, and the second creating unit may include a new map creating unit, where:
the new node creation unit is used for creating a new node if the odometry data and/or the laser point cloud data indicate that the translation distance of the robot relative to the previous pose node is greater than or equal to a first translation threshold and/or that the rotation of the robot is greater than or equal to a first rotation threshold;
and the new map creating unit is used for creating a new local map and taking the latest pose node as the pose of the local map if the translation distance between the latest pose node and the previous local map is greater than or equal to the second translation threshold.
Optionally, in another embodiment of the present application, the fifth obtaining module 305 illustrated in fig. 3 above may include a first relative pose relationship obtaining unit, a map relative relationship obtaining unit, and a closed-loop constraint relationship obtaining unit, where:
a first relative pose relationship acquisition unit configured to acquire a first relative pose relationship between every two adjacent pose nodes based on the plurality of pose nodes;
the map relative relationship acquisition unit is used for acquiring the map relative relationship between each pose node and the corresponding local map;
and the closed-loop constraint relation acquisition unit is used for acquiring a closed-loop constraint relation based on the laser point cloud data corresponding to the latest pose node and other local maps through loop detection, wherein the other local maps are local maps which do not correspond to the latest pose node.
Optionally, in another embodiment of the present application, the optimization module 307 illustrated in fig. 3 may include a problem construction unit and a problem solving unit, where:
the problem construction unit is used for constructing a nonlinear least square problem by taking the first relative pose relationship, the second relative pose relationship and the observed value as constraint conditions and taking a plurality of pose nodes and a plurality of local maps as optimization targets;
and the problem solving unit is used for solving the nonlinear least square problem by using a graph optimization solving algorithm, and taking the optimal solution of the nonlinear least square problem as a plurality of pose nodes and a plurality of local maps after optimization.
Optionally, in another embodiment of the present application, the map construction module 308 illustrated in fig. 3 may include a first map generating unit, where:
and the first map generation unit is used for establishing a global map according to the plurality of optimized pose nodes and the multi-frame sensing data associated with the pose nodes and the plurality of local maps.
As can be seen from the apparatus illustrated in fig. 3, in the embodiment of the present application, the existing marking map is used to initialize the position of the laser map and to provide an initial laser pose in the consistent coordinate system; the existing marker poses are used only as optimization constraints, not as optimization targets, which ensures that the marker poses in the global map are the same as those in the previous marking map. With this technical scheme, the laser-visual-marker fusion map is directly extended from the previous marking map and adopts the coordinate system of the previous visual marking map, so the positions of the previous service points are guaranteed to remain unchanged and only a small number of newly added service points need to be set, which saves deployment time and improves map construction efficiency.
Fig. 4 is a schematic structural diagram of an apparatus provided in an embodiment of the present application. As shown in fig. 4, the apparatus 4 of this embodiment may be a robot or a module thereof, and mainly includes: a processor 40, a memory 41 and a computer program 42, such as a program of a map extension method, stored in the memory 41 and executable on the processor 40. The steps in the above-described map extension method embodiment, such as steps S201 to S207 shown in fig. 2, are implemented when the processor 40 executes the computer program 42. Alternatively, the processor 40 executes the computer program 42 to implement the functions of the modules/units in the device embodiments, such as the functions of the first obtaining module 301, the second obtaining module 302, the third obtaining module 303, the fourth obtaining module 304, the fifth obtaining module 305, the associating module 306, the optimizing module 307 and the map building module 308 shown in fig. 3.
Illustratively, the computer program 42 of the map extension method mainly includes: when the robot moves, acquiring multi-frame image data of the identification code collected by the image acquisition device and multi-frame sensing data collected by the sensing device; acquiring a marking map of the identification code; obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map; obtaining, based on the multi-frame sensing data, the pose nodes and the local maps, a first relative pose relationship between every two adjacent pose nodes and a second relative pose relationship between each pose node and the local maps; associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values between the pose nodes and the identification codes; optimizing the pose nodes and the local maps by taking the first relative pose relationship, the second relative pose relationship and the observed values as constraints; and constructing a global map by using the optimized pose nodes, the local maps and the multi-frame sensing data. The computer program 42 may be partitioned into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 42 in the device 4. For example, the computer program 42 may be divided into the first acquisition module 301, the second acquisition module 302, the third acquisition module 303, the fourth acquisition module 304, the fifth acquisition module 305, the association module 306, the optimization module 307 and the map construction module 308 (modules in a virtual device), whose specific functions are as follows: the first acquisition module 301 is configured to acquire multi-frame sensing data collected by the sensing device when the robot moves; the second acquisition module 302 is configured to acquire multi-frame image data of the identification code collected by the image acquisition device; the third acquisition module 303 is configured to acquire a marking map of the identification code; the fourth acquisition module 304 is configured to obtain a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map; the fifth acquisition module 305 is configured to obtain, based on the multi-frame sensing data, the pose nodes and the local maps, a first relative pose relationship between every two adjacent pose nodes and a second relative pose relationship between each pose node and the local maps; the association module 306 is configured to associate the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values between the pose nodes and the identification codes; the optimization module 307 is configured to optimize the pose nodes and the local maps by taking the first relative pose relationship, the second relative pose relationship and the observed values as constraints; and the map construction module 308 is configured to construct a positioning map by using the optimized pose nodes, the local maps and the multi-frame sensing data.
The device 4 may include, but is not limited to, a processor 40, a memory 41. Those skilled in the art will appreciate that fig. 4 is merely an example of a device 4 and does not constitute a limitation of device 4 and may include more or fewer components than shown, or some components in combination, or different components, e.g., a computing device may also include input-output devices, network access devices, buses, etc.
The Processor 40 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 41 may be an internal storage unit of the device 4, such as a hard disk or a memory of the device 4. The memory 41 may also be an external storage device of the device 4, such as a plug-in hard disk provided on the device 4, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 41 may also include both an internal storage unit of the device 4 and an external storage device. The memory 41 is used for storing computer programs and other programs and data required by the device. The memory 41 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as required to different functional units and modules, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the above-mentioned apparatus may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other ways. For example, the apparatus/device embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other division manners are possible in actual implementation; for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a non-transitory computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments may also be implemented by instructing related hardware through a computer program; the computer program of the map extension method may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments, namely: when the robot moves, acquiring multi-frame image data of the identification code collected by the image acquisition device and multi-frame sensing data collected by the sensing device; acquiring a marking map of the identification code; obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map; obtaining, based on the multi-frame sensing data, the pose nodes and the local maps, a first relative pose relationship between every two adjacent pose nodes and a second relative pose relationship between each pose node and the local maps; associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values between the pose nodes and the identification codes; optimizing the pose nodes and the local maps by taking the first relative pose relationship, the second relative pose relationship and the observed values as constraints; and constructing a global map by using the optimized pose nodes, the local maps and the multi-frame sensing data. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The non-transitory computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the non-transitory computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the non-transitory computer-readable medium does not include electrical carrier signals and telecommunications signals. The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them; although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or equivalently replace some technical features, and such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
The above embodiments, objects, technical solutions and advantages of the present application have been described in further detail. It should be understood that the above are merely exemplary embodiments of the present application and are not intended to limit its scope; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present application shall be included within the protection scope of the present application.
Claims (10)
1. A robot, characterized in that the robot comprises:
the device comprises a memory, a processor, a sensing device and an image acquisition device;
the memory stores executable program code;
the processor coupled with the memory is used for realizing the following steps when calling the executable program code stored in the memory:
when the robot moves, acquiring multi-frame image data of the identification code collected by the image acquisition device and multi-frame sensing data collected by the sensing device;
acquiring a marking map of the identification code;
obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map;
based on the multi-frame sensing data, the pose nodes and the local maps, obtaining a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the local maps;
associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values between the pose nodes and the identification codes;
optimizing the pose nodes and the local maps by using the first relative pose relationship, the second relative pose relationship and the observed value as constraints;
and constructing a global map by using the plurality of optimized pose nodes, the plurality of local maps and the multi-frame sensing data.
2. The robot of claim 1, wherein the sensing devices comprise a wheeled odometer and a two-dimensional lidar, and the multi-frame sensing data comprises odometer data and laser point cloud data;
the obtaining the plurality of pose nodes and the plurality of local maps of the robot based on the plurality of frames of image data, the plurality of frames of sensing data, and the marker map comprises:
creating an initial pose node and an initial local map based on the image data and the marking map;
creating a new node based on the odometry data and/or the laser point cloud data;
acquiring a pose predicted value of the new node based on the odometer data corresponding to the new node and the previous pose node, and correcting the pose predicted value by using laser point cloud data corresponding to the new node and a local map to obtain a latest pose node;
creating a new local map based on the latest pose node and the last local map;
and returning to the step of creating a new node and/or a new local map based on the odometry data and/or the laser point cloud data to obtain a plurality of pose nodes and a plurality of local maps.
3. The robot of claim 2, wherein said step of creating an initial pose node and an initial local map based on said image data and said marking map comprises:
acquiring the ID of the identification code and the pose of the robot relative to the identification code based on the image data;
acquiring coordinate information of the identification code based on the ID of the identification code and the marking map;
calculating the pose of the robot based on the pose of the robot relative to the identification code and the coordinate information of the identification code;
an initial pose node and an initial local map are created based on the pose of the robot.
4. A robot according to claim 2, characterized in that the step of creating a new node based on odometry data and/or laser point cloud data comprises:
if the odometry data and/or the laser point cloud data indicate that the translation distance of the robot relative to the previous pose node is greater than or equal to a first translation threshold and/or that the rotation of the robot is greater than or equal to a first rotation threshold, creating a new node;
wherein, when the processor calls the executable program code stored in the memory, the step, in the executed map extension method, of creating a new local map based on the latest pose node and the previous local map comprises:
and if the translation distance between the latest pose node and the previous local map is greater than or equal to a second translation threshold value, creating a new local map, and taking the latest pose node as the pose of the local map.
5. The robot of claim 1, in which the second relative pose relationship comprises a map relative relationship and a closed-loop constraint relationship;
the obtaining a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the local maps based on the multi-frame sensing data, the pose nodes and the local maps comprises:
acquiring a first relative pose relation of every two adjacent pose nodes based on the plurality of pose nodes;
acquiring the relative relation of the map based on each pose node and the corresponding local map;
and acquiring the closed loop constraint relation based on the laser point cloud data corresponding to the latest pose node and other local maps through loop detection, wherein the other local maps are local maps which do not correspond to the latest pose node.
6. The robot of claim 1, wherein the step of optimizing the plurality of pose nodes and the plurality of local maps with the first relative pose relationship, the second relative pose relationship, and the observed values as constraints comprises:
constructing a nonlinear least-squares problem by taking the first relative pose relationship, the second relative pose relationship, and the observed values as constraint conditions, and taking the plurality of pose nodes and the plurality of local maps as the variables to be optimized;
and solving the nonlinear least-squares problem by using a graph optimization algorithm, and taking its optimal solution as the optimized plurality of pose nodes and plurality of local maps.
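Claim 6 describes standard pose-graph optimization. A toy one-dimensional version using SciPy's Levenberg-Marquardt solver shows the structure; the problem data are invented, and a real system would optimize full SE(2)/SE(3) poses, typically with a dedicated graph-optimization library:

```python
import numpy as np
from scipy.optimize import least_squares

# Three nodes on a line; two odometry constraints and one marker observation.
relative_constraints = [(0, 1, 1.0), (1, 2, 1.0)]  # (i, j, measured x_j - x_i)
marker_observations = [(2, 2.1)]                   # (node, observed absolute x)

def residuals(x):
    res = [x[0]]  # gauge constraint: pin node 0 to the origin
    res += [(x[j] - x[i]) - d for i, j, d in relative_constraints]
    res += [x[n] - z for n, z in marker_observations]
    return res

solution = least_squares(residuals, x0=np.zeros(3), method="lm")
print(solution.x)  # optimized node positions, approximately [0.025, 1.05, 2.075]
```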
7. The robot of claim 1, wherein the step of constructing a localization map using the optimized plurality of pose nodes, the plurality of local maps, and the multi-frame sensing data comprises:
constructing a global map according to the plurality of optimized pose nodes, the plurality of local maps associated with the pose nodes, and the multi-frame sensing data.
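Claim 7's global map construction can be pictured as re-projecting each local map through its optimized pose into one grid. A sketch assuming each local map is an N x 2 array of occupied points in metres (the data layout and parameters are illustrative):

```python
import numpy as np

def stitch_global_map(local_point_sets, optimized_map_poses,
                      resolution=0.05, size=1000):
    """Claim 7: transform every local map's occupied points into the global
    frame using the optimized local-map poses, then rasterize them."""
    global_grid = np.zeros((size, size), dtype=np.uint8)
    origin = size // 2  # place the map frame at the grid centre
    for points, (x, y, theta) in zip(local_point_sets, optimized_map_poses):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        world = points @ R.T + np.array([x, y])        # local -> global
        cells = np.floor(world / resolution).astype(int) + origin
        ok = (cells >= 0).all(axis=1) & (cells < size).all(axis=1)
        global_grid[cells[ok, 1], cells[ok, 0]] = 255  # mark occupied
    return global_grid
```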
8. A map construction apparatus applied to a robot comprising a sensing device and an image acquisition device, the apparatus comprising:
the first acquisition module is used for acquiring multiframe sensing data acquired by a sensing device when the robot moves;
the second acquisition module is used for acquiring multi-frame image data of the identification code acquired by the image acquisition device;
the third acquisition module is used for acquiring a marking map of the identification code;
the fourth acquisition module is used for acquiring a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map;
the fifth acquisition module is used for obtaining, based on the multi-frame sensing data, the plurality of pose nodes, and the plurality of local maps, a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the plurality of local maps;
the association module is used for associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observed values of the pose nodes and the identification codes;
the optimization module is used for optimizing the pose nodes and the local maps by taking the first relative pose relationship, the second relative pose relationship and the observed value as constraints;
and the map construction module is used for constructing a positioning map by utilizing the plurality of optimized pose nodes, the plurality of local maps and the multi-frame sensing data.
9. A map extension method, characterized in that the method comprises:
when the robot moves, acquiring multi-frame image data of the identification code acquired by the image acquisition device and multi-frame sensing data acquired by the sensing device;
acquiring a marking map of the identification code;
obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map;
based on the multi-frame sensing data, the pose nodes and the local maps, obtaining a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the local maps;
associating the multi-frame image data with corresponding pose nodes to obtain a plurality of observed values of the pose nodes and the identification codes;
optimizing the pose nodes and the local maps by using the first relative pose relationship, the second relative pose relationship and the observed value as constraints;
and constructing a global map by using the plurality of optimized pose nodes, the plurality of local maps and the multi-frame sensing data.
10. A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the map extension method according to claim 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111679399.7A CN114295138B (en) | 2021-12-31 | 2021-12-31 | Robot, map extension method, apparatus, and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111679399.7A CN114295138B (en) | 2021-12-31 | 2021-12-31 | Robot, map extension method, apparatus, and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114295138A true CN114295138A (en) | 2022-04-08 |
CN114295138B CN114295138B (en) | 2024-01-12 |
Family
ID=80975910
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111679399.7A Active CN114295138B (en) | 2021-12-31 | 2021-12-31 | Robot, map extension method, apparatus, and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114295138B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107607107A (en) * | 2017-09-14 | 2018-01-19 | 斯坦德机器人(深圳)有限公司 | A kind of Slam method and apparatus based on prior information |
WO2019169540A1 (en) * | 2018-03-06 | 2019-09-12 | 斯坦德机器人(深圳)有限公司 | Method for tightly-coupling visual slam, terminal and computer readable storage medium |
CN110462683A (en) * | 2018-03-06 | 2019-11-15 | 斯坦德机器人(深圳)有限公司 | Method, terminal and the computer readable storage medium of close coupling vision SLAM |
CN112183171A (en) * | 2019-07-05 | 2021-01-05 | 杭州海康机器人技术有限公司 | Method and device for establishing beacon map based on visual beacon |
CN113819914A (en) * | 2020-06-19 | 2021-12-21 | 北京图森未来科技有限公司 | Map construction method and device |
CN112051596A (en) * | 2020-07-29 | 2020-12-08 | 武汉威图传视科技有限公司 | Indoor positioning method and device based on node coding |
Non-Patent Citations (2)
Title |
---|
DAVID WISTH et al.: "Unified Multi-Modal Landmark Tracking for Tightly Coupled Lidar-Visual-Inertial Odometry", IEEE ROBOTICS AND AUTOMATION LETTERS, vol. 6, no. 2, pages 1004 - 1011, XP011838195, DOI: 10.1109/LRA.2021.3056380 * |
LIU Yunhang: "Research on Indoor Positioning Algorithms Based on Multi-Mode Fusion", China Master's Theses Full-text Database, Information Science and Technology, no. 5, pages 138 - 1249 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |