CN109959377A - Robot navigation positioning system and method - Google Patents
Robot navigation positioning system and method
- Publication number
- CN109959377A (application CN201711420969.4A)
- Authority
- CN
- China
- Prior art keywords
- robot
- map
- path
- positioning
- path planning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G01C21/20—Navigation; Instruments for performing navigational calculations
- G05D1/0234—Control of position or course in two dimensions specially adapted to land vehicles, using optical position detecting means, e.g. optical markers or beacons
- G05D1/0255—Control of position or course in two dimensions specially adapted to land vehicles, using acoustic signals, e.g. ultrasonic signals
- G05D1/0257—Control of position or course in two dimensions specially adapted to land vehicles, using a radar
- G05D1/0274—Control of position or course in two dimensions specially adapted to land vehicles, using internal positioning means, e.g. mapping information stored in a memory device
Abstract
The present invention discloses a robot navigation positioning system and method for a robot to perform map construction, positioning and path planning. The method comprises the following steps. S100, a positioning step: the robot detects surrounding environment information through multiple sensors, then uses a SLAM algorithm based on adaptive particle filtering, matched with different odometers, to complete real-time map construction and positioning. S200, a path planning step using a path planning algorithm based on a two-phase hybrid-state A*, comprising: S210, after performing path planning on a rasterized map to obtain the path length and the number of expanded nodes, obtaining a higher-resolution rasterized map through analytic expansion; and S220, taking the obtained path length and number of expanded nodes as inputs of fuzzy inference to obtain a heuristic weight, which serves as input of the second-stage search for path planning on the higher-resolution rasterized map. The present invention can not only adapt to different environments but also dynamically plan paths.
Description
Technical Field
The invention relates to a robot navigation positioning system and method, and in particular to a robot navigation positioning system and method that adapt to different environments and dynamically plan paths.
Background
An ideal mobile service robot system is generally composed of 4 parts, namely a moving mechanism, a functional mechanism, a sensing system and a control system.
The moving mechanism provides the robot's mobility; common moving mechanisms include wheeled, tracked, articulated, and hybrid mechanisms. The functional mechanism allows the robot to realize various service functions, such as nursing and carrying, commonly in the form of a mechanical arm or the like. The sensing system consists of various sensors and collects information about the robot itself and the external environment. Common sensors on a service robot are cameras, laser radar, ultrasonic sensors, contact and proximity sensors, inertial measurement units, odometers and the like. The control system acts as the "brain" of the service robot. An autonomous service robot integrates technologies such as image recognition, environment perception, path planning and obstacle detection, and can travel autonomously and complete service tasks according to the set functional requirements.
Navigation is the core and key technology of a mobile robot and a key index of its degree of intelligence. Mobile robot navigation means that the robot can autonomously plan a moving path according to internally stored map information or external environment signals obtained by its sensors, and move along the path to a preset target point without manual intervention. Since the working environment of a service robot often has a complex structure and many dynamic objects, correctly and safely accomplishing simultaneous localization, map construction and dynamic path planning in such an environment is one of the problems currently to be solved. Existing positioning technologies achieve a good positioning effect only in certain environments and cannot effectively solve the robot positioning problem in all environments.
Path planning of a mobile robot means that, while moving to a target point, the robot avoids obstacles in the environment and plans an optimal or sub-optimal path from the starting point to the target point according to certain indexes such as distance, time and energy consumption. Current techniques still suffer from incomplete path planning when obstacles block the way or the environment changes dynamically.
Disclosure of Invention
The invention aims to provide a robot navigation positioning system and method that can not only adapt to different environments but also dynamically plan a path.
In order to achieve the above object, the robot navigation positioning method of the present invention is used for a robot to perform map construction, positioning and path planning, and comprises the following steps:
s100, positioning: the robot detects surrounding environment information through a plurality of sensors, then uses a SLAM algorithm based on adaptive particle filtering, matched with different odometers, to complete real-time map construction and positioning;
s200, path planning: adopting a path planning algorithm based on a two-phase hybrid-state A*, comprising the following steps: s210, after path planning is carried out on the rasterized map to obtain the path length and the number of expanded nodes, a higher-resolution rasterized map is obtained through analytic expansion; and s220, taking the obtained path length and number of expanded nodes as inputs of fuzzy inference, obtaining a heuristic weight through the fuzzy inference, taking the heuristic weight as input of the second-stage search, and planning the path on the higher-resolution rasterized map.
In an embodiment of the above-mentioned robot navigation and positioning method, the step S100 includes the following steps: S110, establishing a map by using data fed back by the sensors; S120, performing dead reckoning with an odometer, acquiring the dead reckoning information with a particle filter as an initial estimate of the robot pose, and then matching the current frame of sensor data with a local map to eliminate the accumulated error of the odometer;
s130, when the matching result of the current frame reaches a matching threshold, adding the current frame information into the map and using it to eliminate original error information in the map, and continuously updating the map while the matching result continues to reach the matching threshold; when the matching result of the current frame does not reach the matching threshold, keeping the map unchanged and updating only the pose information of the robot.
In an embodiment of the above-mentioned robot navigation positioning method, in step S100, the SLAM algorithm of the adaptive particle filter performs dead reckoning by using a wheel-type odometer, a visual odometer, or an inertial navigation device.
In an embodiment of the above robot navigation positioning method, the dead reckoning process using a wheel-type odometer is represented as:

$$
\begin{bmatrix} \hat{x}_k \\ \hat{y}_k \\ \hat{\theta}_k \end{bmatrix}
=
\begin{bmatrix} \hat{x}_{k-1} \\ \hat{y}_{k-1} \\ \hat{\theta}_{k-1} \end{bmatrix}
+
\begin{bmatrix}
\frac{\Delta s_r + \Delta s_l}{2}\cos\left(\hat{\theta}_{k-1} + \frac{\Delta s_r - \Delta s_l}{2B}\right) \\
\frac{\Delta s_r + \Delta s_l}{2}\sin\left(\hat{\theta}_{k-1} + \frac{\Delta s_r - \Delta s_l}{2B}\right) \\
\frac{\Delta s_r - \Delta s_l}{B}
\end{bmatrix}
$$

where $(\hat{x}_k, \hat{y}_k)$ is the estimated position of the robot at time $t_k$, $\hat{\theta}_k$ is the heading (turning) angle of the robot at time $t_k$, $B$ is the distance between the left and right wheels, and $\Delta s_l$ and $\Delta s_r$ are the distances covered by the left and right wheels, respectively, during the period from $t_{k-1}$ to $t_k$.
In an embodiment of the above robot navigation positioning method, the dead reckoning using the visual odometer includes: firstly, collecting a new frame of image, extracting ORB feature points in the image, and calculating the BRIEF descriptors corresponding to the feature points; then matching the feature points of the current image with those of the previous frame; and finally, solving the 3D-2D pose, i.e., obtaining the rotation and translation matrix between the two frames by minimizing the reprojection error and accumulating it to obtain the current pose of the robot.
In an embodiment of the above robot navigation positioning method, the dead reckoning using the inertial navigation device includes: the acceleration and the angular velocity of the robot are obtained through a gyroscope and an accelerometer, and the acceleration and the angular velocity are integrated, so that the current posture of the robot is calculated.
In an embodiment of the above-mentioned robot navigation and positioning method, the step S130 further includes the following steps: the sensors include at least one ultrasonic sensor, at least one laser radar and at least one RGB-D camera; when the deviation between the data detected by the ultrasonic sensor in an area and the data detected by the laser radar and the RGB-D camera exceeds a deviation threshold, the data are checked repeatedly, and if the data are unstable, a reflective/transparent/semitransparent object is considered to be present in the area and is marked in the map.
In an embodiment of the above robot navigation and positioning method, the step S220 further includes an analytic expansion step S221: ignoring obstacles in the environment, calculating a path from the current node to the target node, and, if the path is collision-free, directly adding the nodes of the path into the node list.
In an embodiment of the above robot navigation and positioning method, a post-processing optimization step S222 follows the analytic expansion step: smoothing the planned path using a conjugate gradient method; interpolating to increase the density of the point sequence in the path; and using the conjugate gradient method again to move the path away from obstacles.
The invention relates to a robot navigation positioning system, which is connected to a robot and used for the robot to carry out map construction, positioning and path planning, and comprises the following components:
a sensor unit connected to the robot; a positioning unit comprising an odometer module and connected with the sensor unit, wherein the robot detects the surrounding environment information through the sensor unit, then runs the SLAM algorithm based on adaptive particle filtering, matched with the odometer module, to complete real-time map construction and positioning; and a path planning unit connected with the sensor unit and the positioning unit, which performs path planning on the rasterized map to obtain the path length and the number of expanded nodes, obtains a higher-resolution rasterized map through analytic expansion, uses the obtained path length and number of expanded nodes as inputs of fuzzy inference, obtains a heuristic weight through the fuzzy inference as input of the second-stage search, and plans the path on the higher-resolution rasterized map.
In an embodiment of the above-mentioned robot navigation positioning system, the odometer module includes a wheel type odometer, a visual odometer and an inertial navigation device.
The invention has the beneficial effects that:
1. In terms of positioning, a SLAM framework based on adaptive particle filtering is provided. Matched with different odometer technologies, the framework can complete real-time map construction and positioning in various indoor and outdoor environments. Its basic idea is to match the current frame against a local map by analyzing the structural characteristics of the current frame's laser data, eliminating the accumulated error of the odometer and thereby achieving accurate positioning and mapping. The method also fuses information from multiple sensors such as a laser radar, an RGB-D camera, ultrasonic sensors and an odometer, solving problems that traditional methods cannot, such as ambient light changes, dynamic obstacles, and transparent/semitransparent objects. In addition, a host-computer map editor is provided with which the constructed environment map can be edited manually, supporting functions such as map optimization and virtual-wall setup; reuse of the edited map improves the usability of the robot.
2. In terms of path planning, a path planning algorithm based on a two-phase hybrid-state A* is provided. The algorithm improves on the traditional heuristic A* search: it first evaluates the environment complexity to calculate the heuristic weight, then combines the heuristic weight, the current robot state, the target state and the robot kinematics model to perform high-dimensional path planning in a high-resolution map of the environment, directly outputting the target pose and target speed of the robot for the next period. Because the algorithm takes the robot's kinematics model into account, the planned path conforms to the kinematic constraints, ensuring smooth operation while driving. Meanwhile, since the high-dimensional planning result includes target speed information, the robot can track the path with high precision without a complex motion controller.
The invention is described in detail below with reference to the drawings and specific examples, but the invention is not limited thereto.
Drawings
FIG. 1 is a schematic structural diagram of a robot navigation positioning system according to an embodiment of the present invention applied to a robot;
FIGS. 2a and 2b are exploded views from different angles, respectively, of the structure shown in FIG. 1;
FIG. 3 is a block diagram of the robotic navigation positioning system of the present invention;
FIG. 4 is a diagram of steps of a robot navigation positioning method of the present invention;
FIG. 5 is a block diagram of a positioning unit of the robotic navigation positioning system of the present invention;
FIG. 6 is a two-wheel differential steering model diagram of the odometer module of the robotic navigation positioning system of the present invention;
FIG. 7 is a diagram of the visual odometer model of the odometer module of the robotic navigation positioning system of the present invention;
FIG. 8 is a diagram illustrating an example of an update timing sequence of a map according to the present invention;
FIG. 9 is a diagram illustrating map updating and error history elimination according to the present invention;
FIG. 10 is a map constructed in accordance with the present invention;
FIG. 11 is a schematic diagram of a VGG-Net network structure;
fig. 12 is an architecture diagram of the two-phase hybrid-state A* algorithm of the robot navigation positioning method of the present invention;
fig. 13 illustrates the conventional A* path search algorithm based on a grid map;
fig. 14 is an example of conventional A* path planning based on a grid map;
FIG. 15 is an example of path planning based on the two-phase hybrid-state A* algorithm of the method of the present invention;
FIG. 16 is a single-cycle flow chart of the two-phase hybrid-state A* algorithm of the method of the present invention;
FIG. 17 is a comparison of paths before and after interpolation;
fig. 18 shows a result of path planning performed by the robot in an actual situation based on the two-phase hybrid-state A*.
Wherein the reference numerals
10 robot
100 fixed plate
200 running mechanism
300 drive control mechanism
400 collision detection mechanism
401 ultrasonic sensor
402 RGB-D camera
500 laser radar mechanism
501 laser radar
600 battery mechanism
700 expanding mechanism
20 robot navigation positioning system
23 sensor unit
24 positioning unit
241 odometer module
242 particle filter module
243 map updating module
25 path planning unit
S100, S110-S130, S200, S210-S220, S221-S222 steps
Detailed Description
The following detailed description of the embodiments of the present invention with reference to the drawings and specific examples is provided for further understanding the objects, aspects and effects of the present invention, but not for limiting the scope of the appended claims.
As shown in fig. 1, 2a, and 2b, the robot 10 includes a fixed plate 100, a traveling mechanism 200, a drive control mechanism 300, a collision detection mechanism 400, a laser radar mechanism 500, a battery mechanism 600, and an expansion mechanism 700. Illustratively, a robotic mobile platform is shown that may be coupled to various robotic functional parts via the expansion mechanism 700 to form, for example, an educational robot, a home robot, a greeting robot, a meal delivery robot, and the like.
The robot navigation positioning system 20 of the present invention is connected to the robot 10 for the robot 10 to perform mapping, positioning and path planning. As shown in fig. 3, the robot navigation positioning system 20 includes a sensor unit 23, a positioning unit 24 and a path planning unit 25. The sensor unit 23 is connected to the robot 10 and is, for example, the laser radar 501 of the laser radar mechanism 500, the plurality of ultrasonic sensors 401 of the collision detection mechanism 400, the RGB-D camera 402, and the like.
The positioning unit 24 is connected with the sensor unit 23 and comprises an odometer module 241; the robot 10 detects the surrounding environment information through the sensor unit 23, then runs the SLAM algorithm based on adaptive particle filtering, matched with the odometer module 241, to complete real-time map construction and positioning. The path planning unit 25 is connected with the sensor unit 23 and the positioning unit 24; after the path planning unit 25 performs path planning on the rasterized map to obtain the path length and the number of expanded nodes, a higher-resolution rasterized map is obtained through analytic expansion; the obtained path length and number of expanded nodes are used as inputs of fuzzy inference, a heuristic weight is obtained through the fuzzy inference as input of the second-stage search, and path planning is carried out on the higher-resolution rasterized map.
As shown in fig. 4, the robot navigation and positioning method of the present invention is used for a robot to perform mapping, positioning and path planning, and includes the following steps:
s100, positioning: the robot detects surrounding environment information through a plurality of sensors, then uses a SLAM algorithm based on adaptive particle filtering, matched with different odometers, to complete real-time map construction and positioning;
s200, path planning: adopting a path planning algorithm based on a two-phase hybrid-state A*, comprising the following steps:
s210, after path planning is carried out on the rasterized map to obtain the path length and the number of expanded nodes, a higher-resolution rasterized map is obtained through analytic expansion;
and S220, taking the obtained path length and number of expanded nodes as inputs of fuzzy inference, obtaining a heuristic weight through the fuzzy inference, taking the heuristic weight as input of the second-stage search, and planning the path on the higher-resolution rasterized map.
The step S100 includes the steps of:
s110, establishing a map by using data fed back by a sensor;
s120, performing dead reckoning with an odometer, acquiring the dead reckoning information with a particle filter as an initial estimate of the robot pose, and then matching the current frame of sensor data with a local map to eliminate the accumulated error of the odometer;
s130, when the matching result of the current frame reaches a matching threshold, adding the current frame information into the map and using it to eliminate original error information in the map, and continuously updating the map while the matching result continues to reach the matching threshold; when the matching result of the current frame does not reach the matching threshold, keeping the map unchanged and updating only the pose information of the robot.
The SLAM algorithm based on the multi-sensor fusion mainly uses a combined SLAM algorithm of a laser radar 501, an RGB-D camera 402 and an odometer which are carried by the robot 10 to position the robot 10, and simultaneously uses an ultrasonic sensor 401 to detect a reflective/transparent/semitransparent object which is difficult to detect visually, so as to supplement an optical detection method.
The SLAM algorithm based on multi-sensor fusion proposes CRSPF-SLAM, with the laser radar 501 as the main positioning means. The algorithm is based on adaptive particle filtering and, matched with different odometer technologies, can complete real-time map construction and positioning in various indoor and outdoor environments. Its basic idea is to match the current frame against a local map by analyzing the structural characteristics of the current frame's laser data and to eliminate the accumulated error of the odometer, thereby achieving accurate positioning and mapping.
The above will be described in detail below.
As shown in fig. 5, the positioning unit 24 of the present invention mainly includes three modules, namely an odometer module 241, a particle filter module 242 and a map update module 243, which are described in detail below.
1) Odometer module 241
The SLAM algorithm based on multi-sensor fusion uses three different odometers, namely a wheel type odometer, a visual odometer and an inertial navigation unit (IMU) to carry out dead reckoning. Three odometers can be used to satisfy a variety of different platforms or application environments, and are primarily used to provide initial robot 10 pose information for particle filters.
The wheel-type odometer module obtains a rough real-time posture of the robot 10 through dead reckoning, and the two-wheel differential steering model is shown in fig. 6.
The two-wheel differential model dead reckoning process is represented as:

$$
\begin{bmatrix} \hat{x}_k \\ \hat{y}_k \\ \hat{\theta}_k \end{bmatrix}
=
\begin{bmatrix} \hat{x}_{k-1} \\ \hat{y}_{k-1} \\ \hat{\theta}_{k-1} \end{bmatrix}
+
\begin{bmatrix}
\frac{\Delta s_r + \Delta s_l}{2}\cos\left(\hat{\theta}_{k-1} + \frac{\Delta s_r - \Delta s_l}{2B}\right) \\
\frac{\Delta s_r + \Delta s_l}{2}\sin\left(\hat{\theta}_{k-1} + \frac{\Delta s_r - \Delta s_l}{2B}\right) \\
\frac{\Delta s_r - \Delta s_l}{B}
\end{bmatrix}
$$

where $(\hat{x}_k, \hat{y}_k)$ is the estimated position of the robot at time $t_k$, $\hat{\theta}_k$ is the heading (turning) angle of the robot 10 at time $t_k$, $B$ is the distance between the left and right wheels, and $\Delta s_l$ and $\Delta s_r$ are the distances covered by the left and right wheels, respectively, during the period from $t_{k-1}$ to $t_k$.
The wheel-type odometer is suitable for an environment with a flat ground and a certain friction force, and the precision is reduced if the robot 10 slips or the ground is uneven during movement.
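As an illustration of the two-wheel differential model above, the following minimal Python sketch (function and variable names are illustrative, not taken from the patent) accumulates the per-period wheel travel into a pose estimate:

```python
import math

def dead_reckon(pose, ds_left, ds_right, wheel_base):
    """Two-wheel differential dead reckoning for one time step.

    pose:       (x, y, theta) estimate at t_{k-1}
    ds_left:    distance covered by the left wheel in [t_{k-1}, t_k]
    ds_right:   distance covered by the right wheel in [t_{k-1}, t_k]
    wheel_base: distance B between the left and right wheels
    """
    x, y, theta = pose
    ds = (ds_left + ds_right) / 2.0             # translation of the midpoint
    dtheta = (ds_right - ds_left) / wheel_base  # heading change
    # Integrate along the arc using the mid-angle approximation.
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    theta = (theta + dtheta + math.pi) % (2.0 * math.pi) - math.pi
    return (x, y, theta)
```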
The main algorithm flow of the visual odometer is as follows: first, a new frame of image is collected, ORB feature points in the image are extracted, and the BRIEF descriptors corresponding to the feature points are calculated. The feature points of the current image are then matched with those of the previous frame; because mismatches can occur, the matched feature points are refined using the RANSAC algorithm. Finally, the 3D-2D pose is solved: a rotation and translation matrix between the two frames is obtained by minimizing the reprojection error and accumulated to obtain the current pose of the robot 10.
As shown in fig. 7, let $P = [X\;\;Y\;\;Z]^T$ be a known three-dimensional point in space, let $p_{k-1}$ and $p_k$ be the corresponding points of $P$ in frame $k-1$ and frame $k$ obtained by feature matching, and let $\hat{p}_k$ be the actual projection of $P$ into the second image computed through the pinhole camera model. Ideally, the projection $\hat{p}_k$ of the three-dimensional point $P$ onto the second image should coincide exactly with the matched feature point $p_k$; in fact, due to the presence of errors, $p_k$ and $\hat{p}_k$ differ. Accumulating the error distances of all feature-matched point pairs in the two images and minimizing this error yields the optimization objective

$$
(R^*, t^*) = \arg\min_{R,\,t} \sum_{i=1}^{n} \left\| p_k^i - \pi\!\left(R P^i + t\right) \right\|^2
$$

where $\pi(\cdot)$ denotes projection through the pinhole camera model and $(R, t)$ is the rotation and translation between the two adjacent frames.
By optimizing the objective function, the pose transformation relationship between two adjacent frames of images can be obtained, and the mileage and course information of the robot 10 can be further calculated.
The visual odometer is suitable for environments with rich features, and the accuracy of the odometer is affected if the environment is single in color or the light is too bright or too dark.
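A compact sketch of the visual-odometry pipeline just described, using OpenCV's ORB features, Hamming-distance matching and RANSAC-based PnP; the depth-lookup interface and all names are illustrative assumptions, not the patent's implementation:

```python
import cv2
import numpy as np

def visual_odometry_step(prev_img, cur_img, prev_pts_3d, K):
    """One visual-odometry step: ORB features, matching, 3D-2D PnP.

    prev_pts_3d: dict mapping a previous-frame keypoint index to its known
                 3D coordinates (e.g. from an RGB-D depth image).
    K:           3x3 camera intrinsic matrix.
    Returns (R, t): rotation and translation from previous to current frame.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_img, None)  # keypoints + descriptors
    kp2, des2 = orb.detectAndCompute(cur_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # Collect 3D points (previous frame) and their 2D observations (current frame).
    obj_pts, img_pts = [], []
    for m in matches:
        if m.queryIdx in prev_pts_3d:
            obj_pts.append(prev_pts_3d[m.queryIdx])
            img_pts.append(kp2[m.trainIdx].pt)
    if len(obj_pts) < 4:
        raise ValueError("need at least 4 3D-2D correspondences for PnP")
    obj_pts = np.asarray(obj_pts, dtype=np.float64)
    img_pts = np.asarray(img_pts, dtype=np.float64)

    # solvePnPRansac both rejects mismatches (RANSAC) and minimizes reprojection error.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```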
The inertial navigation device obtains the acceleration and angular velocity of the robot 10 through a gyroscope and an accelerometer and integrates them to calculate the current pose of the robot 10; however, the micro-electro-mechanical system (MEMS) gyroscope and accelerometer adopted by the invention have large drift errors and can provide accurate displacement and attitude information only over a short time.
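A minimal planar sketch of this double integration (names illustrative; a real MEMS IMU would additionally require gravity compensation and bias estimation, consistent with the drift limitation just noted):

```python
import numpy as np

def imu_integrate(state, gyro_z, accel_body, dt):
    """Planar IMU dead reckoning over one sample period dt.

    state:      (x, y, theta, vx, vy) at the previous sample
    gyro_z:     yaw rate from the gyroscope (rad/s)
    accel_body: (ax, ay) body-frame acceleration from the accelerometer
    """
    x, y, theta, vx, vy = state
    theta += gyro_z * dt                         # integrate yaw rate -> heading
    c, s = np.cos(theta), np.sin(theta)
    ax = c * accel_body[0] - s * accel_body[1]   # rotate acceleration to world frame
    ay = s * accel_body[0] + c * accel_body[1]
    vx += ax * dt                                # first integration: velocity
    vy += ay * dt
    x += vx * dt                                 # second integration: position
    y += vy * dt
    return (x, y, theta, vx, vy)
```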
2) Particle filter module 242
The particle filter acquires the dead reckoning information of the three odometers as the initial estimate of the pose of the robot 10, then matches the data of the laser radar 501 with the map information to obtain a more accurate pose and eliminate the accumulated error of the odometers. In this system, the module first analyzes the structure of the current laser frame, searching for key structure points in the current laser frame based on the IPAN algorithm and increasing the influence factors of these structure points, and then searches for the optimal pose of the robot 10 in a local space through a Monte Carlo local iterative convergence process. Each particle pose is evaluated from:

$$
S_m(t) = [\,p_m(1)\;\dots\;p_m(i)\;\dots\;p_m(N_s(t))\,], \qquad
S_b(t) = [\,p_b(1)\;\dots\;p_b(i)\;\dots\;p_b(N_s(t))\,]
$$

$$
p_m(i) = [\,x_m(i)\;\;y_m(i)\,]^T, \qquad
p_b(i) = [\,x_b(i)\;\;y_b(i)\,]^T
$$

where $S_m(t)$ is the particle to be matched, $S_b(t)$ is the current observed data, and $p_m$ and $p_b$ are the coordinates of each laser point in the particle and in the observed data, respectively. By analyzing consecutive matching frames, the optimal key frame and its matching result $M(t)$ in the current period are found. Meanwhile, the optimal matching result acts as a feedback value on the particle matching process of the next frame: the particle distribution range $(X_M, Y_M, A_M)$ and the number of iterations (loopnum) are determined from the history of matching values and preset parameters.
On the other hand, the results are also returned to the odometer module 241 and the map update module 243 for accumulated-error elimination and map information updating, respectively.
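The following Python sketch illustrates the core of such a Monte Carlo local pose search; the map representation, scoring rule and window-shrinking schedule are illustrative assumptions, not the patent's CRSPF-SLAM implementation:

```python
import math
import random

def transform_scan(scan_xy, pose):
    """Transform laser points (robot frame) into the map frame for one particle pose."""
    x, y, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in scan_xy]

def match_score(scan_xy, pose, occupancy, resolution):
    """Fraction of scan points that land on occupied cells of the local map."""
    hits = 0
    for mx, my in transform_scan(scan_xy, pose):
        hits += occupancy.get((int(mx / resolution), int(my / resolution)), 0)
    return hits / max(len(scan_xy), 1)

def refine_pose(scan_xy, init_pose, occupancy, resolution,
                x_range, y_range, a_range, loopnum, n_particles=100):
    """Iteratively shrink the sampling window (X_M, Y_M, A_M) around the best pose."""
    best_pose = init_pose
    best_score = match_score(scan_xy, init_pose, occupancy, resolution)
    for _ in range(loopnum):
        for _ in range(n_particles):
            cand = (best_pose[0] + random.uniform(-x_range, x_range),
                    best_pose[1] + random.uniform(-y_range, y_range),
                    best_pose[2] + random.uniform(-a_range, a_range))
            score = match_score(scan_xy, cand, occupancy, resolution)
            if score > best_score:
                best_pose, best_score = cand, score
        x_range *= 0.5; y_range *= 0.5; a_range *= 0.5  # converge the search window
    return best_pose, best_score
```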
3) Map update module 243
The map update module 243 mainly performs three types of actions: when the key frame matching result is sufficiently good, the current frame scanning information is added into the global map and is simultaneously used to eliminate original error information in the map; while the matching value remains good, the map is updated continuously; when the key frame matching result is not good enough, the global map remains unchanged and only the pose information of the robot 10 is updated.
An exemplary diagram of the update timing of the map is shown in fig. 8.
Line ① represents the value of the matching result M(t) at each time; line ② indicates that the map is updated when its value is 1 and not updated when it is 0.
A map update and error history elimination diagram is shown in fig. 9. The gray area range in the map is the scanning range of the current key frame, and the original map information in the area is eliminated as an error history.
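A condensed sketch of the three update actions of the map update module 243; the map interface (clear_region/insert_scan) and the threshold are illustrative assumptions:

```python
def update_map(global_map, robot, frame, match_score, good_threshold=0.8):
    """One cycle of the map update module's three actions.

    global_map:  occupancy map assumed to support clear_region()/insert_scan()
    frame:       current key-frame scan with an associated pose and footprint
    match_score: the key-frame matching result M(t), normalized to [0, 1]
    """
    if match_score >= good_threshold:
        # Good match: erase the error history inside the scan footprint,
        # then add the current frame's scan into the global map. While
        # matches stay good, this branch keeps the map continuously updated.
        global_map.clear_region(frame.footprint())
        global_map.insert_scan(frame)
    # Poor match: the global map is left unchanged.
    robot.pose = frame.pose  # the robot pose is updated in every case
```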
The three modules respectively work independently in different clock cycles, real-time positioning and map building functions of the robot 10 are realized through real-time data sharing and feedback, and a schematic diagram of a building result is shown in fig. 10.
Further, for reflective/transparent/translucent objects, the robot 10 navigation positioning system detects them using a plurality of ultrasonic sensors 401 on the moving mechanism platform around the robot 10. Since the sensitive area of the ultrasonic sensor 401 is a sector, when the distance returned by the ultrasonic wave has a large deviation from the data of the laser radar 501 and the RGB-D camera 402, the system checks the data, and if the data is unstable, that is, the measurement variance is large, it is determined that there is a reflective/transparent/translucent object in the sector, and the system marks the sector in the map.
To ensure that the observed data obtained by each sensor is observed for the same batch of targets at the same time, the observed data needs to be synchronized. The synchronization of data mainly comprises two aspects: spatial registration and temporal synchronization. In space, the coordinates of the observation data are all based on the local coordinate system where each sensor is located, and for convenience of data fusion, the position and angle information of each sensor need to be unified into a global coordinate system. The invention can obtain the relative pose of each sensor by means of the static data acquired by each sensor and the registration of the static data, and carry out off-line calibration on the relative position of each sensor. In order to ensure the time synchronization of the data, when the control system acquires the information of each sensor, the local time is used as the timestamp of the data of the sensor, so that the clock synchronization is realized.
The SLAM algorithm based on multi-sensor fusion is quite robust, but the 2D point cloud it constructs is weak for scene recognition. If the scene recognition algorithm is not robust enough, the robot 10 must be started stationary at a certain fixed position on the map each time, otherwise it may not be able to position itself correctly. A robust scene recognition algorithm enables the robot 10 to obtain a relatively accurate pose, so that it can be started from any position.
The navigation and positioning system of the robot 10 estimates the initial position and the initial posture of the robot 10 by using a scene recognition method based on vision. When the map is constructed, the robot 10 records the image acquired by the image sensor and the current position and posture every time it travels a certain distance, and stores the recorded image and current position and posture in an image database. After the robot 10 is started each time, a series of historical images stored during map building and information such as the pose of the robot 10 during image shooting are loaded, a scene similar to the current image is searched by comparing the current image with the image in the image database, a perspective n-point positioning problem (PnP) is solved after the similar scene is found, and a pose transformation relation between the current image and the historical image in the image database is obtained. And then, the algorithm further adjusts the position estimation of scene recognition by using the current laser radar point cloud through a Monte Carlo positioning algorithm to obtain a high-precision positioning effect and realize the function of starting at any time and any place.
The SLAM algorithm based on multi-sensor fusion mainly identifies scene images through a bag-of-words model, and the bag-of-words model is mainly divided into an offline training part and an online using part in the application process.
The offline training part trains on a large amount of picture data using a clustering algorithm. First, FAST feature points of all pictures in the data set are extracted; then image blocks within a 64 x 64 pixel range around each feature point are extracted and passed through the convolutional neural network VGG-Net with its fully connected layers removed to obtain a feature map; spatial pyramid pooling of the feature map yields the descriptor corresponding to each feature point; finally, the descriptors of all feature points in the images are clustered, and the number n of clusters is the number of words of the visual dictionary. A schematic diagram of the VGG-Net network structure is shown in FIG. 11.
The online part uses the visual words obtained by offline training for loop detection: for each newly acquired image, feature points are extracted, the descriptor of each feature point is calculated by the same method as in training to find its nearest-neighbor visual word, and the visual words of all feature points in the image are counted into an n-dimensional feature vector; if the feature vectors of two images are close, the two images can be considered similar. The advantages of the visual bag-of-words model are that clustering reduces the dimensionality of the data, increases storage efficiency, and increases the robustness of the system to changes in brightness or viewing angle.
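As a sketch of the online stage (with a precomputed vocabulary standing in for the trained visual dictionary; names and the similarity threshold are illustrative):

```python
import numpy as np

def bow_vector(descriptors, vocabulary):
    """Quantize feature descriptors against a visual vocabulary.

    descriptors: (m, d) array of per-feature descriptors from one image
    vocabulary:  (n, d) array of n cluster centers (visual words)
    Returns an n-dimensional, L2-normalized word histogram.
    """
    desc = np.asarray(descriptors, dtype=float)
    vocab = np.asarray(vocabulary, dtype=float)
    hist = np.zeros(len(vocab))
    for d in desc:
        hist[np.argmin(np.linalg.norm(vocab - d, axis=1))] += 1  # nearest word
    n = np.linalg.norm(hist)
    return hist / n if n else hist

def similar(desc_a, desc_b, vocabulary, threshold=0.8):
    """Two images are considered similar if their word histograms are close."""
    va = bow_vector(desc_a, vocabulary)
    vb = bow_vector(desc_b, vocabulary)
    return float(va @ vb) >= threshold  # cosine similarity of normalized vectors
```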
Path planning of the mobile robot 10 means that, while moving to a target point and avoiding obstacles in the environment, the robot 10 plans an optimal or sub-optimal path from the starting point to the target point according to certain indexes such as distance, time and energy consumption. Path planning of the mobile robot 10 is divided into global path planning and local path planning. Global path planning finds an optimal path from the departure point to the target point under the condition that the environment information is completely known; its real-time computing requirements are not high, but it depends heavily on the environment map. However, the working environment of the mobile robot 10 is generally unstructured and changes dynamically, so a good planning effect cannot be obtained by relying on global path planning alone. Local path planning does not need the entire environment map; it searches and plans paths in the local environment map according to information fed back by the sensors, which requires the robot 10 to have good information processing capability, high real-time computing performance, and good robustness.
For a mobile robot system, a reasonable solution is to perform global planning once in a global map according to map information which is not changed for a long time, and then perform continuous local path planning along a planned global path in a robot local map, so as to solve the influence of environmental changes and dynamic objects on the robot.
In global planning, in order to quickly search out a path to the task point, a two-phase hybrid-state A* algorithm based on fuzzy inference is adopted; its architecture diagram is shown in fig. 12.
Two-phase means the algorithm is divided into two stages. The first-stage search plans a path on a low-resolution map with the traditional A* algorithm to obtain the path length and the number of expanded nodes. These two quantities are then used as inputs of fuzzy inference to obtain a heuristic weight, which serves as input of the second-stage search that plans the path on the high-resolution map.
First, the conventional A* path search algorithm based on a grid map is introduced, as shown in fig. 13.
The A* algorithm adopts the idea of heuristic search: each candidate position in the state space is evaluated, the best position is taken, and the search continues from there until the target position is reached. For the established low-resolution grid map, an OPEN table and a CLOSE table of path nodes are created; the starting point is placed into the CLOSE table, points within a certain range around it are placed into the OPEN table, and the corresponding path scores are calculated in turn. The path cost function is f(n) = G(n) + H(n), where G(n) represents the actual distance from the starting point to any vertex n (the G cost) and H(n) represents the estimated distance from vertex n to the target vertex (the H cost). Different distance functions for H(n) yield different planning results. The node with the minimum cost is repeatedly moved into the CLOSE table while the costs of its neighboring nodes are calculated and the node connections are adjusted, finally yielding the minimum-cost path from the end point back to the starting point.
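A minimal, runnable sketch of the OPEN/CLOSE-table search just described (the grid representation and the heuristic weight parameter are illustrative):

```python
import heapq

def astar(grid, start, goal, w=1.0):
    """Classic A* on a 2D occupancy grid.

    grid:  2D list; grid[r][c] == 1 means the cell is an obstacle
    start, goal: (row, col) tuples
    w:     heuristic weight (w > 1 trades optimality for speed)
    Returns the path as a list of cells, or None if no path exists.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # weighted Euclidean heuristic (the H cost)
        return w * ((cell[0] - goal[0])**2 + (cell[1] - goal[1])**2) ** 0.5

    open_heap = [(h(start), 0.0, start)]  # (f, g, cell): the OPEN table
    parent, g_cost, closed = {start: None}, {start: 0.0}, set()
    moves = [(-1,0), (1,0), (0,-1), (0,1), (-1,-1), (-1,1), (1,-1), (1,1)]

    while open_heap:
        f, g, cur = heapq.heappop(open_heap)
        if cur in closed:
            continue
        closed.add(cur)                       # the CLOSE table
        if cur == goal:                       # reconstruct path goal -> start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in moves:
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] == 1 or nxt in closed:
                continue
            ng = g + (dr * dr + dc * dc) ** 0.5  # actual step cost (the G cost)
            if ng < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = ng
                parent[nxt] = cur
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None
```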
The two-phase hybrid-state A* algorithm based on fuzzy inference is an improvement of the traditional A* algorithm. The conventional A* planning algorithm rasterizes the map and then uses the midpoints of the grid cells as search nodes, which makes the planned path discontinuous, as shown in fig. 14.
The fuzzy-inference-based two-phase hybrid-state A* of the present invention associates each grid cell with a continuous state calculated by forward simulation using the robot kinematics model, so the path it plans is smooth and conforms to the kinematic constraints of the robot, as shown in fig. 15.
The single-cycle flow chart of the fuzzy-inference-based two-phase hybrid-state A* algorithm adopted by the method of this system is shown in fig. 16.
Like traditional A*, the algorithm first associates the robot's current continuous state with the starting grid cell, then calculates the sub-states of the current continuous state through forward simulation and determines the grid cells into which these sub-states fall; a cell is added to the OPEN list if it has never appeared in it, while if the cell is already present in the OPEN list, its current G cost is calculated and, if it is smaller than the previous G cost, the cost and parent node of the cell are updated and the OPEN list is reordered. Since forward simulation can hardly reach the target state exactly, and in order to further improve the real-time performance and path smoothness of the hybrid-state A* search, an analytic expansion step 221 is also performed during node expansion: obstacles in the environment are ignored, a Reeds-Shepp path from the current node to the target node is calculated, and if the path is collision-free its points are added directly to the OPEN list.
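The node-expansion step can be sketched as follows (bicycle-model motion primitives, bin sizes and names are illustrative assumptions; collision checking and the heuristic term are omitted for brevity):

```python
import heapq
import math

def motion_primitives(state, speed=0.5, dt=1.0,
                      steer_angles=(-0.3, 0.0, 0.3), wheel_base=0.5):
    """Forward-simulate child states from a continuous state with a bicycle model."""
    x, y, theta = state
    children = []
    for steer in steer_angles:
        nx = x + speed * dt * math.cos(theta)
        ny = y + speed * dt * math.sin(theta)
        ntheta = theta + speed * dt * math.tan(steer) / wheel_base
        children.append(((nx, ny, ntheta), speed * dt))  # (child state, step cost)
    return children

def cell_of(state, resolution=0.5, angle_bins=72):
    """Associate a continuous state with an (x, y, heading) grid cell."""
    return (int(state[0] // resolution), int(state[1] // resolution),
            int((state[2] % (2 * math.pi)) / (2 * math.pi) * angle_bins))

def expand(open_heap, g_cost, parent, state, g):
    """Expand one node: file each forward-simulated sub-state under its cell."""
    for child, step in motion_primitives(state):
        cell, ng = cell_of(child), g + step
        if ng < g_cost.get(cell, float("inf")):
            # Cell never seen, or reached with a smaller G cost: update the
            # cost and parent and (re)insert into the OPEN list; stale heap
            # entries are simply skipped when popped.
            g_cost[cell] = ng
            parent[cell] = cell_of(state)
            heapq.heappush(open_heap, (ng, child))  # heuristic term omitted here
```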
The post-processing optimization step 222 includes three steps: the first smooths the planned path using the conjugate gradient method, the second interpolates to increase the density of the point sequence in the path, and the third uses the conjugate gradient method again to move the path away from obstacles.
The first optimization step smooths the planned path using the conjugate gradient method; the objective function to be optimized is

$$
f = \omega_\kappa \sum_{i=1}^{N-1} \sigma_\kappa\!\left( \frac{\Delta\phi_i}{\left|\Delta x_i\right|} - \kappa_{max} \right)
  + \omega_s \sum_{i=1}^{N-1} \left( \Delta x_{i+1} - \Delta x_i \right)^2
$$

where $\kappa_{max}$ is the maximum allowable curvature given by the minimum turning radius, $\sigma_\kappa$ is the cost function, $\omega_\kappa$ and $\omega_s$ are weighting coefficients, $N$ is the number of waypoints in the path, and $\Delta x_i$ and $\Delta\phi_i$ are the position vector and the heading change at waypoint $i$, computed as

$$
\Delta x_i = x_i - x_{i-1}, \qquad
\Delta\phi_i = \left| \tan^{-1}\frac{\Delta y_{i+1}}{\Delta x_{i+1}} - \tan^{-1}\frac{\Delta y_i}{\Delta x_i} \right|
$$

Here $x$ and $y$ are the horizontal and vertical coordinates of the waypoints. To improve the real-time performance of hybrid-state A* search, the grid map used for searching generally has low resolution, so the waypoint spacing in the path is large, which is unfavorable for tracking control by the low-level program; therefore the second optimization step, interpolation, is performed. The paths before and after interpolation are compared in fig. 17.
Both figures show the same chassis path: the right figure is the path before interpolation and the left figure the path after. The path consists of equally spaced points; in the right figure the spacing between waypoints is large, so although it looks smooth, it is actually a concatenation of short straight segments, and the robot's driving direction would change abruptly along it. By interpolation, several intermediate points are inserted between the original waypoints, reducing the point spacing in the new path and making the path smoother, with the result shown in the left figure.
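The interpolation itself reduces to inserting intermediate points between consecutive waypoints, as in this sketch (the spacing parameter is illustrative):

```python
def densify(path, max_spacing):
    """Insert intermediate points so that consecutive waypoints are at most
    max_spacing apart (linear interpolation between original points)."""
    out = [path[0]]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        dist = ((x1 - x0)**2 + (y1 - y0)**2) ** 0.5
        steps = max(int(dist / max_spacing), 1)
        for k in range(1, steps + 1):
            t = k / steps
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out
```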
To make the planned path keep further away from obstacles while remaining collision-free, the algorithm performs the third optimization step, optimizing the path with the conjugate gradient method again; the objective function to be optimized is

$$
f_o = \omega_o \sum_{i=1}^{N} \sigma_o\!\left( \left| x_i - o_i \right| - d_{max} \right)
$$

where $\sigma_o$ is the cost function, $\omega_o$ is a weighting coefficient, $o_i$ is the coordinate of the obstacle nearest to waypoint $x_i$, and $d_{max}$ is the maximum distance of the robot from the obstacle beyond which the obstacle no longer contributes to the cost.
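A compact sketch combining the smoothness and obstacle terms above into one objective minimized with SciPy's conjugate-gradient method (the curvature term is omitted and the weights are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def smooth_path(path, obstacles, w_s=1.0, w_o=0.2, d_max=1.0):
    """Smooth a path and push it away from obstacles with conjugate gradients.

    path:      (N, 2) array of waypoints; both endpoints are held fixed
    obstacles: (M, 2) array of obstacle coordinates
    """
    path = np.asarray(path, dtype=float)
    obstacles = np.asarray(obstacles, dtype=float)
    inner0 = path[1:-1].ravel()  # optimize the interior waypoints only

    def cost(flat):
        pts = np.vstack([path[:1], flat.reshape(-1, 2), path[-1:]])
        dx = np.diff(pts, axis=0)
        smooth = np.sum((dx[1:] - dx[:-1]) ** 2)            # sum (dx_{i+1} - dx_i)^2
        d = np.linalg.norm(pts[:, None, :] - obstacles[None, :, :], axis=2)
        near = np.minimum(d.min(axis=1) - d_max, 0.0)        # penalty inside d_max only
        return w_s * smooth + w_o * np.sum(near ** 2)

    res = minimize(cost, inner0, method="CG")                # conjugate gradient method
    return np.vstack([path[:1], res.x.reshape(-1, 2), path[-1:]])
```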
In local planning, the system uses a hybrid-state A* path planning algorithm whose fuzzy inference is based on the distance from the target point. The inputs of local path planning include the established grid map, the current state of the robot (coordinates, attitude, speed, etc.) and the target state of the robot; the outputs include the planned path, i.e., a sequence of robot states, and the control quantity corresponding to each state for tracking control.
The local path planning method uses the result of the previous global planning stage to obtain the distance between the robot's current state and the target state, in preparation for inferring the heuristic weight; a hybrid-state A* search is then carried out on the local high-resolution map. The search comprehensively considers the coordinates, attitude angle and speed of the robot and combines the robot's kinematic model, so a safe, efficient, collision-free and smooth path can be planned. In hybrid-state A* search, reasonable selection of the heuristic weight can improve the speed of the search algorithm: the number of expanded nodes and the path length output by the traditional A* search in the global stage are fuzzified, fuzzy inference is performed, and the result is defuzzified, so that the heuristic weight can be selected online.
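The following sketch shows the shape of such an online heuristic-weight selection; the membership functions, rule base and scales are illustrative stand-ins, since the patent does not specify them:

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def heuristic_weight(path_length, expanded_nodes,
                     len_scale=50.0, node_scale=500.0):
    """Map first-stage results to a heuristic weight by fuzzy inference."""
    # Fuzzification: normalize the two inputs and grade environment complexity.
    l, n = path_length / len_scale, expanded_nodes / node_scale
    low  = max(tri(l, -0.5, 0.0, 0.5), tri(n, -0.5, 0.0, 0.5))
    mid  = max(tri(l,  0.0, 0.5, 1.0), tri(n,  0.0, 0.5, 1.0))
    high = max(tri(l,  0.5, 1.0, 1.5), tri(n,  0.5, 1.0, 1.5))
    total = low + mid + high
    if total == 0.0:
        return 1.0  # fallback: plain A* weighting
    # Rule base: simple environment -> greedy (large weight) for speed;
    # complex environment -> weight near 1 for near-optimal search.
    # Defuzzification by centroid over the rule outputs (2.0, 1.5, 1.0).
    return (2.0 * low + 1.5 * mid + 1.0 * high) / total
```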
In addition, the two-phase hybrid-state A* search takes the size of the robot into account during collision detection, so the planned path cannot collide with an obstacle, and the search target state is placed on the global path beyond the obstacle so that a correct path can be returned. As mentioned above, the two-phase hybrid-state A* search considers the kinematic model of the robot and the planned path conforms to the kinematic constraints, ensuring the smoothness of robot operation while driving.
Fig. 18 shows the result of robot path planning based on the two-phase hybrid-state A* in an actual case. In the figure, the circle R represents the current position of the robot, with the short line indicating its heading; the square and short line T represent the target pose of the robot; the solid black line L1 is the global path planning result, and the black curve L2 is the local path planning result.
In diagram (a), since there is no moving obstacle in the environment, the local path L2 of the robot quickly converges to the vicinity of the global path L1. In diagram (b), since there is a dynamic obstacle D (black laser point cloud) on the global path L1, the robot bypasses the obstacle D during the planning of the local path L2 and then travels along the global path L1.
Because the two-phase hybrid-state A* planning method outputs a series of optimal robot states including pose and speed, the optimal control parameters of the robot can be obtained by inverting the robot's kinematic model. However, there are many unknown factors in the robot's workplace, such as the roughness and smoothness of the ground, so the robot needs a closed-loop controller to accurately track the planned path and states.
In engineering practice, the most widely used closed-loop controller is the proportional-integral-derivative controller, abbreviated PID controller. Introduced nearly 70 years ago, the PID controller has become one of the main technologies of industrial control owing to its simple structure, good stability, reliable operation and convenient adjustment. A PID controller is a linear controller that forms a control deviation from a set value and the actual output value, and linearly combines the proportional, integral and differential terms of the deviation to form the control quantity.
The two-phase hybrid-state method adopts a fuzzy PID controller. Before the controller is used online, the fuzzy inference relations between the three parameters (proportional, integral, differential) and the error and error change rate are obtained through experiments or experience and compiled into a knowledge base. Online, the controller first fuzzifies the error and error change rate and feeds them into the knowledge base to obtain a fuzzy inference output; this result is defuzzified to obtain the proportional, integral and differential parameters, realizing online parameter adjustment that meets the demands of different errors and error change rates and improving the dynamic and steady-state performance of the controlled process. The inputs of the fuzzy controller are the robot's heading and position errors and their change rates; the outputs are the set values of the speeds of the robot's two wheels. To ensure stable operation, the path-tracking module also smooths and filters the control parameters so as to avoid the dangers caused by violent acceleration changes due to sudden stops and starts of the robot.
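A minimal sketch of the scheme: a discrete PID whose gains are supplied each cycle by a stand-in for the fuzzy knowledge base (the gain mapping below is an illustrative placeholder, not the patent's rule base):

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def fuzzy_gains(error, error_rate):
    """Illustrative stand-in for the knowledge-base lookup: in the patent's
    scheme, (error, error_rate) are fuzzified, the rule base is consulted,
    and defuzzification yields (Kp, Ki, Kd); here a crude interpolation
    plays that role."""
    scale = min(abs(error), 1.0)
    return 1.0 + scale, 0.1 * (1.0 - scale), 0.05 + 0.05 * scale
```

Each control cycle would call fuzzy_gains(error, error_rate), write the result into the controller's kp/ki/kd, call update() for each wheel-speed set value, and then smooth the outputs as described above.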
When the robot needs to be deployed, the robot is first remotely controlled from the host computer to scan its working region and construct an environment map; the host computer is then used to edit and optimize the map, for example setting virtual walls to limit the robot's workspace; finally, several task points are set according to business requirements.
When the robot is in operation, a client obtains service through human-machine interaction; the top-level business system issues instructions to the navigation system according to the client's requirements, and the planning decision module of the navigation system performs path planning in the constructed map according to the task points and the robot's own positioning. Meanwhile, the simultaneous localization and mapping module calculates the current pose from the information collected by the various sensors and feeds the pose and surrounding environment information back to the planning decision module. The planning decision module integrates the task points, the current robot state and the surrounding environment information, plans a path, generates motion control quantities and sends them to the robot motion system. In addition, the planning decision module monitors the task execution state in real time, feeds the state back to the top-level business application, and informs the user through the human-machine interaction function.
The present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof, and it should be understood that various changes and modifications can be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (11)
1. A robot navigation positioning method is used for a robot to carry out map construction, positioning and path planning, and is characterized by comprising the following steps:
s100, positioning: the robot detects surrounding environment information through a plurality of sensors, then uses a SLAM algorithm based on adaptive particle filtering, matched with different odometers, to complete real-time map construction and positioning; and
s200, path planning: adopting a path planning algorithm based on a two-phase hybrid-state A*, comprising the following steps:
s210, after path planning is carried out on the rasterized map to obtain the path length and the number of expanded nodes, a higher-resolution rasterized map is obtained through analytic expansion; and
s220, taking the obtained path length and number of expanded nodes as inputs of fuzzy inference, obtaining a heuristic weight through the fuzzy inference, taking the heuristic weight as input of the second-stage search, and planning the path on the higher-resolution rasterized map.
2. The robot navigation positioning method according to claim 1, wherein the step S100 comprises the steps of:
s110, establishing a map by using data fed back by a sensor;
s120, performing dead reckoning by using an odometer, acquiring dead reckoning information by using a particle filter, and taking the dead reckoning information as initial estimation of the pose of the robot, and then matching current frame data of a sensor with a local map to eliminate accumulated errors of the odometer;
S130, when the matching result of the current frame reaches a matching threshold, adding the current frame information to the map and using it to eliminate original erroneous information in the map; while the matching result continues to reach the threshold, continuously updating the map; and when the matching result does not reach the threshold, keeping the map unchanged and updating only the pose information of the robot.
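A toy sketch of the update rule in steps S120-S130, assuming an occupancy-grid map and a simple hit-ratio match score; the threshold value and the scoring function are assumptions, not part of the claim.

```python
# Map update gated by a scan-to-map match threshold (assumed values).
import numpy as np

MATCH_THRESHOLD = 0.8   # assumed; the disclosure leaves it unspecified

def match_score(scan_cells, grid):
    """Fraction of scan endpoints landing on already-occupied cells."""
    hits = sum(int(grid[r, c] > 0) for r, c in scan_cells)
    return hits / max(len(scan_cells), 1)

def integrate_frame(grid, scan_cells, matched_pose):
    """S130: update the map only when the scan-to-map match is good enough."""
    if match_score(scan_cells, grid) >= MATCH_THRESHOLD:
        for r, c in scan_cells:
            grid[r, c] = 1        # current frame replaces stale cells
    return grid, matched_pose     # pose is updated in either case

grid = np.zeros((100, 100), dtype=np.int8)
grid[50, 40:60] = 1                            # existing wall
scan = [(50, c) for c in range(42, 58)]        # new frame re-observes it
grid, pose = integrate_frame(grid, scan, (1.2, 0.7, 0.05))
```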
3. The method as claimed in claim 1 or 2, wherein in step S100, the adaptive-particle-filter SLAM algorithm uses a wheeled odometer, a visual odometer, or an inertial navigation device to perform dead reckoning.
4. The robot navigation positioning method according to claim 3, wherein the dead reckoning process using the wheeled odometer is represented as:

$$\begin{aligned} \hat{x}(t_k) &= \hat{x}(t_{k-1}) + \frac{\Delta s_r + \Delta s_l}{2}\cos\!\left(\theta(t_{k-1}) + \frac{\Delta s_r - \Delta s_l}{2B}\right) \\ \hat{y}(t_k) &= \hat{y}(t_{k-1}) + \frac{\Delta s_r + \Delta s_l}{2}\sin\!\left(\theta(t_{k-1}) + \frac{\Delta s_r - \Delta s_l}{2B}\right) \\ \theta(t_k) &= \theta(t_{k-1}) + \frac{\Delta s_r - \Delta s_l}{B} \end{aligned}$$

wherein $(\hat{x}(t_k), \hat{y}(t_k))$ is the estimate of the position of the robot at time $t_k$, $\theta(t_k)$ is the turning angle of the robot at time $t_k$, $B$ is the distance between the left and right wheels, and $\Delta s_l$, $\Delta s_r$ are respectively the distances covered by the left and right wheels during the period $t_{k-1}$ to $t_k$.
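Under the same variable meanings (B = wheel separation, ds_l/ds_r = per-interval wheel travel), a minimal dead-reckoning step might look as follows; the midpoint discretization is the usual choice for a differential drive but is an assumption here.

```python
# One differential-drive odometry update (illustrative sketch).
import math

def dead_reckon(x, y, theta, ds_l, ds_r, B):
    """Advance pose (x, y, theta) by one odometry interval."""
    ds = (ds_r + ds_l) / 2.0          # translation of the wheel-axle midpoint
    dtheta = (ds_r - ds_l) / B        # rotation from the wheel-travel difference
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

pose = (0.0, 0.0, 0.0)
pose = dead_reckon(*pose, ds_l=0.10, ds_r=0.12, B=0.35)
```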
5. The robot navigation positioning method of claim 3, wherein the dead reckoning using a visual odometer comprises: firstly, collecting a new frame of image, extracting ORB feature points in the image, and calculating the BRIEF descriptors corresponding to the feature points; then matching the feature points of the current image with those of the previous frame; and finally, solving the 3D-2D pose, namely obtaining the rotation and translation matrices between the two frames by minimizing the reprojection error, and accumulating them to obtain the current pose of the robot.
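A sketch of this pipeline using OpenCV, whose ORB descriptors are BRIEF-based, with a RANSAC PnP solve that minimizes reprojection error. The intrinsics K, the availability of per-keypoint 3D points for the previous frame (e.g. from an RGB-D depth image), and the absence of robustness handling are all assumptions of this sketch.

```python
# One visual-odometry step: ORB matching + 3D-2D PnP (illustrative).
import cv2
import numpy as np

K = np.array([[525.0, 0.0, 319.5],      # assumed pinhole intrinsics
              [0.0, 525.0, 239.5],
              [0.0,   0.0,   1.0]])

def vo_step(prev_img, prev_pts3d, cur_img):
    """Estimate rotation R and translation t of the current frame
    relative to the previous one; prev_pts3d[i] is the 3D point of
    the i-th previous-frame keypoint (assumed from RGB-D depth)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_img, None)
    kp2, des2 = orb.detectAndCompute(cur_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    obj = np.float32([prev_pts3d[m.queryIdx] for m in matches])   # 3D, prev frame
    img = np.float32([kp2[m.trainIdx].pt for m in matches])       # 2D, cur frame
    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj, img, K, None)     # needs >= 4 matches
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec   # accumulate over frames to get the current pose
```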
6. The method according to claim 3, wherein the dead reckoning using the inertial navigation device comprises: obtaining the acceleration and the angular velocity of the robot through a gyroscope and an accelerometer, and integrating them to calculate the current pose of the robot.
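An idealized 2D sketch of this double integration; real inertial navigation must also remove gravity and estimate sensor biases, which is omitted here as an assumption of the sketch.

```python
# Toy planar IMU dead reckoning: gyro -> heading, accel -> velocity -> position.
import math

def imu_integrate(state, accel_body, gyro_z, dt):
    x, y, vx, vy, theta = state
    theta += gyro_z * dt                 # heading from angular rate
    ax = accel_body * math.cos(theta)    # rotate body acceleration to world
    ay = accel_body * math.sin(theta)
    vx += ax * dt
    vy += ay * dt                        # first integral: velocity
    x += vx * dt
    y += vy * dt                         # second integral: position
    return (x, y, vx, vy, theta)

state = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):                     # 1 s of data at 100 Hz
    state = imu_integrate(state, accel_body=0.5, gyro_z=0.1, dt=0.01)
```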
7. The method according to claim 2, wherein the step S130 further comprises: the sensors are arranged as at least one ultrasonic sensor, at least one laser radar, and at least one RGB-D camera; when the deviation between the data detected by the ultrasonic sensor and the data detected by the laser radar and the RGB-D camera in an area exceeds a deviation threshold, the data are checked repeatedly; and if the data are unstable, a reflective, transparent, or semitransparent object is considered to be in the area and is marked in the map.
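One plausible reading of this cross-check, sketched with assumed threshold values and an assumed stability test (standard deviation over repeated ultrasonic reads):

```python
# Cross-checking ultrasonic vs. lidar/RGB-D range in one map region.
import statistics

DEVIATION_THRESHOLD = 0.30   # metres; assumed value
STABLE_STDDEV = 0.05         # assumed stability criterion

def classify_cell(ultrasonic_reads, lidar_range):
    dev = abs(statistics.mean(ultrasonic_reads) - lidar_range)
    if dev <= DEVIATION_THRESHOLD:
        return "consistent"
    if statistics.stdev(ultrasonic_reads) > STABLE_STDDEV:
        return "reflective/transparent"   # mark this region in the map
    return "recheck"                      # stable disagreement: keep sampling

print(classify_cell([1.9, 2.4, 1.6, 2.8], lidar_range=3.5))
```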
8. The method according to claim 1, wherein the step S220 further comprises an analytic expansion step S221: ignoring obstacles in the environment, calculating a path from the current node to the target node, and if the path is collision-free, directly adding the nodes of the path to the node list.
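A minimal sketch of the analytic expansion, assuming a straight-line connection (hybrid-A* implementations often use Reeds-Shepp or Dubins curves here instead) and a point-sampling collision check; the step size and grid encoding are assumptions.

```python
# Analytic expansion: obstacle-ignoring shot to the goal, kept only if clear.
import numpy as np

def analytic_expansion(cur, goal, grid, step=0.5):
    """Try a direct connection; return its nodes if collision-free, else None."""
    n = int(np.hypot(goal[0] - cur[0], goal[1] - cur[1]) / step) + 2
    pts = np.linspace(cur, goal, n)                  # straight-line shot
    if all(grid[int(r), int(c)] == 0 for r, c in pts):
        return [tuple(p) for p in pts]               # nodes added to the list
    return None

grid = np.zeros((20, 20), dtype=np.int8)
grid[10, 2:18] = 1                                   # a wall blocks the shot
print(analytic_expansion((2.0, 2.0), (18.0, 18.0), grid))   # -> None
```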
9. The method according to claim 8, further comprising a process optimization step S222 after the analytic expansion step: smoothing the planned path by using the conjugate gradient method; interpolating to increase the density of the point sequence in the path; and using the conjugate gradient method again to move the path away from obstacles.
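A sketch of this optimization using SciPy's conjugate-gradient minimizer over the interior waypoints; the cost weights, clearance distance, and interpolation factor are assumptions, and a second `smooth_path` call would play the role of the repeated conjugate-gradient pass.

```python
# CG smoothing (curvature cost + obstacle penalty) followed by densification.
import numpy as np
from scipy.optimize import minimize

def smooth_path(path, obstacles, w_smooth=1.0, w_obs=0.5, clearance=1.0):
    """Smooth interior waypoints with the conjugate gradient method."""
    p = np.asarray(path, dtype=float)
    start, end = p[0], p[-1]                  # endpoints stay pinned

    def cost(flat):
        q = np.vstack([start, flat.reshape(-1, 2), end])
        bend = np.sum((q[2:] - 2 * q[1:-1] + q[:-2]) ** 2)   # curvature term
        d = np.linalg.norm(q[:, None, :] - obstacles[None], axis=2)
        obs = np.sum(np.maximum(0.0, clearance - d) ** 2)    # push off obstacles
        return w_smooth * bend + w_obs * obs

    res = minimize(cost, p[1:-1].ravel(), method="CG")       # conjugate gradient
    return np.vstack([start, res.x.reshape(-1, 2), end])

def densify(p, factor=4):
    """Interpolate to increase the density of the point sequence."""
    t = np.linspace(0, len(p) - 1, factor * (len(p) - 1) + 1)
    return np.column_stack([np.interp(t, np.arange(len(p)), p[:, i]) for i in (0, 1)])

path = [(0, 0), (1, 2), (2, 0), (3, 2), (4, 0)]
obstacles = np.array([[2.0, 1.0]])
smoothed = densify(smooth_path(path, obstacles))
```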
10. A robot navigation positioning system, connected to a robot, for the robot to perform mapping, positioning and path planning, comprising:
a sensor unit connected to the robot;
a positioning unit comprising an odometer module connected with the sensor unit, wherein the robot detects surrounding environment information through the sensor unit and then completes real-time map construction and positioning based on the adaptive-particle-filter SLAM algorithm in cooperation with the odometer module; and
a path planning unit connected with the sensor unit and the positioning unit, configured to carry out path planning on the rasterized map to obtain a path length and a number of expanded nodes, then obtain a higher-resolution rasterized map through analysis and expansion, take the obtained path length and number of expanded nodes as inputs of fuzzy inference to obtain a heuristic weight, take the heuristic weight as an input of the second-stage search, and plan the path on the higher-resolution rasterized map.
11. The robotic navigational positioning system of claim 10, wherein the odometer module includes a wheeled odometer, a visual odometer, and an inertial navigation device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711420969.4A CN109959377A (en) | 2017-12-25 | 2017-12-25 | A kind of robot navigation's positioning system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109959377A true CN109959377A (en) | 2019-07-02 |
Family
ID=67021025
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711420969.4A Withdrawn CN109959377A (en) | 2017-12-25 | 2017-12-25 | A kind of robot navigation's positioning system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109959377A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110125323A1 (en) * | 2009-11-06 | 2011-05-26 | Evolution Robotics, Inc. | Localization by learning of wave-signal distributions |
CN103644903A (en) * | 2013-09-17 | 2014-03-19 | 北京工业大学 | Simultaneous localization and mapping method based on distributed edge unscented particle filter |
CN104236551A (en) * | 2014-09-28 | 2014-12-24 | 北京信息科技大学 | Laser range finder-based map establishing method of snake-like robot |
CN106446815A (en) * | 2016-09-14 | 2017-02-22 | 浙江大学 | Simultaneous positioning and map building method |
CN107092264A (en) * | 2017-06-21 | 2017-08-25 | 北京理工大学 | Towards the service robot autonomous navigation and automatic recharging method of bank's hall environment |
Cited By (61)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110568447A (en) * | 2019-07-29 | 2019-12-13 | 广东星舆科技有限公司 | Visual positioning method, device and computer readable medium |
CN110260867A (en) * | 2019-07-29 | 2019-09-20 | 浙江大华技术股份有限公司 | Method, equipment and the device that pose is determining in a kind of robot navigation, corrects |
CN112336267A (en) * | 2019-08-07 | 2021-02-09 | 杭州萤石软件有限公司 | Cleaning robot and control method thereof |
CN110531766A (en) * | 2019-08-27 | 2019-12-03 | 熵智科技(深圳)有限公司 | Based on the known continuous laser SLAM composition localization method for occupying grating map |
CN110531766B (en) * | 2019-08-27 | 2022-06-28 | 熵智科技(深圳)有限公司 | Continuous laser SLAM (Simultaneous laser mapping) composition positioning method based on known occupied grid map |
CN110632919A (en) * | 2019-08-28 | 2019-12-31 | 广东工业大学 | Autonomous positioning navigation method based on crawler-type rescue robot |
CN110648353A (en) * | 2019-08-30 | 2020-01-03 | 北京影谱科技股份有限公司 | Monocular sensor-based robot indoor positioning method and device |
CN110823225A (en) * | 2019-10-29 | 2020-02-21 | 北京影谱科技股份有限公司 | Positioning method and device under indoor dynamic situation |
CN110763225A (en) * | 2019-11-13 | 2020-02-07 | 内蒙古工业大学 | Trolley path navigation method and system and transport vehicle system |
CN111142514A (en) * | 2019-12-11 | 2020-05-12 | 深圳市优必选科技股份有限公司 | Robot and obstacle avoidance method and device thereof |
CN111142514B (en) * | 2019-12-11 | 2024-02-13 | 深圳市优必选科技股份有限公司 | Robot and obstacle avoidance method and device thereof |
CN111012251A (en) * | 2019-12-17 | 2020-04-17 | 哈工大机器人(合肥)国际创新研究院 | Planning method and device for full-coverage path of cleaning robot |
CN113050140A (en) * | 2019-12-27 | 2021-06-29 | 中移智行网络科技有限公司 | Positioning method, positioning device, storage medium and server |
CN111060108A (en) * | 2019-12-31 | 2020-04-24 | 江苏徐工工程机械研究院有限公司 | Path planning method and device and engineering vehicle |
CN111060108B (en) * | 2019-12-31 | 2021-10-12 | 江苏徐工工程机械研究院有限公司 | Path planning method and device and engineering vehicle |
CN111176288A (en) * | 2020-01-07 | 2020-05-19 | 深圳南方德尔汽车电子有限公司 | Reedsshepp-based global path planning method and device, computer equipment and storage medium |
CN111308490B (en) * | 2020-02-05 | 2021-11-19 | 浙江工业大学 | Balance car indoor positioning and navigation system based on single-line laser radar |
CN111308490A (en) * | 2020-02-05 | 2020-06-19 | 浙江工业大学 | Balance car indoor positioning and navigation system based on single-line laser radar |
CN111309015A (en) * | 2020-02-25 | 2020-06-19 | 华南理工大学 | Transformer substation inspection robot positioning navigation system integrating multiple sensors |
CN111432341A (en) * | 2020-03-11 | 2020-07-17 | 大连理工大学 | Environment self-adaptive positioning method |
CN111427047B (en) * | 2020-03-30 | 2023-05-05 | 哈尔滨工程大学 | SLAM method for autonomous mobile robot in large scene |
CN111427047A (en) * | 2020-03-30 | 2020-07-17 | 哈尔滨工程大学 | Autonomous mobile robot SLAM method in large scene |
CN111487986A (en) * | 2020-05-15 | 2020-08-04 | 中国海洋大学 | Underwater robot cooperative target searching method based on global information transfer mechanism |
CN111830977B (en) * | 2020-07-02 | 2024-06-18 | 中国兵器科学研究院 | Autonomous navigation software framework of mobile robot and navigation method |
CN111830977A (en) * | 2020-07-02 | 2020-10-27 | 中国兵器科学研究院 | Autonomous navigation software framework and navigation method for mobile robot |
CN111812669A (en) * | 2020-07-16 | 2020-10-23 | 南京航空航天大学 | Winding inspection device, positioning method thereof and storage medium |
CN111812668A (en) * | 2020-07-16 | 2020-10-23 | 南京航空航天大学 | Winding inspection device, positioning method thereof and storage medium |
CN111964682A (en) * | 2020-08-10 | 2020-11-20 | 北京轩宇空间科技有限公司 | Fast path planning method and device adapting to unknown dynamic space and storage medium |
CN115884853A (en) * | 2020-09-23 | 2023-03-31 | 应用材料公司 | Robot joint space diagram path planning and movement execution |
CN115884853B (en) * | 2020-09-23 | 2024-01-16 | 应用材料公司 | Robot joint space diagram path planning and mobile execution |
CN112146660B (en) * | 2020-09-25 | 2022-05-03 | 电子科技大学 | Indoor map positioning method based on dynamic word vector |
CN112146660A (en) * | 2020-09-25 | 2020-12-29 | 电子科技大学 | Indoor map positioning method based on dynamic word vector |
CN112652001A (en) * | 2020-11-13 | 2021-04-13 | 山东交通学院 | Underwater robot multi-sensor fusion positioning system based on extended Kalman filtering |
CN112476433B (en) * | 2020-11-23 | 2023-08-04 | 深圳怪虫机器人有限公司 | Mobile robot positioning method based on identification array boundary |
CN112476433A (en) * | 2020-11-23 | 2021-03-12 | 深圳怪虫机器人有限公司 | Mobile robot positioning method based on array boundary identification |
CN112130445A (en) * | 2020-11-24 | 2020-12-25 | 四川写正智能科技有限公司 | Intelligent watch and method for carrying out safety early warning based on driving route of child |
CN112509056A (en) * | 2020-11-30 | 2021-03-16 | 中国人民解放军32181部队 | Dynamic battlefield environment real-time path planning system and method |
CN112509056B (en) * | 2020-11-30 | 2022-12-20 | 中国人民解放军32181部队 | Dynamic battlefield environment real-time path planning system and method |
CN113189977A (en) * | 2021-03-10 | 2021-07-30 | 新兴际华集团有限公司 | Intelligent navigation path planning system and method for robot |
CN113093761A (en) * | 2021-04-08 | 2021-07-09 | 浙江中烟工业有限责任公司 | Warehouse robot indoor map navigation system based on laser radar |
CN113109821A (en) * | 2021-04-28 | 2021-07-13 | 武汉理工大学 | Mapping method, device and system based on ultrasonic radar and laser radar |
CN113359714A (en) * | 2021-05-25 | 2021-09-07 | 国网江苏省电力有限公司电力科学研究院 | Routing inspection robot dynamic path planning method and device based on particle filter algorithm |
WO2022247045A1 (en) * | 2021-05-28 | 2022-12-01 | 浙江大学 | Laser radar information-based mobile robot location re-identification method |
CN113232025B (en) * | 2021-06-07 | 2022-04-22 | 上海大学 | Mechanical arm obstacle avoidance method based on proximity perception |
CN113232025A (en) * | 2021-06-07 | 2021-08-10 | 上海大学 | Mechanical arm obstacle avoidance method based on proximity perception |
CN113608170B (en) * | 2021-07-07 | 2023-11-14 | 云鲸智能(深圳)有限公司 | Radar calibration method, radar, robot, medium and computer program product |
CN113608170A (en) * | 2021-07-07 | 2021-11-05 | 云鲸智能(深圳)有限公司 | Radar calibration method, radar, robot, medium, and computer program product |
CN114035562B (en) * | 2021-07-20 | 2024-05-28 | 新兴际华集团有限公司 | Multi-information fusion acquisition robot for explosive environment |
CN114035562A (en) * | 2021-07-20 | 2022-02-11 | 新兴际华集团有限公司 | Multi-information fusion acquisition robot for explosive environment |
CN114489036A (en) * | 2021-07-25 | 2022-05-13 | 西北农林科技大学 | Indoor robot navigation control method based on SLAM |
CN113835428A (en) * | 2021-08-27 | 2021-12-24 | 华东交通大学 | Robot path planning method for restaurant |
CN114217622A (en) * | 2021-12-16 | 2022-03-22 | 南京理工大学 | Robot autonomous navigation method based on BIM |
CN114217622B (en) * | 2021-12-16 | 2023-09-01 | 南京理工大学 | BIM-based robot autonomous navigation method |
CN114527763A (en) * | 2022-02-28 | 2022-05-24 | 合肥工业大学 | Intelligent inspection system and method based on target detection and SLAM composition |
CN114527763B (en) * | 2022-02-28 | 2024-08-06 | 合肥工业大学 | Intelligent inspection system and method based on target detection and SLAM composition |
CN115290098B (en) * | 2022-09-30 | 2022-12-23 | 成都朴为科技有限公司 | Robot positioning method and system based on variable step length |
CN115290098A (en) * | 2022-09-30 | 2022-11-04 | 成都朴为科技有限公司 | Robot positioning method and system based on variable step length |
CN116541574B (en) * | 2023-07-07 | 2023-10-03 | 湖北珞珈实验室 | Intelligent extraction method, device, storage medium and equipment for map sensitive information |
CN116541574A (en) * | 2023-07-07 | 2023-08-04 | 湖北珞珈实验室 | Intelligent extraction method, device, storage medium and equipment for map sensitive information |
CN117405178A (en) * | 2023-12-15 | 2024-01-16 | 成都电科星拓科技有限公司 | Mobile monitoring platform and method for automatically detecting indoor environment parameters |
CN117405178B (en) * | 2023-12-15 | 2024-03-15 | 成都电科星拓科技有限公司 | Mobile monitoring method for automatically detecting indoor environment parameters |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109959377A (en) | A kind of robot navigation's positioning system and method | |
Weiss | Vision based navigation for micro helicopters | |
Chen | Kalman filter for robot vision: a survey | |
US7584020B2 (en) | Occupancy change detection system and method | |
CN111308490B (en) | Balance car indoor positioning and navigation system based on single-line laser radar | |
Li et al. | Localization and navigation for indoor mobile robot based on ROS | |
CN112518739B (en) | Track-mounted chassis robot reconnaissance intelligent autonomous navigation method | |
Goto et al. | Mobile robot navigation: The CMU system | |
US20080009967A1 (en) | Robotic Intelligence Kernel | |
CN107092264A (en) | Towards the service robot autonomous navigation and automatic recharging method of bank's hall environment | |
Usher et al. | Visual servoing of a car-like vehicle-an application of omnidirectional vision | |
CN114371716B (en) | Automatic driving inspection method for fire-fighting robot | |
CN111982114A (en) | Rescue robot for estimating three-dimensional pose by adopting IMU data fusion | |
CN109655058A (en) | A kind of inertia/Visual intelligent Combinated navigation method | |
CN113189613B (en) | Robot positioning method based on particle filtering | |
CN118020038A (en) | Two-wheeled self-balancing robot | |
Fang et al. | Homography-based visual servoing of wheeled mobile robots | |
Kazim et al. | Recent advances in path integral control for trajectory optimization: An overview in theoretical and algorithmic perspectives | |
Cui et al. | Simulation and Implementation of Slam Drawing Based on Ros Wheeled Mobile Robot | |
CN117901126A (en) | Dynamic perception method and computing system of humanoid robot | |
CN114721377A (en) | Improved Cartogrier based SLAM indoor blind guiding robot control method | |
Zhang et al. | A visual slam system with laser assisted optimization | |
Wang et al. | Research on SLAM road sign observation based on particle filter | |
Huang et al. | Research on laser-based mobile robot SLAM and autonomous navigation | |
Payne | Autonomous interior mapping robot utilizing lidar localization and mapping |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | |
Application publication date: 20190702 |