CN114419155A - Visual mapping method based on laser radar assistance - Google Patents
Visual mapping method based on laser radar assistance
- Publication number
- CN114419155A (application number CN202210061037.XA)
- Authority
- CN
- China
- Prior art keywords
- frame
- pose
- data
- map
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Remote Sensing (AREA)
- Computer Graphics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Traffic Control Systems (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Abstract
The invention discloses a vision mapping method based on laser radar assistance, which comprises the following steps: collecting laser radar data, image data, inertial sensor data and odometer data during the movement of the mobile equipment; traversing the whole data stream with a laser SLAM method; according to the obtained position information, estimating the position of each image frame by interpolation and constructing a KD-Tree; traversing the image data of the whole data stream frame by frame and extracting deep-learning feature information; and traversing the image data of the whole data stream frame by frame again, using the KD-Tree to search for the set of image frames adjacent to the current frame, performing feature matching on every two image frames in the set, and generating 3D map points by incremental triangulation. The scheme removes the limitation on building maps of large scenes and improves the accuracy of the map as a whole; at the same time, the invention uses more robust deep-learning feature information, which can overcome the influence of illumination changes and has stronger anti-interference capability.
Description
Technical Field
The invention relates to the technical field of computer vision, in particular to a vision mapping method based on laser radar assistance.
Background
At present, mobile robots and autonomous vehicles mainly use simultaneous localization and mapping (SLAM) technology to build maps of the surrounding environment; according to the sensors used, SLAM schemes can be divided into laser SLAM and visual SLAM.
The laser SLAM mapping method has the advantages of high accuracy and low sensitivity to ambient light, and it can accurately measure the distance and direction between the laser radar and obstacles. However, the laser SLAM algorithm still cannot obtain accurate positioning in environments with repetitive structure, such as long corridors; moreover, the laser may scan interfering features such as pedestrians or decorations, and relying on the laser alone then easily produces erroneous positioning.
The visual SLAM mapping process extracts feature points from each frame of the picture in real time, performs feature matching against the key frames, carries out local map optimization, and so on; when a loop closure is detected, a global optimization is also required to solve problems such as scale drift. Because the processors carried by current robots or autonomous vehicles are relatively weak, visual SLAM alone can only barely complete the mapping when the mapping range is small; if the mapping range is large (scenes of more than several thousand square meters), the local optimization becomes time-consuming and the global optimization must process all key frames and map points, which takes even longer, so the visual SLAM mapping process cannot be completed in real time, the final mapping accuracy is low, and the requirement of high-precision positioning cannot be met.
On the other hand, the features extracted in current visual mapping are traditional features such as ORB (Oriented FAST and Rotated BRIEF) and SIFT (Scale-Invariant Feature Transform); these features are easily disturbed by illumination changes and likewise cannot meet the requirement of high-precision positioning.
Disclosure of Invention
Aiming at the defects in the prior art, the technical problems to be solved by the invention are as follows: how to provide a visual mapping method based on laser radar assistance, which is suitable for mapping a large scene and greatly improves the final mapping precision.
In order to solve the technical problems, the invention adopts the following technical scheme:
a visual mapping method based on laser radar assistance comprises the following steps:
step 1) collecting laser radar data, image data, inertial sensor data and odometer data in the movement process of mobile equipment;
step 2) traversing the whole data stream in the step 1) by adopting a laser synchronous positioning and mapping method;
step 3) according to the position information obtained after the laser synchronous positioning and mapping method traverses the whole data stream, carrying out interpolation estimation to obtain image position information, and constructing a high-dimensional index tree data structure according to the image position information;
step 4) performing frame-by-frame traversal on the image data of the whole data stream in the step 1), and extracting characteristic information of deep learning;
and 5) traversing the image data of the whole data stream in the step 1) frame by frame again, searching and acquiring an image frame set adjacent to the current frame by using a high-dimensional index tree data structure, then performing feature matching on every two image frames in the image frame set, and generating a 3D map point by incremental triangulation.
Preferably, the laser synchronous positioning and mapping method in step 2) includes the following steps:
step 2.1) obtaining the inertial navigation pose of the mobile equipment at the current moment by fusing and integrating the inertial sensor data and the odometer data, taking this inertial navigation pose as the initial value, and constructing a local occupancy grid map from the first frame of laser radar data obtained;
step 2.2) matching the laser radar data of the current frame in the motion tracking thread with the laser radar data of the previous frame, and calculating to obtain an optimized inter-frame pose motion matrix;
step 2.3) obtaining a pose prior of the current frame from the optimized pose of the previous frame and the inertial navigation pose increment fused from the previous frame to the current frame, matching the laser radar data of the current frame against the occupancy grid map, and adjusting the optimized inter-frame pose motion matrix to obtain an accurate pose of the current frame;
step 2.4) updating a local map or reconstructing the local map according to the distance between the current frame and the current key frame, and updating the pose of each key frame in the key frame queue in an incremental optimization manner according to the motion constraint constructed by the newly inserted key frame to obtain an updated global map;
and 2.5) matching the current local map with the global map to perform loop detection, acquiring a pose transformation matrix from the current local map to the global map by adopting a closest point iteration algorithm or a normal distribution transformation algorithm, and adjusting pose information of a plurality of sub-maps connected with the current sub-map.
Preferably, in step 2.4), the pose change between the current frame and the current key frame is compared with a preset threshold; if the pose change is smaller than the preset threshold, the local map is updated according to the laser radar data of the current frame; and if the pose change is larger than the preset threshold, the key frame queue is updated according to the laser radar data of the current frame and the local map is reconstructed.
Preferably, in step 2.4), a common-view frame constraint is constructed according to the newly inserted key frame and the key frame queue, then the pose of the key frame queue is updated by adopting an incremental optimization pose factor graph, when a closed loop is detected, a closed loop frame constraint is constructed, and the pose in the key frame queue is updated through incremental optimization, so that the global map is updated.
Preferably, the local map in step 2) is a map created within the last 30 seconds.
Preferably, step 5) comprises the following steps:
step 5.1) using the position information of the current frame, searching the high-dimensional index tree data structure by the nearest neighbor method for the set of image frames whose Euclidean distance to the current frame is smaller than a given threshold;
step 5.2) matching the deep learning characteristics of every two images in the image frame set;
step 5.3) triangulating every two images in the image frame set corresponding to the current frame to generate 3D map points; removing wrongly matched 3D map points by using the existing matching relations, fusing redundant 3D map points, and performing local optimization at the same time;
and 5.4) carrying out frame-by-frame triangulation to sequentially generate 3D map points by utilizing the position information and the feature matching information of the key frames.
Preferably, in step 5.2), the matching of deep learning features on every two images in the image frame set includes: matching of point features, matching of line features, matching of face features, matching of object features, or matching of semantic segmentation features.
Preferably, in step 3), the image position information of each frame of image is obtained through interpolation estimation, and a high-dimensional index tree data structure is constructed according to 2D or 3D spatial position information in the obtained image position information.
Preferably, in step 4), the feature information obtained by deep learning includes point features, line features, surface features, object features, or semantic segmentation features obtained by learning and training by using a deep learning method.
Preferably, in step 1), the laser radar data, the image data, the inertial sensor data and the odometry data are synchronized in time, and the synchronization in time includes hardware time synchronization or software time synchronization;
the laser radar is a 2D laser radar or a 3D laser radar;
the mobile equipment is a mobile robot or an automobile.
Compared with the prior art, on the one hand, the method obtains a high-precision initial pose for every image frame through the laser SLAM method and then generates map points by incremental triangulation; compared with the traditional visual SLAM that creates the map online, this removes the limitation on large-scene mapping and improves the accuracy of the map as a whole. On the other hand, the invention uses more robust deep-learning feature information, which, compared with traditional feature points, overcomes the influence of illumination changes and provides matching information with stronger anti-interference capability.
Drawings
FIG. 1 is a flow chart of a vision mapping method based on laser radar assistance according to the present invention;
FIG. 2 is a flowchart of a laser SLAM method in the vision mapping method based on laser radar assistance according to the present invention;
fig. 3 is a schematic diagram of pairwise matching of image frames in an image frame set in the vision mapping method based on laser radar assistance.
Detailed Description
The invention will be further explained with reference to the drawings and the embodiments.
As shown in fig. 1, a visual mapping method based on laser radar assistance includes the following steps:
step 1) collecting laser radar data, image data, inertial sensor data (IMU data) and odometer data in the movement process of mobile equipment; specifically, the laser radar data, the image data, the inertial sensor data and the odometry data are synchronized in time, and the synchronization in time comprises hardware time synchronization or software time synchronization;
the laser radar is a 2D laser radar or a 3D laser radar;
the mobile equipment is a mobile robot or an automobile.
Step 2) traversing the whole data stream in the step 1) by adopting a laser synchronous positioning and mapping method (laser SLAM method);
and 3) according to position information obtained by traversing the whole data stream by a laser synchronous positioning and mapping method (a laser SLAM method), carrying out interpolation estimation to obtain image position information, constructing a high-dimensional index Tree data structure (KD Tree) according to the image position information, specifically, carrying out interpolation estimation to obtain the image position information of each frame of image, and constructing the high-dimensional index Tree data structure according to 2D or 3D space position information in the obtained image position information.
And 4) traversing the image data of the whole data stream in the step 1) frame by frame, and extracting deep learning feature information, wherein the deep learning feature information comprises point features, line features, surface features, object features or semantic segmentation features obtained by learning and training by adopting a deep learning method.
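As a hedged illustration of step 4), the sketch below runs a small convolutional network frame by frame and caches a dense descriptor map per image; the TinyFeatureNet defined here is a hypothetical stand-in for learned extractors such as SuperPoint or R2D2, whose keypoint heads and pretrained weights are not reproduced.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for a learned feature extractor; illustrative only.
class TinyFeatureNet(nn.Module):
    def __init__(self, desc_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, desc_dim, 3, stride=2, padding=1),
        )

    def forward(self, img):            # img: (B, 1, H, W), grayscale in [0, 1]
        desc = self.backbone(img)      # dense descriptor map (B, D, H/4, W/4)
        return F.normalize(desc, dim=1)

net = TinyFeatureNet().eval()

# Step 4): traverse the image data frame by frame and cache the per-frame features.
frames = [np.random.rand(120, 160).astype(np.float32) for _ in range(3)]  # placeholder images
feature_cache = []
with torch.no_grad():
    for img in frames:
        tensor = torch.from_numpy(img)[None, None]             # (1, 1, H, W)
        feature_cache.append(net(tensor).squeeze(0).numpy())   # store one descriptor map per frame
```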
And 5) traversing the image data of the whole data stream in the step 1) frame by frame again, searching and acquiring an image frame set adjacent to the current frame by using a high-dimensional index Tree data structure (KD Tree), performing feature matching on every two image frames in the image frame set, and generating a 3D map point by incremental triangulation.
As shown in fig. 2, in this embodiment, the laser synchronous positioning and mapping method (SLAM method) in step 2) is an online SLAM method or an offline SLAM method, and specifically includes the following steps:
step 2.1) obtaining the inertial navigation pose of the mobile equipment at the current moment by fusing and integrating the inertial sensor data and the odometer data, taking this inertial navigation pose as the initial value, and constructing a local occupancy grid map from the first frame of laser radar data obtained;
step 2.2) matching the laser radar data of the current frame in the motion tracking thread with the laser radar data of the previous frame, and calculating to obtain an optimized inter-frame pose motion matrix;
step 2.3) obtaining a pose prior of the current frame from the optimized pose of the previous frame and the inertial navigation pose increment fused from the previous frame to the current frame, matching the laser radar data of the current frame against the occupancy grid map, and adjusting the optimized inter-frame pose motion matrix to obtain an accurate pose of the current frame;
step 2.4) updating a local map or reconstructing the local map according to the distance between the current frame and the current key frame, and updating the pose of each key frame in the key frame queue in an incremental optimization manner according to the motion constraint constructed by the newly inserted key frame to obtain an updated global map;
and 2.5) matching the current local map with the global map to perform loop detection, acquiring a pose transformation matrix from the current local map to the global map by adopting an ICP (Iterative Closest Point) or NDT (Normal Distributions Transform) algorithm, and adjusting the pose information of the several sub-maps connected with the current sub-map.
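For step 2.5), the following is a minimal NumPy sketch of the closest point iteration (ICP) for aligning one 2D sub-map to another; the brute-force correspondence search is used purely for brevity (a real system would use a KD-tree and outlier rejection), and an NDT-based alignment could be used instead, as stated above.

```python
import numpy as np

def icp_point_to_point(src, dst, iters=20):
    """Minimal point-to-point ICP aligning 2D point set `src` to `dst`."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # Nearest-neighbor correspondences (brute force for brevity).
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = dst[np.argmin(d, axis=1)]
        # Closed-form rigid alignment via SVD (Kabsch/Umeyama).
        mu_s, mu_d = cur.mean(0), nn.mean(0)
        H = (cur - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:            # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = (R_step @ cur.T).T + t_step
        R, t = R_step @ R, R_step @ t + t_step   # accumulate the total transform
    return R, t

# Toy example: the second "sub-map" is the first one rotated by 10 degrees and shifted.
theta = np.deg2rad(10.0)
Rg = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
src = np.random.rand(100, 2)
dst = (Rg @ src.T).T + np.array([0.3, -0.1])
R, t = icp_point_to_point(src, dst)
```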
In this embodiment, in step 2.4), the pose change between the current frame and the current key frame is compared with a preset threshold; if the pose change is smaller than the preset threshold, the local map is updated according to the laser radar data of the current frame; and if the pose change is larger than the preset threshold, the key frame queue is updated according to the laser radar data of the current frame and the local map is reconstructed.
In this embodiment, in step 2.4), a co-view frame constraint is constructed according to a newly inserted key frame and a key frame queue, then the pose of the key frame queue is updated by using an incremental optimization pose factor graph, when a closed loop is detected, a closed loop frame constraint is constructed, and the pose in the key frame queue is updated by incremental optimization, so as to update the global map.
In the present embodiment, the local map in step 2) is a map created within the past 30 seconds.
In this embodiment, step 5) includes the following steps:
step 5.1) using the position information of the current frame, searching the high-dimensional index tree data structure (KD-Tree) by the nearest neighbor method for the set of image frames whose Euclidean distance to the current frame is smaller than a given threshold;
and 5.2) matching the deep learning features of every two images in the image frame set (as shown in the attached figure 3) specifically comprises the following steps: matching point features, matching line features, matching surface features, matching object features, or matching semantic segmentation features;
step 5.3) triangulating every two images in the image frame set corresponding to the current frame to generate 3D map points; removing wrongly matched 3D map points by using the existing matching relations, fusing redundant 3D map points, and performing local optimization at the same time, so as to further improve the accuracy of the key frame poses and the 3D map points;
and 5.4) carrying out frame-by-frame triangulation to sequentially generate 3D map points by utilizing the position information and the feature matching information of the key frames.
The given threshold is determined from extensive preliminary experiments on data streams of the same type or of the same scene, taking into account the motion speed of the mobile equipment, whether the returned frames share a common view with the current frame, and the number of image frames to be returned. In the local optimization, the weights assigned to the feature-matching terms and to the key frame poses can be dynamically adjusted according to the actual data stream or scene.
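A minimal sketch of the pairwise matching in step 5.2) is given below, using mutual nearest-neighbor matching of per-frame descriptors; the random descriptors are placeholders only, and a real system might add a ratio test or a learned matcher on top.

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Mutual nearest-neighbor matching of two descriptor sets (one row per feature).
    Minimal illustration of step 5.2); not the claimed implementation."""
    # Pairwise Euclidean distances between all descriptors.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    ab = np.argmin(d, axis=1)          # best match in B for each feature of A
    ba = np.argmin(d, axis=0)          # best match in A for each feature of B
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]

# Placeholder 64-D descriptors for two neighboring frames.
desc_a = np.random.rand(50, 64).astype(np.float32)
desc_b = np.random.rand(60, 64).astype(np.float32)
matches = mutual_nn_matches(desc_a, desc_b)
```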
The technical scheme of the invention is explained in further detail below with reference to fig. 1 to 3, which is convenient for those skilled in the art to understand.
As shown in fig. 1, the flow of the vision mapping method based on laser radar assistance is as follows: first, laser radar data, image data, IMU data and odometer data are collected during the movement of the mobile equipment; the whole data stream is then traversed with the laser SLAM method, and the position information obtained by the laser SLAM method is interpolated to estimate the position of each frame of image; next, a KD-Tree is constructed from the obtained image position information based on the 2D or 3D spatial positions; in parallel, feature information learned and trained by deep learning methods is extracted for each frame of image data, for example SuperPoint feature points, R2D2 feature points, pedestrian detection boxes, vehicle detection boxes, lane line detection boxes, or semantically segmented pixels; finally, the image data in the whole data stream are traversed frame by frame again, the image position information obtained above is used to search the KD-Tree with the nearest neighbor method for the set of image frames whose distance is smaller than a given threshold, all image frames in the set corresponding to the current frame are matched pairwise to obtain matching relations, and 3D map points are generated sequentially by frame-by-frame triangulation using these matching relations and the key frame pose information.
As shown in fig. 2, the flow chart of the laser SLAM method within the vision mapping method based on laser radar assistance can be described as follows: the laser SLAM method is divided into two threads, motion tracking and map building; the motion tracking thread is the main thread of the system, while the mapping thread assists the motion tracking thread and at the same time relies on its pose information.
In the motion tracking thread, the pose obtained by fusing the IMU data and the odometer data is taken as the initial value; the optimized inter-frame pose motion matrix is obtained by matching the current frame of laser radar data against the previous frame using line-feature-based inter-frame laser matching and inter-frame laser matching based on the normal distributions transform, and the accurate pose of the current frame is then determined by laser-to-map matching based on least-squares gray differences. The local map is determined from the accurate pose of the current frame and the current key frame: specifically, the pose change between the accurate pose of the current frame and the pose of the current key frame is compared with a preset threshold; when the pose change is smaller than the preset threshold, the current local map is updated with the laser radar data of the current frame; when it is larger than the preset threshold, the key frame queue is updated with the laser radar data of the current frame and the local map is recreated.
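The composition of the pose prior (step 2.3) and the key frame decision (step 2.4) can be sketched as follows; the example assumes 2D poses (SE(2)) for brevity, and the threshold values are illustrative placeholders rather than values prescribed by the invention.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D pose matrix; 2D is assumed for brevity (the same idea applies in SE(3))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1]])

# Pose prior of the current frame (step 2.3): compose the optimized pose of the
# previous frame with the inertial/odometry motion increment measured since then.
T_prev_opt  = se2(2.0, 1.0, 0.30)   # optimized pose of the previous frame
T_odom_prev = se2(1.9, 1.1, 0.28)   # dead-reckoned pose at the previous frame
T_odom_curr = se2(2.4, 1.2, 0.33)   # dead-reckoned pose at the current frame
T_prior = T_prev_opt @ np.linalg.inv(T_odom_prev) @ T_odom_curr

# Key frame decision (step 2.4): compare the motion since the current key frame
# against translation/rotation thresholds (placeholder values).
T_keyframe = se2(1.5, 0.8, 0.10)
delta = np.linalg.inv(T_keyframe) @ T_prior
trans = np.hypot(delta[0, 2], delta[1, 2])
rot = abs(np.arctan2(delta[1, 0], delta[0, 0]))
new_keyframe = trans > 0.5 or rot > np.deg2rad(15)   # otherwise: update the local map
```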
In the mapping thread, a common-view frame constraint is constructed from the newly inserted key frame and the key frame queue, and the poses in the key frame queue are then updated with an incrementally optimized pose factor graph. During this process, when a closed loop is detected, a closed-loop frame constraint is constructed and the poses in the key frame queue are updated by incremental optimization, thereby updating the global map and completing its creation.
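One possible realisation of such an incrementally optimized pose factor graph is sketched below using the GTSAM library (assuming its 4.x Python bindings); the invention does not prescribe a particular solver, and the between-factor values and noise model here are illustrative assumptions only.

```python
import numpy as np
import gtsam

# iSAM2 performs incremental optimization as new constraints arrive.
isam = gtsam.ISAM2()
noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))  # x, y, yaw (placeholders)

graph, values = gtsam.NonlinearFactorGraph(), gtsam.Values()
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), noise))     # anchor the first key frame
values.insert(0, gtsam.Pose2(0, 0, 0))
isam.update(graph, values)

# Each newly inserted key frame adds a common-view (between) constraint to its predecessor;
# a detected closed loop adds a between factor back to an older key frame.
for k in range(1, 5):
    graph, values = gtsam.NonlinearFactorGraph(), gtsam.Values()
    graph.add(gtsam.BetweenFactorPose2(k - 1, k, gtsam.Pose2(1.0, 0.0, 0.0), noise))
    values.insert(k, gtsam.Pose2(float(k), 0.0, 0.0))
    if k == 4:  # illustrative loop closure back to key frame 0
        graph.add(gtsam.BetweenFactorPose2(4, 0, gtsam.Pose2(-4.0, 0.0, 0.0), noise))
    isam.update(graph, values)            # incremental update of all key frame poses

estimate = isam.calculateEstimate()
print(estimate.atPose2(4))
```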
As shown in fig. 3, which is a schematic diagram of the pairwise matching of image frames within an image frame set in the vision mapping method based on laser radar assistance: assuming the image frame set contains k frames in total, frame 0 is first matched one by one with frame 1, frame 2, ..., frame k-1; then frame 1 is matched one by one with frame 2, frame 3, ..., frame k-1; and so on, until frame k-2 is matched with frame k-1, yielding the complete set of pairwise matching relations.
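The pairwise matching schedule and the two-view triangulation that generates 3D map points can be sketched as follows using OpenCV's triangulatePoints; the intrinsics, camera poses and pixel coordinates are illustrative placeholders (consistent with a 0.2 m baseline), not parameters of the invention.

```python
import numpy as np
import cv2

# Illustrative intrinsics and two camera projection matrices (world-to-camera).
K = np.array([[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1.0]])
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # frame i
P1 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # frame j, 0.2 m baseline

# Matched pixel coordinates in the two frames (2 x N), e.g. from step 5.2).
pts0 = np.array([[300.0, 350.0], [200.0, 260.0]])
pts1 = np.array([[260.0, 310.0], [200.0, 260.0]])

# Two-view triangulation; the homogeneous result is de-homogenised to 3D map points.
X_h = cv2.triangulatePoints(P0, P1, pts0, pts1)
map_points = (X_h[:3] / X_h[3]).T            # (N, 3) points in world coordinates

# Pairwise matching schedule over a neighbor set of k frames (cf. fig. 3).
k = 4
pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
```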
Compared with the prior art, on the one hand, the method obtains a high-precision initial pose for every image frame through the laser SLAM method and then generates map points by incremental triangulation; compared with the traditional visual SLAM that creates the map online, this removes the limitation on large-scene mapping and improves the accuracy of the map as a whole. On the other hand, the invention uses more robust deep-learning feature information, which, compared with traditional feature points, overcomes the influence of illumination changes and provides matching information with stronger anti-interference capability.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the invention and not to limit it; those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solution of the invention without departing from its spirit and scope, and all such modifications and substitutions shall be covered by the claims of the invention.
Claims (10)
1. A visual mapping method based on laser radar assistance is characterized by comprising the following steps:
step 1) collecting laser radar data, image data, inertial sensor data and odometer data in the movement process of mobile equipment;
step 2) traversing the whole data stream in the step 1) by adopting a laser synchronous positioning and mapping method;
step 3) according to the position information obtained after the laser synchronous positioning and mapping method traverses the whole data stream, carrying out interpolation estimation to obtain image position information, and constructing a high-dimensional index tree data structure according to the image position information;
step 4) performing frame-by-frame traversal on the image data of the whole data stream in the step 1), and extracting characteristic information of deep learning;
and 5) traversing the image data of the whole data stream in the step 1) frame by frame again, searching and acquiring an image frame set adjacent to the current frame by using a high-dimensional index tree data structure, then performing feature matching on every two image frames in the image frame set, and generating a 3D map point by incremental triangulation.
2. The lidar-assisted based vision mapping method according to claim 1, wherein the laser synchronous positioning and mapping method in step 2) comprises the following steps:
step 2.1) obtaining an inertial navigation pose of the mobile equipment at the current moment according to the fusion integral of the inertial sensor data and the mileage data, taking the inertial navigation pose as an initial value, and constructing a local occupation grid map according to the obtained first frame of laser radar data;
step 2.2) matching the laser radar data of the current frame in the motion tracking thread with the laser radar data of the previous frame, and calculating to obtain an optimized inter-frame pose motion matrix;
step 2.3) obtaining a pose prior of the current frame based on the inertial navigation pose of the mobile equipment obtained by optimizing the previous frame and the inertial navigation pose of the mobile equipment fused from the previous frame to the current frame, matching the laser radar data of the current frame with the occupation grid map, and adjusting the optimized inter-frame pose motion matrix to obtain an accurate pose of the current frame;
step 2.4) updating a local map or reconstructing the local map according to the distance between the current frame and the current key frame, and updating the pose of each key frame in the key frame queue in an incremental optimization manner according to the motion constraint constructed by the newly inserted key frame to obtain an updated global map;
and 2.5) matching the current local map with the global map to perform loop detection, acquiring a pose transformation matrix from the current local map to the global map by adopting a closest point iteration algorithm or a normal distribution transformation algorithm, and adjusting pose information of a plurality of sub-maps connected with the current sub-map.
3. The laser radar-assisted-based visual mapping method according to claim 2, wherein in step 2.4), the pose change between the current frame and the current key frame is compared with a preset threshold; if the pose change is smaller than the preset threshold, the local map is updated according to the laser radar data of the current frame; and if the pose change is larger than the preset threshold, the key frame queue is updated according to the laser radar data of the current frame and the local map is reconstructed.
4. The laser radar assistance-based visual mapping method according to claim 3, wherein in step 2.4), a common-view frame constraint is constructed according to a newly inserted key frame and a key frame queue, then the pose of the key frame queue is updated by using an incremental optimization pose factor graph, when a closed loop is detected, a closed-loop frame constraint is constructed, and the pose in the key frame queue is updated by incremental optimization, so that the global map is updated.
5. The lidar-assisted based visual mapping method of claim 2, wherein the local map in step 2) is a map created within the last 30 seconds.
6. The lidar-assisted based vision mapping method according to claim 1, wherein the step 5) comprises the following steps:
step 5.1) searching an image frame set which meets the condition of being smaller than a given threshold value in a high-dimensional index tree data structure by using a nearest neighbor method according to an Euclidean distance by using the position information of the current frame;
step 5.2) matching the deep learning characteristics of every two images in the image frame set;
step 5.3) triangularizing every two images in the image frame set corresponding to the current frame to generate 3D map points; removing 3D map points with matching errors by using the existing matching relation, fusing redundant 3D map points, and simultaneously performing local optimization;
and 5.4) carrying out frame-by-frame triangulation to sequentially generate 3D map points by utilizing the position information and the feature matching information of the key frames.
7. The lidar-assisted-based visual mapping method according to claim 6, wherein the step 5.2) of matching two-by-two deep learning features of the images in the image frame set comprises: matching of point features, matching of line features, matching of face features, matching of object features, or matching of semantic segmentation features.
8. The lidar-assisted-based visual mapping method according to claim 1, wherein in step 3), the image position information of each frame of image is obtained through interpolation estimation, and a high-dimensional index tree data structure is constructed according to 2D or 3D space position information in the obtained image position information.
9. The lidar-assisted-based visual mapping method according to claim 1, wherein in the step 4), the feature information obtained by deep learning comprises a point feature, a line feature, a surface feature, an object feature or a semantic segmentation feature obtained by learning and training by using a deep learning method.
10. The lidar-assisted based vision mapping method of claim 1, wherein in step 1), the lidar data, the image data, the inertial sensor data, and the odometry data are time synchronized, and the time synchronization comprises hardware time synchronization or software time synchronization;
the laser radar is a 2D laser radar or a 3D laser radar;
the mobile equipment is a mobile robot or an automobile.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210061037.XA CN114419155B (en) | 2022-01-19 | 2022-01-19 | Visual image construction method based on laser radar assistance |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210061037.XA CN114419155B (en) | 2022-01-19 | 2022-01-19 | Visual image construction method based on laser radar assistance |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114419155A (en) | 2022-04-29
CN114419155B CN114419155B (en) | 2024-10-25 |
Family
ID=81275121
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210061037.XA Active CN114419155B (en) | 2022-01-19 | 2022-01-19 | Visual image construction method based on laser radar assistance |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114419155B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019018315A1 (en) * | 2017-07-17 | 2019-01-24 | Kaarta, Inc. | Aligning measured signal data with slam localization data and uses thereof |
CN109307508A (en) * | 2018-08-29 | 2019-02-05 | 中国科学院合肥物质科学研究院 | A panoramic inertial navigation SLAM method based on multiple key frames |
CN113819914A (en) * | 2020-06-19 | 2021-12-21 | 北京图森未来科技有限公司 | Map construction method and device |
CN112068154A (en) * | 2020-09-14 | 2020-12-11 | 中科院软件研究所南京软件技术研究院 | Laser mapping positioning method and device, storage medium and electronic equipment |
Non-Patent Citations (2)
- XIAOZHI QU et al., "Vehicle localization using mono-camera and geo-referenced traffic signs", IEEE, 27 August 2015 (2015-08-27) *
- 卫文乐; 金国栋; 谭力宁; 芦利斌; 陈丹琪, "Real-time SLAM algorithm using an inertial measurement unit to determine key frames" (利用惯导测量单元确定关键帧的实时SLAM算法), Journal of Computer Applications (计算机应用), no. 04, 17 October 2019 (2019-10-17) *
Also Published As
Publication number | Publication date |
---|---|
CN114419155B (en) | 2024-10-25 |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant