CN115290073A - A SLAM method and system under unstructured characteristics of underground mines - Google Patents
A SLAM method and system under unstructured characteristics of underground mines
- Publication number: CN115290073A (application CN202210970287.5A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
  - G01—MEASURING; TESTING
    - G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
      - G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
        - G01C21/005—Navigation with correlation of navigation data from several sources, e.g. map or contour matching
        - G01C21/10—Navigation by using measurements of speed or acceleration
          - G01C21/12—Navigation executed aboard the object being navigated; Dead reckoning
            - G01C21/16—Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
              - G01C21/165—Inertial navigation combined with non-inertial navigation instruments
                - G01C21/1652—Combined with ranging devices, e.g. LIDAR or RADAR
                - G01C21/1656—Combined with passive imaging devices, e.g. cameras
        - G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
          - G01C21/3804—Creation or updating of map data
            - G01C21/3833—Creation or updating of map data characterised by the source of data
              - G01C21/3841—Data obtained from two or more sources, e.g. probe vehicles
    - G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
      - G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
        - G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
        - G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
Abstract
The invention relates to the technical field of underground mine positioning and navigation and solves the technical problem of difficult underground positioning and mapping in unstructured environments where GPS cannot function. It relates in particular to a SLAM method for unstructured underground mine environments, comprising the following process: acquiring the point cloud information of the current laser frame of the lidar; computing the curvature of each point in the current laser frame's point cloud, and extracting each point's corner feature and planar feature according to its curvature; extracting the information of the current frame with the camera, detecting the corners of the current frame with the FAST algorithm according to that information, and judging whether each is a corner feature point; and tracking the feature points of the current sliding-window keyframes with the KLT optical flow tracking algorithm according to the information of the current frame. The invention achieves high-precision positioning in underground coal mines while balancing accuracy against computational cost, improving the accuracy and real-time performance of positioning and mapping in mine roadways.
Description
Technical Field

The present invention relates to the technical field of underground mine positioning and navigation, and in particular to a SLAM method and system for unstructured underground mine environments.
Background Art

Work areas such as coal mine roadways and excavation faces exhibit typical unstructured environment characteristics, and GPS technology cannot be applied directly underground in coal mines. As a result, the autonomous positioning systems of coal mine robots cannot achieve high-precision localization underground and suffer from poor attitude perception, which contributes to frequent mine accidents and reduces the safety of coal mining.
Summary of the Invention

In view of the deficiencies of the prior art, the present invention provides a SLAM method and system for unstructured underground mine environments, solving the technical problem of difficult underground positioning and mapping in unstructured environments where GPS cannot function.
To solve the above technical problem, the present invention provides the following technical solution: a SLAM method for unstructured underground mine environments, comprising the following process:

acquiring the point cloud information of the current laser frame of the lidar;

computing the curvature of each point in the current laser frame's point cloud, and extracting each point's corner feature and planar feature according to its curvature;

extracting the information of the current frame with the camera, detecting the corners of the current frame with the FAST algorithm according to that information, and judging whether each is a corner feature point;

tracking the feature points of the current sliding-window keyframes with the KLT optical flow tracking algorithm according to the information of the current frame;

updating the pose estimate of the current state according to the reprojection error between the IMU-prior pose estimate and the tracked feature points of the current sliding-window keyframes;

performing a state update of the current pose estimate through an ESIKF (error-state iterated Kalman filter) to obtain the latest visual features;

matching the latest visual features against the point cloud of the current laser frame carrying the corner feature data, yielding the lidar odometry factor and the camera odometry factor;

adding the lidar odometry factor, the camera odometry factor, and the IMU pre-integration factor to the factor graph for optimization, and then performing three-dimensional map modeling; a sketch of this overall pipeline is given below.
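To make the data flow concrete, the following is a minimal, hypothetical sketch of the fusion loop described above. All function and module names are placeholders for the processing stages named in the steps, not APIs taken from the patent.

```python
# Hypothetical outline of one cycle of the fusion pipeline described above.
def slam_step(lidar_frame, camera_frame, imu_buffer, window, factor_graph):
    # Geometric features from the lidar frame (curvature-based corners/planes).
    corners, planes = extract_lidar_features(lidar_frame)

    # Visual features: FAST detection plus KLT tracking over the sliding window.
    fast_corners = detect_fast_corners(camera_frame)
    tracks = klt_track(window.keyframes, camera_frame, fast_corners)

    # IMU pre-integration supplies the prior pose for the current state.
    prior = preintegrate(imu_buffer)

    # Reprojection error against the prior refines the pose; the ESIKF
    # update then yields the latest visual features.
    pose = update_pose_by_reprojection(prior, tracks)
    visual_features, pose = esikf_update(pose, planes)

    # Visual/lidar feature matching produces the two odometry factors.
    lidar_factor, camera_factor = match_features(visual_features, corners)

    # Factor-graph optimization over all three factor types, then mapping.
    factor_graph.add(lidar_factor, camera_factor, imu_factor(prior))
    return factor_graph.optimize()
```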
Further, the step of computing the curvature of each point in the current laser frame's point cloud and extracting each point's corner feature and planar feature according to its curvature specifically includes the following process:

subscribing to the motion-distortion-corrected point cloud of the current laser frame;

traversing the valid motion-distortion-corrected point cloud of the current laser frame, extracting several points before and after the current laser point from the valid point cloud, and computing their corresponding curvatures;

building a corner feature set and a planar feature set according to the curvatures of the points before and after the current laser point;

extracting each point's corner feature from the corner feature set, and each point's planar feature from the planar feature set.
Further, the step of detecting the corners of the current frame with the FAST algorithm according to the information of the current frame and judging whether each is a corner feature point specifically includes the following process:

subscribing to the information of the current frame extracted by the camera;

selecting a pixel from the information of the current frame, and judging whether that pixel is a corner feature point.

Further, the step of selecting a pixel from the information of the current frame and judging whether that pixel is a corner feature point specifically includes the following process:

setting the luminance of the selected pixel p to Ip, and the luminance threshold to t;

setting a discretized Bresenham circle with a radius of 3 pixels centered on pixel p, the boundary of which contains 16 pixels;

if there are 9 consecutive pixels on the 16-pixel Bresenham circle whose values are all greater than Ip + t or all less than Ip - t, judging pixel p to be a corner feature point;

otherwise, if no 9 consecutive pixels satisfy this brightness condition, judging pixel p not to be a corner feature point.
Further, the step of tracking the feature points of the current sliding-window keyframes with the KLT optical flow tracking algorithm according to the information of the current frame specifically includes the following process:

subscribing to the information of the current frame extracted by the camera;

performing histogram equalization on the raw image in the current frame's information;

building the pyramid of the histogram-equalized image, then extracting the FAST feature points of the previous frame's image;

performing LK optical flow tracking on the FAST feature points of the previous frame's image;

checking the number of feature points in the previous frame's image, and declaring the tracking failed if the previous frame has fewer than 10 feature points;

performing LK optical flow tracking directly if the number of tracked points meets the preset threshold.
Further, the step of updating the pose estimate of the current state specifically includes the following process:

deciding by parallax which marginalization strategy to adopt, thereby judging whether the current frame is a keyframe: if the parallax between the new frame and the previous frame is large, marginalizing the oldest frame; if the parallax is small, marginalizing the previous frame;

adding the prior of new feature points into the IMU, and adding all observations of a feature point into a triangulation computed according to the IMU-prior pose estimates to obtain the latest observation frame;

establishing a residual relation from the reprojection error between a feature point's latest observation frame and the IMU pose estimates corresponding to the oldest one or two frames in the sliding window.
Further, the step of performing a state update of the current pose estimate through the ESIKF to obtain the latest visual features specifically includes the following process:

computing the plane of the current frame through a parametric equation, and matching the current frame's plane against the previous frame's planar points; planes matched as identical constitute the observation of the pose transformation;

performing the EKF update with the IMU integration as the prior and updating the covariance, obtaining the optimal error-state pose estimate and the state vector;

updating the posterior covariance matrix when the ESIKF iteration exits, completing the state update of the current pose estimate;

adding some of the points of the new frame to the k-d tree, and updating the flag by which the next frame looks them up in the map;

approximating the result through several ESIKF iterations, and updating the current sliding window with the latest ESKF posterior.
Further, the step of adding the lidar odometry factor, the camera odometry factor, and the IMU pre-integration factor to the factor graph for optimization and then performing three-dimensional map modeling specifically includes the following process:

adding the lidar odometry factor, the camera odometry factor, and the IMU pre-integration factor to the factor graph for optimization;

computing the Jacobian matrices of the lidar odometry factor, the camera odometry factor, and the IMU pre-integration factor with respect to the state, and solving for the state with the Levenberg-Marquardt method;

configuring the rgbdslam package with the map_server server in the ROS system, and performing three-dimensional map modeling with all of the factor-graph-optimized data.
The present invention also provides another technical solution: a system for implementing the SLAM method for unstructured underground mine environments, in which the factor-graph-optimized lidar odometry factor, camera odometry factor, and IMU pre-integration factor are used for three-dimensional map modeling by configuring the rgbdslam package with the map_server server in the ROS system together with all of the graph-optimized data, the system comprising:

a point cloud information acquisition module, used to acquire the point cloud information of the current laser frame of the lidar;

a feature extraction module, used to compute the curvature of each point in the current laser frame's point cloud and to extract each point's corner feature and planar feature according to its curvature;

a corner feature judgment module, used to extract the information of the current frame with the camera, detect the corners of the current frame with the FAST algorithm according to that information, and judge whether each is a corner feature point;

a feature point tracking module, used to track the feature points of the current sliding-window keyframes with the KLT optical flow tracking algorithm according to the information of the current frame;

a pose estimation update module, used to update the pose estimate of the current state according to the reprojection error between the IMU-prior pose estimate and the tracked feature points of the current sliding-window keyframes;

a visual feature derivation module, used to perform a state update of the current pose estimate through the ESIKF to obtain the latest visual features;

an odometry factor derivation module, used to match the latest visual features against the point cloud of the current laser frame carrying the corner feature data, yielding the lidar odometry factor and the camera odometry factor;

a factor graph optimization module, used to add the lidar odometry factor, the camera odometry factor, and the IMU pre-integration factor to the factor graph for optimization and then perform three-dimensional map modeling.
By virtue of the above technical solution, the present invention provides a SLAM method and system for unstructured underground mine environments with at least the following beneficial effects:

1. The invention uses lidar, a camera, and an IMU to achieve high-precision positioning underground in coal mines and to improve attitude perception of the underground environment, thereby realizing high-precision localization while balancing accuracy against computational cost, improving the accuracy and real-time performance of positioning and mapping in mine roadways, and effectively solving the problem of difficult underground positioning and mapping in unstructured environments where GPS cannot function.

2. The invention's SLAM method with lidar, camera, and IMU processes the lidar's point cloud features jointly with the IMU data and the feature information extracted and optimized by the camera, improving the accuracy of the whole system in unstructured environments; it also performs keyframe matching to reduce the computational load of the whole system, striking a balance between accuracy and computation and ensuring the system's accuracy and real-time performance.

3. By extracting point cloud features from the lidar, with the camera also providing visual features to match against the point cloud features, the invention optimizes the whole system so that it remains stable.
Brief Description of the Drawings

The drawings described here provide a further understanding of the application and form a part of it; the schematic embodiments of the application and their descriptions serve to explain the application and do not unduly limit it. In the drawings:

Fig. 1 is a flowchart of the SLAM method of the present invention;

Fig. 2 is a schematic diagram of the data flow of the point cloud feature extraction of the present invention;

Fig. 3 is a schematic diagram of the data flow of the visual feature extraction of the present invention;

Fig. 4 is a schematic diagram of the data flow of the camera's visual odometry of the present invention;

Fig. 5 is a schematic diagram of the data flow of the factor graph optimization of the present invention.

In the figures: 100, point cloud information acquisition module; 200, feature extraction module; 300, corner feature judgment module; 400, feature point tracking module; 500, pose estimation update module; 600, visual feature derivation module; 700, odometry factor derivation module; 800, factor graph optimization module.
Detailed Description of the Embodiments

To make the above objects, features, and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the drawings and specific embodiments, so that the way the application applies technical means to solve the technical problem and achieve the technical effect can be fully understood and implemented.

Those of ordinary skill in the art will understand that all or part of the steps in the methods of the above embodiments can be completed by a program instructing the relevant hardware; the present application may therefore take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.

Referring to Figs. 1-5, a specific implementation of this embodiment is shown, which fully accounts for the unstructured environment characteristics of mine roadways and the fact that GPS technology cannot be applied directly. Because of these unstructured characteristics, the lidar may in certain situations be unable to collect enough feature points, but the IMU does not rely on external information and exists independently of the other sensors, so the IMU can perform pre-integration when there are not enough feature points, compensating for the lack of an adequately featured point cloud. Point cloud features are then extracted from the lidar, and the camera can also provide visual features to match against the point cloud features, so as to optimize the whole system and keep it stable.
A SLAM method for unstructured underground mine environments includes the following steps:

S1. Acquire the point cloud information of the current laser frame of the lidar.

S2. Compute the curvature of each point in the current laser frame's point cloud, and extract each point's corner feature and planar feature according to its curvature.

Step S2 specifically includes the following steps:

S21. Subscribe to the motion-distortion-corrected point cloud of the current laser frame.

S22. Traverse the valid motion-distortion-corrected point cloud of the current laser frame, extract several points before and after the current laser point from the valid point cloud, and compute their corresponding curvatures: the squared range difference around the current laser point is taken as its curvature, and the point's curvature value is stored. Taking the parameter 5 as an example for "several points before and after", take the five points before and the five points after the current laser point, ten points in total, and compute the curvature of each point as the square of its range difference.

S23. Build the corner feature set and the planar feature set according to the curvatures of the points before and after the current laser point. In this process, points identified as occluded or parallel to the beam are marked and excluded from feature extraction; the scan lines are then traversed, the point cloud of one sweep of each scan line is divided into 6 segments, and 20 corner points and an unlimited number of planar points are extracted per segment. Points with large curvature in each segment are added to the corner feature set, points with small curvature are added to the planar feature set, and finally all non-corner points are regarded as planar points, added to the planar feature set, and downsampled.

S24. Extract each point's corner feature from the corner feature set and each point's planar feature from the planar feature set. In this process, the corner point cloud and planar point cloud of the corresponding points are published according to the corner and planar features, and the point cloud of the current laser frame carrying the corner feature data is published at the same time.
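As a concrete illustration of S22-S24, the following is a minimal sketch of curvature-based feature selection in the style of LOAM-family pipelines. The neighbor count (5 on each side), the 6 segments per sweep, and the 20 corners per segment follow the embodiment above; the curvature threshold value is an assumption, and the occlusion/parallel-point filtering and downsampling of S23 are omitted for brevity.

```python
import numpy as np

def extract_scan_features(scan, k=5, n_segments=6, max_corners=20, curv_thresh=0.1):
    """Split one motion-corrected scan line into corner and planar points (S22-S24).

    scan is an (N, 3) array of points; curv_thresh is an assumed value,
    since the patent does not specify the threshold itself.
    """
    r = np.linalg.norm(scan, axis=1)                  # point ranges
    n = len(scan)
    curv = np.full(n, np.nan)
    for i in range(k, n - k):
        # Squared sum of range differences over k neighbors on each side (S22).
        d = r[i - k:i + k + 1].sum() - (2 * k + 1) * r[i]
        curv[i] = d * d

    corners, planes = [], []
    valid = np.arange(k, n - k)
    for seg in np.array_split(valid, n_segments):     # 6 segments per sweep (S23)
        order = seg[np.argsort(curv[seg])]            # ascending curvature
        big = [i for i in order[::-1] if curv[i] > curv_thresh][:max_corners]
        big_set = set(big)
        corners += big                                # large curvature -> corners
        planes += [i for i in order if i not in big_set]  # the rest -> planar
    return scan[corners], scan[planes]
```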
S3. Extract the information of the current frame with the camera, detect the corners of the current frame with the FAST algorithm according to that information, and judge whether each is a corner feature point. Taking a mine shaft as an example, the camera captures an image of the current scene, and the image of the current frame contains the information of that scene.

Step S3 specifically includes the following steps:

S31. Subscribe to the information of the current frame extracted by the camera.

S32. Select a pixel from the information of the current frame, and judge whether that pixel is a corner feature point.

In step S32, judging whether a pixel is a feature point includes the following steps:

S321. Set the luminance of the selected pixel p to Ip, and the luminance threshold to t.

S322. Set a discretized Bresenham circle with a radius of 3 pixels centered on pixel p; the boundary of the Bresenham circle contains 16 pixels.

If there are 9 consecutive pixels on the 16-pixel Bresenham circle whose values are all greater than Ip + t or all less than Ip - t, pixel p is judged to be a corner feature point.

Otherwise, if no 9 consecutive pixels satisfy this brightness condition, pixel p is judged not to be a corner feature point.
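A minimal sketch of the FAST-9 segment test of S321-S322 follows. The offsets are the standard radius-3 Bresenham circle; doubling the ring is a common trick for detecting runs that wrap around the circle. This is a plain reference implementation, not the optimized detector with the usual quick rejection tests.

```python
# Offsets of the 16 boundary pixels of a radius-3 Bresenham circle.
CIRCLE16 = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
            (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_fast_corner(img, r, c, t):
    """FAST-9 test at pixel (r, c); caller keeps (r, c) >= 3 px from the border."""
    ip = int(img[r, c])
    ring = [int(img[r + dr, c + dc]) for dr, dc in CIRCLE16]
    for brighter in (True, False):
        run = 0
        for v in ring + ring:            # doubled ring handles wrap-around runs
            passes = v > ip + t if brighter else v < ip - t
            run = run + 1 if passes else 0
            if run >= 9:                 # 9 consecutive pixels pass: corner (S322)
                return True
    return False
```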
S4. Track the feature points of the current sliding-window keyframes with the KLT optical flow tracking algorithm according to the information of the current frame.

Step S4 specifically includes the following steps:

S41. Subscribe to the information of the current frame extracted by the camera.

S42. Perform histogram equalization on the raw image in the current frame's information.

S43. Build the pyramid of the histogram-equalized image, then extract the FAST feature points of the previous frame's image.

S44. Perform LK optical flow tracking on the FAST feature points of the previous frame's image.

S45. Check the number of feature points in the previous frame's image; if the previous frame has fewer than 10 feature points, the tracking is declared failed outright. If the number of tracked points meets the preset threshold, LK optical flow tracking is performed directly. At this point, the feature points of the raw image are first undistorted, and the RANSAC algorithm is then used to reject outliers; whether each feature point of the previous frame's image was successfully tracked is recorded, and only points that pass both the optical flow tracking and the RANSAC check are marked as 1.
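A sketch of S42-S45 using OpenCV is given below; OpenCV is an implementation choice assumed here (the patent names the algorithms, not a library), and the LK window size, pyramid depth, and RANSAC thresholds are assumed values. Undistortion of the raw feature points is omitted for brevity.

```python
import cv2
import numpy as np

def track_previous_frame(prev_gray, cur_gray, prev_pts, min_pts=10):
    """Equalize, run pyramidal LK from the previous frame's FAST points,
    then RANSAC-filter the tracks (S42-S45). Returns (points, ok)."""
    if prev_pts is None or len(prev_pts) < min_pts:   # fewer than 10: failure (S45)
        return None, False
    prev_eq = cv2.equalizeHist(prev_gray)             # histogram equalization (S42)
    cur_eq = cv2.equalizeHist(cur_gray)

    p0 = prev_pts.astype(np.float32).reshape(-1, 1, 2)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(         # pyramidal LK tracking (S43-S44)
        prev_eq, cur_eq, p0, None, winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    a, b = p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)
    if len(a) < 8:
        return None, False

    # RANSAC rejects outlier tracks; a point is "marked 1" only when both
    # the optical flow status and the RANSAC inlier mask accept it (S45).
    _, inliers = cv2.findFundamentalMat(a, b, cv2.FM_RANSAC, 1.0, 0.99)
    if inliers is None:
        return None, False
    return b[inliers.ravel() == 1], True
```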
S5. Update the pose estimate of the current state according to the reprojection error between the IMU-prior pose estimate and the tracked feature points of the current sliding-window keyframes; the IMU-prior pose estimate is obtained from the IMU and serves as the initial pose estimate for the current frame of the image captured by the camera in the current scene.

Step S5 specifically includes the following steps:

S51. Decide by parallax which marginalization strategy to adopt, thereby judging whether the current frame is a keyframe: if the parallax between the new frame and the previous frame is large, marginalize the oldest frame; if the parallax is small, marginalize the previous frame.

S52. Add the prior of new feature points into the IMU, and add all observations of a feature point into a triangulation computed according to the IMU-prior pose estimates to obtain the latest observation frame.

S53. Establish a residual relation from the reprojection error between a feature point's latest observation frame and the IMU pose estimates corresponding to the oldest one or two frames in the sliding window. Each iteration starts from the pose estimate produced at the end of the previous optimization of the visual-inertial system, i.e. the joint camera-IMU optimization, as the initial value, recomputes within one window, and updates the pose estimate of the current state through the residual relation established by the reprojection error.
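Schematically, and assuming a standard pinhole reprojection model (the patent does not write the formula out at this point), the residual for a triangulated landmark $P_l^w$ observed at pixel $p_l$ in a frame whose IMU-prior pose is $(R, t)$ takes the familiar form:

```latex
r_{\text{proj}} = \pi\!\left( R^{\top}\left( P_l^{w} - t \right) \right) - p_l ,
\qquad
\pi\!\left( [\,x \;\; y \;\; z\,]^{\top} \right)
  = \begin{bmatrix} f_x \, x / z + c_x \\ f_y \, y / z + c_y \end{bmatrix},
```

where $\pi$ projects with the camera intrinsics $f_x, f_y, c_x, c_y$; summing these residuals over the window gives the cost whose minimization updates the current pose estimate in S53.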
S6. Perform a state update of the current pose estimate through the ESIKF to obtain the latest visual features.

Step S6 specifically includes the following steps:

S61. Compute the plane of the current frame through a parametric equation, and match the current frame's plane against the previous frame's planar points; planes matched as identical constitute the observation of the pose transformation.

S62. Perform the EKF update with the IMU integration as the prior and update the covariance, obtaining the optimal error-state pose estimate and the state vector. After solving for the gain matrix K, the posterior of the error state is obtained and its change is computed; when the change falls below a threshold, the iteration is judged to have converged.

S63. Update the posterior covariance matrix when the ESIKF exits, completing the state update of the current pose estimate.

S64. Add some of the points of the new frame to the k-d tree, and update the flag by which the next frame looks them up in the map.

S65. Approximate the result through several ESIKF iterations, and update the current sliding window with the latest ESKF posterior.
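A minimal sketch of the iterated error-state update of S62-S65 follows, in the textbook iterated-EKF form; the measurement function h and Jacobian H are assumed to come from the plane association of S61, and the '+' on the state stands for the manifold retraction (boxplus) on its rotation components.

```python
import numpy as np

def esikf_update(x, P, z, h, H_of, R, max_iters=5, eps=1e-6):
    """Iterated error-state Kalman update around the IMU-propagated prior.

    x, P : prior state and covariance from IMU integration (S62).
    z    : stacked plane-match observations (S61).
    h    : measurement function; H_of(x) returns its Jacobian at x.
    """
    K, H = None, None
    for _ in range(max_iters):                 # several iterations (S65)
        H = H_of(x)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # gain matrix K (S62)
        dx = K @ (z - h(x))                    # posterior error state
        x = x + dx                             # '+' = boxplus on rotation parts
        if np.linalg.norm(dx) < eps:           # converged once the change is small
            break
    P = (np.eye(len(P)) - K @ H) @ P           # posterior covariance (S63)
    return x, P
```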
S7. Match the latest visual features against the point cloud of the current laser frame carrying the corner feature data, obtaining the lidar odometry factor and the camera odometry factor.

S8. Add the lidar odometry factor, the camera odometry factor, and the IMU pre-integration factor to the factor graph for optimization, and then perform three-dimensional map modeling.

Adding the IMU pre-integration factor to the factor graph:
考虑滑动窗口内两个连续帧bk和bk-1之间的IMU测量,IMU预积分测量残 差定义如下:Considering an IMU measurement between two consecutive frames b k and b k − 1 within a sliding window, the IMU pre-integrated measurement residual is defined as follows:
其中[.]xyz提取了用于误差状态表示的四元数q的向量部分(虚部)。是四元数的三阶误差状态表示。是两个连续图像帧的时间间隔 内只采用带噪声的加速度计和陀螺仪测量的IMU预积分测量值,加速度计和 陀螺仪偏差还包括在残差项中用于在线矫正。where [.] xyz extracts the vector part (imaginary part) of the quaternion q used for the error state representation. is the third-order error state representation of a quaternion. is the IMU pre-integrated measurement value measured by only the noisy accelerometer and gyroscope in the time interval of two consecutive image frames, and the accelerometer and gyroscope deviations are also included in the residual item for online correction.
上式中,表示残差的定义;分别是对于IMU的 位置速度方向预积分量的差值;δba、δbg分别是对于IMU加速度和角速度的偏 置量差值;分别是k-1帧IMU坐标系在世界坐标系下对应的位 置、速度、旋转;分别是k-1时刻到k时刻两时刻间位置、速 度、角速度的预积分估计值;gW是世界坐标系下的重力矢量;是k时刻IMU 加速度的偏置量;是k时刻IMU角速度的偏置量。In the above formula, Indicates the definition of the residual; They are the difference of the position and velocity direction pre-integration of the IMU; δb a and δb g are the offset difference of the acceleration and angular velocity of the IMU respectively; They are the position, velocity, and rotation corresponding to the k-1 frame IMU coordinate system in the world coordinate system; are the pre-integrated estimated values of position, velocity, and angular velocity between time k-1 and time k; g W is the gravity vector in the world coordinate system; is the bias of IMU acceleration at time k; is the offset of the IMU angular velocity at time k.
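For reference, a sketch of how the pre-integrated quantities $\hat\alpha$, $\hat\beta$, $\hat\gamma$ are accumulated from the raw IMU samples between two frames might look as follows; simple Euler integration is used here for clarity, whereas implementations of this kind of system usually use midpoint integration.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def preintegrate(accels, gyros, dts, ba, bg):
    """Accumulate the position (alpha), velocity (beta) and rotation (gamma)
    pre-integration terms from bias-corrected IMU samples."""
    alpha, beta = np.zeros(3), np.zeros(3)
    gamma = Rotation.identity()
    for a, w, dt in zip(accels, gyros, dts):
        a_c, w_c = a - ba, w - bg              # remove current bias estimates
        Rk = gamma.as_matrix()
        alpha = alpha + beta * dt + 0.5 * (Rk @ a_c) * dt ** 2
        beta = beta + (Rk @ a_c) * dt
        gamma = gamma * Rotation.from_rotvec(w_c * dt)
    return alpha, beta, gamma
```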
向因子图中添加相机里程计因子:考虑第ith幅图像中第一个观测到的第 lth个特征,在第jth幅图像中特征观测的残差定义为:Adding a camera odometry factor to the factor graph: Considering the first observed l th feature in the i th image, the residual of the feature observation in the j th image is defined as:
上式中,表示视觉残差的定义,b1,b2为正切平面的两个正交基 也就是原点到测量单位球面点的向量a,在球面上的切向单位向量。为第l 个特征在第j张图像中的像素通过相机内参反向投影到单位球面上的三维坐 标。为第l个特征在第j张图像中的像素的三维坐标。表示从IMU坐标 系到相机坐标系的旋转转换矩阵三维坐标表示。表示从世界坐标系到IMU 坐标系的旋转转换矩阵三维坐标表示。表示从相机坐标系到IMU坐标系的 位置转换矩阵三维坐标表示。表示从IMU坐标系到世界坐标系的位置转换 矩阵三维坐标表示。In the above formula, Represents the definition of visual residuals, b1 and b2 are the two orthogonal bases of the tangent plane, that is, the vector a from the origin to the measurement unit spherical point, and the tangential unit vector on the spherical surface. is the three-dimensional coordinates of the pixel of the lth feature in the jth image back-projected to the unit sphere through the camera internal reference. is the three-dimensional coordinates of the pixel of the lth feature in the jth image. Indicates the three-dimensional coordinate representation of the rotation transformation matrix from the IMU coordinate system to the camera coordinate system. Indicates the three-dimensional coordinate representation of the rotation transformation matrix from the world coordinate system to the IMU coordinate system. Indicates the three-dimensional coordinate representation of the position transformation matrix from the camera coordinate system to the IMU coordinate system. Indicates the three-dimensional coordinate representation of the position transformation matrix from the IMU coordinate system to the world coordinate system.
其中,是使用相机内参将像素坐标转变为单位向量的反投影函数。是第ith幅图像中第lth个特征的第一次观测。是第jth幅图像中相 同特征的观测。由于视觉残差的自由度是2,因此将残差向量投影到切平面上。in, is a backprojection function that converts pixel coordinates into unit vectors using camera intrinsics. is the first observation of the l th feature in the ith image. is the observation of the same feature in the j th image. Since the degree of freedom of the visual residual is 2, the residual vector is projected onto the tangent plane.
Adding the lidar odometry factor to the factor graph: the lidar odometry factor is $\Delta T_{k,k+1}=T_k^{-1}\,T_{k+1}$, where $\Delta T_{k,k+1}$ is the relative transformation between state nodes $X_k$ and $X_{k+1}$.
Using lidar and IMU in mine roadways improves the accuracy of the whole system in the unstructured environment but increases the computational load; the embodiment therefore performs keyframe matching so that a balance between accuracy and computation is achieved, ensuring the system's accuracy and real-time performance. The SLAM method using lidar, camera, and IMU processes the lidar's point cloud features together with the IMU data and the feature information extracted and optimized by the camera, which likewise improves accuracy in the unstructured environment at the cost of more computation; the keyframe matching of this embodiment reduces the computational load of the whole system, strikes that balance, and guarantees the system's accuracy and real-time performance.
Step S8 specifically includes the following steps:

S81. Add the lidar odometry factor, the camera odometry factor, and the IMU pre-integration factor to the factor graph for optimization.

S82. Compute the Jacobian matrices of the lidar odometry factor, the camera odometry factor, and the IMU pre-integration factor with respect to the state, and solve for the state with the Levenberg-Marquardt method.

S83. In the ROS system, configure the rgbdslam package with the map_server server, and perform three-dimensional map modeling with all of the factor-graph-optimized data.
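A compact sketch of the Levenberg-Marquardt step of S82, written for a generic stacked residual r(x) with Jacobian J(x), is given below; real factor-graph back ends additionally exploit the sparsity of the graph and perform the state update on the manifold.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, iters=20, lam=1e-3):
    """Minimize ||r(x)||^2 by damped Gauss-Newton (generic LM, as in S82)."""
    x = x0.copy()
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        H = J.T @ J                            # Gauss-Newton Hessian approximation
        g = J.T @ r
        dx = np.linalg.solve(H + lam * np.eye(len(x)), -g)
        if np.sum(residual(x + dx) ** 2) < np.sum(r ** 2):
            x, lam = x + dx, lam * 0.5         # accept the step, relax damping
        else:
            lam *= 10.0                        # reject the step, damp harder
        if np.linalg.norm(dx) < 1e-8:
            break
    return x
```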
This embodiment uses lidar, a camera, and an IMU to achieve high-precision positioning underground in coal mines and to improve attitude perception of the underground environment, thereby realizing high-precision localization while balancing accuracy against computational cost, improving the accuracy and real-time performance of positioning and mapping in mine roadways, and effectively solving the problem of difficult underground positioning and mapping in unstructured environments where GPS cannot function.

This embodiment also provides a system applying the above SLAM method for unstructured underground mine environments, in which the factor-graph-optimized lidar odometry factor, camera odometry factor, and IMU pre-integration factor are used for three-dimensional map modeling by configuring the rgbdslam package with the map_server server in the ROS system together with all of the graph-optimized data, the system comprising:
a point cloud information acquisition module 100, used to acquire the point cloud information of the current laser frame of the lidar;

a feature extraction module 200, used to compute the curvature of each point in the current laser frame's point cloud and to extract each point's corner feature and planar feature according to its curvature;

a corner feature judgment module 300, used to extract the information of the current frame with the camera, detect the corners of the current frame with the FAST algorithm according to that information, and judge whether each is a corner feature point;

a feature point tracking module 400, used to track the feature points of the current sliding-window keyframes with the KLT optical flow tracking algorithm according to the information of the current frame;

a pose estimation update module 500, used to update the pose estimate of the current state according to the reprojection error between the IMU-prior pose estimate and the tracked feature points of the current sliding-window keyframes;

a visual feature derivation module 600, used to perform a state update of the current pose estimate through the ESIKF to obtain the latest visual features;

an odometry factor derivation module 700, used to match the latest visual features against the point cloud of the current laser frame carrying the corner feature data, yielding the lidar odometry factor and the camera odometry factor;

a factor graph optimization module 800, used to add the lidar odometry factor, the camera odometry factor, and the IMU pre-integration factor to the factor graph for optimization and then perform three-dimensional map modeling.
In this embodiment, point cloud features are extracted from the lidar, and the camera can also provide visual features to match against the point cloud features, so as to optimize the whole system and keep it stable.

Lidar, camera, and IMU are used to achieve high-precision positioning underground in coal mines and to improve attitude perception of the underground environment, realizing high-precision localization while balancing accuracy against computational cost, improving the accuracy and real-time performance of positioning and mapping in mine roadways, and effectively solving the problem of difficult underground positioning and mapping in unstructured environments where GPS cannot function.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the same or similar parts the embodiments may be referred to one another. Since the above embodiments are basically similar to the method embodiment, they are described relatively briefly; for the relevant parts, refer to the description of the method embodiment.

The above embodiments describe the present invention in detail. Specific examples are used herein to set forth the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, those of ordinary skill in the art may, following the idea of the present invention, make changes to the specific implementation and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (9)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210970287.5A (CN115290073A) | 2022-08-12 | 2022-08-12 | A SLAM method and system under unstructured characteristics of underground mines |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210970287.5A (CN115290073A) | 2022-08-12 | 2022-08-12 | A SLAM method and system under unstructured characteristics of underground mines |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN115290073A (en) | 2022-11-04 |

Family (ID=83829328)

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210970287.5A (CN115290073A, pending) | A SLAM method and system under unstructured characteristics of underground mines | 2022-08-12 | 2022-08-12 |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN115290073A (en) |
Cited By (1)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN115628738A (en) | 2022-11-05 | 2023-01-20 | | Multi-mode autonomous navigation and positioning system |
2022
- 2022-08-12: Application CN202210970287.5A filed in China (CN); patent published as CN115290073A (en); status active, Pending
Similar Documents

| Publication | Title |
|---|---|
| CN111207774B | Method and system for laser-IMU external reference calibration |
| WO2021035669A1 | Pose prediction method, map construction method, movable platform, and storage medium |
| CN111024066B | Unmanned aerial vehicle vision-inertia fusion indoor positioning method |
| US8761439B1 | Method and apparatus for generating three-dimensional pose using monocular visual sensor and inertial measurement unit |
| CN112304307A | Positioning method and device based on multi-sensor fusion and storage medium |
| CN113223161B | Robust panoramic SLAM system and method based on IMU and wheel speed meter tight coupling |
| CN107796391A | A kind of strapdown inertial navigation system/visual odometry Combinated navigation method |
| CN113674412B | Pose fusion optimization-based indoor map construction method, system and storage medium |
| CN110455301A | A Dynamic Scene SLAM Method Based on Inertial Measurement Unit |
| CN114255323A | Robot, map construction method, map construction device and readable storage medium |
| CN110929402A | A Probabilistic Terrain Estimation Method Based on Uncertainty Analysis |
| CN115479598A | Positioning and mapping method based on multi-sensor fusion and tight coupling system |
| CN113420590B | Robot positioning method, device, equipment and medium in weak texture environment |
| CN112652001B | Underwater robot multi-sensor fusion positioning system based on extended Kalman filtering |
| CN114234967A | Hexapod robot positioning method based on multi-sensor fusion |
| CN115218889A | Multi-sensor indoor positioning method based on dotted line feature fusion |
| CN112731503A | Pose estimation method and system based on front-end tight coupling |
| CN116772844A | A navigation method based on visual inertial indoor robot in dynamic environment |
| JP2023021994A | Data processing method and apparatus for autonomous vehicle, electronic device, storage medium, computer program, and autonomous vehicle |
| Schwendner et al. | Using embodied data for localization and mapping |
| CN117073720A | Method and equipment for quick visual inertia calibration and initialization under weak environment and weak action control |
| CN114429432B | Multi-source information layered fusion method and device and storage medium |
| CN115290073A | A SLAM method and system under unstructured characteristics of underground mines |
| CN108827287B | Robust visual SLAM system in complex environment |
| CN118351177B | A posture optimization method and system integrating edge features of reflectivity projection images |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |