CN115290073A - SLAM method and system under mine underground unstructured characteristics - Google Patents
- Publication number
- CN115290073A (application CN202210970287.5A)
- Authority
- CN
- China
- Prior art keywords
- point
- frame
- current
- factor
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G01C21/3841: Creation or updating of electronic map data characterised by the source of data; data obtained from two or more sources, e.g. probe vehicles
- G01C21/005: Navigation with correlation of navigation data from several sources, e.g. map or contour matching
- G01C21/1652: Dead reckoning by inertial navigation combined with ranging devices, e.g. LIDAR or RADAR
- G01C21/1656: Dead reckoning by inertial navigation combined with passive imaging devices, e.g. cameras
- G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G06T17/20: Three-dimensional [3D] modelling; finite element generation, e.g. wire-frame surface description, tesselation
Abstract
The invention relates to the technical field of underground mine positioning and navigation and solves the technical problem that underground positioning and map building are difficult in unstructured environments where GPS is unavailable. It provides, in particular, a SLAM method under the underground unstructured characteristics of a mine, which comprises the following steps: acquiring point cloud information of the current laser frame of a laser radar; calculating the curvature of each point in the current laser frame point cloud information and extracting the corner point feature and plane point feature of each point according to its curvature; extracting the information of the current frame through a camera, detecting corners of the current frame with the FAST algorithm according to that information, and judging whether each corner is a corner feature point; and tracking the feature points of the current sliding-window key frames with a KLT optical flow tracking algorithm according to the information of the current frame. The invention realizes high-precision positioning in underground coal mines, balances precision against computational load, and improves the accuracy and real-time performance of positioning and map building in mine roadways.
Description
Technical Field
The invention relates to the technical field of mine underground positioning and navigation, and in particular to a SLAM method and system under the underground unstructured characteristics of a mine.
Background
Operation areas such as coal mine tunnels and excavation working faces have typically unstructured environmental characteristics, and GPS technology cannot be applied directly underground. As a result, the autonomous positioning system of an underground coal mine robot cannot achieve high-precision positioning and suffers from poor attitude perception, so mine accidents occur frequently during coal mining and the safety of coal mining is reduced.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a SLAM method and system under the underground unstructured characteristics of a mine, solving the technical problem that underground positioning and map building are difficult in unstructured environments where GPS is unavailable.
In order to solve the above technical problems, the invention provides the following technical scheme: a SLAM method under the underground unstructured characteristics of a mine, comprising the following processes:
acquiring point cloud information of a current laser frame of a laser radar;
calculating the curvature of each point in the current laser frame point cloud information of the laser radar, and extracting the corner point characteristic and the plane point characteristic of each point according to the curvature of each point;
extracting information of the current frame through a camera, detecting corners of the current frame by adopting the FAST algorithm according to the information of the current frame, and judging whether each corner is a corner feature point;
tracking feature points of the current sliding-window key frames by adopting a KLT optical flow tracking algorithm according to the information of the current frame;
updating the pose estimation of the current state according to the IMU prior pose estimation and the reprojection errors of the tracked feature points of the current sliding-window key frames;
updating the state of the current pose estimation through an ESIKF to obtain the latest visual features;
performing feature matching between the latest visual features and the point cloud information of the current laser frame of the laser radar carrying corner feature point cloud data, to obtain a laser radar odometry factor and a camera odometry factor;
and adding the laser radar odometry factor, the camera odometry factor and the IMU pre-integration factor into the factor graph for optimization, and then performing three-dimensional map modeling.
Further, in the step of calculating the curvature of each point in the current laser frame point cloud information of the laser radar and extracting the corner feature and the plane feature of each point according to the curvature of each point, the method specifically comprises the following steps:
subscribing point cloud information after the motion distortion of the current laser frame is corrected;
traversing the effective point cloud after the motion distortion correction of the current laser frame, extracting a plurality of points before and after the current laser point from the effective point cloud, and calculating the corresponding curvatures of the points;
establishing an angular point characteristic set and a plane point characteristic set according to the curvatures of a plurality of points before and after the current laser point;
and extracting the corner feature of each point from the corner feature set, and extracting the plane point feature of each point from the plane point feature set.
Further, the step of detecting corners of the current frame by using the FAST algorithm according to the information of the current frame and judging whether each corner is a corner feature point specifically comprises the following steps:
subscribing the information of the current frame extracted by the current camera;
and selecting a pixel from the information of the current frame, and judging whether the pixel is a corner feature point.
Further, in the step of selecting a pixel from the information of the current frame and determining whether the pixel is a corner feature point, the following process is specifically included:
setting the brightness value of the selected pixel p as $I_p$, and setting the brightness threshold as t;
setting a discretized Bresenham circle with radius equal to 3 pixels centered on the pixel p, wherein 16 pixels lie on the boundary of the Bresenham circle;
if there exist 9 contiguous pixels among the 16 on the Bresenham circle whose values are all greater than $I_p + t$ or all less than $I_p - t$, judging that the pixel p is a corner feature point;
otherwise, judging that the pixel p is not a corner feature point.
Further, in the step of tracking the feature points of the key frame of the current sliding window by adopting a KLT optical flow tracking algorithm according to the information of the current frame, the method specifically includes the following steps:
subscribing the information of the current frame extracted by the current camera;
carrying out histogram equalization on an original image in the current frame information;
extracting a pyramid of the image after histogram equalization, and then extracting FAST characteristic points of the previous frame of image;
performing LK optical flow tracking on FAST characteristic points of the previous frame of image;
judging the number of feature points of the previous frame image, and if that number is less than 10, directly declaring tracking failure;
and if the number of tracked points meets a preset threshold, directly carrying out LK optical flow tracking.
Further, in the step of updating the pose estimation of the current state, the following process is specifically included:
judging whether the current frame is a key frame through parallax and marginalizing accordingly: if the parallax between the new frame and the previous frame is large, marginalizing the oldest frame, and if the parallax is small, marginalizing the previous frame;
adding new feature point priors into the IMU state, and adding all observations of a feature point into the triangulation calculation according to the IMU prior pose estimation to obtain the latest observation frame;
and establishing a residual relation from the reprojection error between the latest observation frame and the oldest frame of a feature point, using the IMU pose estimates corresponding to those two frames in the sliding window.
Further, in the step of performing state update on the pose estimation of the current state through ESIKF to obtain the latest visual feature, the method specifically includes the following steps:
calculating the plane of the current frame through a parametric equation, and matching the plane of the current frame with the plane points of the previous frame to obtain an observation that both are the same plane, i.e. the pose transformation;
performing the EKF update and covariance update with the IMU integration as the prior, to obtain the optimal pose estimation and the error-state vector;
updating the posterior covariance matrix when the ESIKF iteration finishes and exits, completing the state update of the pose estimation of the current state;
adding some point clouds of the new frame into the kd-tree, and updating the index used by the next frame to search the map;
and iterating the ESIKF several times to approximate the result, and updating the current sliding window with the latest error-state Kalman filter posterior.
Further, in the step of performing three-dimensional map modeling after adding the laser radar odometer factor, the camera odometer factor and the IMU pre-integration factor into the factor graph for optimization, the method specifically comprises the following processes:
adding the laser radar odometry factor, the camera odometry factor and the IMU pre-integration factor into the factor graph for optimization;
respectively solving Jacobian matrixes of the laser radar odometry factor, the camera odometry factor and the IMU pre-integration factor relative to the state quantity, and solving the state quantity by using a Levenberg-Marquardt method;
and configuring the rgbdslam package by using map_server in the ROS system, and carrying out three-dimensional map modeling by combining all data after factor graph optimization.
The invention also provides another technical scheme: a system for realizing the above SLAM method under the underground unstructured characteristics of a mine, in which the factor-graph-optimized laser radar odometry factor, camera odometry factor and IMU pre-integration factor are used for three-dimensional map modeling by configuring the rgbdslam package with map_server in the ROS system and combining all data after graph optimization. The system comprises:
the point cloud information acquisition module is used for acquiring point cloud information of a current laser frame of the laser radar;
the characteristic extraction module is used for calculating the curvature of each point in the current laser frame point cloud information of the laser radar and extracting the angular point characteristic and the plane point characteristic of each point according to the curvature of each point;
the angular point characteristic judgment module is used for extracting the information of the current frame through the camera, detecting the angular point of the current frame by adopting a FAST algorithm according to the information of the current frame, and judging whether the angular point is an angular characteristic point;
the characteristic point tracking module is used for tracking the characteristic points of the key frame of the current sliding window by adopting a KLT optical flow tracking algorithm according to the information of the current frame;
a pose estimation update module for updating a pose estimate for a current state based on a reprojection error between an IMU prior pose estimate and a feature point tracking a current sliding window keyframe;
the visual feature obtaining module is used for carrying out state updating on pose estimation of the current state through ESIKF to obtain the latest visual feature;
the system comprises a speedometer factor obtaining module, a camera speedometer factor obtaining module and a display module, wherein the speedometer factor obtaining module is used for carrying out feature matching on the latest visual features and point cloud information of a current laser frame of the laser radar with angular point feature point cloud data to obtain a laser radar speedometer factor and a camera speedometer factor;
and the factor graph optimization module is used for adding the laser radar odometer factor, the camera odometer factor and the IMU pre-integration factor into the factor graph for optimization and then performing three-dimensional map modeling.
By means of the technical scheme, the invention provides a SLAM method and a system under the underground unstructured characteristic of a mine, which at least have the following beneficial effects:
1. The invention adopts a laser radar, a camera and an IMU to realize high-precision positioning in underground coal mines and to improve attitude perception of the underground environment. It balances precision against computational load, improves the accuracy and real-time performance of positioning and map building in mine roadways, and effectively solves the problem that underground positioning and map building are difficult in unstructured environments where GPS is unavailable.
2. By using a SLAM method that fuses the laser radar, the camera and the IMU, the point cloud features of the laser radar, the IMU data and the feature information extracted and optimized by the camera are computed jointly, which improves the accuracy of the whole system in unstructured environments; key frames are matched to reduce the computational load of the whole system, balancing precision against computation and guaranteeing the accuracy and real-time performance of the system.
3. By extracting point cloud features from the laser radar while the camera provides visual features to be matched against the point cloud features, the whole system is optimized and kept stable.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application without limiting it. In the drawings:
FIG. 1 is a flow chart of the SLAM method of the present invention;
FIG. 2 is a schematic diagram illustrating the transmission of point cloud feature extraction data according to the present invention;
FIG. 3 is a schematic diagram of visual feature extraction data delivery according to the present invention;
FIG. 4 is a schematic diagram of the visual odometry data transfer of the camera of the present invention;
FIG. 5 is a diagram illustrating factor graph optimized data delivery according to the present invention.
In the figure: 100. a point cloud information acquisition module; 200. a feature extraction module; 300. a corner feature judgment module; 400. a feature point tracking module; 500. a pose estimation update module; 600. a visual characteristic deriving module; 700. an odometry factor derivation module; 800. and a factor graph optimization module.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below, so that how technical means are applied to solve the technical problems and achieve the technical effects can be fully understood and implemented.
It will be understood by those skilled in the art that all or part of the steps in the method for implementing the above embodiments may be implemented by relevant hardware instructed by a program, and therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Referring to FIGS. 1 to 5, a specific implementation of the embodiment is shown. It fully considers the unstructured environmental features of mine roadways and the fact that GPS technology cannot be applied directly underground. Because of these unstructured features, under certain circumstances the laser radar cannot acquire enough feature point clouds; the IMU, however, relies on no external information and is independent of the other sensors, so IMU pre-integration can compensate when not enough feature point clouds are available. Point cloud feature extraction is then performed on the laser radar data, and the camera additionally provides visual features to be matched against the point cloud features, optimizing the whole system and keeping it stable.
A SLAM method under unstructured features in a mine well, comprising the steps of:
s1, point cloud information of a current laser frame of the laser radar is obtained.
And S2, calculating the curvature of each point in the current laser frame point cloud information of the laser radar, and extracting the angular point characteristic and the plane point characteristic of each point according to the curvature of each point.
In step S2, the method specifically includes the following steps:
And S21, subscribing the point cloud information after the motion distortion of the current laser frame is corrected.
S22, traversing the effective point cloud after motion distortion correction of the current laser frame, extracting several points before and after the current laser point from the effective point cloud, and calculating their corresponding curvatures, as sketched after step S24 below. The square of the sum of range differences between the current laser point and its neighbors is taken as the curvature, and the curvature value of the point is stored. Taking the parameter 5 as an example, the five points before and the five points after the current laser point are taken, ten points in total, and the curvature corresponding to each point is calculated as the square of this range-difference sum.
S23, establishing a corner feature set and a plane point feature set according to the curvatures of the points before and after the current laser point. In this process, points that are occluded or lie nearly parallel to the laser beam are marked and excluded from feature extraction. The scan lines are then traversed, the point cloud of each full scan-line revolution is divided into 6 sectors, and for each sector up to 20 corner points and an unlimited number of plane points are extracted: points with large curvature are added to the corner feature set, and points with small curvature to the plane point feature set. Finally, all points not selected as corner points are regarded as plane points, added to the plane point feature set, and down-sampled.
S24, extracting the corner features of each point from the corner feature set and the plane point features of each point from the plane point feature set. In this process, the corner point clouds and plane point clouds of the corresponding points are published, and the point cloud information of the current laser frame carrying the corner feature point cloud data is published at the same time.
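The curvature computation and feature split in S22 and S23 can be sketched as follows. This is a minimal illustration assuming a LOAM-style smoothness metric over per-point ranges; the function names and thresholds are chosen for the example and are not taken from the patent.

```python
import numpy as np

def point_curvature(ranges, i, k=5):
    # Smoothness of point i: sum of range differences between the point and
    # its k neighbors on each side (ten neighbors for k=5), squared.
    diff = np.sum(ranges[i - k:i]) + np.sum(ranges[i + 1:i + k + 1]) - 2 * k * ranges[i]
    return diff * diff

def split_features(ranges, k=5, corner_thresh=1.0, plane_thresh=0.1, max_corners=20):
    # Points with large curvature go to the corner set (capped per sector),
    # points with small curvature go to the plane point set.
    corners, planes = [], []
    for i in range(k, len(ranges) - k):
        c = point_curvature(ranges, i, k)
        if c > corner_thresh and len(corners) < max_corners:
            corners.append(i)
        elif c < plane_thresh:
            planes.append(i)
    return corners, planes
```

In a full pipeline this would run per sector of each scan line, with occluded and beam-parallel points masked out beforehand.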
And S3, extracting the information of the current frame through the camera, detecting corners of the current frame by adopting the FAST algorithm according to that information, and judging whether each corner is a corner feature point. Taking an underground mine as an example, the camera shoots an image of the current scene, and the current frame image contains the information of that scene.
In step S3, the method specifically includes the following steps:
and S31, subscribing the information of the current frame extracted by the current camera.
And S32, selecting a pixel from the information of the current frame, and judging whether the pixel is a corner feature point.
In step S32, determining whether a pixel is a corner feature point includes the following steps:
S321, setting the brightness value of the selected pixel p as $I_p$, and setting the brightness threshold as t.
S322, setting a discretized Bresenham circle with radius equal to 3 pixels centered on the pixel p, with 16 pixels arranged on the boundary of the Bresenham circle.
If there exist 9 contiguous pixels among the 16 on the Bresenham circle whose values are all greater than $I_p + t$ or all less than $I_p - t$, the pixel p is judged to be a corner feature point.
Otherwise, the pixel p is judged not to be a corner feature point.
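A minimal sketch of this segment test follows, assuming a grayscale numpy image and a pixel far enough from the border; the circle offsets trace the 16-pixel Bresenham circle of radius 3, and the threshold value is illustrative.

```python
import numpy as np

# Offsets (dx, dy) of the 16 pixels on the discretized Bresenham circle of radius 3.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_corner(img, x, y, t=20, n=9):
    # Pixel p is a corner feature point if n contiguous circle pixels are all
    # brighter than I_p + t or all darker than I_p - t.
    ip = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    for flags in ([v > ip + t for v in ring], [v < ip - t for v in ring]):
        run = 0
        for f in flags + flags:          # doubled list handles wrap-around
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```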
And S4, tracking the feature points of the key frame of the current sliding window by adopting a KLT optical flow tracking algorithm according to the information of the current frame.
In step S4, the method specifically includes the following steps:
And S41, subscribing the information of the current frame extracted by the current camera.
And S42, carrying out histogram equalization on the original image in the current frame information.
And S43, extracting a pyramid of the image after histogram equalization, and then extracting the FAST feature points of the previous frame image.
And S44, performing LK optical flow tracking on the FAST feature points of the previous frame image.
S45, judging the number of feature points of the previous frame image: if it is less than 10, tracking is directly declared failed; if the number of tracked points meets the preset threshold, LK optical flow tracking is carried out directly. At this point the feature points of the original image are first undistorted, then outliers are eliminated with the RANSAC algorithm; a flag records whether each feature point of the previous frame was tracked successfully, and only the points that pass both the optical flow tracking and the RANSAC check are marked as 1.
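Steps S41 to S45 can be sketched with OpenCV as below. This is a hedged example rather than the patent's implementation; the window size, pyramid depth and RANSAC parameters are assumptions.

```python
import cv2
import numpy as np

def track_features(prev_img, cur_img, prev_pts):
    # prev_pts: (N, 1, 2) float32 pixel coordinates of FAST features from the
    # previous grayscale frame. Returns surviving points, or None on failure.
    prev_eq = cv2.equalizeHist(prev_img)            # histogram equalization
    cur_eq = cv2.equalizeHist(cur_img)
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(  # pyramidal LK tracking
        prev_eq, cur_eq, prev_pts, None, winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    if good.sum() < 10:                             # fewer than 10 points: fail
        return None
    p0, p1 = prev_pts[good], cur_pts[good]
    # RANSAC on the fundamental matrix rejects optical-flow outliers; a point
    # survives only if it passes both the LK status check and RANSAC.
    _, mask = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.99)
    if mask is None:
        return None
    return p1[mask.ravel() == 1]
```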
And S5, updating the pose estimation of the current state according to the IMU prior pose estimation and the reprojection errors of the tracked feature points of the current sliding-window key frames. The IMU prior pose estimation is obtained from the IMU and serves as the initial pose estimate of the camera when shooting the current frame of the scene.
In step S5, the method specifically includes the following steps:
S51, judging whether the current frame is a key frame through parallax and marginalizing accordingly, as sketched after step S53 below: if the parallax between the new frame and the previous frame is large, the oldest frame is marginalized; if the parallax is small, the previous frame is marginalized.
And S52, adding new feature point priors into the IMU state, and adding all observations of a feature point into the triangulation calculation according to the IMU prior pose estimation to obtain the latest observation frame.
S53, establishing a residual relation from the reprojection error between a feature point's latest observation frame and its oldest frame, using the IMU pose estimates corresponding to those two frames in the sliding window. At the beginning of each iteration, the pose estimate from the most recently completed optimization of the visual-inertial system (the jointly optimized camera and IMU system) is taken as the initial value, the window is recomputed, and the pose estimation of the current state is updated through the residual relation established by the reprojection errors.
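The marginalization decision in S51 can be sketched as follows; the parallax threshold is an illustrative assumption.

```python
import numpy as np

def marginalization_target(prev_pts, cur_pts, parallax_thresh=10.0):
    # prev_pts, cur_pts: (N, 2) matched pixel coordinates in the previous and
    # new frame. Large average parallax: the previous frame is kept as a key
    # frame and the oldest frame in the window is marginalized; small
    # parallax: the previous frame is redundant and is marginalized instead.
    parallax = np.mean(np.linalg.norm(cur_pts - prev_pts, axis=-1))
    return "oldest" if parallax >= parallax_thresh else "previous"
```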
S6, updating the state of the pose estimation of the current state through ESIKF to obtain the latest visual characteristics;
in step S6, the method specifically includes the steps of:
s61, calculating through a parameter equation to obtain a plane of the current frame, and matching the plane of the current frame with a plane point of the previous frame to obtain an observed quantity which is regarded as the same plane, namely posture transformation.
And S62, performing EKF updating and covariance updating by taking IMU integral as a priori to obtain the optimal attitude estimation and state vector of the error, solving the gain matrix K to obtain the posterior of the error, further calculating the variable quantity of the error, and judging convergence when the variable quantity is smaller than a threshold value.
And S63, updating the posterior covariance matrix when the ESIKF is finished and quitting, and finishing the state updating of the attitude estimation of the current state.
And S64, adding some point clouds in the new frame into the kd-tree, and updating the next frame to find the mark of the next frame in the map.
And S65, carrying out a plurality of times of iterative approximation results through the ESIKF, and updating the current sliding window by using the latest ESKF posterior.
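Steps S62 and S63 can be sketched as an iterated error-state Kalman update. The version below is simplified (it omits the prior-deviation term of the full iterated filter), and h_fun, which returns the plane-matching residual and its Jacobian relinearized about the current estimate, is a placeholder assumption.

```python
import numpy as np

def esikf_update(x, P, h_fun, R, tol=1e-3, max_iter=5):
    # x: nominal state; P: prior covariance from IMU propagation;
    # h_fun(x) -> (r, H): observation residual and Jacobian at x;
    # R: measurement noise covariance.
    I = np.eye(P.shape[0])
    for _ in range(max_iter):
        r, H = h_fun(x)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        dx = K @ r                                     # error-state increment
        x = x + dx                                     # inject into nominal state
        if np.linalg.norm(dx) < tol:                   # convergence: increment
            break                                      # below threshold (S62)
    P = (I - K @ H) @ P                                # posterior covariance (S63)
    return x, P
```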
And S7, performing feature matching between the latest visual features and the point cloud information of the current laser frame of the laser radar carrying the corner feature point cloud data, to obtain a laser radar odometry factor and a camera odometry factor.
And S8, adding the laser radar odometry factor, the camera odometry factor and the IMU pre-integration factor into a factor graph for optimization, and then performing three-dimensional map modeling.
Adding IMU pre-integration factor to the factor graph:
consider two consecutive frames b within a sliding window k And b k-1 IMU measurement in between, IMU pre-integration measurement residues are defined as follows:
wherein [.] xyz The vector part (imaginary part) of the quaternion q for the error state representation is extracted.Is a third order error state representation of a quaternion.Is an IMU pre-integrated measurement measured with only noisy accelerometers and gyroscopes included in the residual term for online rectification during the time interval of two consecutive image frames.
In the above formula, the first and second carbon atoms are,representing the definition of the residual;difference values of position and speed direction pre-integral quantities for the IMU are respectively; δ b a 、δb g Offset difference values for IMU acceleration and angular velocity, respectively;respectively corresponding position, speed and rotation of the k-1 frame IMU coordinate system under a world coordinate system;pre-integral estimated values of position, speed and angular speed between the moment k-1 and the moment k respectively; g W Is a gravity vector under a world coordinate system;is the offset of the IMU acceleration at time k;is the offset of the IMU angular velocity at time k.
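A minimal numerical sketch of evaluating this residual is given below, assuming SciPy's rotation utilities; the data layout is illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def imu_residual(p, v, q, ba, bg, alpha, beta, gamma, g_w, dt):
    # p, v, ba, bg: (2, 3) arrays for frames k-1 and k (p, v in world frame);
    # q: list of two scipy Rotations (body-to-world); alpha, beta: (3,)
    # pre-integrated position/velocity measurements; gamma: pre-integrated
    # rotation as a Rotation; g_w: (3,) gravity in world; dt: time interval.
    R_wb = q[0].inv()                                   # world -> body_{k-1}
    r_p = R_wb.apply(p[1] - p[0] + 0.5 * g_w * dt**2 - v[0] * dt) - alpha
    r_v = R_wb.apply(v[1] + g_w * dt - v[0]) - beta
    dq = q[0].inv() * q[1] * gamma.inv()                # rotation error
    r_th = 2.0 * dq.as_quat()[:3]                       # vector (xyz) part
    return np.concatenate([r_p, r_v, r_th, ba[1] - ba[0], bg[1] - bg[0]])
```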
Adding a camera odometry factor to the factor graph: consider the $l$-th feature that is first observed in the $i$-th image; the residual of the observation of this feature in the $j$-th image is defined as:

$$
r_{\mathcal{C}}\!\left(\hat{z}^{c_j}_{l},\mathcal{X}\right)=
\begin{bmatrix} b_1 & b_2 \end{bmatrix}^{T}\!\left(\hat{\bar{P}}^{c_j}_{l}-\frac{P^{c_j}_{l}}{\left\|P^{c_j}_{l}\right\|}\right),
\qquad
\hat{\bar{P}}^{c_j}_{l}=\pi_c^{-1}\!\left(\hat{u}^{c_j}_{l}\right),
$$

$$
P^{c_j}_{l}=R^{c}_{b}\!\left(R^{b_j}_{w}\!\left(R^{w}_{b_i}\!\left(R^{b}_{c}\,\frac{\pi_c^{-1}\!\left(\hat{u}^{c_i}_{l}\right)}{\lambda_l}+p^{b}_{c}\right)+p^{w}_{b_i}-p^{w}_{b_j}\right)-p^{b}_{c}\right).
$$

In the formula above, $r_{\mathcal{C}}(\cdot)$ denotes the visual residual definition; $b_1$ and $b_2$ are two orthogonal bases of the tangent plane, i.e. tangent unit vectors on the unit sphere orthogonal to the bearing vector from the origin to the measured point on the unit sphere. $\hat{\bar{P}}^{c_j}_{l}$ is the pixel of the $l$-th feature in the $j$-th image back-projected through the camera intrinsic parameters onto the unit sphere, and $P^{c_j}_{l}$ is the predicted three-dimensional coordinate of the feature in the $j$-th camera frame. $R^{c}_{b}$ denotes the rotation transformation from the IMU coordinate system to the camera coordinate system, $R^{b_j}_{w}$ the rotation transformation from the world coordinate system to the IMU coordinate system, $p^{b}_{c}$ the position transformation from the camera coordinate system to the IMU coordinate system, and $p^{w}_{b_i}$ the position transformation from the IMU coordinate system to the world coordinate system; $\lambda_l$ is the inverse depth of the feature at its first observation.
Here $\pi_c^{-1}$ is the back-projection function that converts pixel coordinates into a unit vector using the camera intrinsic parameters; $\hat{u}^{c_i}_{l}$ is the first observation of the $l$-th feature in the $i$-th image, and $\hat{u}^{c_j}_{l}$ is the observation of the same feature in the $j$-th image. Since the visual residual has 2 degrees of freedom, the residual vector is projected onto the tangent plane.
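The tangent-plane projection of this residual can be sketched as follows; the particular construction of the basis $(b_1, b_2)$ is one common choice and is an assumption rather than something the patent specifies.

```python
import numpy as np

def tangent_plane_residual(p_obs, p_pred):
    # p_obs: observed bearing on the unit sphere (back-projected pixel);
    # p_pred: predicted feature direction in the same camera frame.
    a = p_obs / np.linalg.norm(p_obs)
    # Build an orthonormal basis (b1, b2) of the tangent plane at a.
    tmp = np.array([0.0, 0.0, 1.0]) if abs(a[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    b1 = np.cross(a, tmp)
    b1 /= np.linalg.norm(b1)
    b2 = np.cross(a, b1)
    diff = p_pred / np.linalg.norm(p_pred) - a  # difference of unit bearings
    return np.array([b1 @ diff, b2 @ diff])    # 2-DoF residual
```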
Adding a laser radar odometry factor to the factor graph: the laser odometry factor is the relative transformation $\Delta T_{k,k+1}=X_k^{-1}X_{k+1}$, where $\Delta T_{k,k+1}$ is the relative transformation relationship between state nodes $X_k$ and $X_{k+1}$.
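A sketch of the residual such a relative-pose factor contributes, using 4x4 homogeneous transforms and SciPy for the rotation log map; this representation is an assumption, not the patent's code.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def lidar_factor_residual(X_k, X_k1, dT_meas):
    # X_k, X_k1: 4x4 poses of state nodes k and k+1; dT_meas: measured
    # relative transform Delta T_{k,k+1} from scan matching.
    dT_est = np.linalg.inv(X_k) @ X_k1            # relative motion implied by states
    E = np.linalg.inv(dT_meas) @ dT_est           # error transform
    r_rot = R.from_matrix(E[:3, :3]).as_rotvec()  # so(3) log of rotation error
    r_trans = E[:3, 3]                            # translation error
    return np.concatenate([r_rot, r_trans])
```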
Using the laser radar, the camera and the IMU together in the mine roadway, with the point cloud features of the laser radar, the IMU data and the feature information extracted and optimized by the camera computed jointly, improves the accuracy of the whole system in the unstructured environment but increases the computational load. This embodiment therefore performs key frame matching, which reduces the computation of the whole system, balances accuracy against computational load, and guarantees the accuracy and real-time performance of the system.
In step S8, the method specifically includes the steps of:
and S81, adding the laser radar odometry factor, the camera odometry factor and the IMU pre-integration factor into the factor graph for optimization.
And S82, respectively deriving the Jacobian matrices of the laser radar odometry factor, the camera odometry factor and the IMU pre-integration factor with respect to the state quantities, and solving for the state quantities using the Levenberg-Marquardt method, as sketched after step S83 below.
S83, configuring the rgbdslam package by using map_server in the ROS system, and performing three-dimensional map modeling by combining all data after factor graph optimization.
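The Levenberg-Marquardt solve in S82 amounts to damped normal equations over the stacked factor residuals. A bare-bones sketch follows; the damping schedule and convergence test are illustrative assumptions.

```python
import numpy as np

def lm_solve(residual_fun, x0, lam=1e-3, max_iter=20, tol=1e-6):
    # residual_fun(x) -> (r, J): stacked residuals and Jacobians of the lidar
    # odometry, camera odometry and IMU pre-integration factors w.r.t. x.
    x = x0.copy()
    for _ in range(max_iter):
        r, J = residual_fun(x)
        A = J.T @ J + lam * np.eye(x.size)        # damped normal equations
        dx = np.linalg.solve(A, -J.T @ r)
        r_new, _ = residual_fun(x + dx)
        if np.linalg.norm(r_new) < np.linalg.norm(r):
            x, lam = x + dx, lam * 0.5            # accept step, relax damping
        else:
            lam *= 10.0                           # reject step, increase damping
        if np.linalg.norm(dx) < tol:
            break
    return x
```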
By adopting the laser radar, the camera and the IMU, the method realizes high-precision positioning in the underground coal mine and improves attitude perception of the underground environment; it balances precision against computational load, improves the accuracy and real-time performance of positioning and map building in the mine roadway, and effectively solves the problem that underground positioning and map building are difficult in unstructured environments where GPS is unavailable.
The embodiment also provides a system applying the above SLAM method under the underground unstructured characteristics of a mine, in which the factor-graph-optimized laser radar odometry factor, camera odometry factor and IMU pre-integration factor are used for three-dimensional map modeling by configuring the rgbdslam package with map_server in the ROS system and combining all data after graph optimization. The system comprises the following modules:
the system comprises a point cloud information acquisition module 100, wherein the point cloud information acquisition module 100 is used for acquiring point cloud information of a current laser frame of the laser radar.
The feature extraction module 200 is configured to calculate a curvature of each point in the point cloud information of the current laser frame of the laser radar, and extract an angular point feature and a plane point feature of each point according to the curvature of each point.
A corner feature judgment module 300, where the corner feature judgment module 300 is configured to extract the information of the current frame through the camera, detect corners of the current frame by using the FAST algorithm according to the information of the current frame, and judge whether each corner is a corner feature point.
A feature point tracking module 400, wherein the feature point tracking module 400 is configured to track feature points of a current sliding window key frame by using a KLT optical flow tracking algorithm according to information of the current frame.
A pose estimation update module 500, where the pose estimation update module 500 is used for updating the pose estimation of the current state according to the IMU prior pose estimation and the reprojection errors of the tracked feature points of the current sliding-window key frames.
And a visual feature obtaining module 600, where the visual feature obtaining module 600 is configured to perform state update on the pose estimation of the current state through ESIKF to obtain the latest visual feature.
And an odometry factor obtaining module 700, where the odometry factor obtaining module 700 is used for performing feature matching between the latest visual features and the point cloud information of the current laser frame of the laser radar carrying the corner feature point cloud data, to obtain a laser radar odometry factor and a camera odometry factor.
And the factor graph optimization module 800 is used for performing three-dimensional mapping after the laser radar odometer factor, the camera odometer factor and the IMU pre-integration factor are added into the factor graph for optimization.
In this embodiment, point cloud feature extraction is performed on the laser radar, and the camera also provides visual features to be matched against the point cloud features, optimizing the whole system and keeping it stable.
By adopting the laser radar, the camera and the IMU, the system realizes high-precision positioning in the underground coal mine, improves attitude perception of the underground environment, balances precision against computational load, improves the accuracy and real-time performance of positioning and map building in the mine roadway, and effectively solves the problem that underground positioning and map building are difficult in unstructured environments where GPS is unavailable.
The embodiments in the present specification are described in a progressive manner, and each embodiment focuses on differences from other embodiments, and the same or similar parts in each embodiment are referred to each other. For each of the above embodiments, since they are basically similar to the method embodiments, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
The foregoing embodiments have described the present invention in detail, and the principle and embodiments of the present invention are explained by applying specific examples herein, and the descriptions of the foregoing embodiments are only used to help understand the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
Claims (9)
1. A SLAM method under unstructured features in a mine well, comprising the following processes:
acquiring point cloud information of a current laser frame of a laser radar;
calculating the curvature of each point in the current laser frame point cloud information of the laser radar, and extracting the corner point characteristic and the plane point characteristic of each point according to the curvature of each point;
extracting the information of the current frame by a camera, detecting corners of the current frame by using a FAST algorithm according to the information of the current frame, and judging whether each corner is a corner feature point;
tracking feature points of a key frame of a current sliding window by adopting a KLT optical flow tracking algorithm according to the information of the current frame;
updating the pose estimation of the current state according to the IMU prior pose estimation and the reprojection errors of the tracked feature points of the current sliding-window key frames;
updating the state of the pose estimation of the current state through ESIKF to obtain the latest visual feature;
performing feature matching between the latest visual features and the point cloud information of the current laser frame of the laser radar carrying corner feature point cloud data, to obtain a laser radar odometry factor and a camera odometry factor;
and adding the laser radar odometer factor, the camera odometer factor and the IMU pre-integration factor into the factor graph for optimization, and then performing three-dimensional map modeling.
2. The SLAM method of claim 1, wherein: in the step of calculating the curvature of each point in the current laser frame point cloud information of the laser radar and extracting the corner point feature and the plane point feature of each point according to the curvature of each point, the method specifically comprises the following steps:
subscribing point cloud information after the motion distortion of the current laser frame is corrected;
traversing the effective point cloud after the motion distortion correction of the current laser frame, extracting a plurality of points before and after the current laser point from the effective point cloud, and calculating the corresponding curvatures of the points;
establishing an angular point feature set and a plane point feature set according to curvatures of a plurality of points before and after the current laser point;
and extracting the corner feature of each point from the corner feature set, and extracting the plane point feature of each point from the plane point feature set.
3. The SLAM method of claim 1, wherein: in the step of detecting the corner of the current frame by using a FAST algorithm according to the information of the current frame and judging whether the corner is a corner feature point, the method specifically comprises the following steps:
subscribing the information of the current frame extracted by the current camera;
and selecting a pixel from the information of the current frame, and judging whether the pixel is a corner feature point.
4. The SLAM method as claimed in claim 3, wherein: in the step of selecting a pixel from the information of the current frame and determining whether the pixel is an angular feature point, the method specifically includes the following steps:
setting the brightness value of the selected pixel p as $I_p$, and setting the brightness threshold as t;
setting a discretized Bresenham circle with radius equal to 3 pixels centered on the pixel p, wherein 16 pixels lie on the boundary of the Bresenham circle;
if there exist 9 contiguous pixels among the 16 on the Bresenham circle whose pixel values are all greater than $I_p + t$ or all less than $I_p - t$, judging that the pixel p is a corner feature point;
otherwise, judging that the pixel p is not a corner feature point.
5. The SLAM method of claim 1, wherein: in the step of tracking the feature points of the key frame of the current sliding window by adopting a KLT optical flow tracking algorithm according to the information of the current frame, the method specifically comprises the following steps:
subscribing the information of the current frame extracted by the current camera;
carrying out histogram equalization on an original image in the current frame information;
extracting a pyramid of the image after histogram equalization, and then extracting FAST feature points of the previous frame of image;
performing LK optical flow tracking on FAST characteristic points of the previous frame image;
judging the number of feature points of the previous frame image, and if that number is less than 10, directly declaring tracking failure;
and if the number of tracked points meets a preset threshold, directly carrying out LK optical flow tracking.
6. The SLAM method of claim 1, wherein: in the step of updating the pose estimation of the current state, the following process is specifically included:
judging whether the current frame is a key frame through parallax and marginalizing accordingly: if the parallax between the new frame and the previous frame is large, marginalizing the oldest frame, and if the parallax is small, marginalizing the previous frame;
adding new feature point priors into the IMU state, and adding all observations of a feature point into the triangulation calculation according to the IMU prior pose estimation to obtain the latest observation frame;
and establishing a residual relation from the reprojection error between the latest observation frame and the oldest frame of a feature point, using the IMU pose estimates corresponding to those two frames in the sliding window.
7. The SLAM method of claim 1, wherein: in the step of updating the pose estimation of the current state through the ESIKF to obtain the latest visual features, the method specifically comprises the following steps:
calculating the plane of the current frame through a parametric equation, and matching the plane of the current frame with the plane points of the previous frame to obtain an observation that both are the same plane, namely the pose transformation;
performing the EKF update and covariance update with the IMU integration as the prior, to obtain the optimal pose estimation and the error-state vector;
updating the posterior covariance matrix when the ESIKF iteration finishes and exits, completing the state update of the pose estimation of the current state;
adding some point clouds of the new frame into the kd-tree, and updating the index used by the next frame to search the map;
and iterating the ESIKF several times to approximate the result, and updating the current sliding window with the latest error-state Kalman filter posterior.
8. The SLAM method of claim 1, wherein: the method specifically comprises the following steps of adding a laser radar odometry factor, a camera odometry factor and an IMU pre-integration factor into a factor graph for optimization, and then modeling a three-dimensional map:
adding the laser radar odometry factor, the camera odometry factor and the IMU pre-integration factor into a factor graph for optimization;
respectively deriving the Jacobian matrices of the laser radar odometry factor, the camera odometry factor and the IMU pre-integration factor with respect to the state quantities, and solving for the state quantities using the Levenberg-Marquardt method;
and configuring the rgbdslam package by using map_server in the ROS system, and carrying out three-dimensional map modeling by combining all data after factor graph optimization.
9. A system for implementing the SLAM method under mine underground unstructured features as defined in any one of claims 1 to 8, wherein the factor-graph-optimized laser radar odometry factor, camera odometry factor and IMU pre-integration factor are used for three-dimensional map modeling by configuring the rgbdslam package with map_server in the ROS system and combining all data after graph optimization, the system comprising:
a point cloud information acquisition module (100), the point cloud information acquisition module (100) being used for acquiring point cloud information of the current laser frame of the laser radar;
the characteristic extraction module (200) is used for calculating the curvature of each point in the current laser frame point cloud information of the laser radar, and extracting the corner point characteristic and the plane point characteristic of each point according to the curvature of each point;
the corner feature judgment module (300), the corner feature judgment module (300) being used for extracting the information of the current frame through the camera, detecting corners of the current frame by using a FAST algorithm according to the information of the current frame, and judging whether each corner is a corner feature point;
the characteristic point tracking module (400), the characteristic point tracking module (400) is used for tracking the characteristic point of the key frame of the current sliding window by adopting a KLT optical flow method tracking algorithm according to the information of the current frame;
a pose estimation update module (500), the pose estimation update module (500) being used for updating the pose estimation of the current state according to the IMU prior pose estimation and the reprojection errors of the tracked feature points of the current sliding-window key frames;
the visual feature obtaining module (600), the visual feature obtaining module (600) is used for carrying out state updating on the pose estimation of the current state through ESIKF to obtain the latest visual feature;
an odometry factor obtaining module (700), the odometry factor obtaining module (700) being used for performing feature matching between the latest visual features and the point cloud information of the current laser frame of the laser radar carrying corner feature point cloud data, to obtain a laser radar odometry factor and a camera odometry factor;
and the factor graph optimization module (800) is used for performing three-dimensional map modeling after the laser radar odometry factor, the camera odometry factor and the IMU pre-integration factor are added into the factor graph for optimization.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210970287.5A CN115290073A (en) | 2022-08-12 | 2022-08-12 | SLAM method and system under mine underground unstructured characteristics |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115290073A (en) | 2022-11-04 |
Family
ID=83829328
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210970287.5A (Pending) | SLAM method and system under mine underground unstructured characteristics | 2022-08-12 | 2022-08-12 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115290073A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115628738A (en) * | 2022-11-05 | 2023-01-20 | 合肥图灵纪元科技有限公司 | Multi-mode autonomous navigation and positioning system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |