CN115218889A - Multi-sensor indoor positioning method based on point-line feature fusion - Google Patents
Multi-sensor indoor positioning method based on point-line feature fusion
- Publication number
- CN115218889A (Application CN202210776807.9A)
- Authority
- CN
- China
- Prior art keywords
- line
- measurement unit
- image
- inertial measurement
- encoder
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
Abstract
A multi-sensor indoor positioning method based on point-line feature fusion belongs to the technical field of robot positioning. The invention provides a multi-sensor indoor positioning method based on point-line feature fusion in which a line-feature extraction and tracking module is added at the front end so that the image information is fully utilized, and wheel-encoder measurement information is fused in the pre-integration stage, so that four-degree-of-freedom nonlinear optimization is realized and more accurate environmental scale information is obtained. The wheel-encoder measurements, the inertial measurement unit and the visual information are jointly initialized to provide more accurate and robust initial values for pose estimation. The back end adopts a sliding-window-based nonlinear optimization method that combines the wheel-encoder constraint, the inertial-measurement-unit constraint and the visual constraint for joint optimization, realizing an accurate and robust positioning scheme in indoor weak-texture or dark environments.
Description
Technical Field
The invention belongs to the technical field of robot positioning, and particularly relates to an indoor robot positioning method based on visual-inertial odometry.
Background
In order to move autonomously in an unknown environment, a ground mobile robot must explore the environment with its on-board sensors and estimate its own pose within that environment; this task corresponds to the positioning problem and is also a basic precondition for the robot to autonomously execute a specified task. Simultaneous Localization and Mapping (SLAM) technology addresses the positioning and mapping problem of a robot in an unknown environment. Outdoors, the robot can use a global positioning system for positioning, whereas in GPS-denied environments such as indoors, underwater, tunnels and forests, the mobile robot must rely on other sensors. SLAM can be implemented with a variety of sensors, such as lidar, cameras, ultrasound and ultra-wideband; among these, cameras are small, inexpensive and easy to install, which makes them the most popular and promising sensors for SLAM. In recent years many scholars have produced outstanding work in the field of visual SLAM, and further fusing an IMU with a camera can greatly improve the accuracy and robustness of SLAM, so visual-inertial odometry (VIO) has gained wide attention. The wheel encoders carried by a wheeled robot can provide relatively accurate relative motion information, and their properties are complementary to those of the camera in attitude estimation; fusing wheel-encoder information also resolves the additional unobservable direction of monocular visual-inertial odometry that arises when a wheeled robot moves along a straight line or a circular arc at constant acceleration. One existing class of positioning methods that fuses wheel-encoder information combines monocular information with a wheel encoder, but it lacks an accurate rotation measurement. Another solution combines monocular information with wheel encoders and an inertial measurement unit, but the inertial measurement unit does not provide accurate acceleration measurements under the motion patterns of a ground wheeled robot, so it is not the best choice.
Disclosure of Invention
The invention aims to solve the problems of low positioning accuracy and poor robustness of visual-inertial-odometry-based methods in indoor weak-texture and dark environments, and provides a multi-sensor positioning method based on point-line feature fusion.
The technical scheme adopted by the invention for solving the technical problems is as follows:
A multi-sensor positioning method based on point-line feature fusion specifically comprises the following steps:
step 1, reading images with a monocular camera as the visual sensor, calibrating the camera intrinsic parameters and preprocessing the images, and reading the data of the inertial measurement unit and the wheel encoder into the system at their respective frequencies;
step 2, extracting Harris corner points from the preprocessed image, extracting line features with an improved LSD method, tracking the endpoints of the extracted line segments with an optical flow method, establishing correspondences by connecting the tracked endpoints to obtain new line features, eliminating outliers with line descriptors, and pre-integrating the measurement data of the inertial measurement unit and the wheel encoder;
step 3, performing loosely coupled sensor initialization: for each image entering the sliding window, recovering the relative rotation matrix with the five-point method, aligning the visual and inertial coordinate systems, optimizing gravity with the pre-integration results of the inertial measurement unit and the wheel encoder, and initializing the velocity;
step 4, calculating the reprojection errors of the point and line features and the pre-integration residual terms of the inertial measurement unit and the wheel encoder, adding them to the sliding-window queue for back-end joint optimization to realize tightly coupled optimization of the multiple sensors, and estimating the pose and velocity at the current moment.
Further, in step 1 the acquired images are transmitted to a computer mounted on the robot, while the wheel encoder and the inertial measurement unit input the angular velocity and linear velocity of the current moment into the computer; the computer receives and forwards the data through ROS, thereby obtaining the data input of the three sensors.
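Purely as an illustration of this data flow (not part of the patent text), the three sensor streams could be received on the on-board computer with ROS subscribers along the following lines; the topic names, message types and queue sizes are assumptions.

```python
# Hypothetical sketch of the sensor input described above; topic names,
# message types, and callback contents are assumptions, not the patent's code.
import rospy
from sensor_msgs.msg import Image, Imu
from nav_msgs.msg import Odometry

def image_cb(msg):
    # Monocular image, forwarded to the front-end point/line feature module.
    pass

def imu_cb(msg):
    # Angular velocity and linear acceleration used for IMU pre-integration.
    pass

def encoder_cb(msg):
    # Wheel-encoder linear/angular velocity used for encoder pre-integration.
    pass

rospy.init_node("pointline_vio_input")
rospy.Subscriber("/camera/image_raw", Image, image_cb, queue_size=100)
rospy.Subscriber("/imu/data", Imu, imu_cb, queue_size=2000)
rospy.Subscriber("/wheel_odom", Odometry, encoder_cb, queue_size=500)
rospy.spin()
```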
Further, in step 2 the LSD line segment detector is used to extract line segments in the image. Because the LSD algorithm targets locally straight contours in the image, it detects many short lines that are difficult to match and track, so an improved LSD algorithm is used. The improvement mainly consists in adjusting the image scaling parameter for sampling and filtering out short segments: segments that do not reach a minimum-length threshold are discarded, the threshold being 0.1. After the line features are extracted, line-feature matching is carried out by tracking with the LK optical flow method. The optical-flow tracking procedure for line features is as follows: first an n-level Gaussian pyramid is built for the image; for a line segment L1 detected in the current frame, its endpoints L1(x_s, y_s) and L1(x_e, y_e) are taken, and the motion vectors of the two endpoints are estimated as r1 and r2, yielding the two endpoints L2(x_s, y_s) and L2(x_e, y_e) of the tracked segment in the next frame image; the tracked segment L2 is obtained from these endpoints, and finally possible outliers are screened out with a line refinement method based on geometric constraints.
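The following is a hedged sketch of the line-feature front end just described, assuming an OpenCV build in which the LSD detector is available; the detector parameters, the form of the length filter and the LK window size are illustrative assumptions rather than the patent's implementation.

```python
# Sketch of LSD detection with a minimum-length filter, then LK optical-flow
# tracking of the two endpoints of every segment. Parameters are assumptions.
import cv2
import numpy as np

def detect_lines(gray, min_len_ratio=0.1):
    lsd = cv2.createLineSegmentDetector()
    lines, _, _, _ = lsd.detect(gray)
    if lines is None:
        return np.empty((0, 4), np.float32)
    lines = lines.reshape(-1, 4)
    min_len = min_len_ratio * max(gray.shape)            # filter out short segments
    lengths = np.hypot(lines[:, 2] - lines[:, 0], lines[:, 3] - lines[:, 1])
    return lines[lengths > min_len]

def track_lines(prev_gray, cur_gray, lines):
    # Track both endpoints of every segment with pyramidal LK optical flow.
    pts = lines.reshape(-1, 1, 2).astype(np.float32)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, pts, None, winSize=(21, 21), maxLevel=3)
    status = status.reshape(-1, 2)
    ok = (status[:, 0] == 1) & (status[:, 1] == 1)       # keep lines with both endpoints tracked
    return lines[ok], nxt.reshape(-1, 4)[ok]
```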
Further, in step 3 a scale factor s must be introduced when aligning the camera coordinate system to the inertial-measurement-unit coordinate system. Since the measurement data of the wheel encoder are at true scale, an equation is established from the translation constraints of the wheel encoder and the inertial measurement unit, and rearranging it yields the estimation formula for the scale factor.
The quantities involved are the transformation matrix from the encoder coordinate origin to the inertial-measurement-unit coordinate system, the transformation matrix from the camera coordinate origin to the inertial-measurement-unit coordinate system, and the rotation matrix of the IMU coordinate system from the initial-time frame to the k-th frame, which is computed from the pre-integration result. The initial values of the velocity and the gravity direction are then estimated from the constraints of the inertial measurement unit and the wheel encoder.
The derived formula involves the pre-integrated value of the IMU acceleration, the pre-integrated value of the IMU angular velocity, the pre-integrated value of the IMU linear velocity, the pre-integrated value of the wheel-encoder displacement, and Δt_k, the time interval between the k-th and (k+1)-th images. The initial gravity vector is parameterized with a 3×2 basis matrix B whose columns are two orthogonal bases of the gravity tangent space, and Δg is the correction used to update the gravity vector. The formula is converted into a least-squares problem, the variables to be optimized on its right-hand side are solved, and the formula is iterated over consecutive image frames until convergence. Finally, the positions are propagated through the keyframes in the sliding window, triangulation is performed with this position information, and the initialized position and velocity are obtained at true scale.
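As a rough illustration of the alignment step described above, the per-frame-pair constraints can be stacked into one linear least-squares system in the unknown velocities, gravity correction and scale and solved iteratively; the construction of each block from the pre-integration terms is omitted, and all shapes below are assumptions, not the patent's formulas.

```python
# Minimal sketch of the linear alignment step: per-frame-pair constraint blocks
# are stacked into one least-squares system in [v_0..v_n, delta_g (2-D), s].
# How each A_k, b_k is built from the IMU / wheel-encoder pre-integration terms
# is omitted; this is an assumed structure, not the patent's derivation.
import numpy as np

def solve_alignment(blocks, n_frames):
    dim = 3 * n_frames + 2 + 1                  # velocities + delta_g + scale
    A = np.zeros((0, dim))
    b = np.zeros(0)
    for A_k, b_k in blocks:                     # one block per consecutive image pair
        A = np.vstack([A, A_k])
        b = np.concatenate([b, b_k])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    vels = x[:3 * n_frames].reshape(-1, 3)
    delta_g = x[3 * n_frames:3 * n_frames + 2]  # 2-D gravity correction in tangent space
    scale = x[-1]
    return vels, delta_g, scale
```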
Further, in step 4, for the successfully matched point features, the point coordinates of the current frame in the real three-dimensional space and the calculated camera pose are known; bundle adjustment is used to reproject the points and the difference between the projection and the observation is taken, the deviations of all feature points of the current frame are summed, and the minimum is sought to obtain the optimal pose result. The reprojection error of a feature point is derived accordingly,
where the projected term is the position coordinate obtained by projecting feature point p_i with the pose estimated for frame C_j, and the other terms are the observed pixel coordinates of the point.
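A minimal sketch of such a point reprojection residual is given below, assuming a standard pinhole model; the variable names are illustrative and not taken from the patent.

```python
# Illustrative point reprojection residual: a 3-D landmark is projected with the
# estimated pose of frame C_j and compared with the observed pixel coordinates.
import numpy as np

def point_reprojection_error(p_w, R_cw, t_cw, K, uv_obs):
    p_c = R_cw @ p_w + t_cw                  # landmark expressed in the camera frame
    uvw = K @ p_c
    uv_proj = uvw[:2] / uvw[2]               # projected pixel coordinates
    return uv_obs - uv_proj                  # 2-D residual, summed over all points
```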
For an extracted line feature, the minimum geometric distance between the two endpoints of the estimated line segment and the extracted line segment is used as the reprojection error of the line segment, as shown in FIG. 2. (x_s, y_s) and (x_e, y_e) are the coordinates of the two endpoints of the line segment, L'_1 is the estimated line segment, and the reprojection error of line feature L_1 under frame C_j is defined from the geometric distances between these endpoints and the estimated segment L'_1.
the constructed optimization objective function is as follows:
the target function is composed of four parts, | | r p -J p X|| 2 For the marginalized a priori information items of the sliding window,the pre-integration residual terms of the inertial measurement unit and the wheel encoder respectively,is the re-projection residual of the point feature,reprojected residuals of line features, whereinAnd P l I represents the index of the feature point, i represents the index of the line feature, and C represents the image group in which the feature appears, respectively, inertial measurement unit constraint, wheel encoder constraint, and camera constraint.
Drawings
FIG. 1 is a schematic diagram of the overall framework of the multi-sensor indoor positioning method based on point-line feature fusion according to the present invention;
FIG. 2 is a schematic diagram of calculating line feature reprojection errors;
FIG. 3 is a graph of line feature extraction and optical flow tracking in an indoor environment according to an embodiment of the present invention;
Detailed Description
First embodiment: this embodiment will be described with reference to FIG. 1. The multi-sensor indoor positioning method based on point-line feature fusion of this embodiment specifically includes the following steps:
step 1, reading images with a monocular camera as the visual sensor, calibrating the camera intrinsic parameters and preprocessing the images, and reading the data of the inertial measurement unit and the wheel encoder into the system at their respective frequencies;
step 2, extracting Harris corner points from the preprocessed image, tracking the endpoints of the extracted line segments with an optical flow method, establishing correspondences by connecting the tracked endpoints to obtain new line features, eliminating outliers with line descriptors, and pre-integrating the measurement data of the inertial measurement unit and the wheel encoder;
step 3, performing loosely coupled sensor initialization: for each image entering the sliding window, recovering the relative rotation matrix with the five-point method, aligning the visual and inertial coordinate systems, optimizing gravity with the pre-integration results of the inertial measurement unit and the wheel encoder, and initializing the velocity;
step 4, calculating the reprojection errors of the point and line features and the pre-integration residual terms of the inertial measurement unit and the wheel encoder, adding them to the sliding-window queue for back-end joint optimization to realize tightly coupled optimization of the multiple sensors, and estimating the pose and velocity at the current moment.
The multi-sensor indoor positioning method based on point-line feature fusion of the invention fuses a camera, an inertial measurement unit and a wheel encoder in a tightly coupled manner; it exploits the relatively abundant line features in indoor environments to achieve more accurate positioning. To guarantee real-time performance, an improved LSD algorithm is adopted to extract the line features, inter-frame matching is performed with the optical flow method, and finally nonlinear optimization based on a sliding-window method yields the optimized pose estimate.
Second embodiment: the monocular camera is mounted directly in front of the body of the ground mobile robot. While the robot is running, the collected images are transmitted to a computer carried on the robot; at the same time the wheel encoder and the inertial measurement unit input the angular velocity and linear velocity of the current moment into the computer, and the computer receives and forwards the data through ROS, thereby obtaining the data input of the three sensors.
Third embodiment: this embodiment differs from the first or second embodiment in the specific implementation of step 2, which is as follows:
and step 21, extracting and matching point features. Firstly, detecting Harris angular points of a current input image by using a Shi-Tomasi algorithm, wherein the angular points correspond to inflection points of objects in the image, can be intersection points of two edges, and can also be points with two main directions in a pixel point neighborhood, and the characteristic quantity of the points in the image is not less than a threshold value. Tracking the corner points of the previous frame from the points of the current frame by using a KLT pyramid optical flow tracking method, and deleting the points failed in tracking;
and 22, extracting and tracking line features. The use of an LSD line segment detector to extract line segments in the image, the target of the LSD algorithm is to detect locally straight contours in the image, many short lines that are difficult to match and track are detected, and therefore the use of the improved LSD algorithm is chosen. The optimized LSD algorithm is mainly embodied in that sampling image scaling parameters are adjusted, short line segments are filtered, and line segments which do not reach the standard are filtered by using a line segment length minimum threshold, wherein the threshold is 0.1. And after the line features are extracted, matching of the line features is carried out. Tracking by using an LK optical flow method, wherein the optical flow tracking algorithm flow of the line features is as follows:
firstly, establishing an n-level Gaussian pyramid for an image, and aiming at a line segment L detected by a current frame 1 Take its endpoint as
Step 23, the measurements of the inertial measurement unit and the wheel encoder are the acceleration, angular velocity and linear velocity, and the rotation and displacement transformations between two frames are obtained by pre-integration. Pre-integration also avoids re-propagating the raw measurements during nonlinear optimization, so a pre-integration strategy is adopted to process the measurements of the inertial measurement unit and the wheel encoder. Between two consecutive keyframes, the rotation is parameterized with quaternions and the covariance matrix is propagated;
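The following is a schematic, assumption-level sketch of the pre-integration idea in step 23: gyroscope samples are accumulated into a relative quaternion and wheel-encoder velocity into a relative displacement between two keyframes, so that the raw measurements need not be re-propagated during optimization; bias handling and covariance propagation are omitted.

```python
# Assumption-level pre-integration sketch, not the patent's formulas.
import numpy as np

def quat_mult(q, r):
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([w0*w1 - x0*x1 - y0*y1 - z0*z1,
                     w0*x1 + x0*w1 + y0*z1 - z0*y1,
                     w0*y1 - x0*z1 + y0*w1 + z0*x1,
                     w0*z1 + x0*y1 - y0*x1 + z0*w1])

def preintegrate(gyro, enc_vel, dt):
    """gyro, enc_vel: lists of 3-D samples between two keyframes; dt: sample period."""
    q = np.array([1.0, 0.0, 0.0, 0.0])       # relative rotation (w, x, y, z)
    p = np.zeros(3)                          # relative displacement
    for w, v in zip(gyro, enc_vel):
        dq = np.concatenate([[1.0], 0.5 * w * dt])   # small-angle quaternion increment
        q = quat_mult(q, dq)
        q /= np.linalg.norm(q)
        p += v * dt                          # encoder velocity naively integrated
                                             # (rotation into the keyframe frame omitted)
    return q, p
```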
other steps and parameters are the same as those in the first or second embodiment.
Fourth embodiment: this embodiment differs from one of the first to third embodiments in the specific process of step 3, which is as follows:
and step 31, firstly, judging whether the robot performs sufficient rotation change at the beginning by calculating the standard deviation of the acceleration of the inertial measurement unit, and making a basis for performing more accurate and comprehensive state estimation later. And then, carrying out pure vision SFM estimation on the initial pose of the camera on all the features in the current sliding window, calculating a basic matrix and an intrinsic matrix, calculating the movement between frames through the intrinsic matrix, and solving the rotation and translation parameters of the image.
Step 32, the computed state quantities are aligned to the coordinate system of the 0-th-frame inertial measurement unit. The gyroscope bias (zero offset) Bgs is first estimated using the derived rotational constraint, and the scale factor, velocity and gravity direction are estimated using the translational constraint.
A scale factor s must be introduced when aligning the camera coordinate system to the inertial-measurement-unit coordinate system. Since the measurement data of the wheel encoder are at true scale, an equation is established from the translation constraints of the wheel encoder and the inertial measurement unit and obtained by rearrangement.
The quantities involved are the transformation matrix from the encoder coordinate origin to the inertial-measurement-unit coordinate system, the transformation matrix from the camera coordinate origin to the inertial-measurement-unit coordinate system, and the rotation matrix of the IMU coordinate system from the initial-time frame to the k-th frame, computed from the pre-integration result. The initial values of the velocity and the gravity direction are then estimated from the constraints of the inertial measurement unit and the wheel encoder.
The derived formula involves the pre-integrated value of the IMU acceleration, the pre-integrated value of the IMU angular velocity, the pre-integrated value of the IMU linear velocity, the pre-integrated value of the wheel-encoder displacement, and Δt_k, the time interval between the k-th and (k+1)-th images. The initial gravity vector is parameterized with a 3×2 basis matrix B whose columns are two orthogonal bases of the gravity tangent space, and Δg is the correction used to update the gravity vector. The formula is converted into a least-squares problem, the variables to be optimized on its right-hand side are solved, and the formula is iterated over consecutive image frames until convergence. Finally, the positions are propagated through the keyframes in the sliding window, triangulation is performed with this position information, and the initialized position and velocity are obtained at true scale.
Other steps and parameters are the same as those in one of the first to third embodiments.
Fifth embodiment: this embodiment differs from one of the first to fourth embodiments in the specific process of step 4, which is as follows:
and performing nonlinear optimization estimation on the current pose of the ground mobile robot by adopting a sliding window method, and constructing an objective function by using a point-line reprojection error, an IMU (inertial measurement unit) and a wheel type encoder pre-integration residual error term to perform high-precision and robust state estimation. The objective function c (x) used by the invention consists of an marginalization term, a dotted line reprojection error, and an inertial measurement unit and a wheel encoder residual term.
Step 41, constructing the point-line reprojection residuals. For the successfully matched point features, the point coordinates of the current frame in the real three-dimensional space and the calculated camera pose are known; bundle adjustment is used to reproject the points and the difference between the projection and the observation is taken, the deviations of all feature points of the current frame are summed, and the minimum is sought to obtain the optimal pose result. The reprojection error of a feature point is derived accordingly,
where the projected term is the position coordinate obtained by projecting feature point p_i with the pose estimated for frame C_j, and the other terms are the observed pixel coordinates of the point.
For an extracted line feature, the minimum geometric distance between the two endpoints of the estimated line segment and the extracted line segment is used as the reprojection error of the line segment, as shown in FIG. 2. (x_s, y_s) and (x_e, y_e) are the coordinates of the two endpoints of the line segment, L'_1 is the estimated line segment, and the reprojection error of line feature L_1 under frame C_j is defined from the geometric distances between these endpoints and the estimated segment L'_1.
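A minimal sketch of such a line reprojection residual is given below, taking the residual as the distances of the two observed endpoints to the estimated line expressed in homogeneous form; the normalization and variable names are assumptions, not the patent's definitions.

```python
# Illustrative line reprojection residual: signed distances from the two
# endpoints (x_s, y_s), (x_e, y_e) to the estimated (reprojected) line L'_1.
import numpy as np

def line_reprojection_error(p_s, p_e, line_est):
    """p_s, p_e: observed endpoints (2,); line_est: (a, b, c) with a*x + b*y + c = 0."""
    a, b, c = line_est
    norm = np.hypot(a, b)
    d_s = (a * p_s[0] + b * p_s[1] + c) / norm   # point-to-line distance, start point
    d_e = (a * p_e[0] + b * p_e[1] + c) / norm   # point-to-line distance, end point
    return np.array([d_s, d_e])                  # 2-D residual for one line feature
```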
step 42, inertial measurement unit and wheel encoder residuals.Residual terms derived from pre-integration results using the inertial measurement unit and the wheel encoder, respectively.
Andrespectively representing the calculation of a pre-integrated residual error, δ b, using an inertial measurement unit and a wheel encoder ak 、δb ωk Error terms for the gyroscope and accelerometer bias estimates are represented, respectively.
Step 43, constructing the objective function. The objective function used by the invention for nonlinear optimization is composed of four parts: ||r_p − J_p X||² is the marginalized prior information term of the sliding window; the pre-integration residual terms of the inertial measurement unit and the wheel encoder; the reprojection residual of the point features; and the reprojection residual of the line features. Here i denotes the index of a feature point, l denotes the index of a line feature, and C denotes the set of images in which the feature appears, under the inertial-measurement-unit constraint, the wheel-encoder constraint and the camera constraint, respectively. The nonlinear optimization is performed with the Ceres Solver.
Other steps and parameters are the same as in one of the first to fourth embodiments.
Aiming at the problem of low positioning accuracy of ground mobile robots under indoor low-texture or unevenly illuminated conditions, the method builds on a conventional visual-inertial-odometry positioning system: wheel-encoder information is fused in a tightly coupled manner in the back-end optimization, and a line-feature extraction and tracking module is added at the front end; the tracking effect is shown in FIG. 3. By using both the point features and the line features in the image, the method improves the utilization of image information; tracking the extracted point and line features with the optical flow method improves the feature-matching efficiency; and combining the true-scale information and planar constraint of the wheel encoder further refines the positioning information, which guarantees positioning accuracy and improves the positioning robustness of the mobile robot in indoor environments.
the above-described calculation examples of the present invention are merely to explain the calculation model and the calculation flow of the present invention in detail, and are not intended to limit the embodiments of the present invention. It will be apparent to those skilled in the art that other variations and modifications can be made on the above description without departing from the spirit of the invention, and it is intended to cover all such modifications and variations as fall within the true spirit and scope of the invention.
Claims (5)
1. A multi-sensor indoor positioning method based on point-line feature fusion, characterized by specifically comprising the following steps:
step 1, reading images with a monocular camera as the visual sensor, calibrating the camera intrinsic parameters and preprocessing the images, and reading the data of an inertial measurement unit and a wheel encoder into the system at their respective frequencies;
step 2, extracting Harris corner points from the preprocessed image, tracking the endpoints of the extracted line segments with an optical flow method, establishing correspondences by connecting the tracked endpoints to obtain new line features, eliminating outliers with line descriptors, and pre-integrating the measurement data of the inertial measurement unit and the wheel encoder;
step 3, performing loosely coupled sensor initialization: for each image entering the sliding window, recovering the relative rotation matrix with the pure-vision five-point method, aligning the visual and inertial coordinate systems, optimizing gravity with the pre-integration results of the inertial measurement unit and the wheel encoder, and initializing the velocity;
step 4, calculating the reprojection errors of the point and line features and the pre-integration residual terms of the inertial measurement unit and the wheel encoder, adding them to the sliding-window queue for back-end joint optimization to realize tightly coupled optimization of the multiple sensors, and estimating the pose and velocity corresponding to the current moment.
2. The multi-sensor indoor positioning method based on point-line feature fusion according to claim 1, characterized in that the collected images are transmitted to a computer carried on the robot, and at the same time the wheel encoder and the inertial measurement unit input the angular velocity and linear velocity of the current moment into the computer; the computer receives and forwards the data through ROS, thereby obtaining the data input of the three sensors.
3. The method according to claim 2, characterized in that the LSD line segment detector is used to extract line segments in the image; because the LSD algorithm targets locally straight contours in the image, it detects many short lines that are difficult to match and track, so the improved LSD algorithm is chosen, the improvement mainly consisting in adjusting the image scaling parameter for sampling and filtering out short segments with a minimum-length threshold, the threshold being 0.1; after the line features are extracted, line-feature matching is carried out by tracking with the LK optical flow method, the optical-flow tracking procedure for line features being as follows:
first, an n-level Gaussian pyramid is built for the image; for a line segment L1 detected in the current frame, its endpoints L1(x_s, y_s) and L1(x_e, y_e) are taken, the motion vectors of the two endpoints are estimated as r1 and r2, the two endpoints L2(x_s, y_s) and L2(x_e, y_e) of the tracked segment in the next frame image are thereby obtained, the tracked segment L2 is obtained from these endpoints, and finally possible outliers are screened out with a line refinement method based on geometric constraints.
4. The multi-sensor indoor positioning method based on point-line feature fusion according to claim 3, characterized in that, in step 3, a scale factor s is introduced when aligning the camera coordinate system to the inertial-measurement-unit coordinate system; since the measurement data of the wheel encoder are at true scale, an equation is established from the translation constraints of the wheel encoder and the inertial measurement unit and rearranged to give the estimation formula for the scale factor,
wherein the quantities involved are the transformation matrix from the encoder coordinate origin to the inertial-measurement-unit coordinate system, the transformation matrix from the camera coordinate origin to the inertial-measurement-unit coordinate system, and the rotation matrix of the IMU coordinate system from the initial-time frame to the k-th frame, calculated from the pre-integration result; the initial values of the velocity and the gravity direction are then estimated from the constraints of the inertial measurement unit and the wheel encoder,
wherein the derived formula involves the pre-integrated value of the IMU acceleration, the pre-integrated value of the angular velocity, the pre-integrated value of the linear velocity, the pre-integrated value of the wheel-encoder displacement, and Δt_k, the time interval between the k-th and (k+1)-th images; the initial gravity vector is parameterized with a 3×2 basis matrix B whose columns are two orthogonal bases of the gravity tangent space, and Δg is the correction used to update the gravity vector; the formula is converted into a least-squares problem, the variables to be optimized on its right-hand side are solved, and the formula is iterated over consecutive image frames until convergence; finally, the positions are propagated through the keyframes in the sliding window, triangulation is performed with this position information, and the initialized position and velocity are obtained at true scale.
5. The multi-sensor indoor positioning method based on point-line feature fusion according to claim 4, characterized in that the specific process of step 4 is: for the successfully matched point features, the point coordinates of the current frame in the real three-dimensional space and the calculated camera pose are known; bundle adjustment is used to reproject the points and the difference between the projection and the observation is taken, the deviations of all feature points of the current frame are summed, and the minimum is sought to obtain the optimal pose result, the reprojection error of a feature point being derived accordingly,
wherein the projected term is the position coordinate obtained by projecting feature point p_i with the pose estimated for frame C_j, and the other terms are the observed pixel coordinates of the point;
for an extracted line feature, the minimum geometric distance between the two endpoints of the estimated line segment and the extracted line segment is used as the reprojection error of the line segment, as shown in FIG. 2, where (x_s, y_s) and (x_e, y_e) are the coordinates of the two endpoints of the line segment, L'_1 is the estimated line segment, and the reprojection error of line feature L_1 under frame C_j is defined from the geometric distances between these endpoints and the estimated segment L'_1;
the constructed optimization objective function is composed of four parts: ||r_p − J_p X||² is the marginalized prior information term of the sliding window; the pre-integration residual terms of the inertial measurement unit and the wheel encoder; the reprojection residual of the point features; and the reprojection residual of the line features, where i denotes the index of a feature point, l denotes the index of a line feature, and C denotes the set of images in which the feature appears, under the inertial-measurement-unit constraint, the wheel-encoder constraint and the camera constraint, respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210776807.9A CN115218889A (en) | 2022-07-02 | 2022-07-02 | Multi-sensor indoor positioning method based on point-line feature fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210776807.9A CN115218889A (en) | 2022-07-02 | 2022-07-02 | Multi-sensor indoor positioning method based on point-line feature fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115218889A true CN115218889A (en) | 2022-10-21 |
Family
ID=83609883
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210776807.9A Pending CN115218889A (en) | 2022-07-02 | 2022-07-02 | Multi-sensor indoor positioning method based on dotted line feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115218889A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115235505A (en) * | 2022-07-12 | 2022-10-25 | 重庆邮电大学 | Visual odometer method based on nonlinear optimization |
CN116610129A (en) * | 2023-07-17 | 2023-08-18 | 山东优宝特智能机器人有限公司 | Local path planning method and system for leg-foot robot |
CN116610129B (en) * | 2023-07-17 | 2023-09-29 | 山东优宝特智能机器人有限公司 | Local path planning method and system for leg-foot robot |
CN117109568A (en) * | 2023-08-24 | 2023-11-24 | 北京自动化控制设备研究所 | Inertial/multidimensional vision combined positioning method |
CN117109568B (en) * | 2023-08-24 | 2024-10-15 | 北京自动化控制设备研究所 | Inertial/multidimensional vision combined positioning method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |