
CN114529576B - RGBD and IMU hybrid tracking registration method based on sliding window optimization - Google Patents


Info

Publication number: CN114529576B
Application number: CN202210002435.4A
Authority: CN (China)
Prior art keywords: imu, information, sliding window, rgbd, optimization
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN114529576A
Inventors: 罗志勇, 辛超宇, 王月, 邱卓, 王鹏
Original and current assignee: Chongqing University of Post and Telecommunications

Classifications

    • G06T 7/246: Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06F 18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06T 19/006: Manipulating 3D models or images for computer graphics; mixed reality
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10024: Image acquisition modality; color image
    • G06T 2207/10028: Image acquisition modality; range image; depth image; 3D point clouds
    • Y02T 10/40: Engine management systems (climate change mitigation technologies related to transportation)


Abstract

The invention discloses an RGBD and IMU hybrid tracking registration method based on sliding window optimization, which comprises the following steps. Vision measurement front-end step: RGB information and depth information are first obtained through an RGBD camera, and feature extraction and tracking are carried out. Multi-sensor joint initialization step: the external parameters, biases and absolute scale of the three sensors are calibrated; the poses of all frames in a sliding window and the inverse depths of the observation points are estimated through the PnP and ICP algorithms and aligned with the IMU pre-integration result to solve the related parameters. Back-end optimization step: the initial pose information passed from the vision measurement front end is further optimized with a sliding-window-based method; a target error function is constructed from the prior information constraint, the visual constraint and the IMU constraint, and the pose and bias information of all frames is continuously optimized with a sliding-window-based graph optimization algorithm. Loop detection and optimization step: loop detection is performed with the DBoW algorithm, and if a loop is detected, closed-loop optimization is performed on the whole trajectory by relocalization.

Description

RGBD and IMU hybrid tracking registration method based on sliding window optimization
Technical Field
The invention belongs to the field of augmented reality and is suitable for indoor industrial mechanical product assembly scenes. In particular, it relates to an RGBD and IMU hybrid tracking registration method based on sliding window optimization.
Background
In today's manufacturing industry, the assembly of complex mechanical products is a critical issue in the production process. According to statistics, assembly work in modern manufacturing accounts for 20%-70% of total product development work (45% on average), and assembly time accounts for 40%-60% of total manufacturing time. At present, complex mechanical products are mainly assembled manually; this mode suffers from long assembly cycles, low assembly efficiency and similar problems, which directly drive up product development cost.
In recent years, with the continuous improvement of science and technology, augmented reality assisted assembly has alleviated the problems of manual assembly to a certain extent. A computer generates virtual assembly guidance related to the assembly task and superimposes it onto the real assembly environment, providing the assembler with an augmented reality environment that fuses the virtual and the real; this makes the assembly process simpler and more intuitive and effectively reduces product development cost.
Tracking registration refers to superimposing computer-generated virtual object information onto the real scene, and it is the key to virtual-real fusion in augmented reality. Many researchers have proposed valuable tracking registration methods, including methods based on artificial markers, natural features and models, all of which are vision-based. However, in industrial mechanical assembly, parts have sparse texture and smooth surfaces, and head-mounted devices in motion are prone to image blur and jitter; under these conditions, vision-based tracking registration has great difficulty in feature extraction and tracking, so its accuracy and robustness cannot be guaranteed. Therefore, small, low-cost sensors of other types can be introduced on top of the vision measurement sensor, and multi-sensor fusion can exploit the complementary strengths of different sensors to improve the robustness of the augmented reality tracking registration process.
Compared with a common RGB camera, a depth sensor can obtain dense and stable depth information on the smooth, sparsely textured surfaces of mechanical products. The introduction of depth information also overcomes the scale uncertainty of a monocular camera and the inaccurate scale estimation of a binocular camera, and it can assist the RGB feature information in more stable tracking registration through the ICP algorithm. However, this approach has two problems: first, 3D features are lost when the camera moves rapidly, so the ICP (iterative closest point) algorithm easily falls into a local optimum and fails during its iteration; second, when the scene exceeds the depth sensor's working range (0.15-4 m), frames are easily lost, causing tracking registration to fail.
An inertial tracker, also known as an inertial measurement unit (IMU), contains a gyroscope and a micro-accelerometer and computes orientation and position using the principle of strapdown inertial navigation. It is small, immune to occlusion, unrestricted in working range and updated at high frequency, so it can effectively help an RGBD-based tracking registration method overcome the loss of robustness caused by rapid motion and the limited depth measurement range. The IMU is, however, susceptible to noise and its own bias, and its error accumulates as the system runs; with the RGBD camera available, the gyroscope bias of the IMU can be calibrated.
In view of these problems, and considering the characteristics of the mechanical product assembly environment, a small, low-cost depth sensor and inertial measurement unit (IMU) are introduced on top of the traditional vision measurement sensor; a multi-sensor fusion algorithm improves the robustness of the augmented reality tracking registration process and thereby the efficiency and stability of complex mechanical product assembly.
Application publication number CN111462231A discloses a positioning method based on an RGBD sensor and an IMU sensor. The process is as follows: first, a mobile robot carrying the RGBD sensor and the IMU sensor collects data; second, the RGBD sensor data is initialized with a visual SLAM algorithm to obtain its pose information, the IMU sensor data is pre-integrated, and the mobile robot is initialized to obtain its initial state; the pose, velocity and IMU bias of the mobile robot are then optimized and tracked based on the reprojection error, while a sliding-window nonlinear optimization algorithm optimizes the pose and the map in real time; finally, loop detection is added to avoid drift (zero offset). The method comprehensively utilizes the RGBD and IMU sensors and overcomes the defects that a purely visual SLAM algorithm easily loses tracking under pure rotation or a uniform background and that the IMU sensor drifts during long-time movement.
Although the above patent also proposes tracking and positioning based on the fusion of RGBD and IMU information, the present patent differs from it in the following details:
1. Mechanical parts at industrial assembly sites have sparse texture and smooth surfaces, which directly prevents the RGB camera from extracting enough stable feature points for tracking registration. In the vision measurement front end described in claim 1, it is therefore proposed to obtain dense point clouds from the depth information acquired by the depth camera and to realize tracking registration by point cloud registration. Although 2D ORB features are also extracted from the RGB camera, these features are sparse in weakly textured scenes, and the pose computed from them serves only two purposes in the present invention: first, as initial pose information that accelerates the iteration of the point cloud registration; second, where the depth camera cannot measure depth, the error of the visual tracking registration is compensated by the IMU. The above patent uses only the ORB features collected by the RGB camera for visual feature tracking and matching, without the depth information collected by the depth sensor, so its tracking registration robustness cannot be guaranteed in the scenes described in this patent.
2. This method introduces marginalized prior information into the error function of the back-end optimization. When a variable is removed from the sliding window, the method does not simply discard the variable and its constraint information; instead, new constraints are created among the variables that were constrained by it, so that the information carried by the discarded variable is transferred to the remaining variables and serves as prior information for the subsequent optimization, yielding more accurate pose estimates. The sliding-window marginalization strategy is also optimized, so the algorithm remains real-time while maintaining a certain tracking registration accuracy. In the above patent, constraints are established only with adjacent frames during back-end optimization and the constraint information of marginalized variables does not participate in the optimization, so its tracking registration accuracy is not superior to that of this patent; moreover, since its marginalization strategy is not optimized, its real-time performance may be poor.
Disclosure of Invention
The present invention aims to solve the above problems of the prior art by providing an RGBD and IMU hybrid tracking registration method based on sliding window optimization. The technical scheme of the invention is as follows:
An RGBD and IMU hybrid tracking registration method based on sliding window optimization comprises the following steps:
Vision measurement front-end step: RGB information and depth information are first obtained through an RGBD camera, and feature extraction and tracking are carried out; an RGBD-based tracking registration method computes the pose of the frame at the current moment and the 3D information of the observation points, and the well-tracked and matched feature information is added to a keyframe queue and submitted to the back end for optimization; meanwhile, the IMU pre-integrates the collected measurements to obtain the pose change between adjacent frames, which serves as the initial pose for the RGBD-based tracking registration computation and is likewise submitted to the back end for optimization;
Multi-sensor joint initialization step: the external parameters, biases and absolute scale of the three sensors are calibrated; the poses of the first two frames in the sliding window and the inverse depths of the observation points are estimated through a SLAM algorithm and aligned with the IMU pre-integration results to solve the related parameters;
Back-end optimization step: the initial pose information passed from the vision measurement front end is further optimized with a sliding-window-based method; a target error function is constructed from the prior information constraint, the visual constraint and the IMU constraint, and the pose and bias information of all frames is continuously optimized with a sliding-window-based graph optimization algorithm;
Loop detection and optimization step: loop detection is performed with the DBoW visual bag-of-words algorithm, and if a loop is detected, closed-loop optimization is performed on the whole trajectory by relocalization.
Further, the feature extraction and tracking extracts and matches feature points with the ORB algorithm, whose steps are specifically: first, corner features of the parts are found by comparing the difference between the current pixel and its surrounding pixels; second, N point pairs are selected according to a fixed rule near each selected corner, and the combined comparison results of the N point pairs serve as the descriptor of the feature point; finally, in the feature matching stage, features are matched by the similarity of the corner descriptors;
The mismatched points are rejected with the RANSAC algorithm, specifically: first, the matched corner feature pairs are mapped to three-dimensional space using the acquired depth information, a group of inlier points, i.e. correct matches, is randomly hypothesized as an initial set, and a model is fitted to it; then the hypothesized model is used to test the remaining feature points: if a point fits the model, the inlier set is extended, otherwise the point is treated as an outlier, i.e. a mismatch; finally, when the maximum number of iterations is reached, if the number of inliers of a hypothesized model is below a threshold, the hypothesis is abandoned and re-drawn, otherwise the model is considered highly confident, the model is retained, and its outliers, i.e. the mismatched points, are rejected.
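As an illustrative sketch only (not part of the claimed method), the feature step above can be prototyped with OpenCV as follows; the image file names and parameter values are placeholders, and RANSAC is run here on the 2D matches through a fundamental-matrix model as a stand-in for the depth-lifted 3D RANSAC described above:

```python
import cv2
import numpy as np

# Placeholder frame names; any two consecutive grayscale frames will do.
img1 = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_cur.png", cv2.IMREAD_GRAYSCALE)

# ORB = FAST corner detection + rBRIEF binary descriptors.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match the binary descriptors by Hamming distance (descriptor similarity).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# RANSAC model fitting labels each match as inlier (1) or outlier (0);
# the outliers are rejected as mismatched points.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
good = [m for m, keep in zip(matches, mask.ravel()) if keep]
print(f"{len(good)} / {len(matches)} matches kept after RANSAC")
```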
Further, the RGBD-based tracking registration method computes the pose of the frame at the current moment and the 3D information of the observation points, specifically comprising:
On the basis of the ORB feature matching, judging whether the depth of each matched feature pair is measurable; for the measurable pairs, reading their depth, computing the initial pose of the current frame with the ICP algorithm, obtaining dense point cloud information from the depth image, and, with that initial pose as the starting value of the ICP iteration, performing further ICP iterative optimization to obtain more accurate pose information; for the feature pairs whose depth cannot be obtained, still performing pose estimation with the PnP method.
Further, the ICP and PnP pose calculations proceed as follows:
The computing of the initial pose information of the current frame with the ICP algorithm specifically comprises: first, computing the centroid-removed coordinates $p_i$ and $p'_i$ of each point of the two matched point clouds P1 and P2; then transforming the points of P1 to P2 by a transformation matrix T and constructing an error function; finally, continuously optimizing the transformation matrix T by an iterative method until the error function falls below a threshold or the maximum number of iterations is reached;
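The following is a minimal numpy sketch of the centroid-removal-plus-SVD alignment and of the surrounding iteration, assuming the initial pose (R0, t0) comes from the ORB/PnP step; correspondences are re-paired here by brute-force nearest neighbour, which a real implementation would replace with a k-d tree:

```python
import numpy as np

def align_svd(P1, P2):
    """Closed-form R, t minimizing sum ||R @ p_i + t - p'_i||^2 for matched
    (N, 3) point clouds P1 -> P2, via centroid removal and SVD."""
    c1, c2 = P1.mean(axis=0), P2.mean(axis=0)
    Q1, Q2 = P1 - c1, P2 - c2            # centroid-removed coordinates p_i, p'_i
    U, _, Vt = np.linalg.svd(Q1.T @ Q2)  # 3x3 cross-covariance matrix
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[2, :] *= -1
        R = Vt.T @ U.T
    return R, c2 - R @ c1

def icp(P1, P2, R0, t0, max_iter=30, tol=1e-6):
    """Transform P1 with the current (R, t), re-pair each point with its
    nearest neighbour in P2, realign, and stop when the error stalls;
    the initial (R0, t0) from the front end shortens this iteration."""
    R, t = R0, t0
    prev_err = np.inf
    for _ in range(max_iter):
        moved = P1 @ R.T + t
        d = np.linalg.norm(moved[:, None, :] - P2[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        err = d[np.arange(len(P1)), nn].mean()
        if abs(prev_err - err) < tol:    # error below threshold / stalled
            break
        prev_err = err
        R, t = align_svd(P1, P2[nn])
    return R, t
```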
For the feature pairs whose depth cannot be obtained, pose estimation still uses the PnP method, specifically: first, an observation point q co-visible between the current frame and the previous frame is found from the feature matches, and the point is converted to the pixel coordinates of the current frame through the transformation matrix T; then a reprojection error function is constructed and the transformation matrix T is continuously optimized by iterative least squares, thereby solving the pose of the current frame.

Further, the IMU pre-integrates the collected measurements: the pvq pose information at time j is obtained from the IMU measurements combined with the pvq pose information at time i, with the following formulas:
$$p^{w}_{b_j} = p^{w}_{b_i} + v^{w}_{b_i}\,\Delta t + \iint_{t \in [i,j]} \big( R^{w}_{b_t}\, a^{b_t} - g^{w} \big)\, dt^2$$
$$v^{w}_{b_j} = v^{w}_{b_i} + \int_{t \in [i,j]} \big( R^{w}_{b_t}\, a^{b_t} - g^{w} \big)\, dt$$
$$q^{w}_{b_j} = q^{w}_{b_i} \otimes \int_{t \in [i,j]} \tfrac{1}{2}\, \Omega\big(\omega^{b_t}\big)\, q^{b_i}_{b_t}\, dt$$

where $p^{w}_{b_j}$ and $p^{w}_{b_i}$ are the positions of the IMU in the world coordinate system at times j and i; $v^{w}_{b_j}$ and $v^{w}_{b_i}$ are the velocities of the IMU in the world coordinate system at times j and i; $q^{w}_{b_j}$ and $q^{w}_{b_i}$ are the rotations of the IMU in the world coordinate system at times j and i; $a^{b_t}$ is the IMU acceleration at time t; $\omega^{b_t}$ is the IMU angular velocity at time t; $g^{w}$ is the gravity vector in the world coordinate system; and $\Omega(\cdot)$ is the quaternion product matrix. The integral parts of the above three equations, representing the accumulated increments from time i to time j, are denoted $\alpha^{b_i}_{b_j}$, $\beta^{b_i}_{b_j}$ and $\gamma^{b_i}_{b_j}$ respectively; these three quantities serve as inter-frame constraint information for the nonlinear optimization at the back end.
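A sketch of the pre-integration loop follows, using simple Euler integration (an implementation would typically use midpoint integration and also propagate the covariance and bias Jacobians, omitted here); accs, gyros and dts are the raw IMU samples between frames i and j, and ba, bg are the current bias estimates:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_rot(q):
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

def preintegrate(accs, gyros, dts, ba, bg):
    """Accumulate the alpha (position), beta (velocity) and gamma (rotation)
    increments in the body frame of time i; they are independent of the
    world-frame state and so can serve as inter-frame constraints."""
    alpha, beta = np.zeros(3), np.zeros(3)
    gamma = np.array([1.0, 0.0, 0.0, 0.0])        # identity quaternion
    for a, w, dt in zip(accs, gyros, dts):
        R = quat_to_rot(gamma)                    # current body frame -> frame i
        a_c, w_c = a - ba, w - bg                 # remove the bias estimates
        alpha += beta * dt + 0.5 * (R @ a_c) * dt * dt
        beta  += (R @ a_c) * dt
        dq = np.concatenate([[1.0], 0.5 * w_c * dt])   # small-angle rotation
        gamma = quat_mul(gamma, dq)
        gamma /= np.linalg.norm(gamma)
    return alpha, beta, gamma
```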
Further, the specific steps of the multi-sensor joint initialization are as follows:
Step1: the rotation extrinsic parameter $q_{bc}$ between the IMU and the RGBD camera is estimated using the rotation constraint $q^{b_k}_{b_{k+1}} \otimes q_{bc} = q_{bc} \otimes q^{c_k}_{c_{k+1}}$, where $q^{b_k}_{b_{k+1}}$ is the rotation quaternion of the IMU itself from time k to time k+1 and $q^{c_k}_{c_{k+1}}$ is the rotation quaternion of the RGBD camera from time k to time k+1;
Step2: using the rotation constraint, an error function is constructed from the visual observations and the gyroscope measurements, the gyroscope bias $b_g$ is solved by least-squares iteration, and the IMU pre-integration is recomputed under the current bias;
Step3: the IMU trajectory is aligned with the RGBD camera trajectory to obtain the absolute scale s, using the absolute-scale translation $\bar p^{b_k}_{b_{k+1}}$ given by IMU integration between time k and time k+1, the relative-scale translation $\bar p^{c_k}_{c_{k+1}}$ of the RGBD camera, and the rotation extrinsic parameter $q_{bc}$ and translation extrinsic parameter $p_{bc}$ between the IMU and the RGBD camera;
Step4: the IMU pre-integration error constructed from the translation constraint is minimized to solve the initialization velocity $v^{b_k}$ of the system and the initialization gravity $g^{c_0}$. Specifically, the translation constraint is

$$\bar p^{c_0}_{b_k} = s\,\bar p^{c_0}_{c_k} - R^{c_0}_{b_k}\, p_{bc}$$

where $\bar p^{c_0}_{b_k}$ is the relative-scale displacement between the IMU at time k and the RGBD camera at time 0, $\bar p^{c_0}_{c_k}$ is the relative-scale displacement between the RGBD camera frames from time k to time 0, and $R^{c_0}_{b_k}$ is the rotation matrix of the IMU frame at time k relative to the RGBD camera frame.
The IMU pre-integration error between time k and time k+1 is computed as

$$\delta\alpha^{b_k}_{b_{k+1}} = \hat\alpha^{b_k}_{b_{k+1}} - \alpha^{b_k}_{b_{k+1}}, \qquad \delta\beta^{b_k}_{b_{k+1}} = \hat\beta^{b_k}_{b_{k+1}} - \beta^{b_k}_{b_{k+1}}$$

where $\delta\alpha^{b_k}_{b_{k+1}}$ is the error of the IMU displacement variation between time k and time k+1; $\hat\alpha^{b_k}_{b_{k+1}}$ and $\alpha^{b_k}_{b_{k+1}}$ are the measured and predicted values of the IMU displacement variation between time k and time k+1; $\hat\beta^{b_k}_{b_{k+1}}$ and $\beta^{b_k}_{b_{k+1}}$ are the measured and predicted values of the IMU velocity variation between time k and time k+1. The remaining symbols are as defined above.
Step5: using the constraint information that the gravity magnitude is g, the gravity $g^{c_0}$ in Step4 is parameterized and substituted into the IMU pre-integration error formula to refine the initialized gravity vector. The parameterization is

$$\hat g^{c_0} = g\,\hat{\bar g} + w_1 b_1 + w_2 b_2$$

where g is the known gravity magnitude, $\hat{\bar g}$ is the unit vector of the gravity direction, $b_1, b_2$ are two orthogonal basis vectors spanning its tangent plane, and $w_1, w_2$ are the variables to be optimized, representing the displacements along the two basis vectors, which together make up the gravity correction.
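A sketch of the Step5 refinement is given below; the helper solve_w1_w2 is hypothetical and stands in for the least-squares solve obtained by substituting the parameterization into the pre-integration error. Only the two tangent-plane coefficients are re-solved, so the magnitude constraint |g| = g is preserved:

```python
import numpy as np

def tangent_basis(g_hat):
    """Two orthonormal vectors b1, b2 spanning the tangent plane of the
    unit gravity direction g_hat."""
    tmp = np.array([1.0, 0.0, 0.0])
    if abs(g_hat @ tmp) > 0.9:           # avoid a near-parallel helper axis
        tmp = np.array([0.0, 0.0, 1.0])
    b1 = np.cross(g_hat, tmp)
    b1 /= np.linalg.norm(b1)
    return b1, np.cross(g_hat, b1)

def refine_gravity(g_init, solve_w1_w2, g_norm=9.81, n_iter=4):
    """Reparameterize g as g_norm * g_hat + w1*b1 + w2*b2 and re-solve only
    (w1, w2), keeping |g| fixed at the known magnitude."""
    g = g_init
    for _ in range(n_iter):
        g_hat = g / np.linalg.norm(g)
        b1, b2 = tangent_basis(g_hat)
        w1, w2 = solve_w1_w2(g_hat, b1, b2)   # hypothetical inner solver
        g = g_norm * g_hat + w1 * b1 + w2 * b2
    return g_norm * g / np.linalg.norm(g)
```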
Further, the sliding-window-based method further optimizes the initial pose information passed from the vision measurement front end: a target error function is constructed from the prior information constraint, the visual constraint and the IMU constraint, and the pose and bias information of all frames is continuously optimized with a sliding-window-based graph optimization algorithm, specifically comprising:
Step1: determining the state vector to be optimized, which comprises the states of all cameras in the sliding window and the inverse depths of the m+1 observation points:

$$\mathcal{X} = \big[x_0, x_1, \ldots, x_n, \lambda_0, \lambda_1, \ldots, \lambda_m\big], \qquad x_k = \big[p^{w}_{b_k}, v^{w}_{b_k}, q^{w}_{b_k}, b_a, b_g\big]$$

where $\mathcal{X}$ represents all state quantities in the sliding window, n is the number of frames in the sliding window, m is the total number of feature points in the sliding window, and $\lambda_k$ is the inverse depth of each observation point; $x_k$ is the pose information of each frame, including the displacement $p^{w}_{b_k}$, velocity $v^{w}_{b_k}$ and rotation $q^{w}_{b_k}$ of the IMU in the world coordinate system, the accelerometer bias $b_a$ and the gyroscope bias $b_g$;
Step2: continuously optimizing the state vector within the sliding window by constructing a least-squares target error function from the visual errors and the inertial measurement errors:

$$\min_{\mathcal{X}} \Big\{ \big\| e_{prior} \big\|^{2} + \sum_{k \in B} \big\| r_b\big(\hat z^{b_k}_{b_{k+1}}, \mathcal{X}\big) \big\|^{2}_{P^{b_k}_{b_{k+1}}} + \sum_{(l,j) \in C} \big\| r_c\big(\hat z^{c_j}_{l}, \mathcal{X}\big) \big\|^{2}_{P^{c_j}_{l}} \Big\}$$

where $e_{prior}$ is the prior information from the sliding-window marginalization, $r_b$ is the IMU pre-integration error function between adjacent frames, $\hat z^{b_k}_{b_{k+1}}$ is the IMU pre-integration observation between time k and time k+1, $P^{b_k}_{b_{k+1}}$ is the covariance of the IMU pre-integration noise term, $r_c$ is the visual reprojection error between adjacent frames, $\hat z^{c_j}_{l}$ is the observation of the observation point l by the RGBD camera at time j, and $P^{c_j}_{l}$ is the covariance of the RGBD observation;
Step3: the least-squares target function in Step2 is expanded and differentiated, and the Jacobian matrix J of each error term with respect to the state vector is computed;
Step4: with the solved Jacobian matrix J, the state vector is iteratively optimized by the Gauss-Newton method with a damping factor, accelerating the iterative optimization;
Step5: the state vector is updated; the process stops when the target error function falls below a threshold or the maximum number of iterations N is reached, and otherwise jumps to Step3.
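Steps 3 to 5 amount to a damped Gauss-Newton iteration (Levenberg-Marquardt style); a generic sketch follows, in which residual(x) and jacobian(x) are placeholders for the stacked prior, IMU and visual error terms and their Jacobians:

```python
import numpy as np

def damped_gauss_newton(residual, jacobian, x0, lam=1e-3, n_max=20, tol=1e-8):
    """Gauss-Newton with a damping factor lam on the normal equations;
    stops when the squared error falls below tol or n_max is reached."""
    x = x0.copy()
    for _ in range(n_max):
        r = residual(x)
        J = jacobian(x)
        H = J.T @ J                      # Gauss-Newton Hessian approximation
        g = J.T @ r
        dx = np.linalg.solve(H + lam * np.eye(H.shape[0]), -g)
        x = x + dx
        if r @ r < tol:                  # target error below threshold
            break
    return x
```

A real back end would adapt lam between iterations and exploit the sparsity of H with a dedicated solver; both are omitted to keep the sketch short.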
Further, in Step2 the marginalized prior information is obtained by removing old variables through the sliding window and re-establishing the constraint information among the variables. The variable removal policy of the sliding window is:
(1) After a new frame is added to the sliding window, a landmark point $l_1$ observable by all frames in the current sliding window is sought and triangulated and optimized through those frames; after the optimization, the observation point is removed from the sliding window and constraints are established among all frames;
(2) For an observation point $l_2$ not seen by the new frame, the observation point is removed and constraints are established between the old frames that can observe it;
(3) After the above observation points are removed, if some frames have no observable points left within the sliding window, those frames are removed directly.
Further, after these variables are removed, the influence of the removed variables on the other state vectors within the sliding window, i.e. the marginalized prior information, is computed via the Schur complement;
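A numpy sketch of this Schur-complement step on the linearized system is shown below, where m_idx indexes the variables being marginalized out of the window and r_idx the remaining ones:

```python
import numpy as np

def marginalize(H, b, m_idx, r_idx):
    """Fold the block of marginalized variables into a prior on the remaining
    ones, so their constraint information is kept rather than discarded;
    H and b are the Hessian and gradient of the linearized problem."""
    Hmm = H[np.ix_(m_idx, m_idx)]
    Hmr = H[np.ix_(m_idx, r_idx)]
    Hrm = H[np.ix_(r_idx, m_idx)]
    Hrr = H[np.ix_(r_idx, r_idx)]
    Hmm_inv = np.linalg.inv(Hmm)         # small block, inverted directly
    H_prior = Hrr - Hrm @ Hmm_inv @ Hmr  # Schur complement of Hmm
    b_prior = b[r_idx] - Hrm @ Hmm_inv @ b[m_idx]
    return H_prior, b_prior              # enters the target function as e_prior
```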
Further, the loop detection with the DBoW visual bag-of-words algorithm specifically comprises: first, ORB descriptors are computed for the new and old corner points of the current frame; then the bag-of-words similarity score of the current frame is computed and compared against all frames in the keyframe database for closed-loop consistency detection; finally, if a closed loop is found, loop optimization is performed on the state vector $\mathcal{X}$ through the SLAM algorithm.
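As an illustration of the scoring only (a flat vocabulary is used here instead of DBoW's hierarchical tree, and the threshold value is arbitrary), a bag-of-words loop-candidate check might look like:

```python
import numpy as np

def bow_vector(desc_bits, vocab_bits):
    """Quantize ORB descriptors (rows of 0/1 bits, e.g. np.unpackbits of the
    32-byte descriptors) against a visual vocabulary and return a normalized
    word-frequency histogram."""
    hamming = (desc_bits[:, None, :] != vocab_bits[None, :, :]).sum(axis=2)
    words = hamming.argmin(axis=1)        # nearest visual word per feature
    v = np.bincount(words, minlength=len(vocab_bits)).astype(float)
    return v / v.sum()

def similarity(v1, v2):
    """L1 similarity score used by DBoW-style bags of words: 1 for identical
    word distributions, 0 for disjoint ones."""
    return 1.0 - 0.5 * np.abs(v1 - v2).sum()

def detect_loop(v_cur, keyframe_vectors, threshold=0.7):
    """Return the best-scoring keyframe if it clears the threshold; a real
    system additionally checks closed-loop consistency over several
    consecutive frames before relocalizing."""
    if not keyframe_vectors:
        return None, 0.0
    scores = [similarity(v_cur, v) for v in keyframe_vectors]
    best = int(np.argmax(scores))
    return (best, scores[best]) if scores[best] > threshold else (None, scores[best])
```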
The invention has the following advantages and beneficial effects:
(1) Because mechanical parts have sparse texture and smooth surfaces, extracting 2D feature points is very difficult, so the robustness of tracking registration cannot be guaranteed; a depth camera, however, can extract dense point cloud information in a weakly textured environment and supplement the sparse 2D features. Claim 3 therefore proposes an RGBD-based tracking registration method in which depth information is introduced on top of the 2D features: an initial pose is computed from the ORB features and used as the initial value in the subsequent dense point cloud ICP registration, which accelerates the ICP registration while yielding more accurate pose estimates.
(2) During mechanical product assembly, the movement of the operator's head causes blur and jitter in the RGB and depth images, so the RGBD-feature-based tracking registration method cannot estimate the pose accurately, whereas the IMU sensor can estimate its own pose in real time throughout the motion. Claims 5-8 therefore introduce IMU measurement information into the RGBD-feature-based tracking registration method and fuse the two: when the RGBD-based method works normally, the pose can be further refined through the IMU inter-frame constraints; when the RGBD-based method fails because of motion, the IMU still provides real-time pose estimates. This well compensates for the poor robustness of RGBD tracking registration in motion scenes and further improves the pose estimation accuracy of the RGBD-feature-based method.
(3) While improving the tracking registration robustness of the augmented reality system, claims 7-8 provide an RGBD and IMU hybrid tracking registration method based on sliding window optimization: as the sliding window moves, the constraint information of removed variables is passed to the remaining variables as prior information and participates in the back-end nonlinear optimization, further improving the tracking registration accuracy; on this basis, a sliding-window variable removal strategy eliminates redundant state variables from the window, shortening the response time of the back-end nonlinear optimization and improving the real-time performance of the tracking registration process.
Drawings
FIG. 1 is a flow chart of the augmented reality hybrid tracking registration method based on RGBD and IMU fusion according to a preferred embodiment of the present invention;
FIG. 2 is a diagram of the visual-inertial constraint topology and the sliding-window marginalization;
FIG. 3 is a flow chart of a method for RGBD-based pose estimation;
FIG. 4 is a schematic diagram of a sliding window based multi-sensor fusion strategy.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and specifically described below with reference to the drawings in the embodiments of the present invention. The described embodiments are only a few embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
Aiming at the problems of the augmented reality technology in industrial machinery production and assembly scenes, a hybrid tracking registration method based on the fusion of RGBD and an inertial measurement unit is provided. As shown in FIG. 1, the method comprises the vision measurement front-end step, the multi-sensor joint initialization step, the back-end optimization step, and the loop detection and optimization step.
These steps are carried out exactly as set forth in the Disclosure of Invention above: the RGBD-based pose estimation of the front end is illustrated in FIG. 3, and the visual-inertial constraint topology, the sliding-window marginalization, and the sliding-window multi-sensor fusion strategy of the back end are illustrated in FIG. 2 and FIG. 4.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The above examples should be understood as illustrative only and not limiting the scope of the invention. Various changes and modifications to the present invention may be made by one skilled in the art after reading the teachings herein, and such equivalent changes and modifications are intended to fall within the scope of the invention as defined in the appended claims.

Claims (8)

1. An RGBD and IMU hybrid tracking registration method based on sliding window optimization, characterized by comprising the following steps:
Vision measurement front-end step: RGB information and depth information are first obtained through an RGBD camera, and feature extraction and tracking are carried out; an RGBD-based tracking registration method computes the pose of the frame at the current moment and the 3D information of the observation points, and the well-tracked and matched feature information is added to a keyframe queue and submitted to the back end for optimization; meanwhile, the IMU pre-integrates the collected measurements to obtain the pose change between adjacent frames, which serves as the initial pose for the RGBD-based tracking registration computation and is likewise submitted to the back end for optimization;
Multi-sensor joint initialization step: the external parameters, biases and absolute scale of the three sensors are calibrated; the poses of the first two frames in the sliding window and the inverse depths of the observation points are estimated through a SLAM algorithm and aligned with the IMU pre-integration results to solve the related parameters;
Back-end optimization step: the initial pose information passed from the vision measurement front end is further optimized with a sliding-window-based method; a target error function is constructed from the prior information constraint, the visual constraint and the IMU constraint, and the pose and bias information of all frames is continuously optimized with a sliding-window-based graph optimization algorithm;
Loop detection and optimization step: loop detection is performed with the DBoW visual bag-of-words algorithm, and if a loop is detected, closed-loop optimization is performed on the whole trajectory by relocalization;
wherein further optimizing the initial pose information passed from the vision measurement front end with the sliding-window-based method, constructing the target error function from the prior information constraint, the visual constraint and the IMU constraint, and continuously optimizing the pose and bias information of all frames with the sliding-window-based graph optimization algorithm specifically comprises:
Step1: determining the state vector to be optimized, which comprises the states of all cameras in the sliding window and the inverse depths of the m+1 observation points:

$$\mathcal{X} = \big[x_0, x_1, \ldots, x_n, \lambda_0, \lambda_1, \ldots, \lambda_m\big], \qquad x_k = \big[p^{w}_{b_k}, v^{w}_{b_k}, q^{w}_{b_k}, b_a, b_g\big]$$

where $\mathcal{X}$ represents all state quantities in the sliding window, n is the number of frames in the sliding window, m is the total number of feature points in the sliding window, and $\lambda_k$ is the inverse depth of each observation point; $x_k$ is the pose information of each frame, including the displacement $p^{w}_{b_k}$, velocity $v^{w}_{b_k}$ and rotation $q^{w}_{b_k}$ of the IMU in the world coordinate system, the accelerometer bias $b_a$ and the gyroscope bias $b_g$;
Step2: continuously optimizing the state vector in the sliding window by constructing a least-squares target error function from the visual errors and the inertial measurement errors:

$$\min_{\mathcal{X}} \Big\{ \big\| e_{prior} \big\|^{2} + \sum_{k \in B} \big\| r_b\big(\hat z^{b_k}_{b_{k+1}}, \mathcal{X}\big) \big\|^{2}_{P^{b_k}_{b_{k+1}}} + \sum_{(l,j) \in C} \big\| r_c\big(\hat z^{c_j}_{l}, \mathcal{X}\big) \big\|^{2}_{P^{c_j}_{l}} \Big\}$$

where $e_{prior}$ is the prior information from the sliding-window marginalization, $r_b$ is the IMU pre-integration error function between adjacent frames, $\hat z^{b_k}_{b_{k+1}}$ is the IMU pre-integration observation between time k and time k+1, $P^{b_k}_{b_{k+1}}$ is the covariance of the IMU pre-integration noise term, $r_c$ is the visual reprojection error between adjacent frames, $\hat z^{c_j}_{l}$ is the observation of the observation point l by the RGBD camera at time j, and $P^{c_j}_{l}$ is the covariance of the RGBD observation;
Step3: expanding and differentiating the least-squares target function in Step2, and computing the Jacobian matrix J of each error term with respect to the state vector;
Step4: iteratively optimizing the state vector with the solved Jacobian matrix J through the Gauss-Newton method with a damping factor, thereby accelerating the iterative optimization;
Step5: updating the state vector, stopping when the target error function falls below a threshold or the maximum number of iterations N is reached, and otherwise jumping to Step3.
2. The method for hybrid tracking registration of RGBD and IMU based on sliding window optimization of claim 1, wherein the feature extraction and tracking extracts and matches feature points with the ORB algorithm, whose steps specifically comprise: first, corner features of the parts are found by comparing the difference between the current pixel and its surrounding pixels; second, N point pairs are selected according to a fixed rule near each selected corner, and the combined comparison results of the N point pairs serve as the descriptor of the feature point; finally, in the feature matching stage, features are matched by the similarity of the corner descriptors;
The mismatched points are rejected with the RANSAC algorithm, specifically: first, the matched corner feature pairs are mapped to three-dimensional space using the acquired depth information, a group of inlier points, i.e. correct matches, is randomly hypothesized as an initial set, and a model is fitted to it; then the hypothesized model is used to test the remaining feature points: if a point fits the model, the inlier set is extended, otherwise the point is treated as an outlier, i.e. a mismatch; finally, when the maximum number of iterations is reached, if the number of inliers of a hypothesized model is below a threshold, the hypothesis is abandoned and re-drawn, otherwise the model is considered highly confident, the model is retained, and its outliers, i.e. the mismatched points, are rejected.
3. The RGBD and IMU hybrid tracking registration method based on sliding window optimization according to claim 2, wherein the RGBD-based tracking registration method computes the pose of the frame at the current moment and the 3D information of the observation points, specifically comprising:
on the basis of the ORB feature matching, judging whether the depth of each matched feature pair is measurable; for the measurable pairs, reading their depth, computing the initial pose of the current frame with the ICP algorithm, obtaining dense point cloud information from the depth image, and, with that initial pose as the starting value of the ICP iteration, performing further ICP iterative optimization to obtain more accurate pose information; for the feature pairs whose depth cannot be obtained, still performing pose estimation with the PnP method.
4. The RGBD and IMU hybrid tracking registration method based on sliding window optimization according to claim 3, wherein computing the initial pose of the current frame with the ICP algorithm specifically comprises: first, computing the centroid-removed coordinates $p_i$ and $p'_i$ of each point of the two matched point clouds P1 and P2; then transforming the points of P1 to P2 by a transformation matrix T and constructing an error function; finally, continuously optimizing the transformation matrix T by an iterative method until the error function falls below a threshold or the maximum number of iterations is reached;
for the feature pairs whose depth cannot be obtained, pose estimation still uses the PnP method, specifically: first, an observation point q co-visible between the current frame and the previous frame is found from the feature matches, and the point is converted to the pixel coordinates of the current frame through the transformation matrix T; then a reprojection error function is constructed and the transformation matrix T is continuously optimized by iterative least squares, thereby solving the pose of the current frame.
5. The RGBD and IMU hybrid tracking registration method based on sliding window optimization according to claim 1, wherein the IMU pre-integrates the collected measurements, obtaining the pvq pose information at time j from the IMU measurements combined with the pvq pose information at time i, with the following formulas:
$$p^{w}_{b_j} = p^{w}_{b_i} + v^{w}_{b_i}\,\Delta t + \iint_{t \in [i,j]} \big( R^{w}_{b_t}\, a^{b_t} - g^{w} \big)\, dt^2$$
$$v^{w}_{b_j} = v^{w}_{b_i} + \int_{t \in [i,j]} \big( R^{w}_{b_t}\, a^{b_t} - g^{w} \big)\, dt$$
$$q^{w}_{b_j} = q^{w}_{b_i} \otimes \int_{t \in [i,j]} \tfrac{1}{2}\, \Omega\big(\omega^{b_t}\big)\, q^{b_i}_{b_t}\, dt$$

where $p^{w}_{b_j}$ and $p^{w}_{b_i}$ are the positions of the IMU in the world coordinate system at times j and i; $v^{w}_{b_j}$ and $v^{w}_{b_i}$ are the velocities of the IMU in the world coordinate system at times j and i; $q^{w}_{b_j}$ and $q^{w}_{b_i}$ are the rotations of the IMU in the world coordinate system at times j and i; $a^{b_t}$ is the IMU acceleration at time t; $\omega^{b_t}$ is the IMU angular velocity at time t; $g^{w}$ is the gravity vector in the world coordinate system; and $\Omega(\cdot)$ is the quaternion product matrix; the integral parts of the above three equations, representing the accumulated increments from time i to time j, are denoted $\alpha^{b_i}_{b_j}$, $\beta^{b_i}_{b_j}$ and $\gamma^{b_i}_{b_j}$ respectively, and these three quantities serve as inter-frame constraint information for the nonlinear optimization at the back end.
6. The hybrid tracking registration method of RGBD and IMU based on sliding window optimization of claim 5, wherein the specific steps of the multi-sensor joint initialization are as follows:
Step1: estimating the rotation extrinsic parameter $q_{bc}$ between the IMU and the RGBD camera using the rotation constraint $q^{b_k}_{b_{k+1}} \otimes q_{bc} = q_{bc} \otimes q^{c_k}_{c_{k+1}}$, where $q^{b_k}_{b_{k+1}}$ is the rotation quaternion of the IMU itself from time k to time k+1 and $q^{c_k}_{c_{k+1}}$ is the rotation quaternion of the RGBD camera from time k to time k+1;
Step 2: using the rotation constraint, constructing an error function from the visual observations and the gyroscope measurements, solving the gyroscope bias by iterative least squares, and re-computing the IMU pre-integration under the current bias;
Step 3: aligning the IMU trajectory with the RGBD camera trajectory to obtain the absolute scale s, using the absolute-scale translation $p^{b_k}_{b_{k+1}}$ obtained by IMU integration between time k and time k+1, the relative-scale translation $\bar{p}^{c_k}_{c_{k+1}}$ of the RGBD camera, and the rotation and translation extrinsics between the IMU and the RGBD camera;
Step 4: minimizing the IMU pre-integration error constructed through the translation constraint, solving the initialization velocity $v^{b_k}$ of the system and the initial gravity $g^{c_0}$; specifically, the translation constraint is calculated as:

$$p^{c_0}_{b_k} = s\,\bar{p}^{c_0}_{c_k} - R^{c_0}_{b_k}\,p^{b}_{c}$$

wherein $p^{c_0}_{b_k}$ is the relative-scale displacement between the IMU at time k and the RGBD camera frame at time 0, $\bar{p}^{c_0}_{c_k}$ is the relative-scale displacement between the two RGBD camera frames at time k and time 0, and $R^{c_0}_{b_k}$ is the rotation matrix of the IMU body frame at time k relative to the RGBD camera frame at time 0;
the IMU pre-integration error between time k and time k+1 is calculated as:

$$\delta\alpha^{b_k}_{b_{k+1}} = \hat{\alpha}^{b_k}_{b_{k+1}} - \alpha^{b_k}_{b_{k+1}},\qquad \delta\beta^{b_k}_{b_{k+1}} = \hat{\beta}^{b_k}_{b_{k+1}} - \beta^{b_k}_{b_{k+1}}$$

wherein $\delta\alpha^{b_k}_{b_{k+1}}$ is the displacement-variation error of the IMU between time k and time k+1; $\hat{\alpha}^{b_k}_{b_{k+1}}$ and $\alpha^{b_k}_{b_{k+1}}$ are the measured and predicted values of the IMU displacement variation between time k and time k+1; $\hat{\beta}^{b_k}_{b_{k+1}}$ and $\beta^{b_k}_{b_{k+1}}$ are the measured and predicted values of the IMU velocity variation between time k and time k+1;
Step 5: using the constraint that the gravity magnitude is known, parameterizing the gravity $g^{c_0}$ from Step 4 and substituting it into the above IMU pre-integration error calculation, so as to refine the initialized gravity vector; the parameterized expression is:

$$g^{c_0} = \lVert g\rVert\,\hat{\bar{g}} + w_1 b_1 + w_2 b_2$$

wherein the magnitude $\lVert g\rVert$ is known, $\hat{\bar{g}}$ is the gravity-direction unit vector, $b_1$ and $b_2$ are two orthogonal basis vectors spanning the tangent plane of that direction, and $w_1$, $w_2$ are the variables to be optimized, representing displacements along the two orthogonal basis vectors, which jointly form the two-degree-of-freedom correction of gravity.
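A sketch of the Step 5 re-parameterization under stated assumptions (the gravity magnitude is taken as 9.81 m/s²; function names are illustrative):

```python
import numpy as np

GRAVITY_MAGNITUDE = 9.81  # the known gravity norm ||g||, an assumed constant

def tangent_basis(g):
    """Two orthonormal vectors b1, b2 spanning the tangent plane of the
    current gravity direction -- the 2-DoF parameterization of Step 5."""
    a = g / np.linalg.norm(g)              # gravity-direction unit vector
    helper = np.array([1.0, 0.0, 0.0])
    if abs(a @ helper) > 0.9:              # avoid a near-parallel helper axis
        helper = np.array([0.0, 0.0, 1.0])
    b1 = helper - a * (a @ helper)         # project out the gravity component
    b1 /= np.linalg.norm(b1)
    b2 = np.cross(a, b1)
    return b1, b2

def refine_gravity(g, w):
    """Apply the optimized displacements w = (w1, w2) along (b1, b2) and
    re-scale the result back to the known magnitude."""
    b1, b2 = tangent_basis(g)
    g_new = (g / np.linalg.norm(g)) * GRAVITY_MAGNITUDE + w[0] * b1 + w[1] * b2
    return g_new / np.linalg.norm(g_new) * GRAVITY_MAGNITUDE
```

In a full initialization, `w` would be re-estimated by substituting the parameterized gravity into the pre-integration error of Step 4 and iterating until the direction converges.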
7. The RGBD and IMU hybrid tracking registration method based on sliding window optimization according to claim 1, wherein in Step 2, the marginalized prior information is obtained by removing old variables through the sliding window and re-establishing the constraint information between the remaining variables; the variable-removal policy of the sliding window is:
(1) After a new frame is added to the sliding window, searching for the landmark points that are observable by all frames in the current sliding window, performing triangulation optimization through these frames, removing these observation points from the sliding window once the optimization is finished, and simultaneously establishing constraints among all the frames;
(2) For observation points not seen by the new frame, removing those points and establishing constraints between the old frames in which they can be observed;
(3) After removing the above observation points, if some frames no longer have any observable points within the sliding window, removing those frames directly;
After removing these variables, the effect of the removed variables on the other state vectors within the sliding window, i.e., the marginalized prior information, is calculated via the Schur complement.
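A sketch of the Schur-complement step on a linearized system H·δx = b, folding the first m removed variables into a prior on the remaining states; the partitioning and the small damping term are assumptions of this illustration:

```python
import numpy as np

def marginalize(H, b, m):
    """Marginalize the first m variables of H @ dx = b:
    H' = Hrr - Hrm @ inv(Hmm) @ Hmr and b' = br - Hrm @ inv(Hmm) @ bm
    become the marginalized prior on the surviving sliding-window states."""
    Hmm, Hmr = H[:m, :m], H[:m, m:]
    Hrm, Hrr = H[m:, :m], H[m:, m:]
    bm, br = b[:m], b[m:]
    Hmm_inv = np.linalg.inv(Hmm + 1e-8 * np.eye(m))  # tiny damping for stability
    H_prior = Hrr - Hrm @ Hmm_inv @ Hmr              # prior information matrix
    b_prior = br - Hrm @ Hmm_inv @ bm                # prior information vector
    return H_prior, b_prior
```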
8. The RGBD and IMU hybrid tracking registration method based on sliding window optimization according to claim 7, wherein the loop detection through the DBoW visual bag-of-words algorithm specifically comprises: firstly, describing the new and old corner points of the current frame with the ORB descriptor; then, calculating the bag-of-words similarity score of the current frame, comparing it against all frames in the keyframe database, and performing closed-loop consistency detection; finally, if a closed loop is found, performing loop optimization of the state vectors through the SLAM algorithm.
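An illustrative sketch of the L1 bag-of-words similarity score used by DBoW-style loop detection, in plain numpy rather than the DBoW2 API; the acceptance threshold is an assumption:

```python
import numpy as np

def bow_similarity(v1, v2):
    """DBoW-style L1 score between two bag-of-words vectors:
    s = 1 - 0.5 * || v1/|v1|_1 - v2/|v2|_1 ||_1, which lies in [0, 1]."""
    v1 = v1 / np.linalg.norm(v1, ord=1)
    v2 = v2 / np.linalg.norm(v2, ord=1)
    return 1.0 - 0.5 * np.linalg.norm(v1 - v2, ord=1)

def loop_candidates(current_vec, keyframe_vecs, threshold=0.3):
    """Score the current frame against every keyframe in the database; frames
    above the threshold go on to the closed-loop consistency check."""
    return [i for i, kf in enumerate(keyframe_vecs)
            if bow_similarity(current_vec, kf) > threshold]
```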
CN202210002435.4A 2022-01-04 2022-01-04 RGBD and IMU hybrid tracking registration method based on sliding window optimization Active CN114529576B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210002435.4A 2022-01-04 2022-01-04 CN114529576B (en) RGBD and IMU hybrid tracking registration method based on sliding window optimization

Publications (2)

Publication Number Publication Date
CN114529576A (en) 2022-05-24
CN114529576B (en) 2024-09-24

Family

ID=81620936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210002435.4A Active CN114529576B (en) 2022-01-04 2022-01-04 RGBD and IMU hybrid tracking registration method based on sliding window optimization

Country Status (1)

Country Link
CN (1) CN114529576B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115829833B * 2022-08-02 2024-04-26 Aixin Yuanzhi Semiconductor (Shanghai) Co., Ltd. Image generation method and mobile device
CN115371665B * 2022-09-13 2023-06-23 Harbin Institute of Technology Mobile robot positioning method based on depth camera and inertial fusion
CN115451958B * 2022-11-10 2023-02-03 National University of Defense Technology Camera absolute attitude optimization method based on relative rotation angle
CN117421384B * 2023-10-24 2024-09-17 Harbin University of Science and Technology Visual inertia SLAM system sliding window optimization method based on common view projection matching
CN117765084B * 2024-02-21 2024-05-03 University of Electronic Science and Technology of China Visual positioning method for iterative solution based on dynamic branch prediction
CN118424334A * 2024-05-09 2024-08-02 Beijing Institute of Automation Control Equipment Decoupling inertial/visual navigation on-line dynamic calibration device and method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014130854A1 * 2013-02-21 2014-08-28 Regents Of The University Of Minnesota Extrinsic parameter calibration of a vision-aided inertial navigation system
CN109960402B * 2018-12-18 2022-04-01 Chongqing University of Post and Telecommunications Virtual and real registration method based on point cloud and visual feature fusion
CN110111389B * 2019-05-14 2023-06-02 Nanjing University of Information Science and Technology Mobile augmented reality tracking registration method and system based on SLAM

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on augmented reality hybrid tracking registration methods fusing multi-source sensor data; Xin Chaoyu; Wanfang Data; 2023-07-06; full text *

Similar Documents

Publication Publication Date Title
CN114529576B (en) RGBD and IMU hybrid tracking registration method based on sliding window optimization
CN112304307B (en) Positioning method and device based on multi-sensor fusion and storage medium
CN109307508B (en) Panoramic inertial navigation SLAM method based on multiple key frames
CN111795686B (en) Mobile robot positioning and mapping method
CN106127739B (en) Monocular vision combined RGB-D SLAM method
CN109029433A (en) Join outside the calibration of view-based access control model and inertial navigation fusion SLAM on a kind of mobile platform and the method for timing
CN112634451A (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN107909614B (en) Positioning method of inspection robot in GPS failure environment
CN111288989B (en) Visual positioning method for small unmanned aerial vehicle
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN107796391A (en) A kind of strapdown inertial navigation system/visual odometry Combinated navigation method
CN106056664A (en) Real-time three-dimensional scene reconstruction system and method based on inertia and depth vision
CN112734841B (en) Method for realizing positioning by using wheel type odometer-IMU and monocular camera
CN114526745A (en) Drawing establishing method and system for tightly-coupled laser radar and inertial odometer
CN114234967B (en) Six-foot robot positioning method based on multi-sensor fusion
CN115371665B (en) Mobile robot positioning method based on depth camera and inertial fusion
CN112731503B (en) Pose estimation method and system based on front end tight coupling
CN113188557B (en) Visual inertial integrated navigation method integrating semantic features
CN113503873B (en) Visual positioning method for multi-sensor fusion
CN111998862A (en) Dense binocular SLAM method based on BNN
CN116772844A (en) Navigation method of visual inertial indoor robot based on dynamic environment
CN115218889A (en) Multi-sensor indoor positioning method based on dotted line feature fusion
CN112179373A (en) Measuring method of visual odometer and visual odometer
CN117367427A (en) Multi-mode slam method applicable to vision-assisted laser fusion IMU in indoor environment
CN116380079A (en) Underwater SLAM method for fusing front-view sonar and ORB-SLAM3

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant