CN115855042A - Pedestrian visual navigation method based on laser radar cooperative assistance - Google Patents
Pedestrian visual navigation method based on laser radar cooperative assistance
- Publication number
- CN115855042A (application number CN202211591779.XA)
- Authority
- CN
- China
- Prior art keywords
- laser radar
- visual
- coordinate system
- navigation
- representing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention provides a pedestrian visual navigation method based on laser radar cooperative assistance, which comprises the following steps: a carrier carrying a laser radar, a vision sensor and an IMU (inertial measurement unit) scans the surrounding scene; corner features in the vision sensor image are extracted as first observation information; structural features of spatial objects in the laser radar data are extracted as second observation information; a system navigation state is set; an error calculation formula of the vision sensor is constructed and expanded in a Taylor series to determine the Jacobian matrix of the visual observation equation; an error calculation formula of the laser radar is constructed and expanded in a Taylor series to determine the Jacobian matrix of the laser radar observation equation; the optimal solution of the system navigation state is solved; multi-source information fusion filtering is carried out to obtain the real-time pose of the carrier and to construct a global map; and the global map is loaded into the pedestrian visual navigation system for positioning. The method overcomes the deficiency of relying on vision alone for positioning and achieves wide-area, long-duration, high-precision autonomous navigation.
Description
Technical Field
The invention belongs to the technical field of navigation guidance and control, and particularly relates to a pedestrian visual navigation method based on laser radar cooperative assistance.
Background
Navigation and positioning technology based on visual feature measurement is an autonomous navigation technology independent of satellites and has broad application prospects in the navigation of pedestrians, unmanned vehicles and unmanned aerial vehicles. A pedestrian visual navigation and positioning system is small, light and inexpensive, and is particularly suitable for positioning in environments where satellite positioning is poor or unavailable, such as indoors, underground, on city streets and in forests.
Relying solely on visual navigation or inertial-visual navigation still has several shortcomings. First, visual features are complex to extract and easily lost during tracking. Visual depth measurement is inaccurate and locally imprecise, and without loop closure it is difficult to maintain long-duration, long-distance, high-precision measurement. The field of view is limited, and once tracking is lost due to occlusion it is difficult to recover. The pedestrian's motion pattern is monotonous, so error observability is poor and positioning accuracy is low. Camera parameters are strongly affected by environmental factors, which degrades positioning accuracy. A monocular camera lacks direct measurement of environmental structure, while the depth accuracy of a binocular camera degrades rapidly with distance, with an effective range of only a few meters. These factors make it difficult for vision-based navigation and positioning techniques to maintain high accuracy over long durations in wide-area environments.
The laser radar has clear advantages in depth measurement accuracy, which can reach the millimeter level, with a measurement range from tens to hundreds of meters, and it can accurately perceive environmental structure. Its drawbacks are that it cannot perceive environmental texture and is not applicable in environments with uniform structure, mirrors, glass and the like. The laser radar and the camera therefore have complementary advantages, and their combination is an ideal means of high-precision autonomous navigation and positioning without satellites. However, laser radar is currently limited by size and price and is not suitable for pedestrians to carry.
Disclosure of Invention
Aiming at the prior-art problem that laser radar is limited by size and price and is therefore unsuitable for pedestrian visual navigation, the invention provides a pedestrian visual navigation method based on laser radar cooperative assistance. An unmanned platform simultaneously carrying an IMU, a laser radar and a camera builds a global map of the regional environment; a pedestrian only needs to carry a camera sensor, and after entering the region reads in the environment global map built by the unmanned platform and then uses the camera to achieve positioning. This overcomes the deficiency of relying on visual positioning alone and achieves wide-area, long-duration, high-precision autonomous navigation.
The technical scheme adopted by the invention for solving the problems is as follows:
a pedestrian vision navigation method based on laser radar cooperative assistance comprises the following steps:
the carrier carries a laser radar, a vision sensor and an IMU (inertial measurement unit) to scan the surrounding scene;
extracting corner features in the visual sensor image to serve as first observation information;
extracting the structural features of the space object in the laser radar data as second observation information;
setting a system navigation state;
constructing an error calculation formula of the visual sensor, performing Taylor expansion to determine a Jacobian matrix of a visual observation equation;
constructing an error calculation formula of the laser radar, performing Taylor expansion on the error calculation formula, and determining a Jacobian matrix of an observation equation of the laser radar;
solving the optimal solution of the navigation state of the system to minimize the errors of the visual sensor and the laser radar;
carrying out multi-source information fusion filtering based on the measurement information of the IMU, the vision sensor and the laser radar, obtaining the real-time pose of the carrier, and constructing a global map;
and loading the global map into the pedestrian visual navigation system and performing positioning.
Further, the first observation information is
where p_i represents the i-th visual feature point and the superscript capital letter denotes the coordinate frame; R represents a direction cosine matrix transforming from the subscript frame to the superscript frame; C denotes the camera coordinate system; G denotes the global coordinate system; I denotes the inertial coordinate system; a subscript k+1 on a frame denotes the current time, and the absence of a time subscript means the transformation is time-invariant; and p represents the three-axis position solved by each navigation mode, i.e., the position of the subscript frame expressed in the superscript frame.
Further, the second observation information is
where p_j represents the j-th laser feature point, and L denotes the laser radar coordinate system.
Further, the navigation state of the system is
where v denotes the system velocity, b_g represents the gyro drift, and b_a represents the accelerometer bias; the velocity is the IMU-solved velocity expressed in the global coordinate system.
Further, the error calculation formula of the vision sensor is as follows
where x_{k+1} represents the system navigation state at time k+1, and the projection operator represents the pinhole camera model;
the Jacobian matrix of the visual observation equation
where f_x and f_y are the focal lengths of the vision sensor, I_{3×3} is a 3×3 identity matrix, and 0_{3×12} is a 3×12 zero matrix.
Further, the error calculation formula of the laser radar is
where u_j is the normal vector of a planar structural feature, and q_j is the laser feature point in that plane closest to p_j;
the Jacobian matrix of the laser radar observation equation is
where 0_{3×15} is a 3×15 zero matrix.
Further, the optimal solution of the system navigation state is calculated by adopting a Gauss-Newton method or a least square method.
Further, the visual sensor is one of a visible light camera, an infrared camera, and a stereo camera.
The invention has the beneficial effects that:
the invention provides a pedestrian visual navigation method based on cooperative assistance of laser radars, which comprises the steps of utilizing an unmanned platform, simultaneously carrying an IMU (inertial measurement Unit), the laser radars and a camera, and constructing a regional environment global map; a pedestrian only needs to be provided with a camera sensor, and after entering an area, the pedestrian reads in an environment global map established by the unmanned platform, and then the self camera is utilized to realize positioning, so that the defect of only depending on visual positioning can be overcome, and large-range long-time high-precision autonomous navigation is realized.
A high-precision environment global map is generated from the vehicle-mounted laser radar, vision and other sensors and is supplied, in a cooperative assistance mode, to the pedestrian visual navigation system, which can significantly improve pedestrian visual navigation and positioning accuracy.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic block diagram of an inertial/lidar/vision integrated navigation system according to an embodiment of the present invention.
Detailed Description
The pedestrian visual navigation method based on laser radar cooperative assistance is described in detail below by embodiments with reference to the accompanying drawings.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present application. The relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The invention provides a pedestrian vision navigation method based on laser radar cooperative assistance, which comprises the following steps of:
a carrier such as an unmanned vehicle platform carries a laser radar, a vision sensor and an IMU to scan the surrounding scene, where the laser radar obtains object position and range information, the vision sensor obtains images, and the IMU obtains the carrier's position and attitude information;
extracting corner features in the visual sensor image to serve as first observation information;
extracting the structural features of the space object in the laser radar data as second observation information;
setting a system navigation state;
constructing an error calculation formula of the vision sensor, performing a Taylor expansion, and determining the Jacobian matrix of the visual observation equation (i.e., the expression of the vision-related terms of the observation-equation Jacobian);
constructing an error calculation formula of the laser radar, performing a Taylor expansion, and determining the Jacobian matrix of the laser radar observation equation (i.e., the expression of the lidar-related terms of the observation-equation Jacobian);
solving the optimal solution of the navigation state of the system to minimize the errors of the visual sensor and the laser radar;
carrying out multi-source information fusion filtering based on the measurement information of the IMU, the vision sensor and the laser radar, obtaining the real-time pose of the carrier, and constructing a global map;
and loading the global map into the pedestrian visual navigation system and performing positioning.
Further, the optimal solution of the system navigation state is calculated by adopting a Gauss-Newton method or a least square method.
Further, the visual sensor is one of visual imaging sensors such as a visible light camera, an infrared camera, and a stereo camera.
The invention improves pedestrian visual navigation positioning accuracy by using vehicle-mounted laser radar, vision and other sensors in a cooperative assistance mode. Taking a visible light camera as an example, the specific implementation process is as follows:
an unmanned vehicle platform is used for carrying an IMU, a laser radar and a camera to scan surrounding scenes, and an environment global map is constructed while navigation and positioning are carried out, as shown in figure 1.
The visual navigation part extracts the corner features in the image through the visible light camera, and the first observation information input into the information fusion filter can be expressed as
where p_i represents the i-th visual feature point and the superscript capital letter denotes the coordinate frame; R represents a direction cosine matrix transforming from the subscript frame to the superscript frame. C denotes the camera coordinate system, with its origin at the camera's center of mass, the x-axis along the camera's lateral axis (positive to the right), the y-axis along the camera's vertical axis (positive downward), and the z-axis along the camera's longitudinal axis (positive forward). G denotes the global coordinate system, defined as the camera coordinate system at the moment system initialization succeeds. I denotes the inertial coordinate system, with its origin at the Earth's center, the x-axis pointing to the vernal equinox, the z-axis along the Earth's rotation axis pointing to the north pole, and the y-axis completing a right-handed frame with the x- and z-axes. A subscript k+1 on a frame denotes the current time, and the absence of a time subscript means the transformation is time-invariant. Finally, p represents the three-axis position solved by each navigation mode (IMU, laser radar or camera), i.e., the position of the subscript frame expressed in the superscript frame.
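A plausible form of this observation, consistent with the symbol definitions above (an assumption rather than the verbatim formula), is the i-th corner feature observed in the camera frame at time k+1 and expressed in the global frame:

p_i^G = R_{C_{k+1}}^G · p_i^{C_{k+1}} + p_{C_{k+1}}^G

where p_{C_{k+1}}^G is the camera position in the global frame at time k+1.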
The laser radar navigation part performs navigation by extracting the structural characteristics of the space object, and the second observation information input into the information fusion filter can be expressed as
where p_j represents the j-th laser feature point, and L denotes the laser radar coordinate system, with its origin at the lidar's center of mass, the x-axis along the lidar's lateral axis (positive to the right), the y-axis along the lidar's vertical axis (positive downward), and the z-axis along the lidar's longitudinal axis (positive forward).
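Analogously, a plausible form of the second observation (again an assumption, not the verbatim formula) is the j-th laser feature point transformed from the lidar frame at time k+1 into the global frame:

p_j^G = R_{L_{k+1}}^G · p_j^{L_{k+1}} + p_{L_{k+1}}^G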
The objective of the information fusion filter is to minimize the overall error of the inertial/laser radar/visual measurement information.
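Writing r_C for the visual residuals and r_L for the lidar residuals defined below, this objective can be read as choosing the navigation state that minimizes a combined cost of the form (an interpretation of the stated goal, not a formula given explicitly):

x_{k+1}* = argmin_x ( Σ_i ‖r_{C,i}(x)‖² + Σ_j ‖r_{L,j}(x)‖² )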
The error r_C of the visual navigation part is computed as follows:
wherein x is k+1 Representing the system navigation state at time k +1 and k representing the pinhole camera model (relational expression for the conversion of camera sensors from pixels to positions).
where v denotes the system velocity, b_g represents the gyro drift, and b_a represents the accelerometer bias; the velocity is the IMU-solved velocity expressed in the global coordinate system.
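A common layout of the state vector consistent with the quantities named above (an assumption as to content and ordering) is

x = [ p_I^G, v_I^G, q_I^G, b_g, b_a ]

where p_I^G, v_I^G and q_I^G are the IMU position, velocity and attitude in the global frame; such a layout yields a 15-dimensional error state, which would match the 3 + 12 columns of the visual observation Jacobian described below.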
A Taylor expansion is performed on the formula (3) to obtain
where x̂ represents the estimate of x, δx represents the deviation of that estimate, ε_i is white noise obeying a normal distribution, the matrix is the Jacobian of the visual observation equation, and the hatted feature position represents the estimate of the corresponding feature position.
where f_x and f_y are the focal lengths of the camera, I_{3×3} is a 3×3 identity matrix, and 0_{3×12} is a 3×12 zero matrix.
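For illustration of this structure, the following Python sketch (hypothetical inputs, not the patent's exact expressions) evaluates a pinhole reprojection residual and the 2×3 Jacobian block with respect to the feature position in the camera frame; the blocks with respect to the navigation state (the I_{3×3} and 0_{3×12} blocks mentioned above) are omitted:

```python
import numpy as np

def reprojection_residual_and_jacobian(p_c, pixel_meas, fx, fy, cx, cy):
    """Pinhole reprojection residual and its Jacobian w.r.t. the feature
    position p_c = (x, y, z) expressed in the camera frame."""
    x, y, z = p_c
    # Project the 3-D point to pixel coordinates with the pinhole model.
    u = fx * x / z + cx
    v = fy * y / z + cy
    residual = np.array([u, v]) - np.asarray(pixel_meas, dtype=float)

    # Jacobian of the projection w.r.t. p_c (first-order linearization).
    J = np.array([
        [fx / z, 0.0,    -fx * x / z ** 2],
        [0.0,    fy / z, -fy * y / z ** 2],
    ])
    return residual, J

# Hypothetical usage with made-up camera intrinsics and feature coordinates.
r, J = reprojection_residual_and_jacobian(
    p_c=(0.5, -0.2, 4.0), pixel_meas=(330.0, 230.0),
    fx=450.0, fy=450.0, cx=320.0, cy=240.0)
```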
The error calculation formula of the laser radar navigation part is designed as follows:
where u_j is the normal vector of the planar structural feature, and q_j is the laser feature point in that plane closest to p_j.
A Taylor expansion is performed on the formula (6) to obtain
where ξ_j is white noise obeying a normal distribution, and the Jacobian matrix of the laser radar observation equation takes the following specific form:
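As an illustration of the lidar residual described above, the following Python sketch assumes a standard point-to-plane formulation (an assumption, not the patent's exact Jacobian) and evaluates the signed distance of one transformed laser feature to its matched plane, together with the Jacobian block with respect to the translation only:

```python
import numpy as np

def point_to_plane_residual(p_l, R_gl, t_gl, u_j, q_j):
    """Point-to-plane residual for one lidar feature point.

    p_l        : feature point in the lidar frame
    R_gl, t_gl : candidate rotation/translation from the lidar frame to the global frame
    u_j        : unit normal vector of the matched planar feature (global frame)
    q_j        : laser feature point on that plane closest to the transformed point
    """
    p_g = R_gl @ p_l + t_gl          # transform the point into the global frame
    r = float(u_j @ (p_g - q_j))     # signed distance to the plane
    J_t = u_j                        # Jacobian of r w.r.t. the translation t_gl
    return r, J_t

# Hypothetical usage with an identity pose and made-up geometry.
r, J_t = point_to_plane_residual(
    p_l=np.array([2.0, 0.5, 0.2]), R_gl=np.eye(3), t_gl=np.zeros(3),
    u_j=np.array([0.0, 0.0, 1.0]), q_j=np.array([2.0, 0.5, 0.0]))
```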
after the incremental forms (4) and (7) of the residual errors are obtained, the system state x can be calculated by using a parameter regression method such as a Gauss Newton method or a least square method k+1 Of the optimal solution of C And r L And reaches the minimum. Taking the least squares method as an example:
where Y = [Y_{k+1-m} ... Y_{k+1} ... Y_{k+m}]^T and D = [D_{k+1-m} ... D_{k+1} ... D_{k+m}]^T form a set of 2m observations in total.
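The following Python sketch illustrates the parameter-regression step generically (a plain Gauss–Newton/least-squares iteration with placeholder callbacks, not the patent's specific filter): all visual and lidar residuals and Jacobian rows are stacked, and the state increment is obtained from a linear least-squares solve.

```python
import numpy as np

def gauss_newton_step(residual_blocks, jacobian_blocks):
    """Stack all residuals r and Jacobians J (visual and lidar) and solve the
    linearized problem min ||J dx + r||^2 for the state increment dx."""
    r = np.concatenate([np.atleast_1d(b) for b in residual_blocks])
    J = np.vstack([np.atleast_2d(b) for b in jacobian_blocks])  # blocks share a column count
    dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return dx

def solve_state(x0, build_residuals_and_jacobians, iters=10, tol=1e-8):
    """Iterate Gauss-Newton updates until the state increment is negligible.

    build_residuals_and_jacobians(x) is a user-supplied callback that evaluates
    all visual/lidar residuals and their Jacobians at the current estimate x.
    A simple additive update is used here for clarity."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        residuals, jacobians = build_residuals_and_jacobians(x)
        dx = gauss_newton_step(residuals, jacobians)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```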
The laser radar has high measurement accuracy, reaching the centimeter level; therefore, after information fusion filtering, the corrected positions of the camera feature points can also reach centimeter-level accuracy, far higher than the measurement accuracy achievable with vision alone, which improves the accuracy of the generated visual feature map.
The pedestrian wears the visual navigation system and loads the global map generated by the unmanned platform. In areas without map coverage, autonomous positioning is performed by visual SLAM; in areas with map coverage, the system positioning error is corrected by relocalization, i.e., the visual SLAM solution is corrected against the global map, which improves the pedestrian's visual navigation positioning accuracy.
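A minimal sketch of this relocalization step follows, assuming the global map stores visual feature descriptors together with their 3-D positions and using OpenCV's RANSAC PnP solver with binary (ORB-style) descriptors; the function and variable names are illustrative placeholders rather than part of the patent:

```python
import numpy as np
import cv2

def relocalize(map_points_3d, map_descriptors, frame_keypoints, frame_descriptors, K):
    """Correct the visual-SLAM pose by matching the current image against the
    pre-built global map and solving a PnP problem.

    map_points_3d     : (N, 3) feature positions from the lidar-aided global map
    map_descriptors   : (N, D) binary descriptors of those map features
    frame_keypoints   : (M, 2) pixel coordinates of corner features in the image
    frame_descriptors : (M, D) binary descriptors of the image features
    K                 : 3x3 camera intrinsic matrix
    """
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(frame_descriptors, map_descriptors)
    if len(matches) < 6:
        return None  # not enough correspondences to relocalize

    obj = np.float32([map_points_3d[m.trainIdx] for m in matches])
    img = np.float32([frame_keypoints[m.queryIdx] for m in matches])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
    return (rvec, tvec) if ok else None
```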
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Details not described in the present invention are known to those skilled in the art.
Claims (8)
1. A pedestrian visual navigation method based on laser radar cooperative assistance is characterized by comprising the following steps:
the carrier carries a laser radar, a vision sensor and an IMU (inertial measurement unit) to scan the surrounding scene;
extracting corner features in the visual sensor image to serve as first observation information;
extracting the structural features of the space object in the laser radar data as second observation information;
setting a system navigation state;
constructing an error calculation formula of the visual sensor, performing Taylor expansion, and determining a Jacobian matrix of a visual observation equation;
constructing an error calculation formula of the laser radar, performing Taylor expansion, and determining a Jacobian matrix of an observation equation of the laser radar;
solving the optimal solution of the navigation state of the system to minimize the errors of the visual sensor and the laser radar;
carrying out multi-source information fusion filtering based on the measurement information of the IMU, the vision sensor and the laser radar, obtaining the real-time pose of the carrier, and constructing a global map;
and loading the global map into the pedestrian visual navigation system and performing positioning.
2. The pedestrian visual navigation method as claimed in claim 1, wherein the first observation information is
where p_i represents the i-th visual feature point and the superscript capital letter denotes the coordinate frame; R represents a direction cosine matrix transforming from the subscript frame to the superscript frame; C denotes the camera coordinate system; G denotes the global coordinate system; I denotes the inertial coordinate system; a subscript k+1 on a frame denotes the current time, and the absence of a time subscript means the transformation is time-invariant; and p represents the three-axis position solved by each navigation mode, i.e., the position of the subscript frame expressed in the superscript frame.
5. The pedestrian visual navigation method as claimed in claim 4, wherein the error calculation formula of the vision sensor is as follows
where x_{k+1} represents the system navigation state at time k+1, and the projection operator represents the pinhole camera model;
the Jacobian matrix of the visual observation equation
6. The pedestrian vision navigation method of claim 5, wherein the error calculation formula of the lidar is
where u_j is the normal vector of a planar structural feature, and q_j is the laser feature point in that plane closest to p_j;
the Jacobian matrix of the laser radar observation equation is
where 0_{3×15} is a 3×15 zero matrix.
7. The pedestrian visual navigation method as claimed in claim 6, wherein the optimal solution of the system navigation state is calculated by using the Gauss–Newton method or the least-squares method.
8. The visual navigation method for pedestrians as claimed in claim 1, wherein the visual sensor is one of a visible light camera, an infrared camera, and a stereo camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211591779.XA CN115855042B (en) | 2022-12-12 | 2022-12-12 | Pedestrian visual navigation method based on laser radar cooperative assistance |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115855042A (en) | 2023-03-28
CN115855042B CN115855042B (en) | 2024-09-06 |
Family
ID=85672191
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211591779.XA Active CN115855042B (en) | 2022-12-12 | 2022-12-12 | Pedestrian visual navigation method based on laser radar cooperative assistance |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115855042B (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105371840A (en) * | 2015-10-30 | 2016-03-02 | 北京自动化控制设备研究所 | Method for combined navigation of inertia/visual odometer/laser radar |
WO2018054080A1 (en) * | 2016-09-23 | 2018-03-29 | 深圳大学 | Method and device for updating planned path of robot |
CN109376785A (en) * | 2018-10-31 | 2019-02-22 | 东南大学 | Air navigation aid based on iterative extended Kalman filter fusion inertia and monocular vision |
WO2020104423A1 (en) * | 2018-11-20 | 2020-05-28 | Volkswagen Aktiengesellschaft | Method and apparatus for data fusion of lidar data and image data |
CN110261870A (en) * | 2019-04-15 | 2019-09-20 | 浙江工业大学 | It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method |
WO2021180128A1 (en) * | 2020-03-11 | 2021-09-16 | 华南理工大学 | Rgbd sensor and imu sensor-based positioning method |
CN111521176A (en) * | 2020-04-27 | 2020-08-11 | 北京工业大学 | Visual auxiliary inertial navigation method fusing laser |
JP2022039906A (en) * | 2020-08-28 | 2022-03-10 | 中国計量大学 | Multi-sensor combined calibration device and method |
CN115046543A (en) * | 2021-03-09 | 2022-09-13 | 济南大学 | Multi-sensor-based integrated navigation method and system |
Non-Patent Citations (5)
Title |
---|
Bu Ningbo: "Image and lidar fusion mapping method based on joint adjustment", 2022 International Conference on Image Processing, Computer Vision and Machine Learning (ICICML), 31 December 2022 *
Shi Jun; Yang Gongliu; Chen Yajie; Wan Zhenyuan: "Research on vision-aided inertial positioning and attitude determination technology", Aeronautical Computing Technique, no. 01, 25 January 2016 *
Wang Xiaowei; He Lile; Zhao Tao: "Research on mobile robot SLAM based on lidar and binocular vision", Chinese Journal of Sensors and Actuators, no. 03, 15 March 2018 *
Zhao Yunan: "Research on inertial/geomagnetic/lidar integrated positioning technology based on graph optimization", Navigation Positioning and Timing, 25 April 2022 *
Chen Kai: "Research on simultaneous localization and mapping based on multi-sensor fusion", China Master's Theses Full-text Database (Information Science and Technology Series), 15 February 2022 *
Also Published As
Publication number | Publication date |
---|---|
CN115855042B (en) | 2024-09-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113781582B (en) | Synchronous positioning and map creation method based on laser radar and inertial navigation combined calibration | |
Gao et al. | Review of wheeled mobile robots’ navigation problems and application prospects in agriculture | |
CN109341706B (en) | Method for manufacturing multi-feature fusion map for unmanned vehicle | |
CN110262546B (en) | Tunnel intelligent unmanned aerial vehicle inspection method | |
CN109099901B (en) | Full-automatic road roller positioning method based on multi-source data fusion | |
Fan et al. | Data fusion for indoor mobile robot positioning based on tightly coupled INS/UWB | |
CN114199240B (en) | Two-dimensional code, laser radar and IMU fusion positioning system and method without GPS signal | |
CN111123911B (en) | Legged intelligent star catalogue detection robot sensing system and working method thereof | |
CN110702091B (en) | High-precision positioning method for moving robot along subway rail | |
CN106017463A (en) | Aircraft positioning method based on positioning and sensing device | |
CN103983263A (en) | Inertia/visual integrated navigation method adopting iterated extended Kalman filter and neural network | |
CN115574816A (en) | Bionic vision multi-source information intelligent perception unmanned platform | |
Rahok et al. | Navigation using an environmental magnetic field for outdoor autonomous mobile robots | |
CN110658828A (en) | Autonomous landform detection method and unmanned aerial vehicle | |
CN116047565A (en) | Multi-sensor data fusion positioning system | |
Niewola et al. | PSD–probabilistic algorithm for mobile robot 6D localization without natural and artificial landmarks based on 2.5 D map and a new type of laser scanner in GPS-denied scenarios | |
CN117234203A (en) | Multi-source mileage fusion SLAM downhole navigation method | |
Wu et al. | AFLI-Calib: Robust LiDAR-IMU extrinsic self-calibration based on adaptive frame length LiDAR odometry | |
CN117434570B (en) | Visual measurement method, measurement device and storage medium for coordinates | |
Mostafa et al. | Optical flow based approach for vision aided inertial navigation using regression trees | |
Zhang et al. | Tightly coupled integration of vector HD map, LiDAR, GNSS, and INS for precise vehicle navigation in GNSS-challenging environment | |
Reitbauer | Multi-Sensor Positioning for the Automatic Steering of Tracked Agricultural Vehicles | |
Rahok et al. | Trajectory tracking using environmental magnetic field for outdoor autonomous mobile robots | |
CN116242372B (en) | UWB-laser radar-inertial navigation fusion positioning method under GNSS refusing environment | |
CN115855042A (en) | Pedestrian visual navigation method based on laser radar cooperative assistance |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |