CN111882589A - Image-based monocular vision SLAM initialization method - Google Patents
- Publication number
- CN111882589A CN111882589A CN202010577408.0A CN202010577408A CN111882589A CN 111882589 A CN111882589 A CN 111882589A CN 202010577408 A CN202010577408 A CN 202010577408A CN 111882589 A CN111882589 A CN 111882589A
- Authority
- CN
- China
- Prior art keywords
- image
- data
- points
- point
- calculating
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000011423 initialization method Methods 0.000 title claims abstract description 14
- 239000011159 matrix material Substances 0.000 claims abstract description 53
- 238000013507 mapping Methods 0.000 claims abstract description 36
- 238000007781 pre-processing Methods 0.000 claims abstract description 21
- 238000000034 method Methods 0.000 claims description 13
- 238000004364 calculation method Methods 0.000 claims description 12
- 230000009466 transformation Effects 0.000 claims description 9
- 238000001914 filtration Methods 0.000 claims description 3
- 238000012216 screening Methods 0.000 claims description 2
- 230000004807 localization Effects 0.000 description 5
- 230000000007 visual effect Effects 0.000 description 3
- 230000000694 effects Effects 0.000 description 2
- 238000005457 optimization Methods 0.000 description 2
- 230000003190 augmentative effect Effects 0.000 description 1
- 238000003708 edge detection Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/223—Analysis of motion using block-matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Mathematical Physics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Analysis (AREA)
- Computational Mathematics (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- Multimedia (AREA)
- Algebra (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an image-based monocular vision SLAM initialization method. Template image data are acquired and preprocessed by an image preprocessing module; feature points and corner points of the preprocessed data and the spatial 3D coordinates of the template image boundary points are extracted, and feature points are extracted from the current image data. The feature point data of the template image and the feature point data of the current image are matched, a homography matrix is calculated from the matching result, the mapped corner points of the template image corner points are calculated with the homography matrix, and an optimized homography matrix is calculated by block matching between the mapped corner points and the corner points of the current image. The coordinates of the four boundary points of the template image are then mapped with the optimized homography matrix to obtain the coordinates of the four boundary points in the current image. Finally, the pose is calculated from the spatial 3D coordinate data of the four boundary points of the template image and the mapped boundary point coordinates in the current image.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to an image-based monocular vision SLAM initialization method.
Background
In recent years, visual SLAM (simultaneous localization and mapping) has been widely used in augmented reality, robotics, autonomous driving and other fields. SLAM addresses the problem of a robot moving in an unknown environment: the motion trajectory of the robot is determined from observations of the environment while a map of the environment is built at the same time. A SLAM system typically includes feature extraction, data association, state estimation, state update, feature update and the like. Visual SLAM is mainly divided into three categories according to the camera: monocular, binocular and RGB-D, with fisheye and panoramic cameras making up the remainder. Monocular SLAM has attracted wide attention because of its simple structure and low cost.
Because the absolute depth is unknown, monocular SLAM cannot recover the true size of the motion trajectory and the map, so monocular visual SLAM must be initialized to recover the scene. Since image information alone carries no real spatial 3D data, only relative depth can be estimated and the scale problem remains. Moreover, triangulation is used when recovering the scene depth, which requires the camera to undergo a certain amount of rotation and translation during initialization; the camera motion cannot be a pure rotation. Image matching relies on extracted feature points, which in turn requires the scene to have sufficiently rich geometric texture.
Disclosure of Invention
The invention aims to provide an image-based monocular vision SLAM initialization method with high precision and good stability.
The image-based monocular vision SLAM initialization method disclosed by the invention comprises the following steps:
Step 1: acquiring template image data, preprocessing the template image data through an image preprocessing module, extracting feature points and corner points of the preprocessed image data and the spatial 3D coordinates of the template image boundary points, and storing the spatial 3D coordinates;
Step 2: acquiring current image data, preprocessing each frame of the current image through the preprocessing module, and extracting feature points of the preprocessed image data;
Step 3: calling the feature point data of the template image and the feature point data of the current image to perform feature point matching, calculating a homography matrix according to the matching result, calculating the mapped corner points of the template image corner points by using the homography matrix, and calculating an optimized homography matrix through block matching between the mapped corner points and the corner points of the current image;
Step 4: mapping the coordinates of the four boundary points of the template image by using the optimized homography matrix to obtain the coordinate data of the four boundary points mapped to the current image;
Step 5: calculating the pose by using the spatial 3D coordinate data of the four boundary points of the template image and the mapped current image boundary point coordinate data, thereby completing the initialization.
The image-based monocular vision SLAM initialization method calls the feature point data of the template image and the feature point data of the current image for feature point matching, calculates a homography matrix from the matching result, calculates the mapped corner points of the template image corner points with the homography matrix, and calculates an optimized homography matrix by block matching between the mapped corner points and the corner points of the current image; the feature points can therefore be matched with high precision, which effectively solves the problem of poor monocular vision SLAM initialization. The corner point and block matching approach effectively finds accurately matched points and avoids the large calculation errors caused by directly using the matched feature points. The pose is calculated only from the spatial 3D coordinate data of the four boundary points of the template image and the mapped boundary point coordinates in the current image, which avoids the large amount of calculation and the initialization difficulty that arise when the 3D points corresponding to all feature points must be calculated.
Drawings
Fig. 1 is a flowchart illustrating an initialization method of image-based monocular vision SLAM according to the present invention.
Detailed Description
As shown in fig. 1, the image-based monocular vision SLAM initialization method according to the present invention comprises the following steps:
Step 1: acquiring template image data, preprocessing the template image data through an image preprocessing module, extracting feature points and corner points of the preprocessed image data and the spatial 3D coordinates of the template image boundary points, and storing the spatial 3D coordinates;
Step 2: acquiring current image data, preprocessing each frame of the current image through the preprocessing module, and extracting feature points of the preprocessed image data;
Step 3: calling the feature point data of the template image and the feature point data of the current image to perform feature point matching, calculating a homography matrix according to the matching result, calculating the mapped corner points of the template image corner points by using the homography matrix, and calculating an optimized homography matrix through block matching between the mapped corner points and the corner points of the current image;
Step 4: mapping the coordinates of the four boundary points of the template image by using the optimized homography matrix to obtain the coordinate data of the four boundary points mapped to the current image;
Step 5: calculating the pose by using the spatial 3D coordinate data of the four boundary points of the template image and the mapped current image boundary point coordinate data, thereby completing the initialization.
The image-based monocular vision SLAM initialization method calls the feature point data of the template image and the feature point data of the current image for feature point matching, calculates a homography matrix from the matching result, calculates the mapped corner points of the template image corner points with the homography matrix, and calculates an optimized homography matrix by block matching between the mapped corner points and the corner points of the current image; the feature points can therefore be matched with high precision, which effectively solves the problem of poor monocular vision SLAM initialization. The corner point and block matching approach effectively finds accurately matched points and avoids the large calculation errors caused by directly using the matched feature points. The pose is calculated only from the spatial 3D coordinate data of the four boundary points of the template image and the mapped boundary point coordinates in the current image, which avoids the large amount of calculation and the initialization difficulty that arise when the 3D points corresponding to all feature points must be calculated.
The image preprocessing comprises the following specific steps:
Step 1-1: the image preprocessing module adjusts the image resolution;
Step 1-2: an image pyramid is established for the adjusted image;
Step 1-3: filtering or edge detection is performed on each layer of the image pyramid.
Initializing the visual SLAM with an identification image of known scale effectively removes the scale ambiguity, and performing the calculation with an image of known size effectively solves the problem of difficult or inaccurate initialization. Adjusting the image resolution in the image preprocessing module appropriately accelerates the processing; filtering or edge detection ensures that the feature points all lie on texture edges, so the extracted feature points are more robust.
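As an illustration of steps 1-1 to 1-3, the following is a minimal sketch of the preprocessing stage, assuming an OpenCV implementation; the target resolution, number of pyramid levels and the Gaussian/Canny parameters are illustrative assumptions rather than values fixed by the invention.

```python
import cv2

def preprocess(image, target_width=640, pyramid_levels=3):
    """Preprocess an input color frame: resize, build an image pyramid,
    and filter / edge-detect each pyramid layer (steps 1-1 to 1-3)."""
    # Step 1-1: adjust the image resolution (here: scale to a fixed width)
    scale = target_width / image.shape[1]
    resized = cv2.resize(image, None, fx=scale, fy=scale)
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)

    # Step 1-2: build an image pyramid from the adjusted image
    pyramid = [gray]
    for _ in range(pyramid_levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))

    # Step 1-3: filter (Gaussian) or detect edges (Canny) on every layer
    filtered = [cv2.GaussianBlur(level, (5, 5), 1.0) for level in pyramid]
    edges = [cv2.Canny(level, 50, 150) for level in pyramid]
    return filtered, edges
```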
The camera acquires the current image data, each frame of the current image is preprocessed through the preprocessing module, and feature points are extracted from the preprocessed image data. The feature point data of the template image and the feature point data of the current image are then called for feature point matching, a homography matrix is calculated from the matching result, the mapped corner points of the template image corner points are calculated with the homography matrix, the mapped corner points are matched to the corner points of the current image, and an optimized homography matrix is calculated through block matching. Specifically, the mapped corner points are mapped forward or backward to the corner points of the current image and the optimized homography matrix is calculated through block matching; checking both the forward and the backward mapping makes the calculated result closer to the optimum and more stable, and thereby avoids the large amount of calculation and the initialization difficulty that arise when the 3D points corresponding to all feature points must be calculated.
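As one reading of the forward/backward mapping check described above, the sketch below accepts a corner correspondence only when block matching succeeds in both directions: template to current image through H, and current image back to template through the inverse of H. This is an illustrative interpretation, not wording from the patent; the patch size D and the score threshold are assumptions.

```python
import cv2
import numpy as np

def symmetric_block_check(template_gray, current_gray, pt_template, pt_current,
                          H, D=8, min_score=0.8):
    """Accept a corner correspondence only if block matching succeeds in both
    the forward (template -> current) and backward (current -> template) direction."""
    Hinv = np.linalg.inv(H)
    size = (2 * D + 1, 2 * D + 1)

    def patch(img, pt):
        # extract the (2D+1) x (2D+1) neighborhood centered on pt
        return cv2.getRectSubPix(img, size, (float(pt[0]), float(pt[1])))

    def map_pt(M, pt):
        # apply a 3x3 homography to a single (x, y) point
        v = M @ np.array([pt[0], pt[1], 1.0])
        return v[:2] / v[2]

    # forward: map the template corner into the current image and compare blocks
    fwd = cv2.matchTemplate(patch(current_gray, map_pt(H, pt_template)),
                            patch(template_gray, pt_template),
                            cv2.TM_CCOEFF_NORMED)[0][0]
    # backward: map the current corner back into the template image and compare blocks
    bwd = cv2.matchTemplate(patch(template_gray, map_pt(Hinv, pt_current)),
                            patch(current_gray, pt_current),
                            cv2.TM_CCOEFF_NORMED)[0][0]
    return fwd >= min_score and bwd >= min_score
```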
The step of calling the feature point data of the template image and the feature point data of the current image to perform feature point matching, calculating a homography matrix according to the matching result, calculating the mapped corner points of the template image corner points by using the homography matrix, mapping the mapped corner points forward or backward to the corner points of the current image, and calculating the optimized homography matrix through block matching comprises the following sub-steps (a code sketch of these sub-steps is given after step 3-6):
Step 3-1: performing coarse matching screening on the feature point data of the template image and the feature point data of the current image, the two groups of feature point data being matched with nearest-neighbor matching (knnMatch) to quickly screen out coarse matching points;
Step 3-2: calculating an affine transformation matrix A from the two groups of coarsely matched feature points by the RANSAC method;
Step 3-3: calculating the transformed points Bi (i = 1..N) from the corner points Ai (i = 1..N) extracted from the identification image according to the affine transformation matrix A, where Bi = A·[Ai; 1];
Step 3-4: initializing a homography matrix H0 from the affine matrix A; given a region threshold D and the identification image, for each neighborhood D centered on a point Ai, calculating the block region q1 obtained after perspective transformation of that block region, and calculating the block region q2 of the neighborhood D of the point Bi matched with Ai; finally performing block matching between q1 and q2 and recording the feature point pairs that satisfy the block matching; if the number of matched feature points is less than a given threshold n, returning to the previous step, otherwise continuing with the following steps;
Step 3-5: calculating a homography matrix H by using the feature point pairs that satisfy the conditions as valid points; performing a perspective transformation of the identification image with the calculated homography matrix H to obtain a perspective image, and extracting corner points from the perspective image; then performing feature point block matching between the corner points of the perspective image and the corner points of the current image, and calculating a homography matrix rH from the matched valid corner points;
Step 3-6: taking H = H × rH as the final homography transformation.
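The following is a simplified sketch of steps 3-1 to 3-6, assuming ORB features and OpenCV; the feature type, ratio-test value, block size D, block-matching score and inlier threshold are illustrative assumptions, and the second-pass refinement that produces rH is only indicated, not implemented.

```python
import cv2
import numpy as np

def refine_homography(template_gray, current_gray, corners_A, D=8, ratio=0.75,
                      block_score=0.8, min_matches=8):
    """Simplified sketch of steps 3-1 to 3-6.
    corners_A: (N, 2) float32 array of corner points Ai from the identification image."""
    orb = cv2.ORB_create(1000)
    kp_t, des_t = orb.detectAndCompute(template_gray, None)
    kp_c, des_c = orb.detectAndCompute(current_gray, None)

    # Step 3-1: coarse matching with knnMatch and a ratio test
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = [p[0] for p in matcher.knnMatch(des_t, des_c, k=2)
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    pts_t = np.float32([kp_t[m.queryIdx].pt for m in good])
    pts_c = np.float32([kp_c[m.trainIdx].pt for m in good])

    # Step 3-2: affine transformation A from the coarse matches (RANSAC)
    A, _ = cv2.estimateAffine2D(pts_t, pts_c, method=cv2.RANSAC)

    # Step 3-3: map the identification-image corners Ai to Bi = A . [Ai; 1]
    corners_B = cv2.transform(corners_A.reshape(-1, 1, 2), A).reshape(-1, 2)

    # Step 3-4: block matching between the D-neighborhoods of Ai and Bi
    size = (2 * D + 1, 2 * D + 1)
    valid_t, valid_c = [], []
    for a, b in zip(corners_A, corners_B):
        q1 = cv2.getRectSubPix(template_gray, size, (float(a[0]), float(a[1])))
        q2 = cv2.getRectSubPix(current_gray, size, (float(b[0]), float(b[1])))
        if cv2.matchTemplate(q2, q1, cv2.TM_CCOEFF_NORMED)[0][0] > block_score:
            valid_t.append(a)
            valid_c.append(b)
    if len(valid_t) < min_matches:
        return None  # too few block-matched corners: fall back to the previous step

    # Step 3-5: homography H from the valid corner pairs; a second pass that warps
    # the identification image by H, re-extracts corners and block-matches them
    # against the current image would yield the refinement matrix rH
    H, _ = cv2.findHomography(np.float32(valid_t), np.float32(valid_c), cv2.RANSAC)
    rH = np.eye(3)  # placeholder for the second-pass refinement

    # Step 3-6: final homography
    return H @ rH
```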
With H = H × rH as the final homography transformation, the coordinates of the four boundary points of the template image are mapped by the optimized homography matrix to obtain the coordinate data of the four boundary points in the current image; the camera pose is then calculated from the spatial 3D coordinate data of the four boundary points of the template image and the boundary point coordinates of the current image by a PnP or nonlinear optimization method, which completes the initialization.
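A minimal sketch of steps 4 and 5 follows, assuming OpenCV's solvePnP; the camera intrinsics K, the planar-PnP flag and the example template size are illustrative assumptions, not values specified by the patent.

```python
import cv2
import numpy as np

def initialize_pose(H_final, boundary_2d_template, boundary_3d_template, K):
    """Sketch of steps 4-5: map the four template boundary points into the
    current image with the optimized homography, then recover the camera pose with PnP."""
    # Step 4: project the four boundary points (pixel coordinates in the
    # template image) into the current image with the final homography
    pts = boundary_2d_template.reshape(-1, 1, 2).astype(np.float32)
    boundary_2d_current = cv2.perspectiveTransform(pts, H_final).reshape(-1, 2)

    # Step 5: PnP from the spatial 3D coordinates of the boundary points and
    # their 2D projections in the current image (planar-target PnP variant)
    ok, rvec, tvec = cv2.solvePnP(
        boundary_3d_template.astype(np.float32),   # 4 x 3 spatial coordinates
        boundary_2d_current.astype(np.float32),    # 4 x 2 image coordinates
        K, None, flags=cv2.SOLVEPNP_IPPE)
    R, _ = cv2.Rodrigues(rvec)                     # rotation vector -> matrix
    return ok, R, tvec

# Example usage with an assumed A4-sized planar template (0.297 m x 0.210 m):
# boundary_3d = np.float32([[0, 0, 0], [0.297, 0, 0],
#                           [0.297, 0.210, 0], [0, 0.210, 0]])
```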
The corner point and block matching approach effectively finds accurately matched points and solves the problem of large calculation errors caused by directly using the matched feature points. The identification image is used as the first frame of the two-frame initialization matching, so better feature points can be used as matching points; running as a single thread, the method processes images in real time and can be used both at initialization and when tracking is lost and re-initialization is needed; multiple images can be recognized efficiently.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and the invention is not to be considered limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all of them shall be considered as falling within the protection scope of the invention.
Claims (7)
1. An image-based monocular vision SLAM initialization method, characterized by comprising the following steps:
Step 1: acquiring template image data, preprocessing the template image data through an image preprocessing module, extracting feature points and corner points of the preprocessed image data and the spatial 3D coordinates of the template image boundary points, and storing the spatial 3D coordinates;
Step 2: acquiring current image data, preprocessing each frame of the current image through the preprocessing module, and extracting feature points of the preprocessed image data;
Step 3: calling the feature point data of the template image and the feature point data of the current image to perform feature point matching, calculating a homography matrix according to the matching result, calculating the mapped corner points of the template image corner points by using the homography matrix, and calculating an optimized homography matrix through block matching between the mapped corner points and the corner points of the current image;
Step 4: mapping the coordinates of the four boundary points of the template image by using the optimized homography matrix to obtain the coordinate data of the four boundary points mapped to the current image;
Step 5: calculating the pose by using the spatial 3D coordinate data of the four boundary points of the template image and the mapped current image boundary point coordinate data to complete the initialization.
2. The image-based monocular vision SLAM initialization method according to claim 1, wherein the image preprocessing specifically comprises:
Step 1-1: the image preprocessing module adjusts the image resolution;
Step 1-2: an image pyramid is established for the adjusted image;
Step 1-3: filtering or edge detection is performed on each layer of the image pyramid.
3. The method for initializing an image-based monocular vision SLAM according to claim 1, wherein the step 2 comprises: the camera acquires current image data, each frame of image data of the current image is preprocessed through the preprocessing module, and feature points of the preprocessed image data are extracted.
4. The image-based monocular vision SLAM initialization method according to any one of claims 1 to 3, wherein the step 3 comprises: calling the feature point data of the template image and the feature point data of the current image to perform feature point matching, calculating a homography matrix according to the matching result, calculating the mapped corner points of the template image corner points by using the homography matrix, mapping the mapped corner points forward or backward to the corner points of the current image, and calculating the optimized homography matrix through block matching.
5. The image-based monocular vision SLAM initialization method according to claim 4, wherein the step of matching the feature point data of the template image and the feature point data of the current image, calculating a homography matrix according to the matching result, calculating the mapped corner points of the template image corner points by using the homography matrix, mapping the mapped corner points forward or backward to the corner points of the current image, and calculating the optimized homography matrix through block matching comprises the following sub-steps:
Step 3-1: performing coarse matching screening on the feature point data of the template image and the feature point data of the current image, the two groups of feature point data being matched with nearest-neighbor matching (knnMatch) to quickly screen out coarse matching points;
Step 3-2: calculating an affine transformation matrix A from the two groups of coarsely matched feature points by the RANSAC method;
Step 3-3: calculating the transformed points Bi (i = 1..N) from the corner points Ai (i = 1..N) extracted from the identification image according to the affine transformation matrix A, where Bi = A·[Ai; 1];
Step 3-4: initializing a homography matrix H0 from the affine matrix A; given a region threshold D and the identification image, for each neighborhood D centered on a point Ai, calculating the block region q1 obtained after perspective transformation of that block region, and calculating the block region q2 of the neighborhood D of the point Bi matched with Ai; finally performing block matching between q1 and q2 and recording the feature point pairs that satisfy the block matching; if the number of matched feature points is less than a given threshold n, returning to the previous step, otherwise continuing with the following steps;
Step 3-5: calculating a homography matrix H by using the feature point pairs that satisfy the conditions as valid points; performing a perspective transformation of the identification image with the calculated homography matrix H to obtain a perspective image, and extracting corner points from the perspective image; then performing feature point block matching between the corner points of the perspective image and the corner points of the current image, and calculating a homography matrix rH from the matched valid corner points;
Step 3-6: taking H = H × rH as the final homography transformation.
6. The image-based monocular vision SLAM initialization method according to claim 5, wherein in step 4 the coordinates of the four boundary points in the template image are mapped with the optimized homography matrix H = H × rH to obtain the coordinate data of the four boundary points mapped to the current image.
7. The image-based monocular vision SLAM initialization method according to claim 6, wherein in step 5 the camera pose is calculated from the spatial 3D coordinate data of the four boundary points in the template image and the mapped current image boundary point coordinate data by using a PnP method, and the initialization is completed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010577408.0A CN111882589A (en) | 2020-06-23 | 2020-06-23 | Image-based monocular vision SLAM initialization method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010577408.0A CN111882589A (en) | 2020-06-23 | 2020-06-23 | Image-based monocular vision SLAM initialization method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111882589A true CN111882589A (en) | 2020-11-03 |
Family
ID=73156616
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010577408.0A Pending CN111882589A (en) | 2020-06-23 | 2020-06-23 | Image-based monocular vision SLAM initialization method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111882589A (en) |
- 2020-06-23 — CN application CN202010577408.0A — patent CN111882589A (en) — active, Pending
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101635859A (en) * | 2009-08-21 | 2010-01-27 | 清华大学 | Method and device for converting plane video to three-dimensional video |
CN102075686A (en) * | 2011-02-10 | 2011-05-25 | 北京航空航天大学 | Robust real-time on-line camera tracking method |
US20140147043A1 (en) * | 2012-11-26 | 2014-05-29 | Nokia Corporation | Method, apparatus and computer program product for processing of image frames |
CN104463108A (en) * | 2014-11-21 | 2015-03-25 | 山东大学 | Monocular real-time target recognition and pose measurement method |
CN105427242A (en) * | 2015-10-28 | 2016-03-23 | 南京工业大学 | Interactive grid constraint deformation image self-adaptive scaling method based on content perception |
CN107959805A (en) * | 2017-12-04 | 2018-04-24 | 深圳市未来媒体技术研究院 | Light field video imaging system and method for processing video frequency based on Hybrid camera array |
CN109961078A (en) * | 2017-12-22 | 2019-07-02 | 展讯通信(上海)有限公司 | Images match and joining method, device, system, readable medium |
CN110264509A (en) * | 2018-04-27 | 2019-09-20 | 腾讯科技(深圳)有限公司 | Determine the method, apparatus and its storage medium of the pose of image-capturing apparatus |
CN109087331A (en) * | 2018-08-02 | 2018-12-25 | 阿依瓦(北京)技术有限公司 | A kind of motion forecast method based on KCF algorithm |
CN109242811A (en) * | 2018-08-16 | 2019-01-18 | 广州视源电子科技股份有限公司 | Image alignment method and device, computer readable storage medium and computer equipment |
WO2020043155A1 (en) * | 2018-08-31 | 2020-03-05 | 清华-伯克利深圳学院筹备办公室 | Multiple scale image fusion method and device, storage medium, and terminal |
CN109671120A (en) * | 2018-11-08 | 2019-04-23 | 南京华捷艾米软件科技有限公司 | A kind of monocular SLAM initial method and system based on wheel type encoder |
CN109636852A (en) * | 2018-11-23 | 2019-04-16 | 浙江工业大学 | A kind of monocular SLAM initial method |
CN109615584A (en) * | 2018-12-17 | 2019-04-12 | 辽宁工程技术大学 | A kind of SAR image sequence MAP super resolution ratio reconstruction method based on homography constraint |
CN109462748A (en) * | 2018-12-21 | 2019-03-12 | 福州大学 | A kind of three-dimensional video-frequency color correction algorithm based on homography matrix |
CN110223235A (en) * | 2019-06-14 | 2019-09-10 | 南京天眼信息科技有限公司 | A kind of flake monitoring image joining method based on various features point combinations matches |
Non-Patent Citations (7)
Title |
---|
Changsoo Je et al.: "Optimized hierarchical block matching for fast and accurate image registration", Signal Processing: Image Communication, pages 779 - 791 *
Sharon Elsa Mathew et al.: "Intellectual Homography Based Global Motion Estimation For Blocks In Video Coding", International Conference on Physics and Photonics Processes in Nano Sciences, pages 1 - 7 *
Yu Ying et al.: "High-precision automatic extraction of evenly distributed image tie points", Acta Geodaetica et Cartographica Sinica, pages 90 - 97 *
Zhang Guoliang: "Research on vision-based local autonomous control in space robot teleoperation", China Doctoral Dissertations Full-text Database, Information Science and Technology, pages 140 - 25 *
Li Jia et al.: "Video panorama stitching method based on image block matching", Journal of Basic Science and Engineering, pages 697 - 708 *
Li Fu: "Research on visual SLAM methods for large-scale scenes", China Masters' Theses Full-text Database, Information Science and Technology, pages 138 - 1410 *
Yang Zheng: "Research on image stitching algorithms based on block matching and feature point matching", China Masters' Theses Full-text Database, Information Science and Technology, pages 138 - 774 *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113129376A (en) * | 2021-04-22 | 2021-07-16 | 青岛联合创智科技有限公司 | Checkerboard-based camera real-time positioning method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108717712B (en) | Visual inertial navigation SLAM method based on ground plane hypothesis | |
US9420265B2 (en) | Tracking poses of 3D camera using points and planes | |
US11830216B2 (en) | Information processing apparatus, information processing method, and storage medium | |
US20030012410A1 (en) | Tracking and pose estimation for augmented reality using real features | |
Beall et al. | 3D reconstruction of underwater structures | |
CN108776989B (en) | Low-texture planar scene reconstruction method based on sparse SLAM framework | |
CN111462200A (en) | Cross-video pedestrian positioning and tracking method, system and equipment | |
CN110176032B (en) | Three-dimensional reconstruction method and device | |
WO2020113423A1 (en) | Target scene three-dimensional reconstruction method and system, and unmanned aerial vehicle | |
CN108492316A (en) | A kind of localization method and device of terminal | |
CN110853075A (en) | Visual tracking positioning method based on dense point cloud and synthetic view | |
CN112801074B (en) | Depth map estimation method based on traffic camera | |
GB2580691A (en) | Depth estimation | |
CN112419497A (en) | Monocular vision-based SLAM method combining feature method and direct method | |
Ding et al. | Fusing structure from motion and lidar for dense accurate depth map estimation | |
Yuan et al. | 3D reconstruction of background and objects moving on ground plane viewed from a moving camera | |
CN110375765B (en) | Visual odometer method, system and storage medium based on direct method | |
CN115936029A (en) | SLAM positioning method and device based on two-dimensional code | |
CN115471748A (en) | Monocular vision SLAM method oriented to dynamic environment | |
KR100574227B1 (en) | Apparatus and method for separating object motion from camera motion | |
CN116843754A (en) | Visual positioning method and system based on multi-feature fusion | |
JP6228239B2 (en) | A method for registering data using a set of primitives | |
CN113345032B (en) | Initialization map building method and system based on wide-angle camera large distortion map | |
CN111882589A (en) | Image-based monocular vision SLAM initialization method | |
Wang et al. | RGB-guided depth map recovery by two-stage coarse-to-fine dense CRF models |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| TA01 | Transfer of patent application right | Effective date of registration: 2022-07-14; Address after: 510000, Room 617, No. 193 Kexue Avenue, Huangpu District, Guangzhou City, Guangdong Province; Applicant after: Guangzhou Gaowei Network Technology Co.,Ltd.; Address before: 510670, Room 604, 193 Kexue Avenue, Huangpu District, Guangzhou City, Guangdong Province; Applicant before: Guangzhou wanwei Innovation Technology Co.,Ltd. |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2020-11-03 |