
CN112308917A - Vision-based mobile robot positioning method - Google Patents

Vision-based mobile robot positioning method

Info

Publication number
CN112308917A
Authority
CN
China
Prior art keywords
key frame
point
points
mobile robot
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011103272.6A
Other languages
Chinese (zh)
Inventor
顾寄南
许昊
董瑞霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University
Original Assignee
Jiangsu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN202011103272.6A priority Critical patent/CN112308917A/en
Publication of CN112308917A publication Critical patent/CN112308917A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/269Analysis of motion using gradient-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a vision-based mobile robot positioning method comprising the following steps: S1) acquiring image information of the environment surrounding the mobile robot; S2) obtaining the motion trajectory of the mobile robot with an optical flow method while screening key frames from the input image frames; S3) constructing a local map based on the motion trajectory; S4) acquiring the next key frame of the real-time environment through the local map and extracting features from the key frame; S5) projecting the three-dimensional map points of the local map onto the current key frame and computing the reprojection error; S6) judging whether the reprojection error exceeds a threshold; if so, performing bundle adjustment on the current key frame, correcting the key frame pose, and updating the local map according to the key frame before entering step S7; otherwise, entering step S7 directly; S7) judging whether positioning should continue; if so, returning to step S2, otherwise ending positioning. The invention improves computation speed while retaining a degree of robustness.

Description

Vision-based mobile robot positioning method
Technical Field
The invention relates to the field of mobile robot positioning, and in particular to a vision-based mobile robot positioning method.
Background
With the continuous development of mobile robots, demand from various industries for mobile robots such as epidemic-prevention robots, cleaning robots, and inspection robots keeps growing. Autonomous positioning of mobile robots has therefore received wide attention, since accurate positioning is a prerequisite for subsequent tasks such as path planning and local obstacle avoidance. At present there are two main approaches to mobile robot positioning: vision-based methods and lidar-based methods. Because lidar is expensive while cameras are inexpensive and easy to install, vision-based positioning has become a research hotspot.
Vision-based positioning methods can be divided into feature-point methods and optical flow methods. The optical flow method relies on the brightness-constancy assumption and performs positioning by tracking optical flow. The feature-point method estimates the pose by extracting and matching features across image frames; although it does not depend on the brightness-constancy assumption, extracting and matching feature points is time-consuming.
Disclosure of Invention
To address the deficiencies of the prior art, the invention provides a vision-based mobile robot positioning method that improves computation speed while retaining a degree of robustness.
The present invention achieves the above-described object by the following technical means.
A vision-based mobile robot positioning method comprises the following steps:
S1) acquiring image information of the environment surrounding the mobile robot;
S2) performing optical flow tracking on the input image frames to obtain the motion trajectory of the mobile robot, and screening key frames as the image frames are input;
S3) constructing a local map based on the motion trajectory of the mobile robot obtained in step S2;
S4) acquiring the next key frame of the real-time environment through the local map, and extracting features from the key frame;
S5) projecting the three-dimensional map points of the local map onto the current key frame, and computing the reprojection error;
S6) judging whether the reprojection error exceeds a threshold; if so, performing bundle adjustment on the current key frame, correcting the key frame pose, and updating the local map according to the key frame before entering step S7; otherwise, entering step S7 directly;
S7) judging whether positioning should continue; if so, returning to step S2, otherwise ending positioning.
Preferably, the key frame screening performed in step S2 while image frames are input uses the following criteria:
A1) at least 30 frames have passed since the last key frame was inserted;
A2) the current frame tracks at least 60 points;
A3) the overlap ratio between the points observed by the current frame and those observed by the last key frame is less than 85%.
Preferably, extracting features from the key frame in step S4 specifically includes the following steps:
B1) dividing the key frame into an image grid;
B2) extracting ORB features from the current key frame;
B3) checking the number of feature points: if fewer than 1000, lowering the extraction threshold and extracting again; if more than 2000, applying quadtree homogenization.
Preferably, step B2 specifically includes the following steps:
B2.1) smoothing the image with a 9 × 9 Gaussian kernel;
B2.2) detecting feature points with the FAST algorithm: a circle of gray values around each candidate feature point is examined, and if at least 12 pixels on the circle differ from the candidate point in gray value by more than a threshold, the candidate point is taken as a feature point, where the gray-value difference count is given by:
N = \sum_{i \in \mathrm{circle}(p)} \left[\, \left| I(i) - I(p) \right| > d \,\right]
where N is the number of pixels on the circle whose gray value differs from that of the candidate point by more than the threshold, i indexes the points on the circle around the current candidate point p, I(i) is the gray value of a point on the circle, I(p) is the gray value of the candidate point at the circle centre, and d is the threshold.
B2.3) drawing a circle with the feature point from step B2.2 as its centre and d as its radius;
B2.4) randomly selecting 132 pairs of points inside the circle obtained in step B2.3; for each pair, taking 1 if the gray value of the first point is greater than that of the second point and 0 otherwise, thereby forming a 132-bit binary descriptor.
Preferably, the reprojection error in step S5 is computed as:
e_{i,j} = x_{i,j} - \rho\left(T_{i,w},\, P_{w,j}\right)
where e_{i,j} is the reprojection error of the j-th feature point in the i-th key frame, x_{i,j} is the j-th feature point observed in the i-th frame, \rho is the projection function, T_{i,w} is the camera extrinsic (pose) matrix of the i-th frame, w denotes the world coordinate system, and P_{w,j} is the position of the map point corresponding to the j-th feature point in world coordinates.
Preferably, updating the local map according to the key frame in step S6 includes the following steps:
C1) adding feature points of the current key frame that do not yet appear in the local map to the local map;
C2) deleting a map point from the local map if, after its creation, it is observed by fewer than three key frames;
C3) removing unqualified points from the new feature points and map points through positive-depth, parallax, or reprojection-error checks.
The invention has the following beneficial effects:
The invention discloses a vision-based mobile robot positioning method in which the pose of the mobile robot is first estimated with an optical flow method and then further optimized by extracting features from key frames and constraining the reprojection error between map points projected onto the current key frame and the corresponding feature points. The speed of the optical flow method is retained, while the initial pose estimated by optical flow is refined with a feature-point method, thereby improving the positioning accuracy of the mobile robot.
Drawings
Fig. 1 is a flowchart of a method for positioning a mobile robot based on vision according to an embodiment of the present invention.
Fig. 2 is a flow chart of feature extraction according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein, but rather should be construed as broadly as the present invention is capable of modification in various respects, all without departing from the spirit and scope of the present invention.
Referring to fig. 1, a method for positioning a mobile robot based on vision according to an embodiment of the present invention includes the following steps:
Step S1) image information of the environment surrounding the mobile robot is collected by a camera and transmitted to an industrial personal computer through a transmission interface;
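As a minimal sketch of this acquisition step, the snippet below grabs frames from a camera with OpenCV and converts them to grayscale for the optical-flow front end; the camera index, the stop condition, and the hand-off to the industrial personal computer are assumptions, since the patent does not specify them.

```python
import cv2

# Minimal acquisition sketch; camera index 0 and the grayscale conversion are
# assumptions, and the transmission interface to the industrial PC is not modeled.
cap = cv2.VideoCapture(0)
frames = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frames.append(gray)        # hand each frame to the positioning pipeline
    if len(frames) >= 300:     # arbitrary stop condition for the sketch
        break
cap.release()
```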
Step S2) optical flow tracking is performed on the input image frames to obtain the motion trajectory of the mobile robot, and key frames are screened as the image frames are input;
Key frame screening is performed while image frames are input according to the following criteria (a minimal code sketch follows the list):
A1) at least 30 frames have passed since the last key frame was inserted;
A2) the current frame tracks at least 60 points;
A3) the overlap ratio between the points observed by the current frame and those observed by the last key frame is less than 85%.
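The following is a minimal sketch of step S2 under these criteria, using OpenCV's pyramidal Lucas-Kanade tracker. The thresholds (30 frames, 60 points, 85% overlap) come from A1)-A3); combining them with a logical AND, the tracking window size, and the data layout are assumptions.

```python
import cv2
import numpy as np

MIN_KEYFRAME_GAP = 30      # A1) at least 30 frames since the last key frame
MIN_TRACKED_POINTS = 60    # A2) the current frame tracks at least 60 points
MAX_OVERLAP_RATIO = 0.85   # A3) overlap with the last key frame below 85%

def track_and_screen(prev_gray, gray, prev_pts, point_ids,
                     frames_since_kf, last_kf_point_ids):
    """Track points with pyramidal LK optical flow and decide whether the
    current frame becomes a key frame according to criteria A1)-A3).

    prev_pts: float32 array of shape (N, 1, 2); point_ids: int array of shape (N,);
    last_kf_point_ids: set of point ids observed by the last key frame.
    """
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    tracked_pts = next_pts[good]
    tracked_ids = point_ids[good]

    overlap = len(set(tracked_ids.tolist()) & last_kf_point_ids) / max(len(tracked_ids), 1)
    is_keyframe = (frames_since_kf >= MIN_KEYFRAME_GAP
                   and len(tracked_pts) >= MIN_TRACKED_POINTS
                   and overlap < MAX_OVERLAP_RATIO)
    return tracked_pts, tracked_ids, is_keyframe
```

The relative pose between consecutive frames can then be recovered from the tracked correspondences (for example through the essential matrix), which yields the motion trajectory used in step S3.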
Step S3) a local map is constructed based on the motion trajectory of the mobile robot obtained by the optical flow method;
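One way to seed the local map from two key frames of that trajectory is linear triangulation of the tracked correspondences; the patent does not prescribe a specific routine, so the sketch below uses OpenCV's `cv2.triangulatePoints` under the assumption that the calibration matrix K and the key frame poses (R, t) from step S2 are available.

```python
import cv2
import numpy as np

def triangulate_map_points(K, R1, t1, R2, t2, pts1, pts2):
    """Create initial 3-D map points from correspondences (pts1, pts2), each of
    shape (N, 2), seen in two key frames with world-to-camera poses (R1, t1), (R2, t2)."""
    P1 = K @ np.hstack([R1, t1.reshape(3, 1)])
    P2 = K @ np.hstack([R2, t2.reshape(3, 1)])
    pts4d = cv2.triangulatePoints(P1, P2,
                                  pts1.T.astype(np.float64),
                                  pts2.T.astype(np.float64))
    return (pts4d[:3] / pts4d[3]).T   # N x 3 map points in world coordinates
```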
Step S4) the next key frame of the real-time environment is acquired through the local map, and features are extracted from the key frame;
As shown in Fig. 2, feature extraction on the key frame includes the following specific steps (code sketches follow the list):
B1) dividing the key frame into an image grid;
B2) extracting ORB features from the current key frame, which specifically includes:
B2.1) smoothing the image with a 9 × 9 Gaussian kernel;
B2.2) detecting feature points with the FAST (Features from Accelerated Segment Test) algorithm: a circle of gray values around each candidate feature point is examined, and if at least 12 pixels on the circle differ from the candidate point in gray value by more than a threshold, the candidate point is taken as a feature point, where the gray-value difference count is given by:
N = \sum_{i \in \mathrm{circle}(p)} \left[\, \left| I(i) - I(p) \right| > d \,\right]
where N is the number of pixels on the circle whose gray value differs from that of the candidate point by more than the threshold, i indexes the points on the circle around the current candidate point p, I(i) is the gray value of a point on the circle, I(p) is the gray value of the candidate point at the circle centre, and d is the threshold.
B2.3) drawing a circle with the feature point from step B2.2 as its centre and d as its radius;
B2.4) randomly selecting 132 pairs of points inside the circle obtained in step B2.3; for each pair, taking 1 if the gray value of the first point is greater than that of the second point and 0 otherwise, thereby forming a 132-bit binary descriptor.
B3) checking the number of feature points: if fewer than 1000, lowering the extraction threshold and extracting again; if more than 2000, applying quadtree homogenization.
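The segment-test criterion of B2.2) can be written down almost literally. In the sketch below, the 16-pixel circle of radius 3 is the standard FAST sampling ring (an assumption, since the patent does not list the offsets); the "at least 12 pixels" rule and the threshold d come from the text, and in practice `cv2.FastFeatureDetector_create` would replace this reference loop.

```python
import numpy as np

# Standard 16-pixel Bresenham circle of radius 3 used by FAST (assumed here).
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, x, y, d, min_count=12):
    """N = sum over the circle of [|I(i) - I(p)| > d]; the candidate pixel (x, y)
    is accepted as a feature point when N >= min_count (12 in the text).
    The caller must keep (x, y) at least 3 pixels away from the image border."""
    center = int(img[y, x])
    n = sum(1 for dx, dy in CIRCLE
            if abs(int(img[y + dy, x + dx]) - center) > d)
    return n >= min_count
```

The adaptive rule of B3) might look as follows with OpenCV's ORB detector; the 1000/2000 bounds are from the text, while the threshold step of 5 and the per-cell cap used as a simple stand-in for the quadtree homogenization are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_keyframe_features(gray, threshold=20, grid=(8, 8),
                              min_pts=1000, max_pts=2000):
    """Extract ORB features on a key frame: relax the FAST threshold when too few
    points are found, prune per grid cell (quadtree stand-in) when too many."""
    orb = cv2.ORB_create(nfeatures=2 * max_pts, fastThreshold=threshold)
    kps, desc = orb.detectAndCompute(gray, None)
    kps = list(kps) if kps else []

    if len(kps) < min_pts and threshold > 5:
        return extract_keyframe_features(gray, threshold - 5, grid, min_pts, max_pts)

    if len(kps) > max_pts:
        h, w = gray.shape
        cell_h, cell_w = h / grid[0], w / grid[1]
        cap = max_pts // (grid[0] * grid[1]) + 1
        buckets = {}
        for kp, d in zip(kps, desc):
            key = (int(kp.pt[1] // cell_h), int(kp.pt[0] // cell_w))
            buckets.setdefault(key, []).append((kp, d))
        kept = []
        for items in buckets.values():
            items.sort(key=lambda kd: kd[0].response, reverse=True)
            kept.extend(items[:cap])   # keep the strongest responses per cell
        kps = [kd[0] for kd in kept]
        desc = np.array([kd[1] for kd in kept])
    return kps, desc
```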
Step S5) the three-dimensional map points of the local map are projected onto the current key frame, and the reprojection error is computed;
The reprojection error is computed as:
e_{i,j} = x_{i,j} - \rho\left(T_{i,w},\, P_{w,j}\right)
where e_{i,j} is the reprojection error of the j-th feature point in the i-th key frame, x_{i,j} is the j-th feature point observed in the i-th frame, \rho is the projection function, T_{i,w} is the camera extrinsic (pose) matrix of the i-th frame, w denotes the world coordinate system, and P_{w,j} is the position of the map point corresponding to the j-th feature point in world coordinates.
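The formula can be transcribed directly: T_{i,w} is split into a rotation R and a translation t, and the projection ρ is carried out with the pinhole model via `cv2.projectPoints`. The calibration matrix K and the distortion coefficients are assumed to be known from calibration; the per-point error norms are what step S6 compares against the threshold.

```python
import cv2
import numpy as np

def reprojection_errors(points_world, points_obs, R_iw, t_iw, K, dist=None):
    """e_{i,j} = x_{i,j} - rho(T_{i,w}, P_{w,j}): project the 3-D map points
    P_{w,j} into key frame i using its extrinsics (R_iw, t_iw) and subtract the
    projections from the observed feature locations x_{i,j}."""
    rvec, _ = cv2.Rodrigues(R_iw)
    projected, _ = cv2.projectPoints(
        points_world.reshape(-1, 1, 3).astype(np.float64),
        rvec, t_iw.reshape(3, 1), K, dist)
    errors = points_obs.reshape(-1, 2) - projected.reshape(-1, 2)
    return errors, np.linalg.norm(errors, axis=1)   # residuals and per-point norms
```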
Step S6) it is judged whether the reprojection error exceeds the threshold; if so, bundle adjustment is performed on the current key frame, the key frame pose is corrected, and the local map is updated according to the key frame before entering step S7; otherwise, step S7 is entered directly;
Updating the local map according to the key frame specifically includes the following steps (a sketch follows the list):
C1) adding feature points of the current key frame that do not yet appear in the local map to the local map;
C2) deleting a map point from the local map if, after its creation, it is observed by fewer than three key frames;
C3) removing unqualified points from the new feature points and map points through positive-depth, parallax, or reprojection-error checks.
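The bookkeeping of C1)-C3) can be sketched over a plain dictionary of map points as below; the field names, the age test, and all numerical limits other than the three-observation rule are illustrative assumptions, and the depth, parallax, and reprojection-error values are presumed to be filled in by the tracking front end.

```python
def update_local_map(map_points, new_points, min_observations=3,
                     min_parallax_deg=1.0, reproj_threshold=2.0):
    """Update the local map after the key frame has been optimized (C1-C3).
    Each point is a dict with keys: 'xyz', 'observed_by' (set of key frame ids),
    'age' (key frames since creation), 'depth', 'parallax_deg', 'reproj_err'."""
    # C1) add feature points of the current key frame not yet in the local map
    for pid, p in new_points.items():
        map_points.setdefault(pid, p)

    for pid in list(map_points):
        p = map_points[pid]
        # C2) a point created earlier but seen by fewer than three key frames is removed
        if p["age"] >= 2 and len(p["observed_by"]) < min_observations:
            del map_points[pid]
            continue
        # C3) positive-depth, parallax and reprojection-error checks
        if (p["depth"] <= 0
                or p["parallax_deg"] < min_parallax_deg
                or p["reproj_err"] > reproj_threshold):
            del map_points[pid]
    return map_points
```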
Step S7) it is judged whether positioning should continue; if so, the method returns to step S2, otherwise positioning ends.
The invention uses the optical flow method for fast pose tracking of the mobile robot, and refines the pose solved by the optical flow method with a feature-point method under a minimized-reprojection-error constraint, which improves computation speed while retaining a degree of robustness.
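To make the interplay of the two estimators concrete, the sketch below refines an optical-flow pose estimate by minimizing the reprojection error over the key frame's map-point matches (motion-only refinement). It assumes SciPy's least-squares solver and known intrinsics K; a full bundle adjustment as in step S6 would additionally adjust the map points and would typically rely on a dedicated solver such as g2o or Ceres.

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def refine_pose(points_world, points_obs, R_init, t_init, K):
    """Start from the optical-flow pose (R_init, t_init) and adjust it so that
    the summed reprojection error of the matched map points is minimized."""
    rvec0, _ = cv2.Rodrigues(R_init)
    x0 = np.hstack([rvec0.ravel(), t_init.ravel()])

    def residuals(x):
        proj, _ = cv2.projectPoints(
            points_world.reshape(-1, 1, 3).astype(np.float64),
            x[:3], x[3:], K, None)
        return (points_obs.reshape(-1, 2) - proj.reshape(-1, 2)).ravel()

    sol = least_squares(residuals, x0, method="lm")   # Levenberg-Marquardt
    R_opt, _ = cv2.Rodrigues(sol.x[:3])
    return R_opt, sol.x[3:].reshape(3, 1)
```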
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (6)

1. A vision-based mobile robot positioning method, characterized by comprising the following steps:
S1) acquiring image information of the environment surrounding the mobile robot;
S2) performing optical flow tracking on the input image frames to obtain the motion trajectory of the mobile robot, and screening key frames as the image frames are input;
S3) constructing a local map based on the motion trajectory of the mobile robot obtained in step S2;
S4) acquiring the next key frame of the real-time environment through the local map, and extracting features from the key frame;
S5) projecting the three-dimensional map points of the local map onto the current key frame, and computing the reprojection error;
S6) judging whether the reprojection error exceeds a threshold; if so, performing bundle adjustment on the current key frame, correcting the key frame pose, and updating the local map according to the key frame before entering step S7; otherwise, entering step S7 directly;
S7) judging whether positioning should continue; if so, returning to step S2, otherwise ending positioning.
2. The vision-based mobile robot positioning method of claim 1, wherein the key frame screening in step S2 is performed while image frames are input and uses the following criteria:
A1) at least 30 frames have passed since the last key frame was inserted;
A2) the current frame tracks at least 60 points;
A3) the overlap ratio between the points observed by the current frame and those observed by the last key frame is less than 85%.
3. The vision-based mobile robot positioning method of claim 1, wherein extracting features from the key frame in step S4 specifically comprises the following steps:
B1) dividing the key frame into an image grid;
B2) extracting ORB features from the current key frame;
B3) checking the number of feature points: if fewer than 1000, lowering the extraction threshold and extracting again; if more than 2000, applying quadtree homogenization.
4. The vision-based mobile robot positioning method of claim 1, wherein step B2 specifically comprises the following steps:
B2.1) smoothing the image with a 9 × 9 Gaussian kernel;
B2.2) detecting feature points with the FAST algorithm: a circle of gray values around each candidate feature point is examined, and if at least 12 pixels on the circle differ from the candidate point in gray value by more than a threshold, the candidate point is taken as a feature point, where the gray-value difference count is given by:
N = \sum_{i \in \mathrm{circle}(p)} \left[\, \left| I(i) - I(p) \right| > d \,\right]
where N is the number of pixels on the circle whose gray value differs from that of the candidate point by more than the threshold, i indexes the points on the circle around the current candidate point p, I(i) is the gray value of a point on the circle, I(p) is the gray value of the candidate point at the circle centre, and d is the threshold.
B2.3) drawing a circle with the feature point from step B2.2 as its centre and d as its radius;
B2.4) randomly selecting 132 pairs of points inside the circle obtained in step B2.3; for each pair, taking 1 if the gray value of the first point is greater than that of the second point and 0 otherwise, thereby forming a 132-bit binary descriptor.
5. The vision-based mobile robot positioning method of claim 1, wherein the reprojection error in step S5 is computed as:
e_{i,j} = x_{i,j} - \rho\left(T_{i,w},\, P_{w,j}\right)
where e_{i,j} is the reprojection error of the j-th feature point in the i-th key frame, x_{i,j} is the j-th feature point observed in the i-th frame, \rho is the projection function, T_{i,w} is the camera extrinsic (pose) matrix of the i-th frame, w denotes the world coordinate system, and P_{w,j} is the position of the map point corresponding to the j-th feature point in world coordinates.
6. The vision-based mobile robot positioning method of claim 1, wherein updating the local map according to the key frame in step S6 comprises the following steps:
C1) adding feature points of the current key frame that do not yet appear in the local map to the local map;
C2) deleting a map point from the local map if, after its creation, it is observed by fewer than three key frames;
C3) removing unqualified points from the new feature points and map points through positive-depth, parallax, or reprojection-error checks.
CN202011103272.6A 2020-10-15 2020-10-15 Vision-based mobile robot positioning method Pending CN112308917A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011103272.6A CN112308917A (en) 2020-10-15 2020-10-15 Vision-based mobile robot positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011103272.6A CN112308917A (en) 2020-10-15 2020-10-15 Vision-based mobile robot positioning method

Publications (1)

Publication Number Publication Date
CN112308917A true CN112308917A (en) 2021-02-02

Family

ID=74327437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011103272.6A Pending CN112308917A (en) 2020-10-15 2020-10-15 Vision-based mobile robot positioning method

Country Status (1)

Country Link
CN (1) CN112308917A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109059930A (en) * 2018-08-31 2018-12-21 西南交通大学 A kind of method for positioning mobile robot of view-based access control model odometer
CN110070564A (en) * 2019-05-08 2019-07-30 广州市百果园信息技术有限公司 A kind of characteristic point matching method, device, equipment and storage medium
CN111462207A (en) * 2020-03-30 2020-07-28 重庆邮电大学 RGB-D simultaneous positioning and map creation method integrating direct method and feature method
CN111739063A (en) * 2020-06-23 2020-10-02 郑州大学 Electric power inspection robot positioning method based on multi-sensor fusion

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420590A (en) * 2021-05-13 2021-09-21 北京航空航天大学 Robot positioning method, device, equipment and medium in weak texture environment
CN113420590B (en) * 2021-05-13 2022-12-06 北京航空航天大学 Robot positioning method, device, equipment and medium in weak texture environment
CN115900553A (en) * 2023-01-09 2023-04-04 成都盛锴科技有限公司 Compound positioning method and system for train inspection robot

Similar Documents

Publication Publication Date Title
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
EP2858008B1 (en) Target detecting method and system
CN112132893A (en) Visual SLAM method suitable for indoor dynamic environment
CN114782691A (en) Robot target identification and motion detection method based on deep learning, storage medium and equipment
CN112336342B (en) Hand key point detection method and device and terminal equipment
CN112801047B (en) Defect detection method and device, electronic equipment and readable storage medium
CN108765452A (en) A kind of detection of mobile target in complex background and tracking
CN113377888A (en) Training target detection model and method for detecting target
CN110487286B (en) Robot pose judgment method based on point feature projection and laser point cloud fusion
CN115147809B (en) Obstacle detection method, device, equipment and storage medium
CN111476814B (en) Target tracking method, device, equipment and storage medium
CN113658203A (en) Method and device for extracting three-dimensional outline of building and training neural network
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN112308917A (en) Vision-based mobile robot positioning method
CN113420590A (en) Robot positioning method, device, equipment and medium in weak texture environment
CN116879870A (en) Dynamic obstacle removing method suitable for low-wire-harness 3D laser radar
CN116385493A (en) Multi-moving-object detection and track prediction method in field environment
CN115937449A (en) High-precision map generation method and device, electronic equipment and storage medium
CN114429631B (en) Three-dimensional object detection method, device, equipment and storage medium
CN117132620A (en) Multi-target tracking method, system, storage medium and terminal for automatic driving scene
CN114998411B (en) Self-supervision monocular depth estimation method and device combining space-time enhancement luminosity loss
CN111382834B (en) Confidence degree comparison method and device
CN113702941A Point cloud speed measurement method based on improved ICP (Iterative Closest Point)
CN113763468A (en) Positioning method, device, system and storage medium
CN112344936A (en) Semantic SLAM-based mobile robot automatic navigation and target recognition algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination