CN108051002A - Transport vehicle spatial positioning method and system based on inertial-measurement-aided vision - Google Patents
Transport vehicle spatial positioning method and system based on inertial-measurement-aided vision
- Publication number
- CN108051002A (application CN201711259465.9A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a transport vehicle spatial positioning method and system based on inertial-measurement-aided vision. IMU sensor data of an AGV are acquired; consecutive images of the AGV's surroundings are captured; the IMU sensor data are accumulated over time intervals to obtain IMU sensor data blocks; the images are pre-processed to obtain feature-point matching data; the position and orientation of the camera are estimated from the IMU sensor data blocks and the feature-point matching data; the motion trajectory of the camera is drawn from these positions and orientations; map matching: the world coordinate system of the camera and the world coordinate system of the real scene are converted into one another, so that the position and path obtained by the AGV can be matched to the real-scene map in position and path; the position of the AGV in the real scene and all possible running paths are obtained. The invention achieves accurate spatial positioning of the AGV.
Description
Technical field
The present invention relates to a transport vehicle spatial positioning method and system based on inertial-measurement-aided vision.
Background technology
An AGV (Automated Guided Vehicle) system, also called an AGV trolley, comprises two parts: positioning and navigation. Two approaches dominate positioning: systems based on laser radar (lidar) and systems based on vision. Lidar-based positioning relies on laser ranging: infrared laser is emitted into the environment and the reflection is received; the time difference then gives the distance to an object. Lidar positioning is stable and accurate, but lidar is prohibitively expensive and cannot perceive visual information about the surroundings, such as the colours of the environment or special markers.
Vision-based positioning collects nearby environmental information with an image acquisition device and processes the images with computer-vision techniques to extract environmental information and realize positioning. Among image acquisition devices, the monocular camera is widely used because of its simple structure and principle. However, a monocular camera used as the only visual sensor still has defects:
1. it provides no depth information, so the positioning is inaccurate;
2. the camera position and orientation estimated from feature-point tracking accumulate error, causing the trolley to drift off its path.
Summary of the invention
In view of the above problems, an Inertial Measurement Unit (IMU) is used to assist a monocular camera in spatial positioning, and a transport vehicle spatial positioning method and system based on inertial-measurement-aided vision is proposed, in which IMU-based motion estimation plus accurate matching of visual features gives the system high accuracy. The present invention can realize accurate positioning of an AGV even in relatively complex environments.
In a first aspect, the present invention provides a transport vehicle spatial positioning method based on inertial-measurement-aided vision, comprising:
Step (1): acquiring IMU sensor data of the AGV, the IMU sensor data comprising the angular velocity and acceleration of the AGV;
Step (2): acquiring consecutive images of the AGV's surroundings;
Step (3): accumulating the IMU sensor data over time intervals to obtain IMU sensor data blocks, so that the output frequency of the IMU sensor data blocks is consistent with, and aligned in time with, the output frequency of the acquired consecutive images;
Step (4): pre-processing the images of step (2) to obtain feature-point matching data;
Step (5): estimating the position and orientation of the camera from the IMU sensor data blocks of step (3) and the feature-point matching data of step (4);
Step (6): drawing the motion trajectory of the camera from the positions and orientations obtained in step (5);
Step (7): map matching: converting between the world coordinate system of the camera and the world coordinate system of the real scene, so that the position and path obtained by the AGV can be matched to the real-scene map in position and path; obtaining the position of the AGV in the real scene and all possible running paths.
Step (3) proceeds as follows:
Because the IMU sensor is disturbed by external noise and limited by its own hardware, the measurements contain a bias σ and external noise η. Writing ω and α for the true angular velocity and acceleration, the IMU sensor data actually obtained are

$$\begin{cases} \omega^{r} = \omega + \sigma_{g} + \eta_{g} \\ \alpha^{r} = \alpha + \sigma_{\alpha} + \eta_{\alpha} \end{cases} \tag{1}$$

where σ_g is the angular-velocity bias, σ_α is the acceleration bias, η_g is the external-noise interference on the angular velocity, η_α is the external-noise interference on the acceleration, ω^r is the angular velocity with error, and α^r is the acceleration with error.
Let [t, t+Δt] be the time interval between the k-th and the (k+1)-th image frames. The IMU sensor data are pre-integrated according to formula (2) to obtain the IMU sensor data block between the two frames:

$$\left\{ \int_{t}^{t+\Delta t} \omega^{r}\,dt,\quad \int_{t}^{t+\Delta t} \alpha^{r}\,dt \right\} \tag{2}$$
The image pre-processing of step (4) proceeds as follows:
Step (41): feature-point detection: feature points are detected in each image frame of step (2) with the FAST algorithm;
Step (42): feature-point matching: the feature points of every two consecutive images are matched, giving the relative positions of the same feature point in the two consecutive frames;
Step (43): key-frame extraction: if the parallax of the current frame exceeds a set threshold, or the current frame contains more new feature points than a set threshold compared with the previous frame, the current frame is taken as a key frame;
Step (44): the BRIEF descriptors of the feature points of the key frame are extracted, and the key-frame data are stored in a key-frame database in chronological order; the key-frame data comprise the feature points, the BRIEF descriptors and the IMU sensor data block; using the DBoW2 algorithm, the BRIEF descriptors are stored in a BoW bag-of-words model in dictionary form.
The BRIEF descriptor describes a feature point. The descriptors of the feature points in a key frame are stored in the BoW bag-of-words model in dictionary form so that they can serve as the database for subsequent loop-closure detection. Loop-closure detection compares the current key frame with the previously stored key frames to decide whether they show the same place, and hence whether the trolley has returned to a position it has already visited; the test is whether the feature-point descriptors of the two key frames are sufficiently similar.
The BoW bag-of-words model is used to store the descriptors because the descriptors are high-dimensional and bag-of-words storage is compact; moreover, the bag of words is a tree structure, which makes computing descriptor similarity convenient and benefits loop-closure detection.
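By way of illustration only (not part of the original disclosure), a much simplified bag-of-words comparison between two key frames might look like the sketch below; the toy vocabulary, the nearest-word assignment and the L1 similarity are assumptions, and a real system would use the DBoW2 vocabulary tree instead.

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Assign each binary descriptor to its nearest visual word (Hamming
    distance) and return a normalized word histogram for the key frame.
    descriptors, vocabulary: uint8 arrays of shape (n, 32) / (num_words, 32)."""
    hist = np.zeros(len(vocabulary), dtype=np.float64)
    for d in descriptors:
        dists = [np.unpackbits(np.bitwise_xor(w, d)).sum() for w in vocabulary]
        hist[int(np.argmin(dists))] += 1.0
    total = hist.sum()
    return hist / total if total > 0 else hist

def bow_similarity(hist_a, hist_b):
    """L1-based similarity in [0, 1]; 1 means identical word distributions."""
    return 1.0 - 0.5 * np.abs(hist_a - hist_b).sum()
```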
Step (5) proceeds as follows:
Step (51): initialization stage: from the relation between the mutually matched feature points of two consecutive images, the epipolar-geometry constraint is used to obtain the estimated rotation and translation of the camera for the initial stage; from the estimated rotation and translation, the three-dimensional coordinates of the feature points are then estimated by triangulation.
The epipolar-geometry constraint of step (51):
Let a point P in space project to the point p on the image plane. By the pinhole imaging principle, the point P in space, the point p in the image and the optical centre of the camera lie on one straight line. Two matched feature points in two images therefore yield an epipolar-geometry constraint equation:

$$\begin{cases} s_{1} p_{1} = K P \\ s_{2} p_{2} = K (R P + t) \end{cases}$$

where p₁ and p₂ are the pixels of the spatial point P in the two images and K is the camera intrinsic matrix.
Solving the epipolar-geometry constraint equation yields the estimated rotation R and estimated translation t of the camera. The depth values s₁ and s₂ are then obtained by triangulation, giving the three-dimensional coordinates of the pixels.
Step (52): motion stage: from the correspondence between the three-dimensional coordinates of points and their two-dimensional pixel coordinates, the position and orientation of the camera are estimated with the Perspective-n-Point method PnP; the PnP problem is solved by Bundle Adjustment, giving the position and orientation of the camera.
Step (52) proceeds as follows:
Given n three-dimensional spatial points P and their projections p, the camera rotation R and translation t to be estimated are expressed with the Lie algebra element ξ, and K is the camera intrinsic matrix. Let a spatial point be P_i = [X_i, Y_i, Z_i]^T and the corresponding pixel be ε_i = [μ_i, υ_i]^T. The camera imaging principle gives the relation between the two-dimensional pixel coordinates and the three-dimensional point coordinates:

$$s_{i} \begin{bmatrix} \mu_{i} \\ \upsilon_{i} \\ 1 \end{bmatrix} = K \exp(\xi^{\wedge}) \begin{bmatrix} X_{i} \\ Y_{i} \\ Z_{i} \\ 1 \end{bmatrix} \tag{3}$$

Relation (3) between the pixel coordinates and the spatial-point coordinates holds only up to an error. Summing the errors and constructing a least-squares problem, the best camera pose is the one that minimizes

$$\xi^{*} = \arg\min_{\xi} \frac{1}{2} \sum_{i=1}^{n} \left\| \phi_{i} - \frac{1}{s_{i}} K \exp(\xi^{\wedge}) P_{i} \right\|^{2} \tag{4}$$

The error is the difference between the observed pixel coordinates and the position obtained by projecting the three-dimensional point with the currently estimated pose, where φ_i = [μ_i, υ_i, 1]^T.
In short, feature matching tells us that p₁ and p₂ are projections of the same spatial point P; the initialization stage gives the three-dimensional coordinate of that point; projecting it yields a predicted p₂ that differs from the actual p₂; by adjusting the camera pose until this distance vanishes, the position and orientation of the camera are obtained.
Step (7) proceeds as follows:
Three points A, B and C are fixed in the real-scene map. The camera moves from A to B, the motion trajectory is obtained, and the position and orientation of B relative to A are obtained. Through the transformation matrix between the coordinate systems, the positions of A and B in the real-scene map and the actual distance are obtained.
Let A be the initial point, let the coordinates of B in the world coordinate system with A as origin be [X_D, Y_D, Z_D]^T, and let its coordinates in the real-scene coordinate system be [X_G, Y_G, Z_G]^T. The two coordinate systems differ by a scale M, a translation ΔS = [ΔX, ΔY, ΔZ]^T and a rotation R₀, with the transformation

$$\begin{bmatrix} X_{G_i} \\ Y_{G_i} \\ Z_{G_i} \end{bmatrix} = (1+M) \begin{bmatrix} X_{D_i} \\ Y_{D_i} \\ Z_{D_i} \end{bmatrix} + R_{0} \begin{bmatrix} X_{D_i} \\ Y_{D_i} \\ Z_{D_i} \end{bmatrix} + \begin{bmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{bmatrix} \tag{5}$$

Formula (5) contains three unknowns: the scale M, the translation ΔS and the rotation R₀. They are solved from the coordinates of A, B and C, and formula (5) then gives the position of the camera in the real scene at any time.
Between step (6) and step (7) the method further comprises:
Step (70): closed-loop detection. If the current frame is a key frame, the BRIEF descriptors of its feature points are compared with the descriptors of the key frames already stored in the BoW bag-of-words model; if the descriptor similarity with a previously stored key frame exceeds a set ratio, the two key frames are considered similar, and if a set number of consecutive key frames are found similar, the AGV has reached the current location before.
In a second aspect, the present invention provides a computer-readable storage medium on which a computer program is stored, the program implementing the steps of any of the above methods when executed by a processor.
In a third aspect, the present invention provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any of the above methods when executing the program.
Beneficial effects of the present invention:
The present invention solves the spatial positioning of an AGV in a complex industrial environment. A structurally simple IMU sensor and a monocular camera obtain the AGV's location and state information in real time, and fast data processing together with continuous update and optimization completes the spatial positioning. The invention pursues an efficient and fast approach, in line with energy saving and consumption reduction, and achieves accurate spatial positioning of the AGV.
Description of the drawings
Fig. 1 shows the relations between the modules of the system of the present invention.
Fig. 2 is a flow chart of spatial positioning by the method of the present invention.
Specific embodiment:
The invention is further described below with reference to the accompanying drawings and embodiments:
It is noted that the following detailed description is illustrative and intended to provide further explanation of the application. Unless otherwise indicated, all technical and scientific terms used herein have the meanings commonly understood by a person of ordinary skill in the art to which the application belongs.
It should be noted that the terms used herein are merely for describing specific embodiments and are not intended to limit the exemplary embodiments of the application. As used herein, unless the context clearly indicates otherwise, the singular forms are also intended to include the plural forms; in addition, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of the stated features, steps, operations, devices, components and/or combinations thereof.
As shown in Fig. 1, the transport vehicle spatial positioning system based on inertial-measurement-aided vision of the present invention requires an IMU sensor mounted on the vehicle body, an ordinary monocular camera mounted at the front of the vehicle body, and the necessary equipment on the vehicle body: power supply, motor, host computer and communication apparatus.
The whole system mainly comprises the following modules: IMU data acquisition module, camera data acquisition module, IMU data pre-processing module, image processing module, map matching module, spatial positioning module, control module, interaction module and network communication module.
The IMU data acquisition module collects from the IMU sensor the angular velocity and acceleration produced by the vehicle's motion; the camera data acquisition module obtains consecutive two-dimensional images with the monocular camera; the IMU data pre-processing module accumulates the data collected by the IMU sensor; the image processing module detects feature points in every image frame and obtains key frames from the parallax and the feature points; the map matching module, with the AGV under remote control, perceives the environmental information of the scene, obtains the possible running-path information and, by matching against the map in the navigation system, obtains the AGV's location in the real scene; the spatial positioning module computes the vehicle's motion state, position and orientation from the image information and the IMU data; the control module receives the vehicle's position signal, the user-demand information and the information from the communication module, processes them together, and passes the result to the navigation module for path planning; the human-computer interaction module provides the interface on which the user sets the start and end positions of the AGV according to his own demand; the network communication module exchanges information with the navigation system.
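Purely as an illustration of the module wiring described above (the class, method names and return values are assumptions and not part of the original disclosure), the per-frame data flow could be sketched as follows.

```python
class PositioningSystem:
    """Minimal skeleton of the module layout of Fig. 1 (illustrative only)."""

    def __init__(self, imu_source, camera_source, scene_map):
        self.imu_source = imu_source        # IMU data acquisition module
        self.camera_source = camera_source  # camera data acquisition module
        self.scene_map = scene_map          # map used by the map-matching module
        self.keyframes = []                 # key-frame database

    def preprocess_imu(self, samples):
        # IMU data pre-processing module: accumulate raw samples into one block
        return samples

    def process_image(self, frame):
        # image processing module: detect/track feature points, select key frames
        return {"frame": frame, "features": []}

    def localize(self, imu_block, features):
        # spatial positioning module: fuse the IMU block and features into a pose
        return {"position": (0.0, 0.0, 0.0), "orientation": (0.0, 0.0, 0.0)}

    def match_to_map(self, pose):
        # map-matching module: express the pose in real-scene map coordinates
        return pose

    def step(self):
        imu_block = self.preprocess_imu(self.imu_source.read())
        features = self.process_image(self.camera_source.read())
        pose = self.localize(imu_block, features)
        return self.match_to_map(pose)       # handed to control / path planning
```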
As shown in Fig. 2, the present invention provides a transport vehicle spatial positioning method based on inertial-measurement-aided vision for complex industrial environments, with the following steps:
(1) A vision-based AGV cannot, in a new environment, determine the obstacle information and the feasible-path information. Therefore, before the AGV is put into an industrial environment, the AGV is driven under manual control; according to the scene map, the environmental information of the site is collected and, through the mapping to positions on the map, the AGV's location in the true map is obtained.
(2) The AGV is placed in the industrial environment, the AGV positioning system and the wireless remote controller are switched on, and the AGV, under manual remote control, starts to perceive the surrounding environment.
(3) Once the AGV is switched on, the IMU sensor and the camera start to acquire data. The IMU sensor provides the angular velocity and acceleration of the AGV, and the camera provides consecutive images of the AGV's environment. A typical IMU sensor rate is 70 Hz while the camera rate is 30 Hz, so the two data rates differ and the data must be aligned: the IMU data are accumulated over time intervals so that the IMU data blocks are consistent in time with the image information.
IMU data processing. Because the IMU outputs data faster than the camera outputs images, the IMU data within each interval must be accumulated (integrated) to obtain short-time IMU data blocks, so that the output frequency of the IMU data blocks matches the image output frequency and the two are aligned in time. Let the angular velocity and acceleration obtained from the IMU be ω^r and α^r, and take the measurement bias σ and the external noise η into account; with ω and α the true values, the IMU measurements are

$$\begin{cases} \omega^{r} = \omega + \sigma_{g} + \eta_{g} \\ \alpha^{r} = \alpha + \sigma_{\alpha} + \eta_{\alpha} \end{cases} \tag{1}$$

where σ_g is the angular-velocity bias, σ_α is the acceleration bias, η_g and η_α are the external-noise interference on the angular velocity and the acceleration respectively, and ω^r and α^r are the angular velocity and acceleration with error.
Let [t, t+Δt] be the time interval between the k-th and the (k+1)-th image frames. Pre-integrating the IMU data according to formula (2) yields the IMU data block between the two frames:

$$\left\{ \int_{t}^{t+\Delta t} \omega^{r}\,dt,\quad \int_{t}^{t+\Delta t} \alpha^{r}\,dt \right\} \tag{2}$$
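As a rough illustration only (not part of the original disclosure), grouping roughly 70 Hz IMU samples into one block per camera-frame interval and integrating them as in formula (2) might be sketched as follows; the sample and timestamp format is an assumption.

```python
import numpy as np

def imu_blocks(imu_samples, frame_times):
    """Group IMU samples into one block per image-frame interval [t, t+dt]
    and integrate angular velocity and acceleration over that interval.

    imu_samples: list of (timestamp, omega(3,), accel(3,)) tuples at ~70 Hz
    frame_times: sorted list of image timestamps at ~30 Hz
    """
    blocks = []
    for t0, t1 in zip(frame_times[:-1], frame_times[1:]):
        chunk = [(ts, w, a) for ts, w, a in imu_samples if t0 <= ts < t1]
        d_theta = np.zeros(3)   # integral of omega^r over [t, t+dt], formula (2)
        d_vel = np.zeros(3)     # integral of alpha^r over [t, t+dt]
        for (ts_a, w, a), (ts_b, _, _) in zip(chunk[:-1], chunk[1:]):
            dt = ts_b - ts_a
            d_theta += w * dt
            d_vel += a * dt
        blocks.append((d_theta, d_vel))
    return blocks
```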
(4) Image data processing. The AGV acquires consecutive images with the camera and processes them in the following steps:
(41) Feature-point detection. Feature points are detected with the FAST (Features from Accelerated Segment Test) algorithm. Detecting feature points in every image, however, would consume substantial resources; to improve efficiency, a pyramidal optical-flow algorithm tracks the feature points, i.e. the feature points of an image are obtained by tracking those of the previous image. Some feature points are inevitably lost in this process, so FAST detection is run every three frames, and the feature points of the remaining two frames are obtained from the previous frame with the pyramidal optical-flow algorithm.
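A minimal sketch of this detect-every-third-frame-and-track policy, assuming OpenCV and grayscale frames (the threshold and window parameters are illustrative, not values from the disclosure):

```python
import cv2
import numpy as np

fast = cv2.FastFeatureDetector_create(threshold=20)
lk_params = dict(winSize=(21, 21), maxLevel=3)        # pyramidal Lucas-Kanade

def track_features(frames):
    """Run FAST every 3rd frame and track the points through the other frames
    with pyramidal optical flow; yields (frame_index, points) pairs."""
    prev_gray, prev_pts = None, None
    for i, gray in enumerate(frames):
        if i % 3 == 0 or prev_pts is None or len(prev_pts) == 0:
            kps = fast.detect(gray, None)               # fresh FAST detection
            prev_pts = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
        else:
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                      prev_pts, None, **lk_params)
            prev_pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)
        prev_gray = gray
        yield i, prev_pts.reshape(-1, 2)
```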
(42) Feature-point matching. Feature detection gives every frame enough feature points. Two consecutive frames contain many of the same feature points, but at different positions in the two images, so feature matching is needed to obtain the relative position of the same feature point in the two images.
(43) Key-frame acquisition. The camera produces 30 frames per second; storing every frame would cause data redundancy and a heavy computational load, whereas extracting representative key frames greatly reduces storage and computation. A key frame is selected according to the parallax and the number of newly appearing feature points: if a frame's parallax exceeds a certain threshold, or it has many new feature points compared with the previous frame, it is taken as a key frame.
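A simplified key-frame test (illustrative only; the thresholds and the way parallax is measured are assumptions, not values from the disclosure):

```python
import numpy as np

def is_keyframe(prev_pts, curr_pts, n_new_points,
                parallax_thresh=10.0, new_point_thresh=50):
    """prev_pts / curr_pts: (N, 2) pixel positions of the same tracked features
    in the previous and current frame; n_new_points: features appearing only in
    the current frame."""
    if len(curr_pts) == 0:
        return True
    mean_parallax = np.linalg.norm(curr_pts - prev_pts, axis=1).mean()
    return mean_parallax > parallax_thresh or n_new_points > new_point_thresh
```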
(44) The BRIEF (Binary Robust Independent Elementary Features) descriptors of the key-frame feature points are extracted, and the key-frame data (feature points, BRIEF descriptors, IMU sensor data block) are stored in the key-frame database in chronological order; with the DBoW2 (bags of binary words for fast place recognition in image sequences) algorithm, the BRIEF descriptors are stored in the BoW bag-of-words model in dictionary form.
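For illustration only, a key-frame record of this form could be built as below; OpenCV's ORB descriptor (a rotated BRIEF variant) stands in for the plain BRIEF descriptor, and DBoW2 itself, being a separate C++ library, is only indicated by a comment.

```python
import cv2

orb = cv2.ORB_create()      # ORB's rBRIEF descriptor stands in for plain BRIEF here
keyframe_db = []            # key-frame database, kept in chronological order

def store_keyframe(gray, keypoints, imu_block):
    """Compute binary descriptors for the key frame's feature points and append
    the (feature points, descriptors, IMU data block) record to the database."""
    keypoints, descriptors = orb.compute(gray, keypoints)
    keyframe_db.append({"keypoints": keypoints,
                        "descriptors": descriptors,   # would also feed the DBoW2/BoW dictionary
                        "imu_block": imu_block})
    return descriptors
```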
(5) The IMU sensor data blocks of step (3) and the feature-point image information of step (42) are passed to the AGV spatial positioning module. From these two kinds of data the module estimates the position of the camera and its own state, and obtains the motion trajectory and the spatial position, as follows:
(51) Initialization stage. When the AGV system is started, the monocular camera can obtain only two-dimensional image information; the depth information of the scene is lost, so at the start only two-dimensional pixel coordinates are available. To obtain the motion of the camera, the epipolar-geometry constraint is used to estimate the motion from two sets of 2D points. The principle is as follows:
The epipolar-geometry constraint essentially uses the camera imaging principle. Let a point P in space project to the point p on the image plane. By the pinhole imaging principle, the point P in space, the point p in the image and the optical centre of the camera lie on one straight line. Two matched feature points in two images then satisfy an epipolar-geometry constraint of the form

$$\begin{cases} s_{1} p_{1} = K P \\ s_{2} p_{2} = K (R P + t) \end{cases}$$

where P is a point in space, p₁ and p₂ are its pixels in the two images, K is the camera intrinsic matrix, R is the rotation of the camera, t is the translation of the camera, and s₁, s₂ are the distances from the camera to the spatial point P at the two positions.
Solving the above constraint equations yields R and t. Hence, by solving the epipolar constraints of several pairs of matched feature points, the camera motion (rotation and translation) can be estimated; combined with the IMU data, the position and orientation of the camera are obtained.
In the initialization stage, the information of at least eight 2D-2D feature-point pairs is used to obtain the camera motion from the epipolar geometry, and the three-dimensional coordinates of the pixels are then obtained by the triangulation principle.
After the rotation and translation of the camera have been estimated, the next step is to estimate the spatial positions of the feature points together with the camera. The present invention obtains the depth of the image, i.e. s₁ and s₂, by triangulation, and hence the three-dimensional coordinates of the pixels.
The AGV must undergo a certain translation in the initialization stage; if it remains stationary, the epipolar-geometry constraint equations are not satisfied and initialization fails. The AGV therefore needs to move in order to complete initialization quickly and proceed smoothly to the next stage.
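An illustrative OpenCV sketch of this initialization stage (not the original implementation), assuming matched pixel coordinates from the two views and a known intrinsic matrix K:

```python
import cv2
import numpy as np

def initialize(pts1, pts2, K):
    """pts1, pts2: (N, 2) float arrays of matched pixels in two views (N >= 8).
    Returns the estimated rotation R, translation t (up to scale) and the
    triangulated 3-D points."""
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera: [I | 0]
    P2 = K @ np.hstack([R, t])                           # second camera: [R | t]
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    pts3d = (pts4d[:3] / pts4d[3]).T                     # dehomogenize: recovers depth s
    return R, t, pts3d
```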
(52) Motion stage. In the motion stage, the camera motion is estimated from 3D-2D feature-point correspondences with the Perspective-n-Point method (PnP). The Perspective-n-Point method is also known as feature-point-based positioning: depth constraint equations are constructed from quantities such as distances and the imaging of corner points, and the camera pose is then estimated through a three-dimensional rigid-body transformation. When the three-dimensional positions of the feature points in one of the two images are known, a minimum of only three point pairs is needed to estimate the camera motion. The 3D-2D method does not require the epipolar constraint and obtains a good estimate from few matched points.
The PnP problem is solved with BA (Bundle Adjustment). The PnP problem is first built into a nonlinear least-squares problem in which the camera pose and the positions of the spatial points are the optimization variables; the principle is to obtain the pose and the spatial-point positions by minimizing the re-projection error.
For n three-dimensional spatial points P and their projections p, the camera pose R, t to be estimated is expressed with the Lie algebra element ξ, and K is the camera intrinsic matrix. Let a spatial point be P_i = [X_i, Y_i, Z_i]^T and the corresponding pixel be ε_i = [μ_i, υ_i]^T. The camera imaging principle gives the relation between pixel position and spatial-point position:

$$s_{i} \begin{bmatrix} \mu_{i} \\ \upsilon_{i} \\ 1 \end{bmatrix} = K \exp(\xi^{\wedge}) \begin{bmatrix} X_{i} \\ Y_{i} \\ Z_{i} \\ 1 \end{bmatrix} \tag{3}$$

Homogeneous coordinates are used here to keep the matrix dimensions consistent. Because the camera pose is unknown and there is noise, relation (3) holds only up to an error. Summing the errors and constructing a least-squares problem, the best camera pose is the one that minimizes

$$\xi^{*} = \arg\min_{\xi} \frac{1}{2} \sum_{i=1}^{n} \left\| \phi_{i} - \frac{1}{s_{i}} K \exp(\xi^{\wedge}) P_{i} \right\|^{2} \tag{4}$$

The error is the difference between the observed two-dimensional pixel coordinates and the position obtained by projecting the three-dimensional point with the currently estimated pose, where φ_i = [μ_i, υ_i, 1]^T.
In short, feature matching tells us that p₁ and p₂ are projections of the same spatial point P; the initialization stage has provided the three-dimensional coordinate of the point; projecting it yields a predicted p₂ that differs from the actual p₂; the camera pose is adjusted until this distance becomes small or vanishes, giving the position and orientation of the camera.
Through the above steps the position and attitude of the camera are estimated, but the estimate differs from the true value, so the spatial positioning module optimizes the pose (position and attitude) with a nonlinear least-squares method: the IMU measurement error, the visual error and the pose estimation error are combined into an objective function, which is optimized so that the total error is minimal.
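A minimal 3D-2D pose-estimation sketch with OpenCV's PnP solver, followed by the re-projection error of formula (4), is given below for illustration only; the original method solves the problem by Bundle Adjustment and additionally fuses the IMU error terms, which is not shown here.

```python
import cv2
import numpy as np

def estimate_pose_pnp(pts3d, pts2d, K):
    """pts3d: (N, 3) spatial points P_i; pts2d: (N, 2) observed pixels.
    Returns the rotation matrix R, translation t and mean re-projection error."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d.astype(np.float64),
                                                 pts2d.astype(np.float64),
                                                 K, None)
    R, _ = cv2.Rodrigues(rvec)                        # rotation vector -> matrix
    proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, None)
    err = np.linalg.norm(proj.reshape(-1, 2) - pts2d, axis=1).mean()
    return R, tvec, err                               # err is the quantity minimized in (4)
```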
(6) From the optimized camera positions, the motion trajectory of the camera is drawn in real time on the user interface.
(7) When the AGV is moved by remote control it may retrace stretches of path it has already travelled, so closed-loop detection is necessary. If the current frame is a key frame, the BRIEF descriptors of its feature points are compared with the descriptors in the previously built DBoW2 database; if enough matching feature points are found, the two key frames are considered similar, and if four consecutive key-frame pairs are found similar, the AGV has been here before.
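A much simplified sketch of this test (illustrative only; the Hamming-distance matching, the match-count threshold and the way consecutive key frames are paired are assumptions):

```python
import cv2

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)  # Hamming distance suits binary descriptors

def is_loop_closure(curr_desc_seq, past_desc_seq, min_matches=30, needed=4):
    """curr_desc_seq / past_desc_seq: lists of descriptor arrays of consecutive
    key frames. Declares a loop closure when `needed` consecutive key-frame
    pairs each share at least `min_matches` descriptor matches."""
    consecutive = 0
    for d_curr, d_past in zip(curr_desc_seq, past_desc_seq):
        matches = bf.match(d_curr, d_past)
        consecutive = consecutive + 1 if len(matches) >= min_matches else 0
        if consecutive >= needed:
            return True
    return False
```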
(8) Map matching. The camera obtains image information, extracts the image feature points and, combined with the IMU data, yields the camera's location and the surrounding environment. When positioning, however, the AGV takes its initial pose as the world frame and only localizes itself within that frame; its position in the real scene cannot be determined. The world coordinate system of the camera must therefore be converted into the world coordinate system of the real scene, so that the position and path obtained by the AGV can be matched to the real-scene map in position and path. After this step the AGV system obtains its position in the real scene and all possible running paths.
Three points A, B and C are fixed in the scene map. The camera moves from A to B, the motion trajectory is obtained, and the position and orientation of B relative to A are obtained. Through the transformation matrix between the coordinate systems, the positions of A and B in the real scene and the actual distance are obtained.
Let A be the initial point, let the coordinates of B in the world coordinate system with A as origin be [X_D, Y_D, Z_D]^T, and let its coordinates in the real-scene coordinate system be [X_G, Y_G, Z_G]^T. The two coordinate systems differ by a scale M, a translation ΔS = [ΔX, ΔY, ΔZ]^T and a rotation R₀, with the transformation

$$\begin{bmatrix} X_{G_i} \\ Y_{G_i} \\ Z_{G_i} \end{bmatrix} = (1+M) \begin{bmatrix} X_{D_i} \\ Y_{D_i} \\ Z_{D_i} \end{bmatrix} + R_{0} \begin{bmatrix} X_{D_i} \\ Y_{D_i} \\ Z_{D_i} \end{bmatrix} + \begin{bmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{bmatrix} \tag{5}$$

Formula (5) contains three unknowns: the scale M, the translation ΔS and the rotation R₀. They can be solved from the coordinates of A, B and C, and formula (5) then gives the position of the camera in the real scene at any time.
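As an illustration only (not the original implementation), the parameters of formula (5) could be recovered from three or more corresponding points by linear least squares, assuming R₀ is the skew-symmetric matrix of three small rotation angles, as in the Bursa-Wolf form of the transformation:

```python
import numpy as np

def solve_map_transform(pts_cam, pts_map):
    """Estimate scale M, small-angle rotation R0 and translation dS in
    G = (1+M)*D + R0*D + dS  (formula (5)) from >= 3 point pairs (e.g. A, B, C).

    pts_cam: (N, 3) coordinates D_i in the camera world frame
    pts_map: (N, 3) coordinates G_i in the real-scene frame
    """
    rows, rhs = [], []
    for (Xd, Yd, Zd), (Xg, Yg, Zg) in zip(pts_cam, pts_map):
        # unknown vector: [M, ex, ey, ez, dX, dY, dZ]
        rows.append([Xd, 0.0,  Zd, -Yd, 1.0, 0.0, 0.0])
        rows.append([Yd, -Zd, 0.0,  Xd, 0.0, 1.0, 0.0])
        rows.append([Zd,  Yd, -Xd, 0.0, 0.0, 0.0, 1.0])
        rhs += [Xg - Xd, Yg - Yd, Zg - Zd]
    params, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    M, ex, ey, ez, dX, dY, dZ = params
    R0 = np.array([[0.0, -ez,  ey],
                   [ ez, 0.0, -ex],
                   [-ey,  ex, 0.0]])      # skew-symmetric small-rotation matrix
    return M, R0, np.array([dX, dY, dZ])
```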
Application example:
(101) Spatial positioning. The AGV is put into the actual production process and realizes the positioning function in the true map. Because the AGV, running in a real scene, aims at high stability and safety, the distance it travels per second is not long, and the 30 frames per second produced by the camera are overly redundant, so key frames are still used here. The AGV acquires images with the camera, processes them to obtain the feature-point information, and determines whether the frame is a key frame. If it is, the descriptors of this key frame are compared with the descriptors of the previously stored key frames; if the similarity reaches a certain range the two key frames are considered consistent, and if four consecutive key frames are consistent, the camera's position in the real scene is known and positioning is achieved. The location information is presented on the user interface in real time.
(102) Human-computer interaction and network communication. According to the demand and the AGV's actual running state, the user sets the start and end positions of the AGV. The positioning system and the navigation system exchange information over the network, so the AGV can adjust its own state from the actual position information and the target position information.
(103) Control module. The user demand, the spatial position of the trolley and the navigation information are fused into effective control signals.
(104) Path planning module. Combining the track map information with the position and orientation of the AGV, the path of the AGV is planned; the optimal path is obtained and passed to the drive module, realizing the navigation function.
(105) Drive module. It receives the planned path information, and the driving device drives the trolley to perform the next movement, achieving accurate navigation.
The above are merely preferred embodiments of the application and are not intended to limit it; for those skilled in the art, the application may have various modifications and variations. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the application shall be included within the protection scope of the application.
Claims (10)
1. A transport vehicle spatial positioning method based on inertial-measurement-aided vision, characterized by comprising:
Step (1): acquiring IMU sensor data of an AGV, the IMU sensor data comprising the angular velocity and acceleration of the AGV;
Step (2): acquiring consecutive images of the AGV's surroundings;
Step (3): accumulating the IMU sensor data over time intervals to obtain IMU sensor data blocks, so that the output frequency of the IMU sensor data blocks is consistent with, and aligned in time with, the output frequency of the acquired consecutive images;
Step (4): pre-processing the images of step (2) to obtain feature-point matching data;
Step (5): estimating the position and orientation of the camera from the IMU sensor data blocks of step (3) and the feature-point matching data of step (4);
Step (6): drawing the motion trajectory of the camera from the positions and orientations obtained in step (5);
Step (7): map matching: converting between the world coordinate system of the camera and the world coordinate system of the real scene, so that the position and path obtained by the AGV can be matched to the real-scene map in position and path; obtaining the position of the AGV in the real scene and all possible running paths.
2. The transport vehicle spatial positioning method based on inertial-measurement-aided vision according to claim 1, characterized in that step (3) is:
assuming the angular velocity obtained from the IMU sensor is ω^r and the acceleration is α^r; because the sensor is disturbed by external noise and limited by its own hardware, there is a measurement bias σ and external noise interference η, and, with ω and α the true values, the IMU sensor data actually obtained are:
$$\begin{cases} \omega^{r} = \omega + \sigma_{g} + \eta_{g} \\ \alpha^{r} = \alpha + \sigma_{\alpha} + \eta_{\alpha} \end{cases} \tag{1}$$
where σ_g is the angular-velocity bias, σ_α is the acceleration bias, η_g is the external-noise interference on the angular velocity, η_α is the external-noise interference on the acceleration, ω^r represents the angular velocity with error, and α^r represents the acceleration with error;
assuming [t, t+Δt] represents the time interval between the k-th and the (k+1)-th image frames, pre-integrating the IMU sensor data according to formula (2) to obtain the IMU sensor data block between the two frames:
$$\left\{ \int_{t}^{t+\Delta t} \omega^{r}\,dt,\quad \int_{t}^{t+\Delta t} \alpha^{r}\,dt \right\} \tag{2}$$
3. The transport vehicle spatial positioning method based on inertial-measurement-aided vision according to claim 1, characterized in that the image pre-processing of step (4) is as follows:
Step (41): feature-point detection: detecting feature points in each image frame of step (2) with the FAST algorithm;
Step (42): feature-point matching: matching the feature points of every two consecutive images to obtain the relative positions of the same feature point in the two consecutive frames;
Step (43): key-frame extraction: if the parallax of the current frame exceeds a set threshold, or the current frame contains more new feature points than a set threshold compared with the previous frame, taking the current frame as a key frame;
Step (44): extracting the BRIEF descriptors of the feature points of the key frame, and storing the key-frame data in a key-frame database in chronological order, the key-frame data comprising the feature points, the BRIEF descriptors and the IMU sensor data block; using the DBoW2 algorithm, storing the BRIEF descriptors in a BoW bag-of-words model in dictionary form.
4. The transport vehicle spatial positioning method based on inertial-measurement-aided vision according to claim 3, characterized in that step (5) is:
Step (51): initialization stage: from the relation between the mutually matched feature points of two consecutive images, using the epipolar-geometry constraint to obtain the estimated rotation and translation of the camera for the initial stage; then, from the estimated rotation and translation, estimating the three-dimensional coordinates of the feature points by triangulation;
Step (52): motion stage: from the correspondence between the three-dimensional coordinates of points and their two-dimensional pixel coordinates, estimating the position and orientation of the camera with the Perspective-n-Point method PnP; solving the PnP problem by Bundle Adjustment to obtain the position and orientation of the camera.
5. The transport vehicle spatial positioning method based on inertial-measurement-aided vision according to claim 4, characterized in that the epipolar-geometry constraint of step (51) is:
if a point P in space projects to a point p on the image plane, then by the pinhole imaging principle the point P in space, the point p in the image and the optical centre of the camera lie on one straight line; two matched feature points in two images thus yield an epipolar-geometry constraint equation:
$$\begin{cases} s_{1} p_{1} = K P \\ s_{2} p_{2} = K (R P + t) \end{cases}$$
wherein the pixels of the spatial point P in the two images are p₁ and p₂ respectively;
solving the epipolar-geometry constraint equation yields the estimated rotation R and estimated translation t of the camera;
the depth values s₁ and s₂ of the images are obtained by triangulation, giving the three-dimensional coordinates of the pixels.
6. The transport vehicle spatial positioning method based on inertial-measurement-aided vision according to claim 5, characterized in that step (52) is:
for n three-dimensional spatial points P and their projections p, expressing the camera rotation R and translation t to be estimated with the Lie algebra element ξ, K being the camera intrinsic matrix; assuming a spatial point is P_i = [X_i, Y_i, Z_i]^T and the corresponding pixel is ε_i = [μ_i, υ_i]^T, obtaining the relation between the two-dimensional pixel coordinates and the three-dimensional point coordinates from the camera imaging principle:
$$s_{i} \begin{bmatrix} \mu_{i} \\ \upsilon_{i} \\ 1 \end{bmatrix} = K \exp(\xi^{\wedge}) \begin{bmatrix} X_{i} \\ Y_{i} \\ Z_{i} \\ 1 \end{bmatrix} \tag{3}$$
the relation between the two-dimensional pixel coordinates and the three-dimensional point coordinates holds only up to an error; summing the errors, constructing a least-squares problem, and finding the best camera pose that minimizes the error:
$$\xi^{*} = \arg\min_{\xi} \frac{1}{2} \sum_{i=1}^{n} \left\| \phi_{i} - \frac{1}{s_{i}} K \exp(\xi^{\wedge}) P_{i} \right\|^{2} \tag{4}$$
the error being the difference between the observed two-dimensional pixel coordinates and the position obtained by projecting the three-dimensional point coordinates with the currently estimated pose, where φ_i = [μ_i, υ_i, 1]^T;
in short, feature matching gives that p₁ and p₂ are projections of the same spatial point P; the initialization stage gives the three-dimensional coordinate of the point; projection yields a predicted p₂ that differs from the actual p₂; the camera pose is adjusted until this distance vanishes, thereby obtaining the position and orientation of the camera.
7. The transport vehicle spatial positioning method based on inertial-measurement-aided vision according to claim 1, characterized in that step (7) is:
fixing three points A, B and C in the real-scene map; the camera moves from A to B, obtaining the motion trajectory and the position and orientation of B relative to A; obtaining the positions of A and B in the real-scene map and the actual distance through the transformation matrix between the coordinate systems;
assuming A is the initial point, the coordinates of B in the world coordinate system with A as origin are [X_D, Y_D, Z_D]^T and its coordinates in the real-scene coordinate system are [X_G, Y_G, Z_G]^T; the two coordinate systems differ by a scale M, a translation ΔS = [ΔX, ΔY, ΔZ]^T and a rotation R₀, with the following transformation:
$$\begin{bmatrix} X_{G_i} \\ Y_{G_i} \\ Z_{G_i} \end{bmatrix} = (1+M) \begin{bmatrix} X_{D_i} \\ Y_{D_i} \\ Z_{D_i} \end{bmatrix} + R_{0} \begin{bmatrix} X_{D_i} \\ Y_{D_i} \\ Z_{D_i} \end{bmatrix} + \begin{bmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{bmatrix} \tag{5}$$
formula (5) contains three unknowns: the scale M, the translation ΔS and the rotation R₀; the three unknowns are solved from the coordinates of A, B and C, and the position coordinates of the camera in the real scene are then obtained at any time by formula (5).
8. The transport vehicle spatial positioning method based on inertial-measurement-aided vision according to claim 3, characterized in that between step (6) and step (7) the method further comprises:
Step (70): closed-loop detection; if the current frame is a key frame, comparing the BRIEF descriptors of the feature points in the key frame with the descriptors of the key frames previously stored in the BoW bag-of-words model; if the descriptor similarity with a previously stored key frame exceeds a set ratio, considering the two key frames similar; if a set number of consecutive key frames are found similar, the AGV has reached the current location before.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the method of any one of claims 1-8.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711259465.9A CN108051002B (en) | 2017-12-04 | 2017-12-04 | Transport vehicle space positioning method and system based on inertial measurement auxiliary vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711259465.9A CN108051002B (en) | 2017-12-04 | 2017-12-04 | Transport vehicle space positioning method and system based on inertial measurement auxiliary vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108051002A true CN108051002A (en) | 2018-05-18 |
CN108051002B CN108051002B (en) | 2021-03-16 |
Family
ID=62122007
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711259465.9A Expired - Fee Related CN108051002B (en) | 2017-12-04 | 2017-12-04 | Transport vehicle space positioning method and system based on inertial measurement auxiliary vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108051002B (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109074407A (en) * | 2018-07-23 | 2018-12-21 | 深圳前海达闼云端智能科技有限公司 | Multi-source data mapping method, related device and computer-readable storage medium |
CN109074638A (en) * | 2018-07-23 | 2018-12-21 | 深圳前海达闼云端智能科技有限公司 | Fusion graph building method, related device and computer readable storage medium |
CN109063703A (en) * | 2018-06-29 | 2018-12-21 | 南京睿悦信息技术有限公司 | Augmented reality location algorithm based on mark identification and Inertial Measurement Unit fusion |
CN109147058A (en) * | 2018-08-31 | 2019-01-04 | 腾讯科技(深圳)有限公司 | Initial method and device and storage medium for the fusion of vision inertial navigation information |
CN109974721A (en) * | 2019-01-08 | 2019-07-05 | 武汉中海庭数据技术有限公司 | A kind of vision winding detection method and device based on high-precision map |
CN110084832A (en) * | 2019-04-25 | 2019-08-02 | 亮风台(上海)信息科技有限公司 | Correcting method, device, system, equipment and the storage medium of camera pose |
CN110176034A (en) * | 2019-05-27 | 2019-08-27 | 盎锐(上海)信息科技有限公司 | Localization method and end of scan for VSLAM |
CN110243380A (en) * | 2019-06-26 | 2019-09-17 | 华中科技大学 | A kind of map-matching method based on multi-sensor data and angle character identification |
CN110645975A (en) * | 2019-10-16 | 2020-01-03 | 北京华捷艾米科技有限公司 | Monocular vision positioning IMU (inertial measurement unit) auxiliary tracking method and device |
CN110880189A (en) * | 2018-09-06 | 2020-03-13 | 舜宇光学(浙江)研究院有限公司 | Combined calibration method and combined calibration device thereof and electronic equipment |
CN111044039A (en) * | 2019-12-25 | 2020-04-21 | 中航华东光电有限公司 | Monocular target area self-adaptive high-precision distance measuring device and method based on IMU |
CN111160362A (en) * | 2019-11-27 | 2020-05-15 | 东南大学 | FAST feature homogenization extraction and IMU-based inter-frame feature mismatching removal method |
CN111351436A (en) * | 2020-03-06 | 2020-06-30 | 大连理工大学 | Method for verifying precision of structural plane displacement vision measurement system |
CN111354042A (en) * | 2018-12-24 | 2020-06-30 | 深圳市优必选科技有限公司 | Method and device for extracting features of robot visual image, robot and medium |
CN111665512A (en) * | 2019-02-21 | 2020-09-15 | 香港科技大学 | Range finding and mapping based on fusion of 3D lidar and inertial measurement unit |
CN111998870A (en) * | 2019-05-26 | 2020-11-27 | 北京初速度科技有限公司 | Calibration method and device of camera inertial navigation system |
CN112050806A (en) * | 2019-06-06 | 2020-12-08 | 北京初速度科技有限公司 | Positioning method and device for moving vehicle |
CN112284399A (en) * | 2019-07-26 | 2021-01-29 | 北京初速度科技有限公司 | Vehicle positioning method based on vision and IMU and vehicle-mounted terminal |
CN112419403A (en) * | 2020-11-30 | 2021-02-26 | 海南大学 | Indoor unmanned aerial vehicle positioning method based on two-dimensional code array |
CN112699765A (en) * | 2020-12-25 | 2021-04-23 | 北京百度网讯科技有限公司 | Method and device for evaluating visual positioning algorithm, electronic equipment and storage medium |
CN112747754A (en) * | 2019-10-30 | 2021-05-04 | 北京初速度科技有限公司 | Fusion method, device and system of multi-sensor data |
US11138465B2 (en) * | 2019-12-10 | 2021-10-05 | Toyota Research Institute, Inc. | Systems and methods for transforming coordinates between distorted and undistorted coordinate systems |
CN113739803A (en) * | 2021-08-30 | 2021-12-03 | 中国电子科技集团公司第五十四研究所 | Indoor and underground space positioning method based on infrared datum point |
CN115406447A (en) * | 2022-10-31 | 2022-11-29 | 南京理工大学 | Autonomous positioning method of quad-rotor unmanned aerial vehicle based on visual inertia in rejection environment |
CN116576866A (en) * | 2023-07-13 | 2023-08-11 | 荣耀终端有限公司 | Navigation method and device |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2434256A2 (en) * | 2010-09-24 | 2012-03-28 | Honeywell International Inc. | Camera and inertial measurement unit integration with navigation data feedback for feature tracking |
CN103817699A (en) * | 2013-09-25 | 2014-05-28 | 浙江树人大学 | Quick hand-eye coordination method for industrial robot |
CN104280022A (en) * | 2013-07-13 | 2015-01-14 | 哈尔滨点石仿真科技有限公司 | Digital helmet display device tracking system of visual-aided inertial measuring unit |
CN105931222A (en) * | 2016-04-13 | 2016-09-07 | 成都信息工程大学 | High-precision camera calibration method via low-precision 2D planar target |
CN106127739A (en) * | 2016-06-16 | 2016-11-16 | 华东交通大学 | A kind of RGB D SLAM method of combination monocular vision |
CN106485744A (en) * | 2016-10-10 | 2017-03-08 | 成都奥德蒙科技有限公司 | A kind of synchronous superposition method |
CN106556395A (en) * | 2016-11-17 | 2017-04-05 | 北京联合大学 | A kind of air navigation aid of the single camera vision system based on quaternary number |
CN107167826A (en) * | 2017-03-31 | 2017-09-15 | 武汉光庭科技有限公司 | The longitudinal direction of car alignment system and method for Image Feature Detection based on variable grid in a kind of automatic Pilot |
CN107255476A (en) * | 2017-07-06 | 2017-10-17 | 青岛海通胜行智能科技有限公司 | A kind of indoor orientation method and device based on inertial data and visual signature |
-
2017
- 2017-12-04 CN CN201711259465.9A patent/CN108051002B/en not_active Expired - Fee Related
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2434256A2 (en) * | 2010-09-24 | 2012-03-28 | Honeywell International Inc. | Camera and inertial measurement unit integration with navigation data feedback for feature tracking |
CN104280022A (en) * | 2013-07-13 | 2015-01-14 | 哈尔滨点石仿真科技有限公司 | Digital helmet display device tracking system of visual-aided inertial measuring unit |
CN103817699A (en) * | 2013-09-25 | 2014-05-28 | 浙江树人大学 | Quick hand-eye coordination method for industrial robot |
CN105931222A (en) * | 2016-04-13 | 2016-09-07 | 成都信息工程大学 | High-precision camera calibration method via low-precision 2D planar target |
CN106127739A (en) * | 2016-06-16 | 2016-11-16 | 华东交通大学 | A kind of RGB D SLAM method of combination monocular vision |
CN106485744A (en) * | 2016-10-10 | 2017-03-08 | 成都奥德蒙科技有限公司 | A kind of synchronous superposition method |
CN106556395A (en) * | 2016-11-17 | 2017-04-05 | 北京联合大学 | A kind of air navigation aid of the single camera vision system based on quaternary number |
CN107167826A (en) * | 2017-03-31 | 2017-09-15 | 武汉光庭科技有限公司 | The longitudinal direction of car alignment system and method for Image Feature Detection based on variable grid in a kind of automatic Pilot |
CN107255476A (en) * | 2017-07-06 | 2017-10-17 | 青岛海通胜行智能科技有限公司 | A kind of indoor orientation method and device based on inertial data and visual signature |
Non-Patent Citations (1)
Title |
---|
郑伟: "基于视觉的微小型四旋翼飞行机器人位姿估计与导航研究", 《中国博士学位论文全文数据库 信息科技辑》 * |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109063703A (en) * | 2018-06-29 | 2018-12-21 | 南京睿悦信息技术有限公司 | Augmented reality location algorithm based on mark identification and Inertial Measurement Unit fusion |
WO2020019116A1 (en) * | 2018-07-23 | 2020-01-30 | 深圳前海达闼云端智能科技有限公司 | Multi-source data mapping method, related apparatus, and computer-readable storage medium |
CN109074638A (en) * | 2018-07-23 | 2018-12-21 | 深圳前海达闼云端智能科技有限公司 | Fusion graph building method, related device and computer readable storage medium |
CN109074407A (en) * | 2018-07-23 | 2018-12-21 | 深圳前海达闼云端智能科技有限公司 | Multi-source data mapping method, related device and computer-readable storage medium |
CN109074638B (en) * | 2018-07-23 | 2020-04-24 | 深圳前海达闼云端智能科技有限公司 | Fusion graph building method, related device and computer readable storage medium |
CN109147058A (en) * | 2018-08-31 | 2019-01-04 | 腾讯科技(深圳)有限公司 | Initialization method and device, and storage medium, for visual-inertial navigation information fusion |
CN109147058B (en) * | 2018-08-31 | 2022-09-20 | 腾讯科技(深圳)有限公司 | Initialization method and device for visual and inertial navigation information fusion and storage medium |
CN110880189B (en) * | 2018-09-06 | 2022-09-09 | 舜宇光学(浙江)研究院有限公司 | Combined calibration method and combined calibration device thereof and electronic equipment |
CN110880189A (en) * | 2018-09-06 | 2020-03-13 | 舜宇光学(浙江)研究院有限公司 | Combined calibration method and combined calibration device thereof and electronic equipment |
CN111354042A (en) * | 2018-12-24 | 2020-06-30 | 深圳市优必选科技有限公司 | Method and device for extracting features of robot visual image, robot and medium |
CN111354042B (en) * | 2018-12-24 | 2023-12-01 | 深圳市优必选科技有限公司 | Feature extraction method and device of robot visual image, robot and medium |
CN109974721A (en) * | 2019-01-08 | 2019-07-05 | 武汉中海庭数据技术有限公司 | Visual loop closure detection method and device based on a high-precision map |
CN111665512B (en) * | 2019-02-21 | 2024-04-30 | 香港科技大学 | Ranging and mapping based on fusion of 3D lidar and inertial measurement unit |
CN111665512A (en) * | 2019-02-21 | 2020-09-15 | 香港科技大学 | Range finding and mapping based on fusion of 3D lidar and inertial measurement unit |
CN110084832A (en) * | 2019-04-25 | 2019-08-02 | 亮风台(上海)信息科技有限公司 | Correcting method, device, system, equipment and the storage medium of camera pose |
CN111998870A (en) * | 2019-05-26 | 2020-11-27 | 北京初速度科技有限公司 | Calibration method and device of camera inertial navigation system |
CN110176034A (en) * | 2019-05-27 | 2019-08-27 | 盎锐(上海)信息科技有限公司 | Positioning method and scanning terminal for VSLAM |
CN112050806A (en) * | 2019-06-06 | 2020-12-08 | 北京初速度科技有限公司 | Positioning method and device for moving vehicle |
CN110243380B (en) * | 2019-06-26 | 2020-11-24 | 华中科技大学 | Map matching method based on multi-sensor data and angle feature recognition |
CN110243380A (en) * | 2019-06-26 | 2019-09-17 | 华中科技大学 | Map matching method based on multi-sensor data and angle feature recognition |
CN112284399B (en) * | 2019-07-26 | 2022-12-13 | 北京魔门塔科技有限公司 | Vehicle positioning method based on vision and IMU and vehicle-mounted terminal |
CN112284399A (en) * | 2019-07-26 | 2021-01-29 | 北京初速度科技有限公司 | Vehicle positioning method based on vision and IMU and vehicle-mounted terminal |
CN110645975A (en) * | 2019-10-16 | 2020-01-03 | 北京华捷艾米科技有限公司 | Monocular vision positioning IMU (inertial measurement unit) auxiliary tracking method and device |
CN112747754A (en) * | 2019-10-30 | 2021-05-04 | 北京初速度科技有限公司 | Fusion method, device and system of multi-sensor data |
CN111160362A (en) * | 2019-11-27 | 2020-05-15 | 东南大学 | FAST feature homogenization extraction and IMU-based inter-frame feature mismatching removal method |
US11138465B2 (en) * | 2019-12-10 | 2021-10-05 | Toyota Research Institute, Inc. | Systems and methods for transforming coordinates between distorted and undistorted coordinate systems |
CN111044039B (en) * | 2019-12-25 | 2024-03-19 | 中航华东光电有限公司 | Monocular target area self-adaptive high-precision distance measurement device and method based on IMU |
CN111044039A (en) * | 2019-12-25 | 2020-04-21 | 中航华东光电有限公司 | Monocular target area self-adaptive high-precision distance measuring device and method based on IMU |
CN111351436B (en) * | 2020-03-06 | 2021-06-18 | 大连理工大学 | Method for verifying precision of structural plane displacement vision measurement system |
CN111351436A (en) * | 2020-03-06 | 2020-06-30 | 大连理工大学 | Method for verifying precision of structural plane displacement vision measurement system |
CN112419403A (en) * | 2020-11-30 | 2021-02-26 | 海南大学 | Indoor unmanned aerial vehicle positioning method based on two-dimensional code array |
CN112699765A (en) * | 2020-12-25 | 2021-04-23 | 北京百度网讯科技有限公司 | Method and device for evaluating visual positioning algorithm, electronic equipment and storage medium |
CN113739803A (en) * | 2021-08-30 | 2021-12-03 | 中国电子科技集团公司第五十四研究所 | Indoor and underground space positioning method based on infrared datum point |
CN115406447A (en) * | 2022-10-31 | 2022-11-29 | 南京理工大学 | Autonomous positioning method for a quadrotor UAV based on visual-inertial sensing in denied environments |
CN116576866B (en) * | 2023-07-13 | 2023-10-27 | 荣耀终端有限公司 | Navigation method and device |
CN116576866A (en) * | 2023-07-13 | 2023-08-11 | 荣耀终端有限公司 | Navigation method and device |
Also Published As
Publication number | Publication date |
---|---|
CN108051002B (en) | 2021-03-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108051002A (en) | Transport vehicle space-location method and system based on inertia measurement auxiliary vision | |
CN109307508B (en) | Panoramic inertial navigation SLAM method based on multiple key frames | |
Yin et al. | Dynam-SLAM: An accurate, robust stereo visual-inertial SLAM method in dynamic environments | |
CN110312912B (en) | Automatic vehicle parking system and method | |
Panahandeh et al. | Vision-aided inertial navigation based on ground plane feature detection | |
Sweeney et al. | Solving for relative pose with a partially known rotation is a quadratic eigenvalue problem | |
CN107193279A (en) | Robot localization and map structuring system based on monocular vision and IMU information | |
CN109166149A (en) | A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU | |
CN112115874B (en) | Cloud-fused visual SLAM system and method | |
CN107741234A (en) | The offline map structuring and localization method of a kind of view-based access control model | |
CN112734841B (en) | Method for realizing positioning by using wheel type odometer-IMU and monocular camera | |
US20060008119A1 (en) | Visual object recognition and tracking | |
CN109298778B (en) | Tracking system and method thereof | |
CN111354043A (en) | Three-dimensional attitude estimation method and device based on multi-sensor fusion | |
US20230085384A1 (en) | Characterizing and improving of image processing | |
Miao et al. | UniVIO: Unified direct and feature-based underwater stereo visual-inertial odometry | |
CN109781092A (en) | Localization for Mobile Robot and drawing method is built in a kind of danger chemical accident | |
CN112101160B (en) | Binocular semantic SLAM method for automatic driving scene | |
Murillo et al. | Wearable omnidirectional vision system for personal localization and guidance | |
CN106574836A (en) | A method for localizing a robot in a localization plane | |
US10977810B2 (en) | Camera motion estimation | |
CN110052020B (en) | Apparatus, control apparatus and method for operation in portable device or robot system | |
CN114663473A (en) | Personnel target positioning and tracking method and system based on multi-view information fusion | |
TW202238449A (en) | Indoor positioning system and indoor positioning method | |
Xie et al. | Angular tracking consistency guided fast feature association for visual-inertial slam |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||
CF01 | Termination of patent right due to non-payment of annual fee ||
Granted publication date: 2021-03-16; Termination date: 2021-12-04