
CN116429098A - Visual navigation positioning method and system for low-speed unmanned aerial vehicle


Info

Publication number: CN116429098A
Application number: CN202310286582.3A
Authority: CN (China)
Prior art keywords: information, image, matrix, aircraft, feature points
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 梁壮, 陈天予, 沈皓敏, 徐培, 梅小宁, 张迪, 顾村锋, 李晓东, 赖文星, 李航宇
Current Assignee: Shanghai Institute of Electromechanical Engineering
Original Assignee: Shanghai Institute of Electromechanical Engineering
Application filed by Shanghai Institute of Electromechanical Engineering

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/10: Navigation by using measurements of speed or acceleration
    • G01C 21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165: Inertial navigation combined with non-inertial navigation instruments
    • G01C 11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/02: Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G01C 11/04: Interpretation of pictures
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)

Abstract

The invention provides a visual navigation positioning method and system for a low-speed unmanned aerial vehicle. The method comprises the following steps: step S1: acquiring a current image through a vision sensor, preprocessing the acquired image, and extracting feature points from the preprocessed image to obtain the image coordinate values of the feature points; step S2: obtaining pose transformation matrix information in the inertial frame from the aircraft position, attitude information and height information measured by the aircraft inertial measurement unit and height measurement sensor, and subscribing to the attitude and height information output by the flight controller in the form of an attitude transformation matrix; step S3: calculating, through the vision sensor model, the spatial positions of the corresponding pixel points in the three-dimensional world coordinate system from the obtained image coordinate values of the feature points together with the obtained attitude and height information.

Description

Visual navigation positioning method and system for low-speed unmanned aerial vehicle
Technical Field
The invention relates to the technical field of computer vision, in particular to a visual navigation positioning method and system for a low-speed unmanned aerial vehicle. It mainly provides an integrated software and hardware navigation positioning solution, based on a monocular vision sensor and a multi-mode positioning calculation algorithm, for a small unmanned aerial vehicle operating under global satellite navigation denial conditions.
Background
A small unmanned autonomous aircraft is a powered aircraft that can perform multidirectional maneuvers and execute an autonomous control strategy. Compared with large and medium-sized unmanned aerial vehicles, it flies at a lower speed and is better suited to demanding tasks such as searching, monitoring and observing special regions. However, in indoor environments or under other global satellite navigation denial conditions, the aircraft cannot effectively acquire its own position in three-dimensional space; at the same time, if only an inertial measurement unit is adopted, its drift cannot be effectively restrained, and the task execution of the aircraft is hindered.
Patent document CN113917939B (application number 202111174390.0) discloses a positioning navigation method, system and computing device for an aircraft. The positioning navigation method comprises the following steps: acquiring real-time flight map data generated in the current flight of the aircraft, wherein the real-time flight map data comprises terrain data; judging whether the current flight area is a known area or not according to the topographic data in the real-time flight map data and pre-stored historical flight map data; and if the current flight area is determined to be a known area, positioning navigation data of the aircraft are obtained according to the pre-stored historical flight map data, so that flight control is performed according to the positioning navigation data.
A visual navigation positioning method has the advantages of strong autonomy and light payload. It can serve as more accurate feedback information for the control system, correct the measurement errors caused by inertial elements, and make the autonomous flight control of the aircraft more accurate and stable. In addition, machine vision already has sufficient technical support for feature and target recognition, so the scheme is relatively mature; the application of vision in aircraft control systems and related tasks therefore has broad prospects and important research value.
Therefore, by applying the homography relation of visual imaging in combination with the attitude information of the inertial measurement unit, the invention provides an effective and feasible method for the autonomous positioning problem of a small unmanned aerial vehicle in a satellite-navigation-denied environment.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a visual navigation positioning method and system for a low-speed unmanned aerial vehicle.
The invention provides a visual navigation positioning method of a low-speed unmanned aerial vehicle, which comprises the following steps:
step S1: acquiring a current image through a vision sensor, preprocessing the acquired current image, and extracting feature points based on the preprocessed current image to acquire image coordinate values of the feature points;
step S2: obtaining pose transformation matrix information in the inertial frame from the aircraft position, attitude information and height information measured by the aircraft inertial measurement unit and height measurement sensor, and subscribing to the attitude and height information output by the flight controller in the form of an attitude transformation matrix;
step S3: calculating, through the vision sensor model, the spatial positions of the corresponding pixel points in the three-dimensional world coordinate system from the obtained image coordinate values of the feature points together with the obtained attitude and height information.
Preferably, a monocular camera is strapdown-mounted on the aircraft body at the required downward viewing angle, with a fixed installation pose relationship M_fix; combined with the inertial measurement unit and height sensor inherent to the aircraft control system, this forms the overall sensor configuration of the aircraft. The vision sensor is calibrated after strapdown installation using the Zhang Zhengyou calibration method to obtain the installation conversion matrix of the monocular camera relative to the airframe, the intrinsic parameter matrix, and the optical distortion matrix.
Preferably, the preprocessing of the acquired current image adopts:
step S1.1: correcting the distortion of the acquired current image algorithmically according to the optical distortion parameter matrix, performing channel separation on the corrected image in HSV space, segmenting the highlight region according to its different brightness characteristics on the different channels, and eliminating the residual interference factors through thresholding and erosion operations, thereby suppressing highlight interference in the feature region;
step S1.2: filtering out regions of no interest or interference regions that appear during image processing by morphological dilation and erosion operations.
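As a concrete illustration of steps S1.1 and S1.2, the following is a minimal sketch assuming OpenCV; the HSV bounds, kernel size and inpainting radius are illustrative choices, not values from the patent:

```python
import cv2
import numpy as np

def preprocess(frame_bgr, camera_matrix, dist_coeffs):
    # Step S1.1: correct optical distortion with the calibrated parameters.
    undistorted = cv2.undistort(frame_bgr, camera_matrix, dist_coeffs)

    # Separate HSV channels; highlight regions show high value (V) and low
    # saturation (S), so a range threshold segments them (illustrative bounds).
    hsv = cv2.cvtColor(undistorted, cv2.COLOR_BGR2HSV)
    highlight = cv2.inRange(hsv, (0, 0, 220), (180, 40, 255))

    # Erosion removes residual speckle from the highlight mask, then the
    # masked pixels are suppressed by inpainting from their surroundings.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    highlight = cv2.erode(highlight, kernel)
    suppressed = cv2.inpaint(undistorted, highlight, 3, cv2.INPAINT_TELEA)

    # Step S1.2: morphological opening (erosion then dilation) filters out
    # small uninteresting or interfering regions.
    gray = cv2.cvtColor(suppressed, cv2.COLOR_BGR2GRAY)
    return cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
```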
Preferably, the step S1 employs:
step S1.3: processing the image with the Canny double-threshold edge detection method, using a high-to-low threshold ratio that meets a preset condition, computing the image gradient in two directions with the finite-difference approximation of the first partial derivatives in the Canny gradient operator, and determining the image edges from the gradient maxima analyzed along the gradient direction;
step S1.4: performing a Hough transform on the binary image containing only edge information, storing the parameter-space information obtained by the Hough transform, and extracting from it the required line slopes and distances from the origin; sorting on the two parameters, line slope and distance from the origin, to separate the line clusters; and screening and extracting the intersection points and feature lines in the environment, thereby obtaining the image coordinates of the feature points.
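A minimal sketch of steps S1.3 and S1.4, assuming OpenCV; the Canny thresholds (a 2.5:1 ratio here), Hough accumulator threshold and cluster-separation tolerances are illustrative:

```python
import cv2
import numpy as np

def extract_feature_points(gray):
    # Step S1.3: double-threshold Canny edge detection.
    edges = cv2.Canny(gray, 60, 150)

    # Step S1.4: the Hough transform yields (rho, theta) parameter space,
    # i.e. distance from the origin and line orientation (related to slope).
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)
    if lines is None:
        return []

    # Two-parameter sorting separates overlapping line clusters: keep one
    # representative whenever rho or theta differs enough from the last kept.
    params = sorted((float(r), float(t)) for r, t in lines[:, 0])
    kept = []
    for r, t in params:
        if not kept or abs(r - kept[-1][0]) > 20 or abs(t - kept[-1][1]) > 0.1:
            kept.append((r, t))

    # Intersections of the separated lines are the candidate feature points.
    points = []
    for i in range(len(kept)):
        for j in range(i + 1, len(kept)):
            (r1, t1), (r2, t2) = kept[i], kept[j]
            a = np.array([[np.cos(t1), np.sin(t1)],
                          [np.cos(t2), np.sin(t2)]])
            if abs(np.linalg.det(a)) < 1e-6:
                continue  # near-parallel lines: no stable intersection
            points.append(np.linalg.solve(a, np.array([r1, r2])))
    return points
```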
Preferably, the step S2 employs: reading the aircraft position, attitude information and height information measured by the aircraft inertial measurement unit and height measurement sensor through serial communication, filtering the acquired position, attitude and height information, and outputting pose transformation matrix information in the inertial frame; and subscribing to the attitude and height information output by the flight controller in the form of an attitude transformation matrix.
Preferably, the step S3 employs: for a structured environment with regular linear features in the downward-looking field of view and an unstructured environment with irregular field-of-view features, using different positioning calculation algorithms to compute the spatial positions of the corresponding pixel points in the three-dimensional world coordinate system from the obtained image coordinate values of the feature points and the obtained attitude and height information;
in the structured environment with regular linear features, based on the image coordinates X_uv of the feature points and the world coordinates X_W of the feature points calibrated in the structured environment, the aircraft position X_C is corrected according to the deviation ΔX_W between the measured value X̂_W of the world coordinates of the extracted feature points and their actual value; the corrected data further participate in the iterative operation of the next position-solving step. The main model of the iterative algorithm is:

$$ \hat{X}_W^k = s\, M_{cw}^k\, M_{in}^{-1}\, X_{uv}^k $$

$$ \Delta X_W^k = X_W - \hat{X}_W^k $$

$$ X_C^{k+1} = X_C^k + \Delta X_W^k $$
where s is the scale factor, i.e., the ratio between the logical resolution and the physical resolution of the camera; M_cw^k is the conversion matrix from the camera coordinate system to the world coordinate system for the k-th frame image, obtained based on the attitude information and the height information; and M_in^{-1} is the inverse of the camera intrinsic parameter matrix;
in the unstructured environment with irregular field-of-view features, based on the image-coordinate pairs p_1 and p_2 of the feature points matched between two adjacent frames, the rotation matrix R and translation vector T of the inter-frame transformation are obtained by solving the fundamental matrix F and the essential matrix E of the epipolar geometry; R and T are verified and compensated using the attitude and height information, the motion state of the camera in three-dimensional space is recovered, and the navigation positioning information of the aircraft is then obtained through the fixed-pose coordinate transformation M_fix.
The invention provides a visual navigation positioning system of a low-speed unmanned aerial vehicle, which comprises the following components:
module M1: acquiring a current image through a vision sensor, preprocessing the acquired current image, and extracting feature points based on the preprocessed current image to acquire image coordinate values of the feature points;
module M2: obtaining pose transformation matrix information in the inertial frame from the aircraft position, attitude information and height information measured by the aircraft inertial measurement unit and height measurement sensor, and subscribing to the attitude and height information output by the flight controller in the form of an attitude transformation matrix;
module M3: calculating, through the vision sensor model, the spatial positions of the corresponding pixel points in the three-dimensional world coordinate system from the obtained image coordinate values of the feature points together with the obtained attitude and height information.
Preferably, a monocular camera is strapdown-mounted on the aircraft body at the required viewing angle, with a fixed installation pose relationship M_fix; combined with the inertial measurement unit and height sensor inherent to the aircraft control system, this forms the overall sensor configuration of the aircraft. The vision sensor is calibrated after strapdown installation using the Zhang Zhengyou calibration method to obtain the installation conversion matrix of the monocular camera relative to the airframe, the intrinsic parameter matrix, and the optical distortion matrix.
Preferably, the preprocessing of the acquired current image adopts:
module M1.1: correcting the distortion of the acquired current image algorithmically according to the optical distortion parameter matrix, performing channel separation on the corrected image in HSV space, segmenting the highlight region according to its different brightness characteristics on the different channels, and eliminating the residual interference factors through thresholding and erosion operations, thereby suppressing highlight interference in the feature region;
module M1.2: filtering out regions of no interest or interference regions that appear during image processing by morphological dilation and erosion operations;
module M1.3: processing the image with the Canny double-threshold edge detection method, using a high-to-low threshold ratio that meets a preset condition, computing the image gradient in two directions with the finite-difference approximation of the first partial derivatives in the Canny gradient operator, and determining the image edges from the gradient maxima analyzed along the gradient direction;
module M1.4: performing a Hough transform on the binary image containing only edge information, storing the parameter-space information obtained by the Hough transform, and extracting from it the required line slopes and distances from the origin; sorting on the two parameters, line slope and distance from the origin, to separate the line clusters; and screening and extracting the intersection points and feature lines in the environment, thereby obtaining the image coordinates of the feature points.
Preferably, the module M3 employs: for a structured environment with regular linear features in the downward-looking field of view and an unstructured environment with irregular field-of-view features, using different positioning calculation algorithms to compute the spatial positions of the corresponding pixel points in the three-dimensional world coordinate system from the obtained image coordinate values of the feature points and the obtained attitude and height information;
in the structured environment with regular linear features, based on the image coordinates X_uv of the feature points and the world coordinates X_W of the feature points calibrated in the structured environment, the aircraft position X_C is corrected according to the deviation ΔX_W between the measured value X̂_W of the world coordinates of the extracted feature points and their actual value; the corrected data further participate in the iterative operation of the next position-solving step. The main model of the iterative algorithm is:

$$ \hat{X}_W^k = s\, M_{cw}^k\, M_{in}^{-1}\, X_{uv}^k $$

$$ \Delta X_W^k = X_W - \hat{X}_W^k $$

$$ X_C^{k+1} = X_C^k + \Delta X_W^k $$
where s is the scale factor, i.e., the ratio between the logical resolution and the physical resolution of the camera; M_cw^k is the conversion matrix from the camera coordinate system to the world coordinate system for the k-th frame image, obtained based on the attitude information and the height information; and M_in^{-1} is the inverse of the camera intrinsic parameter matrix;
in the unstructured environment with irregular field-of-view features, based on the image-coordinate pairs p_1 and p_2 of the feature points matched between two adjacent frames, the rotation matrix R and translation vector T of the inter-frame transformation are obtained by solving the fundamental matrix F and the essential matrix E of the epipolar geometry; R and T are verified and compensated using the attitude and height information, the motion state of the camera in three-dimensional space is recovered, and the navigation positioning information of the aircraft is then obtained through the fixed-pose coordinate transformation M_fix.
Compared with the prior art, the invention has the following beneficial effects:
1. Compared with the traditional navigation mode that relies solely on a satellite navigation system or an inertial measurement unit, the autonomous navigation positioning method based on the aircraft's own inertial measurement unit and monocular camera does not depend on information from sensors external to the aircraft, ensuring the autonomy of the aircraft;
2. The software algorithm is easy to implement and runs in real time on an embedded platform, suiting the rapidly changing task environment of the aircraft; meanwhile, the algorithm is easy to embed in a flight control program and can help the flight controller suppress the accumulated error of the inertial measurement unit, so that the navigation positioning error does not accumulate and gradually diverge over time;
3. The hardware configuration of the invention is simple: a monocular camera is strapdown-mounted on the aircraft according to the task execution requirements of the actual aircraft;
4. The overall navigation scheme has strong hardware and software extensibility, and can be applied to low-speed aircraft and extended to other autonomously moving agents or robots;
5. The overall scheme has low cost and low performance requirements on the imaging sensor, and the communication interfaces can all adopt universal standard protocols, making it applicable to cost-constrained aircraft designs.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, given with reference to the accompanying drawings in which:
Fig. 1 is a flowchart of the visual navigation method algorithm for the unmanned aerial vehicle.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present invention.
The invention aims to realize and improve the stability and robustness of the navigation positioning algorithm of a small aircraft by combining optimized processing of the images acquired by the aircraft with a positioning solution model based on monocular vision measurement and the aircraft attitude.
The technical solution adopted by the invention to solve this technical problem is: an integrated software and hardware navigation positioning solution based on a monocular vision sensor and a multi-mode positioning calculation algorithm.
Example 1
According to the invention, as shown in Fig. 1, the visual navigation positioning method of the low-speed unmanned aerial vehicle comprises the following steps:
step S1: acquiring a current image through a vision sensor, preprocessing the acquired current image, and extracting feature points based on the preprocessed current image to acquire image coordinate values of the feature points;
step S2: obtaining pose transformation matrix information in the inertial frame from the aircraft position, attitude information and height information measured by the aircraft inertial measurement unit and height measurement sensor, and subscribing to the attitude and height information output by the flight controller in the form of an attitude transformation matrix;
step S3: calculating, through the vision sensor model, the spatial positions of the corresponding pixel points in the three-dimensional world coordinate system from the obtained image coordinate values of the feature points together with the obtained attitude and height information;
the vision sensor model is a mathematical model for obtaining the spatial coordinates of a target point from a two-dimensional image, namely the mathematical conversion relation, established with lens distortion taken into account, between the pixel coordinates in the image and the world coordinates of the target point.
Specifically, a monocular camera is strapdown-mounted on the aircraft body at the required viewing angle, with a fixed installation pose relationship M_fix; combined with the inertial measurement unit and height sensor inherent to the aircraft control system, this forms the overall sensor configuration of the aircraft. The vision sensor is calibrated after strapdown installation using the Zhang Zhengyou calibration method to obtain basic parameters such as the installation conversion matrix of the monocular camera relative to the airframe, the intrinsic parameter matrix M_in, and the optical distortion matrix. The video stream captured by the camera and the information from each measurement sensor of the flight control system are integrated in the flight control computer, and the navigation information is output through algorithmic calculation.
Specifically, the preprocessing of the acquired current image adopts:
step S1.1: correcting the distortion of the acquired current image algorithmically according to the optical distortion parameter matrix, performing channel separation on the corrected image in HSV space, segmenting the highlight region according to its different brightness characteristics on the different channels, and eliminating the residual interference factors through thresholding and erosion operations, thereby suppressing highlight interference in the feature region;
step S1.2: filtering out regions of no interest or interference regions that appear during image processing by morphological dilation and erosion operations.
Specifically, the step S1 employs:
step S1.3: processing the image with the Canny double-threshold edge detection method, using a high-to-low threshold ratio between 2:1 and 3:1, computing the image gradient in two directions with the finite-difference approximation of the first partial derivatives in the Canny gradient operator, and determining the image edges from the gradient maxima analyzed along the gradient direction; accurate extraction of the edge information simplifies the design of the subsequent line-fitting algorithm and reduces the error of feature-point detection in the image;
step S1.4: performing a Hough transform on the binary image containing only edge information, storing the parameter-space information obtained by the Hough transform, and extracting from it the required line slopes and distances from the origin; sorting on the two parameters, line slope and distance from the origin, to separate the line clusters; and screening and extracting the intersection points and feature lines in the environment, thereby obtaining the image coordinates of the feature points. For unstructured environments, SIFT or ORB features are extracted between two adjacent frames, and matching and mismatch rejection are completed, as sketched below.
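For the unstructured branch just mentioned, the following is a hedged sketch assuming OpenCV's ORB detector and RANSAC-based mismatch rejection via the fundamental matrix:

```python
import cv2
import numpy as np

def match_frames(gray_prev, gray_curr):
    # Detect and describe ORB features in two adjacent frames.
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(gray_prev, None)
    kp2, des2 = orb.detectAndCompute(gray_curr, None)

    # Brute-force Hamming matching with cross-checking.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    p2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects mismatched pairs while estimating the fundamental matrix.
    _, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = mask.ravel() == 1
    return p1[inliers], p2[inliers]
```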
For the image sequence acquired by the vision sensor, preprocessing methods such as optical distortion correction, highlight suppression and morphological filtering are comprehensively adopted, and feature extraction algorithms such as the Hough transform, edge detection and fast corner feature extraction are improved, ensuring real-time image processing and stable, high-precision information output.
Specifically, the step S2 employs: reading the aircraft position, attitude information and height information measured by the aircraft inertial measurement unit and height measurement sensor through serial communication, filtering the acquired position, attitude and height information, and outputting pose transformation matrix information in the inertial frame; and subscribing to the attitude and height information output by the flight controller in the form of an attitude transformation matrix.
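A minimal sketch of this step, assuming the pyserial package and a hypothetical comma-separated flight-control message "roll,pitch,yaw,height"; the real serial protocol and message layout are not specified by the patent:

```python
import numpy as np
import serial

def euler_to_matrix(roll, pitch, yaw):
    # Z-Y-X (yaw-pitch-roll) rotation from body frame to inertial frame.
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx

def read_pose(port="/dev/ttyUSB0", baud=115200):
    # Read one attitude/height message from the flight controller.
    with serial.Serial(port, baud, timeout=1) as link:
        roll, pitch, yaw, height = map(float, link.readline().decode().split(","))
    T = np.eye(4)                      # 4x4 pose transformation matrix
    T[:3, :3] = euler_to_matrix(roll, pitch, yaw)
    T[2, 3] = height                   # altitude enters the translation part
    return T
```

In practice the raw measurements would also be filtered before the matrix is formed, as the step describes.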
Specifically, the step S3 employs: for a structured environment with regular linear features in the downward-looking field of view and an unstructured environment with irregular field-of-view features, using different positioning calculation algorithms to compute the spatial positions of the corresponding pixel points in the three-dimensional world coordinate system from the obtained image coordinate values of the feature points and the obtained attitude and height information;
in the structured environment with regular linear features, based on the image coordinates X_uv of the feature points and the world coordinates X_W of the feature points calibrated in the structured environment, the aircraft position X_C is corrected according to the deviation ΔX_W between the measured value X̂_W of the world coordinates of the extracted feature points and their actual value; the corrected data further participate in the iterative operation of the next position-solving step, ensuring that the navigation positioning information does not drift through error accumulation as the flight time increases. The main model of the iterative algorithm is:

$$ s\, X_{uv}^k = M_{in}\, \big(M_{cw}^k\big)^{-1} X_W^k $$

$$ \hat{X}_W^k = s\, M_{cw}^k\, M_{in}^{-1}\, X_{uv}^k $$

$$ \Delta X_W^k = X_W - \hat{X}_W^k $$

$$ X_C^{k+1} = X_C^k + \Delta X_W^k $$
where s is the scale factor, i.e., the ratio between the logical resolution and the physical resolution of the camera; M_cw^k is the conversion matrix from the camera coordinate system to the world coordinate system for the k-th frame image; and M_in^{-1} is the inverse of the camera intrinsic parameter matrix;
in the unstructured environment with irregular field-of-view features, based on the image-coordinate pairs p_1 and p_2 of the feature points matched between two adjacent frames, the rotation matrix R and translation vector T of the inter-frame transformation are obtained by solving the fundamental matrix F and the essential matrix E of the epipolar geometry; R and T are verified and compensated using the attitude and height information, the motion state of the camera in three-dimensional space is recovered, and the navigation positioning information of the aircraft is then obtained through the fixed-pose coordinate transformation M_fix.
The invention provides a visual navigation positioning system of a low-speed unmanned aerial vehicle, which comprises the following components:
module M1: acquiring a current image through a vision sensor, preprocessing the acquired current image, and extracting feature points based on the preprocessed current image to acquire image coordinate values of the feature points;
module M2: obtaining pose transformation matrix information in the inertial frame from the aircraft position, attitude information and height information measured by the aircraft inertial measurement unit and height measurement sensor, and subscribing to the attitude and height information output by the flight controller in the form of an attitude transformation matrix;
module M3: calculating, through the vision sensor model, the spatial positions of the corresponding pixel points in the three-dimensional world coordinate system from the obtained image coordinate values of the feature points together with the obtained attitude and height information;
the vision sensor model is a mathematical model for obtaining the spatial coordinates of a target point from a two-dimensional image, namely the mathematical conversion relation, established with lens distortion taken into account, between the pixel coordinates in the image and the world coordinates of the target point.
Specifically, a monocular camera is strapdown-mounted on the aircraft body at the required viewing angle, with a fixed installation pose relationship M_fix; combined with the inertial measurement unit and height sensor inherent to the aircraft control system, this forms the overall sensor configuration of the aircraft. The vision sensor is calibrated after strapdown installation using the Zhang Zhengyou calibration method to obtain basic parameters such as the installation conversion matrix of the monocular camera relative to the airframe, the intrinsic parameter matrix M_in, and the optical distortion matrix. The video stream captured by the camera and the information from each measurement sensor of the flight control system are integrated in the flight control computer, and the navigation information is output through algorithmic calculation.
Specifically, the preprocessing of the acquired current image adopts:
module M1.1: correcting the distortion of the acquired current image algorithmically according to the optical distortion parameter matrix, performing channel separation on the corrected image in HSV space, segmenting the highlight region according to its different brightness characteristics on the different channels, and eliminating the residual interference factors through thresholding and erosion operations, thereby suppressing highlight interference in the feature region;
module M1.2: filtering out regions of no interest or interference regions that appear during image processing by morphological dilation and erosion operations.
Specifically, the module M1 employs:
module M1.3: processing the image with the Canny double-threshold edge detection method, using a high-to-low threshold ratio between 2:1 and 3:1, computing the image gradient in two directions with the finite-difference approximation of the first partial derivatives in the Canny gradient operator, and determining the image edges from the gradient maxima analyzed along the gradient direction; accurate extraction of the edge information simplifies the design of the subsequent line-fitting algorithm and reduces the error of feature-point detection in the image;
module M1.4: performing a Hough transform on the binary image containing only edge information, storing the parameter-space information obtained by the Hough transform, and extracting from it the required line slopes and distances from the origin; sorting on the two parameters, line slope and distance from the origin, to separate the line clusters; and screening and extracting the intersection points and feature lines in the environment, thereby obtaining the image coordinates of the feature points. For unstructured environments, SIFT or ORB features are extracted between two adjacent frames, and matching and mismatch rejection are completed.
For the image sequence acquired by the vision sensor, preprocessing methods such as optical distortion correction, highlight suppression and morphological filtering are comprehensively adopted, and feature extraction algorithms such as the Hough transform, edge detection and fast corner feature extraction are improved, ensuring real-time image processing and stable, high-precision information output.
Specifically, the module M2 employs: reading the aircraft position, attitude information and height information measured by the aircraft inertial measurement unit and height measurement sensor through serial communication, filtering the acquired position, attitude and height information, and outputting pose transformation matrix information in the inertial frame; and subscribing to the attitude and height information output by the flight controller in the form of an attitude transformation matrix.
Specifically, the module M3 employs: for a structured environment with regular linear features in the downward-looking field of view and an unstructured environment with irregular field-of-view features, using different positioning calculation algorithms to compute the spatial positions of the corresponding pixel points in the three-dimensional world coordinate system from the obtained image coordinate values of the feature points and the obtained attitude and height information;
in the structured environment with regular linear features, based on the image coordinates X_uv of the feature points and the world coordinates X_W of the feature points calibrated in the structured environment, the aircraft position X_C is corrected according to the deviation ΔX_W between the measured value X̂_W of the world coordinates of the extracted feature points and their actual value; the corrected data further participate in the iterative operation of the next position-solving step, ensuring that the navigation positioning information does not drift through error accumulation as the flight time increases. The main model of the iterative algorithm is:

$$ \hat{X}_W^k = s\, M_{cw}^k\, M_{in}^{-1}\, X_{uv}^k $$

$$ \Delta X_W^k = X_W - \hat{X}_W^k $$

$$ X_C^{k+1} = X_C^k + \Delta X_W^k $$
where s is the scale factor, i.e., the ratio between the logical resolution and the physical resolution of the camera; M_cw^k is the conversion matrix from the camera coordinate system to the world coordinate system for the k-th frame image, obtained based on the attitude information and the height information; and M_in^{-1} is the inverse of the camera intrinsic parameter matrix;
in the unstructured environment with irregular field-of-view features, based on the image-coordinate pairs p_1 and p_2 of the feature points matched between two adjacent frames, the rotation matrix R and translation vector T of the inter-frame transformation are obtained by solving the fundamental matrix F and the essential matrix E of the epipolar geometry. Using quantities such as the attitude-angle change rate and position change rate from the inertial measurement unit and the altitude change rate from the height sensor, and relying on the continuity of the aircraft's spatial motion, i.e., that these rates do not jump beyond a set threshold, the rotation matrix R and translation vector T are verified and compensated and inaccurate frames are filtered out, so that the motion state of the camera in three-dimensional space is better recovered; the navigation positioning information of the aircraft is then obtained through the fixed-pose coordinate transformation M_fix.
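A sketch of this consistency check, assuming hypothetical rate thresholds (1 rad/s and 5 m/s here); a frame whose implied rates jump past the thresholds is treated as inaccurate data and dropped:

```python
import numpy as np

def accept_motion(R, T, R_prev, T_prev, dt, ang_rate_max=1.0, vel_max=5.0):
    # Relative rotation angle between consecutive estimates (radians).
    dR = R_prev.T @ R
    angle = np.arccos(np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0))
    speed = np.linalg.norm(T - T_prev) / dt
    # Spatial motion of the aircraft is continuous, so rates beyond the
    # thresholds mark this frame's R, T as inaccurate and to be filtered.
    return angle / dt <= ang_rate_max and speed <= vel_max
```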
Example 2
Example 2 is a preferred embodiment of Example 1.
The invention provides a visual navigation positioning method of a low-speed unmanned aerial vehicle, which comprises the following steps:
Firstly, in terms of hardware configuration, taking a conventional low-speed quadrotor unmanned aerial vehicle as an example, a camera meeting the field-of-view, resolution and frame-rate requirements of the aircraft's tasks is selected and strapdown-mounted on the airframe as the sensor; it is installed at a viewing angle determined by the information to be acquired (the ground), and the fixed pose conversion relation between the camera and the aircraft body coordinate system is measured as M_fix. The camera can be connected to the aircraft flight control computer through a standard interface such as USB, and together with the inertial measurement unit and height sensor inherent to the aircraft control system it forms the overall sensor configuration.
Secondly, to support the subsequent algorithm calculation, the intrinsic parameter matrix of the camera needs to be calibrated. The main method can be the mainstream Zhang Zhengyou calibration method: several checkerboard images are collected with the camera for calibration, information such as the optical-center offset, focal length and distortion parameters of the camera is obtained, and the camera intrinsic parameter matrix M_in, the optical distortion matrix and other basic parameters can then be given.
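A minimal Zhang-style calibration sketch, assuming OpenCV and a folder of checkerboard images; the board pattern, square size and file path are illustrative:

```python
import glob
import cv2
import numpy as np

def calibrate(pattern=(9, 6), square=0.025, images="calib/*.png"):
    # Planar checkerboard corner positions in the board frame (metres).
    obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_pts, img_pts, size = [], [], None
    for path in glob.glob(images):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(obj)
            img_pts.append(corners)
            size = gray.shape[::-1]

    # Returns the intrinsic matrix M_in and the optical distortion vector.
    _, m_in, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return m_in, dist
```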
Thirdly, the video stream captured by the camera and the information from each measurement sensor of the flight control system are integrated in the flight control computer, and the output of navigation information is completed through algorithmic calculation. Taking a typical quadrotor aircraft as an example, the measurement sensors of the flight control system include an ultrasonic height sensor, an inertial measurement unit, a geomagnetic heading sensor, an optical-flow velocity sensor and the like. The algorithm typically uses the measurement information from the height sensor and the inertial measurement unit. The inertial measurement unit is generally connected to the flight control computer through a UART serial port, and the attitude angle and position information of the aircraft are output after filtering and integration in the flight control software.
Fourthly, the algorithm preprocesses the video sequence acquired by the sensor. The main purpose is to eliminate the influence of interference factors such as highlights and complex textures in the image and to focus on the region of interest, which benefits the real-time performance of the overall processing. For highlight influence in an image, such as ground reflections, the image can be separated in HSV channels; according to the characteristic that the gray-level differences of highlight regions are obvious on different channels, the region is segmented through a gray histogram and fine-tuned through thresholding and morphological erosion on the basis of the channel separation. Morphological dilation and erosion can be adopted to filter out uninteresting or interfering regions appearing during image processing, and the specific filtering parameters can be adjusted according to desktop simulation tests.
Fifthly, features are extracted from the image. According to the multi-mode algorithm design, in a known environment such as a room, feature points with large image gradient changes, large gray-value differences, straight-line intersections and the like can be extracted within the known field of view. To extract the features accurately and facilitate subsequent calculation, the image is first processed with the Canny double-threshold edge detection method; the high-to-low threshold ratio between 2:1 and 3:1 can be determined through desktop simulation, the image gradient in two directions is computed using the finite-difference approximation of the first partial derivatives in the Canny gradient operator, and the image edges are determined by analyzing the gradient maxima along the gradient direction. A Hough transform is then applied to the binary edge image to extract line information. Lines fitted by the ordinary Hough transform have large errors, and several line clusters tend to overlap, which hinders accurate intersection calculation; the Hough transform is therefore improved by storing the parameter-space information it produces and extracting from it the required line slopes and distances from the origin. Sorting on these two parameters separates the line clusters, and the intersection points in the environment are then screened and extracted; their pixel values are the image coordinates of the feature points. For an unknown unstructured environment, SIFT or ORB feature points between two adjacent frames are extracted with the mature SIFT or ORB feature extraction algorithms, inter-frame matching is performed, mismatched points are removed with the RANSAC algorithm, and the pixel coordinates of the same feature point in the two frames of images are obtained as input to the subsequent algorithm.
Sixthly, the information from each sensor is fused for algorithmic calculation. Under a known environment or with obvious features (such as intersections of ground lines), the algorithm subscribes to data in the form of the pose transformation matrix M_cw from the camera coordinate system to the world coordinate system, subscribes to the aircraft height sensor data as the height information Z, and, from the feature-point pixel coordinates X_uv and the world coordinates X_W of the feature points in the known environment, calculates the position X_C of the camera in the world coordinate system based on the mature homography relation. According to the deviation ΔX_W between the measured value X̂_W of the world coordinates of the extracted feature points and their actual value, the aircraft position X_C is corrected, and the corrected data further participate in the iterative operation of the next position-solving step. The main model of the iterative algorithm is as follows, where the superscript k denotes the k-th frame image:

$$ \hat{X}_W^k = s\, M_{cw}^k\, M_{in}^{-1}\, X_{uv}^k $$

$$ \Delta X_W^k = X_W - \hat{X}_W^k $$

$$ X_C^{k+1} = X_C^k + \Delta X_W^k $$
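A minimal sketch of this back-projection step, assuming a flat ground plane and the reconstructed model above; the function and variable names are illustrative:

```python
import numpy as np

def locate(x_uv, X_W, M_in, R_cw, height):
    # Back-project the feature pixel into a ray in the world frame.
    pix = np.array([x_uv[0], x_uv[1], 1.0])
    ray = R_cw @ np.linalg.inv(M_in) @ pix
    # The measured altitude fixes the scale factor s along the ray.
    s = height / abs(ray[2])
    # The calibrated world point of the feature then yields the camera
    # (and hence aircraft) position; its deviation from the previous
    # estimate drives the iterative correction described above.
    X_C = X_W - s * ray
    return X_C
```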
Finally, under the condition of an unknown environment, based on the feature-point pairs p_1 and p_2 matched between two adjacent frames, the following relationship exists:

$$ x_2 = R\, x_1 + T $$

where R and T are respectively the rotation matrix and translation vector of the transformation between the two adjacent frames, and

$$ x_1 = M_{in}^{-1}\, p_1, \quad x_2 = M_{in}^{-1}\, p_2 $$

are the corresponding normalized camera coordinates, representing a continuous motion of the camera and therefore of the aircraft. Based on the mature epipolar constraint relationship in the field of computer vision:

$$ x_2^{\top}\, T^{\wedge} R\, x_1 = 0, \quad E = T^{\wedge} R, \quad F = M_{in}^{-\top} E\, M_{in}^{-1} $$

The fundamental matrix F and essential matrix E of the epipolar geometry are solved with mature algorithms, which recovers the motion state of the camera in three-dimensional space; the navigation positioning information of the aircraft is then obtained through the fixed-pose coordinate transformation M_fix.
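A hedged sketch of this step, assuming OpenCV; the monocular translation is recovered only up to scale, and rescaling it with the measured height change dh is an assumption about how the attitude and height compensation is applied:

```python
import cv2
import numpy as np

def interframe_motion(p1, p2, M_in, dh):
    # Solve the essential matrix from the matched pixel pairs with RANSAC.
    E, mask = cv2.findEssentialMat(p1, p2, M_in, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Decompose E into the inter-frame rotation R and unit-norm translation T.
    _, R, T, _ = cv2.recoverPose(E, p1, p2, M_in, mask=mask)
    # Fix the monocular scale from the altimeter's height change dh.
    scale = abs(dh / T[2, 0]) if abs(T[2, 0]) > 1e-6 else 1.0
    return R, scale * T
```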
Those skilled in the art will appreciate that the systems, apparatus, and their respective modules provided herein may be implemented entirely by logic programming of method steps such that the systems, apparatus, and their respective modules are implemented as logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc., in addition to the systems, apparatus, and their respective modules being implemented as pure computer readable program code. Therefore, the system, the apparatus, and the respective modules thereof provided by the present invention may be regarded as one hardware component, and the modules included therein for implementing various programs may also be regarded as structures within the hardware component; modules for implementing various functions may also be regarded as being either software programs for implementing the methods or structures within hardware components.
The foregoing describes specific embodiments of the present invention. It is to be understood that the invention is not limited to the particular embodiments described above, and that various changes or modifications may be made by those skilled in the art within the scope of the appended claims without affecting the spirit of the invention. The embodiments of the present application and features in the embodiments may be combined with each other arbitrarily without conflict.

Claims (10)

1. The visual navigation positioning method of the low-speed unmanned aerial vehicle is characterized by comprising the following steps of:
step S1: acquiring a current image through a vision sensor, preprocessing the acquired current image, and extracting feature points based on the preprocessed current image to acquire image coordinate values of the feature points;
step S2: obtaining pose transformation matrix information in the inertial frame from the aircraft position, attitude information and height information measured by the aircraft inertial measurement unit and height measurement sensor, and subscribing to the attitude and height information output by the flight controller in the form of an attitude transformation matrix;
step S3: calculating, through the vision sensor model, the spatial positions of the corresponding pixel points in the three-dimensional world coordinate system from the obtained image coordinate values of the feature points together with the obtained attitude and height information;
the vision sensor model is a mathematical model for obtaining the spatial coordinates of a target point from a two-dimensional image, namely the mathematical conversion relation, established with lens distortion taken into account, between the pixel coordinates in the image and the world coordinates of the target point.
2. The visual navigation positioning method of the low-speed unmanned aerial vehicle according to claim 1, wherein a monocular camera is strapdown-mounted on the aircraft body at the required viewing angle, with a fixed installation pose relationship M_fix; combined with the inertial measurement unit and height sensor inherent to the aircraft control system, this forms the overall sensor configuration of the aircraft; the vision sensor is calibrated after strapdown installation using the Zhang Zhengyou calibration method to obtain the installation conversion matrix of the monocular camera relative to the airframe, the intrinsic parameter matrix, and the optical distortion matrix.
3. The method for visual navigation and positioning of a low-speed unmanned aerial vehicle according to claim 1, wherein the preprocessing of the acquired current image comprises:
step S1.1: correcting the distortion of the acquired current image algorithmically according to the optical distortion parameter matrix, performing channel separation on the corrected image in HSV space, segmenting the highlight region according to its different brightness characteristics on the different channels, and eliminating the residual interference factors through thresholding and erosion operations, thereby suppressing highlight interference in the feature region;
step S1.2: filtering out regions of no interest or interference regions that appear during image processing by morphological dilation and erosion operations.
4. The method for visual navigation and positioning of a low-speed unmanned aerial vehicle according to claim 1, wherein the step S1 employs:
step S1.3: processing the image with the Canny double-threshold edge detection method, using a high-to-low threshold ratio that meets a preset condition, computing the image gradient in two directions with the finite-difference approximation of the first partial derivatives in the Canny gradient operator, and determining the image edges from the gradient maxima analyzed along the gradient direction;
step S1.4: performing a Hough transform on the binary image containing only edge information, storing the parameter-space information obtained by the Hough transform, and extracting from it the required line slopes and distances from the origin; sorting on the two parameters, line slope and distance from the origin, to separate the line clusters; and screening and extracting the intersection points and feature lines in the environment, thereby obtaining the image coordinates of the feature points.
5. The method for visual navigation and positioning of a low-speed unmanned aerial vehicle according to claim 1, wherein the step S2 employs: reading the aircraft position, attitude information and height information measured by the aircraft inertial measurement unit and height measurement sensor through serial communication, filtering the acquired position, attitude and height information, and outputting pose transformation matrix information in the inertial frame; and subscribing to the attitude and height information output by the flight controller in the form of an attitude transformation matrix.
6. The method for visual navigation and positioning of a low-speed unmanned aerial vehicle according to claim 1, wherein the step S3 employs: for a structured environment with regular linear features in the downward-looking field of view and an unstructured environment with irregular field-of-view features, using different positioning calculation algorithms to compute the spatial positions of the corresponding pixel points in the three-dimensional world coordinate system from the obtained image coordinate values of the feature points and the obtained attitude and height information;
in the structured environment with regular linear features, based on the image coordinates X_uv of the feature points and the world coordinates X_W of the feature points calibrated in the structured environment, the aircraft position X_C is corrected according to the deviation ΔX_W between the measured value X̂_W of the world coordinates of the extracted feature points and their actual value; the corrected data further participate in the iterative operation of the next position-solving step. The main model of the iterative algorithm is:

$$ \hat{X}_W^k = s\, M_{cw}^k\, M_{in}^{-1}\, X_{uv}^k $$

$$ \Delta X_W^k = X_W - \hat{X}_W^k $$

$$ X_C^{k+1} = X_C^k + \Delta X_W^k $$
where s is the scale factor, i.e., the ratio between the logical resolution and the physical resolution of the camera; M_cw^k is the conversion matrix from the camera coordinate system to the world coordinate system for the k-th frame image, obtained based on the attitude information and the height information; and M_in^{-1} is the inverse of the camera intrinsic parameter matrix;
in the unstructured environment with irregular field-of-view features, based on the image-coordinate pairs p_1 and p_2 of the feature points matched between two adjacent frames, the rotation matrix R and translation vector T of the inter-frame transformation are obtained by solving the fundamental matrix F and the essential matrix E of the epipolar geometry; R and T are verified and compensated using the attitude and height information, the motion state of the camera in three-dimensional space is recovered, and the navigation positioning information of the aircraft is then obtained through the fixed-pose coordinate transformation M_fix.
7. A low speed unmanned aerial vehicle visual navigation positioning system, comprising:
module M1: acquiring a current image through a vision sensor, preprocessing the acquired current image, and extracting feature points based on the preprocessed current image to acquire image coordinate values of the feature points;
module M2: acquiring pose transformation matrix information in the inertial frame based on the position, attitude information and height information of the aircraft measured by the aircraft's inertial measurement unit and height measurement sensor, and subscribing to the attitude information and height information output by the flight controller in the form of a pose transformation matrix;
module M3: based on the obtained image coordinate values of the feature points and the obtained attitude information and height information, calculating, through a visual sensor model, the spatial positions of the corresponding pixel points in the three-dimensional world coordinate system;
the visual sensor model is a mathematical model that obtains the spatial coordinates of a target point from a two-dimensional image, namely the mathematical conversion relation between the pixel coordinates of the image and the world coordinates of the target point, established with lens distortion taken into account.
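In the standard pinhole-plus-distortion form, the conversion relation this claim describes can be written as follows (a generic formulation, not necessarily the patent's exact notation):

$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= K \begin{bmatrix} R & t \end{bmatrix}
\begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix},
\qquad
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix},
$$

with radial lens distortion applied to the normalized image coordinates $(x, y)$ before projection:

$$
x_d = x \left( 1 + k_1 r^2 + k_2 r^4 \right), \qquad
y_d = y \left( 1 + k_1 r^2 + k_2 r^4 \right), \qquad
r^2 = x^2 + y^2 .
$$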
8. The visual navigation positioning system for a low-speed unmanned aerial vehicle according to claim 7, wherein the monocular camera is strapdown-mounted on the body of the unmanned aerial vehicle at the required field-of-view angle, with the fixed mounting pose relationship $M_{fix}$, and together with the inertial measurement unit and altimeter sensor inherent to the aircraft control system forms the overall sensor configuration of the aircraft; strapdown installation calibration of the visual sensor is performed using the Zhang Zhengyou calibration method to obtain the installation conversion matrix of the monocular camera relative to the body, its intrinsic parameter matrix, and its optical distortion matrix.
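A minimal Zhang-method intrinsic calibration sketch with OpenCV; the checkerboard pattern size, square size, and image path are placeholder assumptions (the strapdown installation matrix relative to the body would be determined in a separate extrinsic step not shown here):

```python
import glob
import cv2
import numpy as np

def calibrate_camera(pattern=(9, 6), square=0.025, images="calib/*.png"):
    """Zhang-method calibration from checkerboard views: returns the camera
    intrinsic parameter matrix K and the optical distortion coefficients."""
    # World coordinates of the checkerboard corners on the z = 0 plane.
    obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts, size = [], [], None
    for path in glob.glob(images):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(obj)
            img_pts.append(corners)
            size = gray.shape[::-1]
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist
```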
9. The low-speed unmanned aerial vehicle visual navigation positioning system of claim 7, wherein the preprocessing of the acquired current image employs:
module M1.1: correcting the distortion of the acquired current image algorithmically according to the optical distortion parameter matrix, separating the corrected image into HSV channels, segmenting highlight regions according to their differing brightness characteristics across the channels, and eliminating residual interference factors through thresholding and erosion operations, thereby suppressing highlight interference in the feature regions (a combined sketch of modules M1.1-M1.3 follows module M1.4 below);
module M1.2: filtering out regions of no interest or interference regions arising during image processing by means of morphological dilation and erosion operations;
module M1.3: processing the image with a Canny double-threshold edge detection method, using a high-to-low threshold ratio that satisfies preset conditions, computing the image gradient in two directions by finite-difference approximation of the first-order partial derivatives with the Canny gradient operator, and determining the image edges from the gradient maxima analyzed along the gradient direction;
module M1.4: performing a Hough transform on the binary image containing only edge information, storing the resulting parameter-space information, and extracting from it the required line slopes and distances from the origin; sorting the lines by the two parameters, slope and distance from the origin, to separate the line clusters, screening and extracting the intersection points and feature lines in the environment, and thereby obtaining the image coordinates of the feature points.
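A combined sketch of modules M1.1-M1.3, assuming OpenCV; all thresholds and kernel sizes are illustrative, and the inpainting call is one possible way to suppress the masked highlights rather than the claimed operation (module M1.4 is sketched after claim 4 above):

```python
import cv2
import numpy as np

def preprocess(img, K, dist, v_thresh=220, s_thresh=40, canny_low=50, ratio=3):
    """Modules M1.1-M1.3: undistort and suppress highlights, morphologically
    filter interference regions, then extract a Canny edge map."""
    # M1.1: correct optical distortion, then mask specular highlights,
    # which are bright in the V channel but desaturated in S.
    img = cv2.undistort(img, K, dist)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    glare = cv2.inRange(hsv, (0, 0, v_thresh), (180, s_thresh, 255))
    glare = cv2.erode(glare, np.ones((3, 3), np.uint8))  # drop residual speckle
    img = cv2.inpaint(img, glare, 3, cv2.INPAINT_TELEA)  # fill masked highlights

    # M1.2: dilation followed by erosion (morphological closing) removes
    # small regions of no interest or interference.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    kernel = np.ones((5, 5), np.uint8)
    gray = cv2.erode(cv2.dilate(gray, kernel), kernel)

    # M1.3: Gaussian smoothing, then double-threshold Canny with a
    # high-to-low ratio of about 3:1; gradients come from internal
    # finite-difference (Sobel) operators, and edges are kept by
    # non-maximum suppression along the gradient direction.
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)
    return cv2.Canny(blurred, canny_low, canny_low * ratio)
```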
10. The system for visual navigation and positioning of a low-speed unmanned aerial vehicle of claim 7, wherein said module M3 employs: for a structured environment with regularized linear features and an unstructured environment with irregular field features in the downward-looking field of view, adopting different positioning calculation algorithms, based on the obtained image coordinate values of the feature points together with the attitude information and height information, to calculate the spatial positions of the corresponding pixel points in the three-dimensional world coordinate system;
the structured environment with regularized linear features is handled based on the image coordinates $X_{uv}$ of the feature points and the world coordinates $X_W$ of the feature points calibrated in the structured environment: the deviation $\Delta X_W$ between the measured value $\hat{X}_W$ of the world coordinates of the extracted feature points and their actual value is used to correct the aircraft position $X_C$, and the corrected data then enter the next iteration of the position-solving algorithm, whose main model is:

$$\hat{X}_W = T_k^{CW} \left( s\, K^{-1} X_{uv} \right)$$

$$\Delta X_W = \hat{X}_W - X_W$$

$$X_C^{k+1} = X_C^{k} - \Delta X_W$$

wherein $s$ denotes the scale factor, namely the ratio between the logical resolution and the physical resolution of the camera; $T_k^{CW}$ denotes the conversion matrix from the camera coordinate system to the world coordinate system for the $k$-th frame image, obtained from the attitude information and height information; and $K^{-1}$ denotes the inverse of the camera intrinsic parameter matrix;
the unstructured environment with irregular features in the field of view is handled based on the image coordinate pairs $p_1$ and $p_2$ of the feature points matched between two adjacent frames: the rotation matrix $R$ and translation vector $T$ of the inter-frame transformation are obtained by solving the fundamental matrix $F$ and the essential matrix $E$ of epipolar geometry, $R$ and $T$ are verified and compensated using the attitude information and height information to recover the motion state of the camera in three-dimensional space, and the coordinate transformation $M_{fix}$ of the fixed mounting attitude relationship then yields the navigation and positioning information of the aircraft.
CN202310286582.3A 2023-03-21 2023-03-21 Visual navigation positioning method and system for low-speed unmanned aerial vehicle Pending CN116429098A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310286582.3A CN116429098A (en) 2023-03-21 2023-03-21 Visual navigation positioning method and system for low-speed unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310286582.3A CN116429098A (en) 2023-03-21 2023-03-21 Visual navigation positioning method and system for low-speed unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
CN116429098A (en) 2023-07-14

Family

ID=87086440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310286582.3A Pending CN116429098A (en) 2023-03-21 2023-03-21 Visual navigation positioning method and system for low-speed unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN116429098A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058209A (en) * 2023-10-11 2023-11-14 山东欧龙电子科技有限公司 Method for calculating depth information of visual image of aerocar based on three-dimensional map
CN117058209B (en) * 2023-10-11 2024-01-23 山东欧龙电子科技有限公司 Method for calculating depth information of visual image of aerocar based on three-dimensional map
CN118521764A (en) * 2024-07-23 2024-08-20 西北工业大学 Unmanned aerial vehicle to ground target combined positioning method, device and system under refusing environment

Similar Documents

Publication Publication Date Title
CN107567412B (en) Object position measurement using vehicle motion data with automotive camera
CN103149939B (en) A kind of unmanned plane dynamic target tracking of view-based access control model and localization method
Liu et al. Towards robust visual odometry with a multi-camera system
CN101598556A (en) Unmanned plane vision/inertia integrated navigation method under a kind of circumstances not known
CN116429098A (en) Visual navigation positioning method and system for low-speed unmanned aerial vehicle
WO2018182524A1 (en) Real time robust localization via visual inertial odometry
CN115187798A (en) Multi-unmanned aerial vehicle high-precision matching positioning method
CN106529587A (en) Visual course identification method based on target point identification
CN109871739B (en) Automatic target detection and space positioning method for mobile station based on YOLO-SIOCTL
WO2019144289A1 (en) Systems and methods for calibrating an optical system of a movable object
KR102490521B1 (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
WO2018142533A1 (en) Position/orientation estimating device and position/orientation estimating method
KR20210034253A (en) Method and device to estimate location
Nussberger et al. Robust aerial object tracking in images with lens flare
CN109341685B (en) Fixed wing aircraft vision auxiliary landing navigation method based on homography transformation
CN113971697A (en) Air-ground cooperative vehicle positioning and orienting method
Del Pizzo et al. Reliable vessel attitude estimation by wide angle camera
CA3064640A1 (en) Navigation augmentation system and method
WO2020154911A1 (en) Sky determination in environment detection for mobile platforms, and associated systems and methods
CN117330052A (en) Positioning and mapping method and system based on infrared vision, millimeter wave radar and IMU fusion
CN117029870A (en) Laser odometer based on road surface point cloud
Moore et al. A method for the visual estimation and control of 3-DOF attitude for UAVs
CN114842224A (en) Monocular unmanned aerial vehicle absolute vision matching positioning scheme based on geographical base map
Li-Chee-Ming et al. Augmenting visp’s 3d model-based tracker with rgb-d slam for 3d pose estimation in indoor environments
Martinez et al. A multi-resolution image alignment technique based on direct methods for pose estimation of aerial vehicles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination