CN108446634B - Aircraft continuous tracking method based on combination of video analysis and positioning information - Google Patents
Aircraft continuous tracking method based on combination of video analysis and positioning information
- Publication number
- CN108446634B (application CN201810230742.1A)
- Authority
- CN
- China
- Prior art keywords
- target
- tracking
- aircraft
- image
- target aircraft
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/251—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
- G06V10/507—Summing image-intensity values; Histogram projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20156—Automatic seed setting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
The invention relates to an aircraft continuous tracking method based on the combination of video analysis and positioning information. The method calculates the image coordinates of a target aircraft in a video image from the aircraft's positioning information, performs single-target tracking of the aircraft in the surveillance video, and compares the two positions: if the distance between them continuously exceeds an effective tracking threshold, the target is considered lost by the single-target tracker. After loss is confirmed, a search frame is determined from the effective tracking range centered on the aircraft's image coordinate point in the video image, and the target aircraft is re-identified within it. HOG features are preferably used as the feature descriptor for target aircraft detection, and the position model and scale model of the tracking filter are computed. The method can detect the loss of the target aircraft in time, requires little data processing to recover the lost target, and is mainly intended for continuous tracking of targets such as aircraft and similar applications.
Description
Technical Field
The invention relates to an aircraft continuous tracking method based on the combination of video analysis and positioning information. It belongs to the field of video surveillance and computer information technology, and is mainly intended for continuous tracking of targets such as aircraft and similar applications.
Background
Existing aircraft continuous tracking technology falls into two main categories. The first is based on existing aircraft positioning information: the latitude and longitude of the aircraft carried in the positioning information are mapped to the image coordinates of an aircraft surveillance or tracking system, thereby identifying and tracking the aircraft. The latitude and longitude (positioning) information can be obtained in various ways, such as secondary surveillance radar (SSR), automatic dependent surveillance-broadcast (ADS-B), or multilateration (multi-base-station positioning) systems. The second category is based on video analysis within the aircraft surveillance or tracking system itself, identifying and tracking the aircraft through video analysis. At present, single-target tracking based on video analysis follows two main technical routes: methods based on generative models and methods based on discriminative models. A generative method models the target region in the current video frame and searches the next frame for the region most similar to the model, which becomes the predicted position of the designated tracking target; the main methods include Kalman filtering, particle filtering, mean-shift, ASMS, and the like. A discriminative method combines image features with machine learning: the target region of the current frame serves as a positive sample and the background as negative samples, a classifier is trained, and the trained classifier is used to find the optimal region in the next frame.
The above methods each suit particular applications and requirements, but each has shortcomings. Methods based on aircraft positioning information are severely limited in practice by the low update frequency and low accuracy of the positioning information, and cannot meet practical requirements such as aircraft listing. Methods based on video analysis of an aircraft surveillance or tracking system face several difficulties: deformation of the target's appearance, illumination changes, fast motion and motion blur, background clutter similar to the target, out-of-plane rotation, in-plane rotation, scale changes, occlusion, the target moving out of view, and so on. All of these can cause the selected tracking target to be lost, i.e. the tracking frame separates from the actual target in the video, which hinders practical deployment.
The first problem when dealing with loss is how to determine that target tracking has been lost, and when; however, no existing algorithm can truly determine when the target is lost. In the prior art, the disappearance of the tracking frame from the video is usually treated as target loss, in which case the lost target can be recovered by target re-identification: the features of the tracked target are recorded before it disappears, global target detection is run on the video after it disappears, the detected targets are matched against the recorded features, and a successful match recovers the target. However, existing target re-identification algorithms all rely on a sliding-window region-selection strategy over the whole video image; they are untargeted, have high time complexity, and produce redundant windows. Even deep-learning-based target detection requires detection over the whole video image and is slow. More importantly, in most real cases of target loss, the tracking frame is still in the picture but no longer on the actual target, causing false tracking; this is more dangerous than the disappearance of the tracking frame and harder to discover and correct, and the prior art cannot detect or correct target loss while the tracking frame still exists.
Disclosure of Invention
To solve this technical problem, the invention provides an aircraft continuous tracking method based on the combination of video analysis and positioning information. The method can determine the loss of a target aircraft in time, and the amount of data processing needed to recover the lost target is small.
The technical scheme of the invention is as follows: an aircraft continuous tracking method based on the combination of video analysis and positioning information. The image coordinates of the target aircraft in the video image are calculated from the physical-world positioning coordinates contained in the target aircraft positioning information; single-target tracking of the target aircraft is performed on the surveillance video; and the image coordinates of the target aircraft tracking frame used for single-target tracking are compared with the image coordinates of the target aircraft. If, over a set number of consecutive video frames (for example, 180 consecutive frames), the distance between the two does not continuously exceed an effective tracking threshold (for example, twice the width of the target aircraft tracking frame), single-target tracking is considered not to have lost the target; if the distance continuously exceeds the effective tracking threshold, single-target tracking is considered to have lost the target. The image coordinates of the target aircraft tracking frame are the image coordinates of its center point; this point corresponds to the position of the target aircraft given by the target aircraft positioning information, which is not aircraft-related information derived from the video images.
Matching between the target aircraft and the target aircraft tracking frame is preferably achieved as follows: from the image coordinates of all aircraft in the video image, the distance between each aircraft's image coordinates and the image coordinates of the target aircraft tracking frame is calculated, and the aircraft closest to the tracking frame is found. If, over a set number of consecutive video frames (such as 180 consecutive frames), the distance between this aircraft and the tracking frame does not continuously exceed the effective tracking threshold, the match is confirmed: the closest aircraft is the aircraft being tracked by the single-target tracker, i.e. the target aircraft corresponding to the tracking frame, and the aircraft identification in its positioning information, such as the flight number, can be used as the identification of the single-target-tracked aircraft. A minimal sketch of this matching and loss-confirmation logic is given below.
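The following Python sketch illustrates the matching and loss-confirmation logic just described, under stated assumptions; the names (`aircraft_positions`, `frame_width`, `LOSS_FRAMES`) are illustrative, not from the patent.

```python
import math

LOSS_FRAMES = 180  # set number of consecutive frames (example value from the patent)

def nearest_aircraft(aircraft_positions, box_center):
    """Return (index, distance) of the aircraft closest to the tracking-frame center.

    aircraft_positions: list of (x, y) image coordinates computed from positioning info.
    box_center: (x, y) image coordinates of the tracking-frame center.
    """
    dists = [math.hypot(x - box_center[0], y - box_center[1])
             for x, y in aircraft_positions]
    idx = min(range(len(dists)), key=dists.__getitem__)
    return idx, dists[idx]

class LossDetector:
    """Counts consecutive frames in which the nearest aircraft lies outside
    the effective tracking threshold (twice the tracking-frame width)."""

    def __init__(self):
        self.exceed_count = 0

    def update(self, distance, frame_width):
        threshold = 2.0 * frame_width  # effective tracking threshold
        if distance <= threshold:
            self.exceed_count = 0       # still effectively tracked
            return False
        self.exceed_count += 1
        return self.exceed_count >= LOSS_FRAMES  # True -> target confirmed lost
```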
After confirming that single-target tracking has lost the target, the target aircraft is re-identified. One preferred way is as follows: taking the image coordinate point of the target aircraft in a video image (for example, the current frame at which re-identification starts) as the center point, a search frame (a search area in the video image) is determined from the effective tracking range of that coordinate point (the range covering all areas whose distance from it does not exceed the effective tracking threshold), the target aircraft is re-identified within it, and the new target aircraft tracking frame obtained by re-identification is used as the tracking frame of the current frame for subsequent single-target tracking.
Target aircraft re-identification preferably uses the model and/or model parameters of the tracking detector that were in use while the distance between the target aircraft and the target aircraft tracking frame did not exceed the effective tracking threshold.
Preferably, HOG (Histogram of Oriented Gradients) features are used as the feature descriptor for target aircraft detection.
Preferably, the DSST (Discriminative Scale Space Tracker) algorithm is used to calculate the position model and scale model of the tracking filter.
Scale estimation can be done with a one-dimensional filter, and a two-dimensional filter is used for position estimation.
When the distance between the image coordinates of the target aircraft in the current frame and the image coordinates of the target aircraft tracking frame does not exceed the effective tracking threshold, the filter position model and scale model are updated according to the set learning rate parameter η.
When that distance exceeds the effective tracking threshold, the filter position model and scale model are preferably not updated, i.e. the last preceding filter position model and scale model are retained.
Preferably, on the starting-frame image (also called the first frame), the starting seed point is manually selected inside the outline of the target aircraft, and the contour region of the target aircraft is obtained by a region-growing image segmentation method; the extreme abscissa and ordinate values xmin, xmax, ymin, ymax of the contour region determine the corner points (xmin, ymin), (xmax, ymin), (xmin, ymax), (xmax, ymax), which give the starting target aircraft tracking frame (P1(xmin, ymin), P2(xmax, ymax)).
In HOG feature extraction, the color image is usually converted into a grayscale image and Gamma correction is performed as actually needed; preferably γ = 0.5 is used for the Gamma correction.
The gradient histogram statistics for a cell may be computed as follows: gradient directions are mapped into a 180° range divided into a 9-dimensional feature vector; each pixel's gradient magnitude is projected as a weight, and its gradient direction determines into which dimension the projection falls.
The invention has the following beneficial effects: by comparing the positioning-derived position of the target aircraft with the real-time tracking position obtained from video analysis, the method can detect in time that the tracking frame, although still present, is no longer at the actual position of the target aircraft or within its effective tracking range. This avoids the misjudgment caused by the mere existence of the tracking frame, discovers the loss of the tracking target in time, allows the aircraft lost by video tracking to be recovered promptly from its actual position, reduces the data processing required for target re-identification, and restores effective tracking in time.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of the single-target tracking filter according to the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
As shown in FIG. 1, the invention combines video analysis and positioning information to achieve continuous tracking of an aircraft. On one hand, the position (image coordinates) of the target aircraft on the video image is calculated from the aircraft's positioning information; on the other hand, single-target tracking of the target aircraft is performed continuously by video analysis. The tracking frame of the aircraft is computed and the two positions are compared; when the distance between them exceeds a preset threshold, the tracked aircraft is determined to be lost. When tracking is lost in this way, the position of the aircraft is calculated from aircraft positioning information provided by the airport, airspace management system, or the like; using the filter information stored for the target aircraft before the loss, filter-based detection is performed in the vicinity of that position, and the tracked target aircraft is recovered and tracking continues.
The method mainly comprises the following steps:
Step one: manual interactive selection of the target aircraft
The surveillance video stream is monitored in real time, and the starting tracking frame of the target aircraft is obtained using a region-growing image segmentation method.
The image segmentation method based on the region growing comprises the following specific steps:
1) selecting a starting seed point c (x, y) in a video frame;
2) taking the seed point c (x, y) as a center, and performing recursive traversal on the neighborhood pixels;
3) for each neighborhood pixel N(x′, y′), a discriminant Q(c(x, y), N(x′, y′)) is designed according to the prior art to determine whether the pixel belongs to the aircraft;
4) if the discriminant is true, the neighborhood pixel N(x′, y′) is set as a new seed point and added to the result set (the same region as the seed point), and step 2) is entered; otherwise the recursion exits and the next neighborhood pixel is examined from step 3).
This recursive traversal finds the contour region of the selected aircraft. The minimum and maximum horizontal and vertical coordinates xmin, xmax, ymin, ymax on the boundary of the found region give the starting target aircraft tracking frame (P1(xmin, ymin), P2(xmax, ymax)), after which single-target tracking can begin; a minimal sketch of this step is given below.
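The Python sketch below illustrates the region-growing step. The discriminant Q is left abstract in the patent; here a simple grayscale-difference test is assumed for illustration, and an explicit stack replaces the recursion to avoid depth limits.

```python
import numpy as np

def region_grow(gray, seed, tol=12):
    """Grow a region from `seed` = (x, y) over pixels whose gray value is
    within `tol` of the seed pixel; returns the set of region pixels.

    The discriminant Q(c, N) here is |gray(N) - gray(seed)| <= tol, an
    illustrative assumption; the patent leaves Q to the prior art."""
    h, w = gray.shape
    sx, sy = seed
    ref = float(gray[sy, sx])
    region, stack = set(), [(sx, sy)]
    while stack:                          # iterative version of the recursion
        x, y = stack.pop()
        if (x, y) in region or not (0 <= x < w and 0 <= y < h):
            continue
        if abs(float(gray[y, x]) - ref) > tol:
            continue                      # discriminant false: try next neighbor
        region.add((x, y))                # discriminant true: new seed point
        stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return region

def bounding_box(region):
    """Corner points (P1, P2) of the starting tracking frame."""
    xs = [p[0] for p in region]; ys = [p[1] for p in region]
    return (min(xs), min(ys)), (max(xs), max(ys))
```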
Step two: single-target tracking of the target aircraft
Here, the HOG feature is used as a feature description of the target aircraft.
The HOG feature extraction process is as follows:
1) Grayscale the video frame. Since the extracted HOG features are texture features and color information plays no role, the color image is converted into a grayscale image;
2) Normalize the grayscale image. To improve the robustness of the detector to interference factors such as illumination, Gamma correction is applied to the image to normalize it as a whole, adjusting the image contrast, reducing the influence of local illumination and shadow, and suppressing noise:
G(x, y) = F(x, y)^γ    (1)
where G(x, y) is the normalized image and F(x, y) is the grayscale image processed in step 1).
3) Calculate the gradients of the image pixels: the gradients in the horizontal and vertical directions of each pixel are calculated according to equation (2), and the gradient magnitude and direction at each pixel position according to equation (3).
The horizontal and vertical gradients of the image at pixel (x, y) are:
G_x(x, y) = G(x+1, y) − G(x−1, y),  G_y(x, y) = G(x, y+1) − G(x, y−1)    (2)
The gradient magnitude and gradient direction at pixel (x, y) are then:
m(x, y) = sqrt( G_x(x, y)^2 + G_y(x, y)^2 ),  θ(x, y) = arctan( G_y(x, y) / G_x(x, y) )    (3)
4) histogram of gradient direction of statistical Cell (Cell): the image is divided into small cells, gradient directions are mapped into a range of 180 degrees, gradient amplitudes of pixels are projected as weights, and the gradient direction is used to determine which dimension is to be projected, and the image can be generally divided into 9 dimensions.
5) Histogram of gradient directions per Block: the gradient histogram within each cell forms the descriptor of that cell; several cells compose a larger descriptor called a Block, whose gradient-direction histogram is the concatenation of the feature vectors of the cells inside it. Because local illumination and foreground-background contrast vary, gradient magnitudes span a very wide range, so local contrast normalization of the gradients is needed. The strategy here is to normalize the contrast of each block, typically using L2-norm normalization.
This yields a descriptor for each block. A block may be rectangular or circular, depending on the object feature to be extracted. Once the features are obtained, the target image is scanned with a step of one cell to detect whether a matching object feature is present; the matching may be based on a similarity measure such as the Euclidean distance.
6) Histogram of gradient directions per Window: the HOG feature vectors of all blocks in the window are concatenated to obtain the window's HOG feature.
7) Histogram of gradient directions of the whole image: the image can be divided into several non-overlapping windows, and concatenating all window feature vectors yields the HOG feature of the whole image. If the window size equals the image size, the window's HOG feature is the HOG feature of the whole image, and this is the feature vector finally used for tracking.
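As a concrete illustration, the sketch below extracts such a HOG descriptor with scikit-image; the parameter values (9 orientations, cell and block sizes) follow the description above, and `transform_sqrt=True` applies the γ = 0.5 power-law (Gamma) correction. This is a sketch of an equivalent pipeline, not the patent's own implementation.

```python
from skimage.color import rgb2gray
from skimage.feature import hog

def hog_descriptor(frame_rgb):
    """HOG feature vector of a tracking window, following steps 1)-7):
    grayscale -> gamma correction -> gradients -> 9-bin cell histograms
    -> L2-normalized blocks -> concatenated window descriptor."""
    gray = rgb2gray(frame_rgb)            # step 1): color plays no role
    return hog(
        gray,
        orientations=9,                   # 9-bin cell histogram over 180 degrees
        pixels_per_cell=(8, 8),           # cell size (illustrative choice)
        cells_per_block=(2, 2),           # block of cells, concatenated
        block_norm='L2',                  # local contrast normalization by L2 norm
        transform_sqrt=True,              # gamma correction with gamma = 0.5
        feature_vector=True,              # concatenate into one vector
    )
```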
Scale estimation can typically be performed with a one-dimensional filter, position estimation with a two-dimensional filter, and scale-space localization of the target aircraft with a three-dimensional filter. The relationships between input, output and filters are shown in FIG. 2.
Consider a d-dimensional HOG feature map representation of the target aircraft, and let f be the tracking rectangle (window) extracted from this HOG feature map. The feature dimensions of f are indexed by l ∈ {1, …, d}, each denoted f^l. The goal is to find an optimal filter h containing one filter h^l per feature dimension. This is achieved by minimizing the cost function:
ε = || Σ_{l=1..d} h^l ⋆ f^l − g ||^2 + λ Σ_{l=1..d} || h^l ||^2    (4)
Here g is the desired Gaussian output response associated with f, the center of the response output being the center position of the tracked target aircraft, and the parameter λ ≥ 0 controls the influence of the regularization term. The solution of equation (4) is obtained by transforming to the frequency domain:
H^l = ( Ḡ F^l ) / ( Σ_{k=1..d} F̄^k F^k + λ )    (5)
In the above equation, H, F and G are the frequency-domain representations of the spatial variables h, f and g after the discrete Fourier transform, and the bar over a symbol denotes the complex conjugate of the corresponding frequency-domain representation.
The regularization parameter λ removes the problem of zero frequency components of f in the frequency domain. An optimal filter could be obtained by minimizing the output error over all blocks, but this requires solving a d × d system of linear equations for every pixel, which is too time-consuming for online learning. To obtain a robust approximation, the numerator and denominator of the time t−1 correlation filter H^l of equation (5) are represented respectively as:
A_{t−1}^l = Ḡ_{t−1} F_{t−1}^l    (6)
B_{t−1} = Σ_{k=1..d} F̄_{t−1}^k F_{t−1}^k    (7)
The filter for time t is then updated as:
A_t^l = (1 − η) A_{t−1}^l + η Ḡ_t F_t^l    (8)
B_t = (1 − η) B_{t−1} + η Σ_{k=1..d} F̄_t^k F_t^k    (9)
η in equations (8) and (9) is the learning rate parameter. Since many candidate tracking frames with different centers and sizes are sampled around the target tracking frame, a score function must be computed; the highest score, i.e. the maximum value of y in equation (10), gives the new target aircraft state:
y = F^{−1} { ( Σ_{l=1..d} Ā^l Z^l ) / ( B + λ ) }    (10)
where Z^l is the frequency-domain representation of the HOG features of the sampled tracking frame.
The above flow can be expressed as:
Input:
- input image I_t;
- position p_{t−1} and scale s_{t−1} of the previous frame.
Output:
- estimated target position p_t and scale s_t.
Displacement estimation:
- in image I_t, sample around position p_{t−1} at scale s_{t−1} to extract z_trans;
- find the maximum of y_trans to set the new target position p_t.
Scale estimation:
- in image I_t, sample over scales at position p_t and scale s_{t−1} to extract z_scale;
- find the maximum of y_scale to set the new target scale s_t.
Model update:
- in image I_t, extract the samples f_trans and f_scale at position p_t and scale s_t;
- update the translation model A_t^trans, B_t^trans and the scale model A_t^scale, B_t^scale.
In this way, single-target tracking of the target aircraft is realized.
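A minimal numpy sketch of the DSST-style filter learning, response, and update described by equations (4)-(10) follows. It is a single-channel simplification under stated assumptions (one feature dimension, fixed window size); `eta` and `lam` correspond to η and λ.

```python
import numpy as np

def gaussian_response(shape, sigma=2.0):
    """Desired output g: a Gaussian peak at the window center."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    return np.exp(-((x - w // 2) ** 2 + (y - h // 2) ** 2) / (2 * sigma ** 2))

class CorrelationFilter:
    """Single-channel simplification of the DSST filter (eqs. (4)-(10))."""

    def __init__(self, template, lam=1e-2, eta=0.025):
        self.lam, self.eta = lam, eta
        self.G = np.fft.fft2(gaussian_response(template.shape))
        F = np.fft.fft2(template)
        self.A = np.conj(self.G) * F         # numerator, eq. (6)
        self.B = np.conj(F) * F              # denominator, eq. (7)

    def respond(self, sample):
        """Score map y of eq. (10); its argmax is the new target state."""
        Z = np.fft.fft2(sample)
        return np.real(np.fft.ifft2(np.conj(self.A) * Z / (self.B + self.lam)))

    def update(self, sample):
        """Running update of eqs. (8)-(9) with learning rate eta."""
        F = np.fft.fft2(sample)
        self.A = (1 - self.eta) * self.A + self.eta * np.conj(self.G) * F
        self.B = (1 - self.eta) * self.B + self.eta * np.conj(F) * F
```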
Step three: calculating image coordinates from the longitude and latitude of each aircraft
1) Coordinate calibration based on selected points
First, j image points are manually selected on the original video image as calibration points; the image pixel coordinates of these calibration points are obtained in the video image with an image processing tool and recorded. Then, the real-scene markers corresponding to the chosen calibration points in the video image, or the latitude-longitude coordinates of those marker points, are found in a real-scene map calibration tool, and the latitude-longitude coordinate values of the calibration points are recorded.
The selected principle of the calibration point is as follows:
(a) To allow the real-scene markers corresponding to the video-image markers, or the latitude-longitude coordinates of those markers, to be found accurately, markers with landmark character should be selected, such as the corner points or mounting points of road signs, the corner points of traffic lane lines, or the corner points of ground fixtures; this makes the uniqueness of each calibration point easy to establish and the calibration convenient;
(b) the number j of calibration points should be at least 4, and preferably 8 or more for better results, for reasons explained below;
(c) no three of the selected calibration points may lie on the same straight line;
(d) the selected calibration points should be distributed uniformly over the whole picture; they must not be very dense in some local areas and very sparse in others;
(e) the larger the picture, the more calibration points are needed for high calculation accuracy.
2) Coordinate mapping calculation based on a nearest-neighbor least-squares nonlinear model
That is, the latitude-longitude coordinates (Sx, Sy) of the physical world are mapped to the corresponding image pixel coordinates (Dx, Dy) by a coordinate-mapping calculation based on a nearest-neighbor least-squares nonlinear model.
Because the airport surface is extremely flat, only the conversion of GPS information (longitude and latitude), without altitude, into image coordinates needs to be considered; the conversion between the latitude-longitude coordinate system and the image coordinate system can therefore be carried out between two two-dimensional planes.
The transformation from latitude-longitude coordinates to video image coordinates is thus a two-dimensional-to-two-dimensional mapping and is one-to-one.
The least squares method is a mathematical optimization technique: it finds the best functional match to the data by minimizing the sum of squared errors, so that unknown data can easily be estimated with a minimal sum of squared errors against the actual data. Because the mapping from physical-world latitude-longitude coordinates to video image coordinates is not a simple linear one, the invention uses a nonlinear least-squares function model to approximate the image coordinates of non-calibration points.
Because surface conditions differ across the physical world, building a single nonlinear function model for the whole picture would introduce large errors. The invention therefore builds a nonlinear function model for each latitude-longitude by nearest-neighbor search and solves the parameters of the corresponding model by the least squares method, thereby obtaining the image coordinates corresponding to the selected latitude-longitude. That is, for each latitude-longitude coordinate (Sx, Sy), the squared-difference distance to the latitude-longitude of every calibration point is computed first, and the several calibration points nearest (Sx, Sy) are then found.
Let the calibration points have coordinates (Sx_1, Sy_1), (Sx_2, Sy_2), …, (Sx_j, Sy_j) (j ≥ 8), and denote the selected point by (Sx, Sy); the squared-difference distance is then:
Dist1_p = (Sx_p − Sx)^2 + (Sy_p − Sy)^2,  p = 1, 2, …, j    (11)
The j distances Dist1_p (p = 1, 2, …, j) so obtained are then quicksorted by size: one partitioning pass splits the data into two independent parts, with every element of one part smaller than every element of the other; each part is then quicksorted in the same way, the whole process proceeding recursively until the data form an ordered sequence.
Using quicksort, the 8 calibration points closest to the selected point (Sx, Sy) are found, and the least-squares nonlinear model parameters are then solved from these 8 calibration points.
There are many types of nonlinear model; the invention uses a quadratic polynomial model, i.e.
ψ(m, n) = a + b·m + c·n + d·m·n + e·m^2 + f·n^2    (12)
where ψ(m, n) represents an image coordinate corresponding to the selected point (Sx, Sy), a, b, c, d, e, f are the corresponding model parameters, and (m, n) are the physical-world longitude and latitude values of the selected point (Sx, Sy).
After repeated verification and exploration, the fit remains good when the m^2 and n^2 terms are ignored, so the nonlinear model can be simplified to:
ψ(m, n) = a + b·m + c·n + d·m·n    (13)
the method of solving the nonlinear model parameters is described below. Without loss of generality, let the latitude and longitude coordinates of the 8 index points closest to the selected point (Sx, Sy) be:
(Sx1,Sy1),(Sx2,Sy2),(Sx3,Sy3),(Sx4,Sy4),(Sx5,Sy5),(Sx6,Sy6),(Sx7,Sy7),(Sx8,Sy8),
the horizontal coordinate and the vertical coordinate of the corresponding image are
Assuming that coefficients a, b, c, d are present, the following equation is satisfied:
formula (14) can be written as
Cu ═ v formula (15)
Wherein
Obtained by converting the formula (15)
u=C-1v type (19)
Since the matrix A is a non-square matrix, here A-1A pseudo-inverse matrix (also called generalized inverse matrix) which is the matrix A, i.e.
C-1=(CTC)-1CTFormula (20)
By substituting formula (19) for formula (21), a compound of formula
Similarly, if ω is the row vector composed of the vertical coordinate values of the calibration point, that is
The ordinate value phi of the selected point (Sx, Sy) is
With the above method, the image coordinate values corresponding to each longitude and latitude can be found.
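The sketch below implements this nearest-neighbor least-squares mapping with numpy, under the assumptions above (8 nearest calibration points, the bilinear model of equation (13)); the function and variable names are illustrative.

```python
import numpy as np

def latlon_to_image(sel, calib_latlon, calib_pixels, k=8):
    """Map one latitude-longitude point `sel` = (Sx, Sy) to image coordinates.

    calib_latlon:  (j, 2) array of calibration-point latitude-longitude values.
    calib_pixels:  (j, 2) array of their image pixel coordinates (Dx, Dy).
    """
    sel = np.asarray(sel, dtype=float)
    # squared-difference distances of eq. (11), then the k nearest points
    d2 = ((calib_latlon - sel) ** 2).sum(axis=1)
    near = np.argsort(d2)[:k]
    P, Q = calib_latlon[near], calib_pixels[near]
    # design matrix of eq. (16): rows [1, m, n, m*n] for the model of eq. (13)
    C = np.column_stack([np.ones(k), P[:, 0], P[:, 1], P[:, 0] * P[:, 1]])
    # least squares solves u = (C^T C)^{-1} C^T v for each pixel axis (eq. (21))
    u_x, _, _, _ = np.linalg.lstsq(C, Q[:, 0], rcond=None)
    u_y, _, _, _ = np.linalg.lstsq(C, Q[:, 1], rcond=None)
    row = np.array([1.0, sel[0], sel[1], sel[0] * sel[1]])
    return float(row @ u_x), float(row @ u_y)
```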
The calculated image coordinates of each aircraft are then matched against the single-target-tracked aircraft: the latitude-longitude of each aircraft gives its center-point position in the image, and the distance between the center point of the single-target-tracked aircraft and each computed center point is calculated as
Dist2_q = (Dx_q − Dx)^2 + (Dy_q − Dy)^2,  q = 1, 2, …, k    (24)
where (Dx_q, Dy_q) is the center-point position computed in the image for each aircraft, (Dx, Dy) is the center point of the single-target-tracked aircraft, and there are k aircraft in total.
The computed distance values are quicksorted to find the aircraft closest to the single-target-tracked aircraft. If, over 180 consecutive frames, the distance to this closest aircraft stays within twice the width of the single-target tracking frame, the match is considered successful, and the flight code of the single-target-tracked aircraft is obtained.
Thereafter, for every successfully detected and matched target aircraft, the distance between the position computed from its latitude-longitude and the center point of the single-target tracker can be calculated. If this distance is within twice the width of the single-target tracking frame, single-target tracking is considered not to have lost the target, and the position model A_t^trans, B_t^trans and scale model A_t^scale, B_t^scale of the single-target tracking filter are updated. If the distance exceeds twice the tracking-frame width, the last position model and scale model obtained while the distance was still within twice the tracking-frame width are saved, and the models are not updated. If the target is not recovered within 180 frames, single-target tracking is considered to have lost the target, and the last tracking-frame width value from when the distance was within twice the tracking-frame width is saved.
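A short sketch of this update-or-freeze policy, reusing the `CorrelationFilter` and `LossDetector` sketches above; the wiring is illustrative, not the patent's literal implementation.

```python
def track_step(filt, detector, sample, distance, frame_width, saved):
    """Update the filter only while the positioning-derived distance agrees
    with the tracker; otherwise freeze the last trusted models.

    filt:     CorrelationFilter from the sketch above (models A, B).
    detector: LossDetector from the matching sketch (180-frame counter).
    saved:    dict holding the last trusted models and frame width.
    Returns True once the 180-frame loss condition is confirmed.
    """
    if distance <= 2.0 * frame_width:          # within effective tracking range
        filt.update(sample)                    # eqs. (8)-(9), learning rate eta
        saved.update(A=filt.A.copy(), B=filt.B.copy(), width=frame_width)
        detector.exceed_count = 0
        return False
    if 'A' in saved:                           # freeze: restore last trusted models
        filt.A, filt.B = saved['A'], saved['B']
    return detector.update(distance, frame_width)
```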
Step four: recovering the single-target-tracked aircraft
The latitude and longitude of the aircraft matched in the previous step are converted to image coordinates to obtain the area where the lost aircraft is now present; the target aircraft lies within this area.
Centered on the aircraft center point computed from its latitude and longitude, a search box is constructed from the saved tracking-frame width value (the last value recorded while the distance was within twice the width of the single-target tracking frame), enlarged to twice that size in the horizontal and vertical directions; this search box is sampled as a HOG-feature window to give the sample frame z.
The sample z from the search box is scored, and the value of y computed, using the single-target tracking filter position model and scale model last saved while the distance was within twice the width of the single-target tracking frame.
If the difference between the maximum and the second-highest value of y is small, the target may not yet have reappeared in the picture, e.g. because of occlusion, and the calculation continues; if the difference is large, the target has appeared: taking the maximum gives the position and scale of the lost aircraft, single-target tracking can be restarted at that position and scale, and the originally lost aircraft is recovered.
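A sketch of this re-identification decision follows, reusing the `CorrelationFilter` sketch above; the peak-to-second-peak test and the margin value are illustrative assumptions, since the patent fixes no numeric criterion.

```python
import numpy as np

def reidentify(filt, search_patch, margin=0.2):
    """Score the search box with the saved filter and accept the peak only
    if it clearly dominates the second-highest response."""
    y = filt.respond(search_patch)             # response map of eq. (10)
    flat = np.sort(y.ravel())
    best, second = flat[-1], flat[-2]
    if best - second < margin * abs(best):
        return None                            # likely occluded; keep searching
    py, px = np.unravel_index(np.argmax(y), y.shape)
    return px, py                              # recovered position in the box
```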
Unless otherwise specified, and subject to any further limitation of one technical means by another, the technical means disclosed by the invention may be combined arbitrarily to form a number of different technical schemes.
Claims (9)
1. An aircraft continuous tracking method based on the combination of video analysis and positioning information, characterized in that: image coordinates of a target aircraft in a video image are obtained by calculation from the physical-world positioning coordinates of the target aircraft; single-target tracking of the target aircraft is performed on a surveillance video; the image coordinates of the target aircraft tracking frame used for single-target tracking are compared with the image coordinates of the target aircraft; if the distance between the two does not continuously exceed an effective tracking threshold over a set number of consecutive video frames, single-target tracking is considered not to have lost the target, and if the distance continuously exceeds the effective tracking threshold, single-target tracking is considered to have lost the target, so that while the tracking frame still exists it is discovered in time that the tracking frame is not at the actual position of the target aircraft or within its effective tracking range, avoiding the misjudgment caused by the mere existence of the tracking frame, the target aircraft positioning information not being aircraft-related information derived from the video image; after confirming that single-target tracking has lost the target, the image coordinate point of the target aircraft in the video image is taken as a center point, a search frame is determined from the effective tracking range of that coordinate point, re-identification of the target aircraft is performed, and the new target aircraft tracking frame obtained by re-identification is used as the tracking frame of the current frame for subsequent single-target tracking; image coordinates are calculated from the longitude and latitude of each aircraft: first, more than 8 image points are manually selected on the original video image as calibration points, the image pixel coordinates of the calibration points are obtained in the video image with an image processing tool and recorded, then the real-scene markers corresponding to the chosen calibration points in the video image, or the latitude-longitude coordinates of those marker points, are found in a real-scene map calibration tool and the latitude-longitude coordinate values of the calibration points are recorded; a nonlinear function model is established for each latitude-longitude by nearest-neighbor search and the parameters of the corresponding model are solved by the least squares method, thereby obtaining the image coordinates corresponding to the selected latitude-longitude, the 8 calibration points nearest the selected point being found by quicksort and the least-squares nonlinear model parameters then being solved from these 8 calibration points.
2. The method of claim 1, wherein the matching of the target aircraft to the target aircraft tracking frame is achieved as follows: from the image coordinates of all aircraft in the video image, the distance between each aircraft's image coordinates and the image coordinates of the target aircraft tracking frame is calculated, and the aircraft closest to the tracking frame is found; if the distance between this aircraft and the tracking frame does not continuously exceed the effective tracking threshold over a set number of consecutive video frames, the match is confirmed, the closest aircraft being the target aircraft tracked by the single target.
3. The method of claim 2, wherein target aircraft re-identification uses the model and/or model parameters of the tracking detector in use while the distance between the target aircraft and the target aircraft tracking frame did not exceed the effective tracking threshold.
4. A method according to any one of claims 1-3, characterized by using the HOG signature as a signature descriptor for target aircraft detection.
5. The method of claim 4, wherein the position model and the scale model of the tracking filter are obtained by the DSST algorithm, and wherein the scale estimation is performed with a one-dimensional filter and a two-dimensional filter is used for the position estimation.
6. The method of claim 5, wherein single-target tracking is performed based on the position model and scale model of the tracking filter; when the distance between the image coordinates of the target aircraft in the current frame and the image coordinates of the target aircraft tracking frame does not exceed the effective tracking threshold, the filter position model and scale model are updated according to the set learning rate parameter η, and when that distance exceeds the effective tracking threshold, the filter position model and scale model are not updated.
7. The method of claim 6, wherein the starting seed point is manually selected within the outline of the target aircraft on the starting-frame image, and the contour region of the target aircraft is obtained by a region-growing image segmentation method; the extreme abscissa and ordinate values xmin, xmax, ymin, ymax of the contour region determine the corner points (xmin, ymin), (xmax, ymin), (xmin, ymax), (xmax, ymax), which give the starting target aircraft tracking frame (P1(xmin, ymin), P2(xmax, ymax)).
8. The method of claim 7, wherein in the HOG feature extraction the color image is converted into a grayscale image and Gamma correction is performed, with γ = 0.5 used for the Gamma correction.
9. The method of claim 8, wherein the gradient histogram of a Cell is computed as follows: gradient directions are mapped into a 180° range divided into a 9-dimensional feature vector, each pixel's gradient magnitude is projected as a weight, and the gradient direction determines into which dimension the projection falls.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810230742.1A CN108446634B (en) | 2018-03-20 | 2018-03-20 | Aircraft continuous tracking method based on combination of video analysis and positioning information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108446634A CN108446634A (en) | 2018-08-24 |
CN108446634B true CN108446634B (en) | 2020-06-09 |
Family
ID=63195462
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810230742.1A Active CN108446634B (en) | 2018-03-20 | 2018-03-20 | Aircraft continuous tracking method based on combination of video analysis and positioning information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108446634B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109543634B (en) * | 2018-11-29 | 2021-04-16 | 达闼科技(北京)有限公司 | Data processing method and device in positioning process, electronic equipment and storage medium |
CN111275766B (en) * | 2018-12-05 | 2023-09-05 | 杭州海康威视数字技术股份有限公司 | Calibration method and device for image coordinate system and GPS coordinate system and camera |
CN109670462B (en) * | 2018-12-24 | 2019-11-01 | 北京天睿空间科技股份有限公司 | Continue tracking across panorama based on the aircraft of location information |
CN111210457B (en) * | 2020-01-08 | 2021-04-13 | 北京天睿空间科技股份有限公司 | Aircraft listing method combining video analysis and positioning information |
CN113763416A (en) * | 2020-06-02 | 2021-12-07 | 璞洛泰珂(上海)智能科技有限公司 | Automatic labeling and tracking method, device, equipment and medium based on target detection |
CN112084952B (en) * | 2020-09-10 | 2023-08-15 | 湖南大学 | Video point location tracking method based on self-supervision training |
CN112686921B (en) * | 2021-01-08 | 2023-12-01 | 西安羚控电子科技有限公司 | Multi-interference unmanned aerial vehicle detection tracking method based on track characteristics |
CN112949588B (en) * | 2021-03-31 | 2022-07-22 | 苏州科达科技股份有限公司 | Target detection tracking method and target detection tracking device |
CN113902775B (en) * | 2021-10-13 | 2024-10-15 | 河北汉光重工有限责任公司 | Target tracking method and device based on ASMS algorithm |
CN113641685B (en) * | 2021-10-18 | 2022-04-08 | 中国民用航空总局第二研究所 | Data processing system for guiding aircraft |
CN115100293A (en) * | 2022-06-24 | 2022-09-23 | 河南工业大学 | ADS-B signal blindness-compensating method |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10037689B2 (en) * | 2015-03-24 | 2018-07-31 | Donald Warren Taylor | Apparatus and system to manage monitored vehicular flow rate |
CN104299243B (en) * | 2014-09-28 | 2017-02-08 | 南京邮电大学 | Target tracking method based on Hough forests |
CN105488815B (en) * | 2015-11-26 | 2018-04-06 | 北京航空航天大学 | A kind of real-time objects tracking for supporting target size to change |
CN106097391B (en) * | 2016-06-13 | 2018-11-16 | 浙江工商大学 | A kind of multi-object tracking method of the identification auxiliary based on deep neural network |
CN106127807A (en) * | 2016-06-21 | 2016-11-16 | 中国石油大学(华东) | A kind of real-time video multiclass multi-object tracking method |
US9911311B1 (en) * | 2016-08-31 | 2018-03-06 | Tile, Inc. | Tracking device location and management |
CN106981073B (en) * | 2017-03-31 | 2019-08-06 | 中南大学 | A kind of ground moving object method for real time tracking and system based on unmanned plane |
CN107292911B (en) * | 2017-05-23 | 2021-03-30 | 南京邮电大学 | Multi-target tracking method based on multi-model fusion and data association |
CN107705324A (en) * | 2017-10-20 | 2018-02-16 | 中山大学 | A kind of video object detection method based on machine learning |
2018-03-20: Application CN201810230742.1A filed in China; granted as CN108446634B (status: Active)
Non-Patent Citations (4)
Title |
---|
Discriminative Correlation Filter with Channel and Spatial Reliability; Alan Lukezic et al.; 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017-11-09; pp. 4847-4856 *
Online Object Tracking: A Benchmark; Yi Wu et al.; 2013 IEEE Conference on Computer Vision and Pattern Recognition; 2013-10-03; pp. 2411-2418 *
Fast calibration method for locator coordinates based on a sequential quadratic programming algorithm; Wang Qing et al.; Journal of Zhejiang University (Engineering Science); 2017-02-28; Vol. 51, No. 2, pp. 319-327 *
Research on a moving-target tracking system based on an improved Hausdorff distance matching algorithm; Liu Xiaoming; China Master's Theses Full-text Database, Information Science and Technology; 2014-04-15; No. 04, p. I140-584 *
Also Published As
Publication number | Publication date |
---|---|
CN108446634A (en) | 2018-08-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108446634B (en) | Aircraft continuous tracking method based on combination of video analysis and positioning information | |
CN109829398B (en) | Target detection method in video based on three-dimensional convolution network | |
Zhou et al. | Efficient road detection and tracking for unmanned aerial vehicle | |
CN104658011B (en) | A kind of intelligent transportation moving object detection tracking | |
CN107633226B (en) | Human body motion tracking feature processing method | |
CN106709472A (en) | Video target detecting and tracking method based on optical flow features | |
CN105279769B (en) | A kind of level particle filter tracking method for combining multiple features | |
CN111028292B (en) | Sub-pixel level image matching navigation positioning method | |
CN109767454B (en) | Unmanned aerial vehicle aerial video moving target detection method based on time-space-frequency significance | |
CN109934131A (en) | A kind of small target detecting method based on unmanned plane | |
CN115049700A (en) | Target detection method and device | |
CN112818905B (en) | Finite pixel vehicle target detection method based on attention and spatio-temporal information | |
CN109410248B (en) | Flotation froth motion characteristic extraction method based on r-K algorithm | |
CN109376641B (en) | Moving vehicle detection method based on unmanned aerial vehicle aerial video | |
CN105913459A (en) | Moving object detection method based on high resolution continuous shooting images | |
CN116863357A (en) | Unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method | |
CN110458064B (en) | Low-altitude target detection and identification method combining data driving type and knowledge driving type | |
CN106203439A (en) | The homing vector landing concept of unmanned plane based on mark multiple features fusion | |
CN113689459B (en) | Real-time tracking and mapping method based on GMM and YOLO under dynamic environment | |
CN109508674A (en) | Airborne lower view isomery image matching method based on region division | |
Dou et al. | Robust visual tracking based on joint multi-feature histogram by integrating particle filter and mean shift | |
CN116665097A (en) | Self-adaptive target tracking method combining context awareness | |
CN116429126A (en) | Quick repositioning method for ground vehicle point cloud map in dynamic environment | |
CN109740468B (en) | Self-adaptive Gaussian low-pass filtering method for extracting black soil organic matter information | |
Geng et al. | A Vision-based Ship Speed Measurement Method Using Deep Learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | ||
Inventor after: Li Xiangbin; Lin Shuhan; Zheng Wentao; Wang Guofu
Inventor before: Li Xiangbin; Zheng Wentao; Wang Guofu
GR01 | Patent grant | ||