CN108230367A - A fast method for tracking and locating a given target in grayscale video - Google Patents
A fast method for tracking and locating a given target in grayscale video Download PDF Info
- Publication number
- CN108230367A CN108230367A CN201711395019.0A CN201711395019A CN108230367A CN 108230367 A CN108230367 A CN 108230367A CN 201711395019 A CN201711395019 A CN 201711395019A CN 108230367 A CN108230367 A CN 108230367A
- Authority
- CN
- China
- Prior art keywords
- matrix
- target
- image
- frame
- formula
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 44
- 239000011159 matrix material Substances 0.000 claims abstract description 83
- 125000004122 cyclic group Chemical group 0.000 claims abstract description 19
- 230000004044 response Effects 0.000 claims abstract description 14
- 230000009466 transformation Effects 0.000 claims abstract description 12
- 238000012545 processing Methods 0.000 claims abstract description 11
- 230000006870 function Effects 0.000 claims description 15
- 238000013507 mapping Methods 0.000 claims description 15
- 230000008569 process Effects 0.000 claims description 13
- 238000004364 calculation method Methods 0.000 claims description 10
- 238000009499 grossing Methods 0.000 claims description 6
- 238000010276 construction Methods 0.000 claims description 2
- 238000002203 pretreatment Methods 0.000 claims description 2
- 238000005070 sampling Methods 0.000 abstract description 3
- 230000000694 effects Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000001914 filtration Methods 0.000 description 2
- 230000003190 augmentative effect Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000004069 differentiation Effects 0.000 description 1
- 239000004744 fabric Substances 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/262—Analysis of motion using transform domain methods, e.g. Fourier domain methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20056—Discrete and fast Fourier transform, [DFT, FFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Mathematical Physics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a fast method for tracking and locating a given target in grayscale video. A cross-correlation matrix is sought by two-dimensional Gaussian-kernel cyclic convolution of the standard target image with each image in the video frame sequence, and the cross-correlation matrix is processed with ridge regression to track the target's position. The weighted average of the current frame's detection result serves as the next frame's standard target image. For each frame's standard target image, its self-convolution and the statistics thereof are sought; the change of this mapping between two frames characterizes a change of scale, from which the target's scale variation during motion is obtained. Meanwhile, the ridge-regression learning parameter is updated by deconvolving the autocorrelation matrix through the discrete Fourier transform and its inverse, and the response matrix is sought for the next frame. The present invention computes quickly despite using dense sampling; tracking accuracy is high; and scale variation is fully accommodated, so the method is suitable for scenes in which the target rapidly moves from far to near or from near to far.
Description
Technical field
The present invention belongs to the field of single-target tracking and localization in video, and in particular relates to a fast method for tracking and locating a given target in grayscale video.
Background technology
Target tracking is a fundamental research direction of computer vision, with wide application in scenes such as human-computer interaction, surveillance, augmented reality, and machine perception. Existing tracking algorithms fall broadly into two kinds: generative-model algorithms and discriminative-model algorithms. The former learn the target's features and then track the target by feature matching in subsequent frames; the latter learn a classifier that separates the background from the target, thereby identifying the target.
However, existing tracking algorithms cannot directly adapt to changes in target scale. With a fixed scale, the tracker not only fails to output an accurate coordinate position for the target in the scene when the target's scale changes, but is even less able to delimit the target's extent. Existing scale-adaptive schemes estimate scale with some separate, independent method, which amounts to superimposing an independent scale computation on top of the tracking itself; this inevitably increases the computational load and complexity, making the tracking process complicated and slow.
Invention content
The purpose of the present invention is to provide a variable-scale target tracking method for video based on inter-frame correlation filtering. During the mutual convolution of the correlation-filter matrices, the autocorrelation matrix of the standard image is obtained by inter-frame cyclic convolution of the standard images, and the scale proportionality coefficient is computed from the mapping between the standard deviations of the autocorrelation matrices, thereby updating the target scale. The method outputs and updates the target scale in real time while tracking at speed, effectively improves tracking accuracy, and thus adapts to scenes in which the target's scale varies, with good robustness. It can be applied in scenes such as camera auto-focus and target lock-on in surveillance video.
The technical problem to be solved by the present invention is addressed through the following technical solution:
A fast method for tracking and locating a given target in grayscale video includes the following steps:
Step 1: track the target with a cross-correlation filter and compute the target's coordinate position in the current frame;
Step 2: according to the target coordinate position and target size computed in the previous frame, crop the current frame to obtain the image to be detected, x;
Step 3: take the Gaussian-kernel cyclic convolution of the image to be detected x with the previous frame's standard image z; the cyclic shifts of x form the circulant matrix C_x(i,j), expressed with two permutation matrices T_i and T_j:
C_x(i,j) = T_i x T_j   (1)
In formula (1): T_i is the identity matrix with its rows cyclically shifted i times; T_j is the identity matrix with its columns cyclically shifted j times; C_x(i,j) is the cyclic-shift matrix of the m × n image to be detected x at row i, column j;
Step 4: from the circulant matrix C_x(i,j), obtain the cross-correlation matrix:
R̂_x(i,j) = exp(−‖x − C_x(i,j)‖² / β)   (2)
In formula (2), R̂_x(i,j) is the element of the cross-correlation matrix R̂_x at row i, column j. Each step computes the correlation between the image to be detected x and its cyclic-shift matrix C_x(i,j) at row i, column j; the more similar the two, the higher the correlation (i.e. R̂_x(i,j)); β is the Gaussian-kernel bandwidth.
Step 5: after the cross-correlation matrix is obtained, locate the target position by ridge-regression learning;
The objective function of the ridge-regression learning classifier is:
In formula (3), the parameter α_i is a coefficient matrix with the same height and width as R̂_x; it is the learning parameter of the ridge regression. R(x_i) is the cross-correlation response matrix of the i-th frame image x_i and the standard target image z_i;
The tracked coordinate position of the target lies at the peak of the response matrix.
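Steps 3 to 5 can be sketched in NumPy as follows. This is a minimal illustration, not the patent's implementation: the FFT identity replaces the explicit circulant matrices C_x(i,j), the kernel form exp(−‖x − C‖²/β) and all function names are assumptions, and the response is formed as the cyclic convolution of the kernel map with the learning parameter α.

```python
import numpy as np

def gaussian_kernel_correlation(x, z, beta):
    """Gaussian-kernel correlation of x with every cyclic shift of z.

    Evaluates exp(-||x - shift(z, i, j)||^2 / beta) for all shifts (i, j)
    at once: the cyclic cross-correlation term is computed with the FFT
    instead of materializing the circulant matrices.
    """
    xz = np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(z))))
    dist2 = (x ** 2).sum() + (z ** 2).sum() - 2.0 * xz   # ||x - shifted z||^2
    return np.exp(-np.maximum(dist2, 0.0) / beta)        # clamp FFT round-off

def detect(x, z, alpha, beta):
    """Response map = cyclic convolution of the kernel map with alpha;
    the peak of the response gives the tracked position."""
    k = gaussian_kernel_correlation(x, z, beta)
    response = np.real(np.fft.ifft2(np.fft.fft2(k) * np.fft.fft2(alpha)))
    return np.unravel_index(np.argmax(response), response.shape), response
```

Computing the dense response this way costs O(mn log mn) rather than the O((mn)²) of explicit shifts, which is what makes dense sampling affordable.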
Preferably, a parameter-learning step follows step 5; the detailed process of the parameter-learning step is:
a1. With the target coordinate position obtained in step 5, crop the image around the new coordinate to obtain the new standard image z'_{i+1}; update the old and new standard images by coefficient-weighted averaging to obtain the standard image used in the next frame's computation,
z_{i+1} = θ·z'_{i+1} + (1 − θ)·z_i   (4)
In formula (4), θ is the weighting coefficient;
a2. Update the learning parameter and compute the target scale by taking the autocorrelation matrix of the standard image; the autocorrelation matrix is computed in the same way as the cross-correlation matrix:
R̂_z(i,j) = exp(−‖z − C_z(i,j)‖² / β)   (5)
The ridge-regression learning parameter is updated as:
α = F⁻¹( F(y) ⊘ F(R̂_z) )   (6)
In formula (6): y is the standard response matrix, F denotes the discrete Fourier transform, and ⊘ is element-wise division;
a3. Construct a Gaussian model centered at the geometric midpoint of the image and with the same height and width as the image; take the discrete Fourier transforms of the standard response matrix and of the standard image's autocorrelation matrix, divide them element-wise in the frequency domain, and then take the inverse discrete Fourier transform, quickly realizing the deconvolution and obtaining the next frame's ridge-regression learning parameter.
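The deconvolution of step a3 reduces, in the frequency domain, to an element-wise division followed by an inverse DFT. A minimal NumPy sketch, assuming the ideal response y is the Gaussian model described above and adding a small regularizer eps (not specified in the text) to keep the division stable:

```python
import numpy as np

def gaussian_response(h, w, sigma):
    """Ideal response y: a 2-D Gaussian model centered at the image midpoint,
    with the same height and width as the image."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2.0 * sigma ** 2))

def update_alpha(y, k_zz, eps=1e-4):
    """alpha = IDFT(DFT(y) / DFT(k_zz)): deconvolve the ideal response by the
    standard image's autocorrelation map k_zz (element-wise division)."""
    spectrum = np.fft.fft2(y) / (np.fft.fft2(k_zz) + eps)
    return np.real(np.fft.ifft2(spectrum))
```

Convolving k_zz back with the returned alpha reproduces y up to the eps regularization, which is the property the tracking stage relies on.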
Preferably, a scale-computation step follows the parameter-learning step; the scale-computation step includes:
Finding the target's current scale amounts to finding the proportionality coefficient P. Let R̂^A and R̂^B denote the autocorrelation matrices of the previous and the following frame respectively, and let s denote the target scale; then:
In formula (7), g is a function mapping of R̂^A and R̂^B, and the proportionality coefficient P represents the scale change from R̂^B to R̂^A, with s_A = P·s_B;
The autocorrelation matrix is obtained by Gaussian-kernel convolution, so the result takes a shape similar to a two-dimensional Gaussian model. The standard deviation σ of the image autocorrelation matrix is used as the independent variable of the mapping function g to compute the proportionality coefficient P; σ is computed as:
σ = sqrt( (1/N) · Σ_{i,j} (R̂(i,j) − u)² )   (8)
In formula (8), N is the number of pixels, R̂(i,j) is the value of the autocorrelation matrix at row i, column j, and u is the matrix mean. With the standard deviation as the independent variable, the quotient σ_B/σ_A is used as the argument of the mapping function g, which is taken to be:
The target's current scale can then be solved from the proportionality coefficient P.
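The scale update then needs only the two standard deviations and their quotient. A small sketch; since the mapping g of formula (9) is not reproduced in this text, the identity mapping on the quotient is assumed here for illustration:

```python
import numpy as np

def autocorr_sigma(k):
    """Standard deviation of an autocorrelation map, formula (8):
    sqrt( (1/N) * sum_ij (K(i,j) - u)^2 ) with u the matrix mean."""
    return np.sqrt(np.mean((k - k.mean()) ** 2))

def scale_coefficient(k_prev, k_curr):
    """Proportionality coefficient P from two autocorrelation maps; the
    mapping g is assumed to be the identity on the quotient of sigmas."""
    return autocorr_sigma(k_curr) / autocorr_sigma(k_prev)
```

A wider target spreads its autocorrelation map, raising σ, so a coefficient above 1 indicates the target has grown between the two frames.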
Preferably, a pre-processing step precedes step 1; the pre-processing step is:
First apply a logarithmic transformation to the video frame:
x(i,j) = c·log(1 + x(i,j))   (10)
In formula (10), c is the constant coefficient of the logarithmic transformation and x(i,j) is the pixel value of the single-frame picture at the corresponding coordinate;
Then multiply the single-frame image element-wise by a cosine window, smoothing the target image with a cosine window of the same height and width as the image. If each frame has height h and width l, the cosine window is built as the outer product of two cosine vectors V_h and V_l of sizes 1 × h and 1 × l, giving the h × l matrix W:
W = V_h^T × V_l   (11)
In formula (11), W is the constructed cosine window, with which the log-transformed image is smoothed: x = x ⊙ W.
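The pre-processing of formulas (10) and (11) is a per-pixel log transform followed by an element-wise cosine window. A sketch assuming a raised-cosine (Hann) profile for the vectors V_h and V_l, which the text does not specify exactly; the constant c defaults to 1:

```python
import numpy as np

def preprocess(frame, c=1.0):
    """Formula (10): x = c * log(1 + x); formula (11): W = V_h^T x V_l,
    then the element-wise product x * W."""
    x = c * np.log1p(frame.astype(np.float64))
    h, l = x.shape
    v_h = 0.5 * (1.0 - np.cos(2.0 * np.pi * np.arange(h) / (h - 1)))
    v_l = 0.5 * (1.0 - np.cos(2.0 * np.pi * np.arange(l) / (l - 1)))
    return x * np.outer(v_h, v_l)   # window tapers the borders to zero
```

The window suppresses the frame borders, which both damps the boundary artifacts of the cyclic shifts and emphasizes the centered target.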
Beneficial effects of the present invention:
The present invention computes quickly despite using dense sampling; tracking accuracy is high; and scale variation is fully accommodated, so the method is suitable for scenes in which the target rapidly moves from far to near or from near to far.
The present invention is described in further detail below with reference to the accompanying drawings and embodiments.
Description of the drawings
Fig. 1 is the system block diagram of the fast target tracking and locating method proposed by the present invention.
Specific embodiment
To further expound the technical means by which the present embodiment attains its intended purpose, and the effects achieved, the specific implementation, structural features, and effects of the present embodiment are described in detail below in conjunction with the accompanying drawings and embodiments.
The technical solution adopted by the present embodiment divides broadly into four parts: a pre-processing stage, a tracking stage, a parameter-learning stage, and a scale-prediction stage. The position and size of the given target in the first frame must be obtained manually, after which the standard image, centered on the tracked target, is cropped from the original first-frame picture.
Step 1: from the target coordinates and size computed in the previous frame, obtain the current frame's tracking window; apply the logarithmic transformation and cosine-window smoothing to the tracking window and to the standard target image respectively.
Step 2: take the cyclic convolution of the pre-processed tracking window with the standard target image to obtain the cross-correlation matrix.
Step 3: perform ridge regression on the cross-correlation matrix to find the coordinates of the target in the current frame.
Step 4: update the standard target image according to the newly found coordinates.
Step 5: compute the standard deviation of the autocorrelation matrix of the current standard image; take the quotient of the current frame's autocorrelation standard deviation and the previous frame's to obtain the proportionality coefficient P, and update the target scale.
Step 6: using the discrete Fourier transform and its inverse, deconvolve the standard response map by the autocorrelation matrix to update the learning parameter of the ridge regression.
In the present embodiment, x(i,j) and z(i,j) uniformly denote the element at row i, column j of the corresponding matrix, and x_i and z_i denote the matrices of the i-th frame.
The detailed process of the present embodiment is as follows:
1. Pre-processing stage:
To enhance image contrast, so that dark regions of the image also show clear contrast, a logarithmic transformation is first applied to the video frame:
x(i,j) = c·log(1 + x(i,j))
where c is the constant coefficient of the logarithmic transformation and x(i,j) is the pixel value of the single-frame picture at the corresponding coordinate. Meanwhile, to eliminate asymmetric noise and make the target at the center of the single-frame image more prominent, the single-frame image is then multiplied element-wise by a cosine window, smoothing the target image with a cosine window of the same height and width as the image. If each frame has height h and width l, the cosine window is built as the outer product of two cosine vectors V_h and V_l of sizes 1 × h and 1 × l, giving the h × l matrix W:
W = V_h^T × V_l
W is the constructed cosine window, with which the log-transformed image is smoothed: x = x ⊙ W.
2. Tracking stage:
The target is tracked with a cross-correlation filter, and its coordinate position in the current frame is computed. According to the target coordinate position and target size computed in the previous frame, the current frame is cropped to obtain the image to be detected x. The cross-correlation matrix between x and the previous frame's standard image z is sought by Gaussian-kernel cyclic convolution:
The circulant matrix C_x(i,j) of the current frame's image to be detected can be expressed with two permutation matrices T_i and T_j:
C_x(i,j) = T_i x T_j
In the formula above, T_i is the identity matrix with its rows cyclically shifted i times; likewise, T_j is the identity matrix with its columns cyclically shifted j times. C_x(i,j) is the cyclic-shift matrix of the m × n image x at row i, column j. The present embodiment applies the Gaussian-kernel method to the cyclic convolution of the previous frame's standard image and the image to be detected, which yields the cross-correlation matrix:
R̂_x(i,j) = exp(−‖x − C_x(i,j)‖² / β)
R̂_x(i,j) is the element of the cross-correlation matrix R̂_x at row i, column j. Each step computes the correlation between the image x and its cyclic-shift matrix C_x(i,j) at row i, column j; the more similar the two, the higher the correlation (i.e. R̂_x(i,j)); β is the Gaussian-kernel bandwidth.
After the cross-correlation matrix is obtained, the target position is located by ridge-regression learning; the objective function of the ridge-regression classifier is:
The parameter α_i is a coefficient matrix with the same height and width as R̂_x and is the learning parameter of the ridge regression. Convolving the two gives R(x_i), the cross-correlation response matrix of the i-th frame image x_i and the standard target image z. The tracked coordinate position of the target lies at the peak of the response matrix.
3. Parameter-learning stage:
With the target coordinate position obtained in the tracking stage, the image is cropped around the new coordinate to obtain the new standard image z'_{i+1}. The old and new standard images are combined by coefficient-weighted averaging to obtain the standard image used in the next frame's computation, where θ is the weighting coefficient:
z_{i+1} = θ·z'_{i+1} + (1 − θ)·z_i
The present embodiment updates the learning parameter and computes the target scale by taking the autocorrelation matrix of the standard image. The autocorrelation matrix is computed in the same way as the cross-correlation matrix:
The ridge-regression learning parameter is updated as:
Here y is the standard response matrix, which the present embodiment constructs as a Gaussian model centered at the geometric midpoint of the image and with the same height and width as the image. The discrete Fourier transforms of the standard response matrix and of the standard image's autocorrelation matrix are divided element-wise in the frequency domain, and the inverse discrete Fourier transform is then taken, quickly realizing the deconvolution and yielding the next frame's ridge-regression learning parameter.
4. Scale-computation stage:
The change in the autocorrelation matrix of the target image reflects how the scale of the target at the image center changes. Let R̂^A and R̂^B denote the autocorrelation matrices of the previous and the following frame respectively, and let s denote the target scale; then:
Here g is a function mapping of R̂^A and R̂^B, and the proportionality coefficient P represents the scale change from R̂^B to R̂^A, with s_A = P·s_B. Finding the target's current scale amounts to finding the proportionality coefficient P. Since a matrix contains too many elements, processing it directly is rather complex and unfocused; using a statistic of the autocorrelation matrix as the independent variable of the mapping function optimizes both the computational load and the complexity.
The autocorrelation matrix is obtained by Gaussian-kernel convolution, so the result takes a shape similar to a two-dimensional Gaussian model. The most important statistical parameter of a Gaussian model is the standard deviation σ, whose size determines how concentrated the Gaussian model is about its center: the larger σ, the more dispersed the Gaussian distribution. In terms of scale, the larger the target, the more dispersed its autocorrelation matrix, so σ is positively correlated with the target scale s. The standard deviation σ of the image autocorrelation matrix is therefore used here as the independent variable of the mapping function g to compute the proportionality coefficient P. σ is computed as:
σ = sqrt( (1/N) · Σ_{i,j} (R̂(i,j) − u)² )
where N is the number of pixels, R̂(i,j) is the value of the autocorrelation matrix at row i, column j, and u is the matrix mean. With the standard deviation as the independent variable, and since a proportional relation is to be solved, the present embodiment directly uses the quotient σ_B/σ_A as the argument of the mapping function g, which is taken to be:
The target's current scale can then be solved from the proportionality coefficient P. This completes single-frame tracking and scale computation.
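Strung together, one iteration of the four stages looks as follows. This is a compressed, illustrative sketch: the kernel form, the Gaussian response width, the regularizer eps, and all names are assumptions, and the cropping of a new window around the detected position is omitted (a cyclic shift of the frame stands in for it):

```python
import numpy as np

def kernel(a, b, beta=1e3):
    """Gaussian-kernel cyclic correlation of a with all shifts of b (via FFT)."""
    ab = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    d2 = (a ** 2).sum() + (b ** 2).sum() - 2.0 * ab
    return np.exp(-np.maximum(d2, 0.0) / beta)

def track_frame(x, z, alpha, theta=0.1, eps=1e-4):
    """One iteration of the four-stage loop: detect the target, re-center the
    standard image, estimate the scale ratio, refresh the ridge parameter."""
    # tracking: peak of the response gives the new target position
    resp = np.real(np.fft.ifft2(np.fft.fft2(kernel(x, z)) * np.fft.fft2(alpha)))
    pos = np.unravel_index(np.argmax(resp), resp.shape)
    # parameter learning: weighted standard-image update, formula (4)
    z_new = theta * np.roll(x, (-pos[0], -pos[1]), axis=(0, 1)) + (1 - theta) * z
    # scale: quotient of autocorrelation standard deviations
    sig = lambda m: np.sqrt(np.mean((m - m.mean()) ** 2))
    P = sig(kernel(z_new, z_new)) / sig(kernel(z, z))
    # learning-parameter update by frequency-domain deconvolution, formula (6)
    h, w = z.shape
    ys, xs = np.mgrid[0:h, 0:w]
    y = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2.0 * (h / 16) ** 2))
    alpha_new = np.real(
        np.fft.ifft2(np.fft.fft2(y) / (np.fft.fft2(kernel(z_new, z_new)) + eps)))
    return pos, z_new, P, alpha_new
```

Every heavy operation is a 2-D FFT, so a full iteration stays near O(mn log mn) per frame, consistent with the speed the embodiment claims.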
The present embodiment computes quickly despite using dense sampling; tracking accuracy is high; and scale variation is fully accommodated, so the method is suitable for scenes in which the target rapidly moves from far to near or from near to far.
The above content is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific implementation of the present invention cannot be deemed confined to these descriptions. For those of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may be made without departing from the concept of the present invention, and all of these shall be considered to fall within the protection scope of the present invention.
Claims (4)
1. A fast method for tracking and locating a given target in grayscale video, comprising the following steps:
Step 1: tracking the target with a cross-correlation filter and computing the target's coordinate position in the current frame;
Step 2: cropping the current frame, according to the target coordinate position and target size computed in the previous frame, to obtain the image to be detected x;
Step 3: taking the Gaussian-kernel cyclic convolution of the image to be detected x with the previous frame's standard image z, the cyclic shifts of x forming the circulant matrix C_x(i,j), expressed with two permutation matrices T_i and T_j:
C_x(i,j) = T_i x T_j   (1)
In formula (1): T_i is the identity matrix with its rows cyclically shifted i times; T_j is the identity matrix with its columns cyclically shifted j times;
C_x(i,j) is the cyclic-shift matrix of the m × n image to be detected x at row i, column j;
Step 4: from the circulant matrix C_x(i,j), obtaining the cross-correlation matrix:
In formula (2), R̂_x(i,j) is the element of the cross-correlation matrix R̂_x at row i, column j; each step computes the correlation between the image to be detected x and its cyclic-shift matrix C_x(i,j) at row i, column j; the more similar the two, the higher the correlation (i.e. R̂_x(i,j)); β is the Gaussian-kernel bandwidth;
Step 5: after the cross-correlation matrix is obtained, locating the target position by ridge-regression learning;
the objective function of the ridge-regression learning classifier being:
In formula (3), the parameter α_i is a coefficient matrix with the same height and width as R̂_x; it is the learning parameter of the ridge regression; R(x_i) is the cross-correlation response matrix of the i-th frame image x_i and the standard target image z;
the tracked coordinate position of the target lying at the peak of the response matrix.
2. The fast method for tracking and locating a given target in grayscale video according to claim 1, characterized in that a parameter-learning step follows step 5, the detailed process of the parameter-learning step being:
a1. with the target coordinate position obtained in step 5, cropping the image around the new coordinate to obtain the new standard image z'_{i+1}, and updating the old and new standard images by coefficient-weighted averaging to obtain the standard image used in the next frame's computation,
z_{i+1} = θ·z'_{i+1} + (1 − θ)·z_i   (4)
In formula (4), θ is the weighting coefficient;
a2. updating the learning parameter and computing the target scale by taking the autocorrelation matrix of the standard image, the autocorrelation matrix being computed in the same way as the cross-correlation matrix:
the ridge-regression learning parameter being updated as:
In formula (6): y is the standard response matrix;
a3. constructing a Gaussian model centered at the geometric midpoint of the image and with the same height and width as the image, taking the discrete Fourier transforms of the standard response matrix and of the standard image's autocorrelation matrix, dividing them element-wise in the frequency domain, and then taking the inverse discrete Fourier transform, quickly realizing the deconvolution and obtaining the next frame's ridge-regression learning parameter.
3. The fast method for tracking and locating a given target in grayscale video according to claim 2, characterized in that a scale-computation step follows the parameter-learning step, the scale-computation step including:
finding the target's current scale amounts to finding the proportionality coefficient P; let R̂^A and R̂^B denote the autocorrelation matrices of the previous and the following frame respectively, and let s denote the target scale; then:
In formula (7), g is a function mapping of R̂^A and R̂^B, and the proportionality coefficient P represents the scale change from R̂^B to R̂^A, with s_A = P·s_B;
the autocorrelation matrix is obtained by Gaussian-kernel convolution and therefore takes a shape similar to a two-dimensional Gaussian model; the standard deviation σ of the image autocorrelation matrix is used as the independent variable of the mapping function g to compute the proportionality coefficient P, σ being computed as:
In formula (8), N is the number of pixels, R̂(i,j) is the value of the autocorrelation matrix at row i, column j, and u is the matrix mean; with the standard deviation as the independent variable, the quotient σ_B/σ_A is used as the argument of the mapping function g, which is taken to be:
the target's current scale can then be solved from the proportionality coefficient P.
4. The fast method for tracking and locating a given target in grayscale video according to any one of claims 1-3, characterized in that a pre-processing step precedes step 1, the pre-processing step being:
first applying a logarithmic transformation to the video frame:
x(i,j) = c·log(1 + x(i,j))   (10)
In formula (10), c is the constant coefficient of the logarithmic transformation and x(i,j) is the pixel value of the single-frame picture at the corresponding coordinate;
then multiplying the single-frame image element-wise by a cosine window, smoothing the target image with a cosine window of the same height and width as the image; if each frame has height h and width l, the cosine window is built as the outer product of two cosine vectors V_h and V_l of sizes 1 × h and 1 × l, giving the h × l matrix W:
W = V_h^T × V_l   (11)
In formula (11), W is the constructed cosine window, with which the log-transformed image is smoothed: x = x ⊙ W.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711395019.0A CN108230367A (en) | 2017-12-21 | 2017-12-21 | A fast method for tracking and locating a given target in grayscale video
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711395019.0A CN108230367A (en) | 2017-12-21 | 2017-12-21 | A fast method for tracking and locating a given target in grayscale video
Publications (1)
Publication Number | Publication Date |
---|---|
CN108230367A true CN108230367A (en) | 2018-06-29 |
Family
ID=62647573
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711395019.0A Pending CN108230367A (en) | 2017-12-21 | 2017-12-21 | A fast method for tracking and locating a given target in grayscale video
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108230367A (en) |
2017
- 2017-12-21 CN CN201711395019.0A patent/CN108230367A/en active Pending
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109035304A (en) * | 2018-08-07 | 2018-12-18 | 北京清瑞维航技术发展有限公司 | Method for tracking target, calculates equipment and device at medium |
CN109035304B (en) * | 2018-08-07 | 2022-04-29 | 北京清瑞维航技术发展有限公司 | Target tracking method, medium, computing device and apparatus |
US11783707B2 (en) | 2018-10-09 | 2023-10-10 | Ford Global Technologies, Llc | Vehicle path planning |
CN109766752A (en) * | 2018-11-28 | 2019-05-17 | 西安电子科技大学 | A kind of object matching and localization method and system, computer based on deep learning |
CN109766752B (en) * | 2018-11-28 | 2023-01-03 | 西安电子科技大学 | Target matching and positioning method and system based on deep learning and computer |
US11030774B2 (en) | 2019-03-19 | 2021-06-08 | Ford Global Technologies, Llc | Vehicle object tracking |
CN109978908A (en) * | 2019-03-21 | 2019-07-05 | 西安电子科技大学 | Single-target rapid tracking and positioning method suitable for large-scale deformation
CN109978908B (en) * | 2019-03-21 | 2023-04-28 | 西安电子科技大学 | Single-target rapid tracking and positioning method suitable for large-scale deformation |
CN111815981A (en) * | 2019-04-10 | 2020-10-23 | 黑芝麻智能科技(重庆)有限公司 | System and method for detecting objects on long distance roads |
US11460851B2 (en) | 2019-05-24 | 2022-10-04 | Ford Global Technologies, Llc | Eccentricity image fusion |
US11521494B2 (en) | 2019-06-11 | 2022-12-06 | Ford Global Technologies, Llc | Vehicle eccentricity mapping |
US11662741B2 (en) | 2019-06-28 | 2023-05-30 | Ford Global Technologies, Llc | Vehicle visual odometry |
US11797854B2 (en) | 2019-07-08 | 2023-10-24 | Sony Semiconductor Solutions Corporation | Image processing device, image processing method and object recognition system |
CN111508002A (en) * | 2020-04-20 | 2020-08-07 | 北京理工大学 | Small-sized low-flying target visual detection tracking system and method thereof
CN111508002B (en) * | 2020-04-20 | 2020-12-25 | 北京理工大学 | Small-sized low-flying target visual detection tracking system and method thereof
CN113674326A (en) * | 2020-05-14 | 2021-11-19 | 惟亚(上海)数字科技有限公司 | Frequency domain processing tracking method based on augmented reality |
CN113674326B (en) * | 2020-05-14 | 2023-06-20 | 惟亚(上海)数字科技有限公司 | Frequency domain processing tracking method based on augmented reality
CN112308871A (en) * | 2020-10-30 | 2021-02-02 | 地平线(上海)人工智能技术有限公司 | Method and device for determining motion speed of target point in video |
CN112308871B (en) * | 2020-10-30 | 2024-05-14 | 地平线(上海)人工智能技术有限公司 | Method and device for determining movement speed of target point in video |
US12046047B2 (en) | 2021-12-07 | 2024-07-23 | Ford Global Technologies, Llc | Object detection |
CN116523918A (en) * | 2023-07-04 | 2023-08-01 | 深圳英美达医疗技术有限公司 | Method and device for freezing endoscopic image, electronic equipment and storage medium |
CN116523918B (en) * | 2023-07-04 | 2023-09-26 | 深圳英美达医疗技术有限公司 | Method and device for freezing endoscopic image, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108230367A (en) | A kind of quick method for tracking and positioning to set objective in greyscale video | |
US11928800B2 (en) | Image coordinate system transformation method and apparatus, device, and storage medium | |
CN102307274B (en) | Motion detection method based on edge detection and frame difference | |
CN106920221B (en) | Exposure fusion method that accounts for both luminance distribution and detail presentation | |
CN111160210B (en) | Video-based water flow rate detection method and system | |
CN105894538A (en) | Target tracking method and target tracking device | |
CN106023148B (en) | Star image point position extraction method under sequence-focusing observation mode | |
CN104834915B (en) | Small infrared target detection method under complex sky background | |
CN111260687B (en) | Aerial video target tracking method based on semantic perception network and related filtering | |
CN105913453A (en) | Target tracking method and target tracking device | |
CN108257153B (en) | Target tracking method based on direction gradient statistical characteristics | |
CN104796582A (en) | Video image denoising and enhancement method and device based on random spray Retinex | |
CN108038856B (en) | Infrared small target detection method based on improved multi-scale fractal enhancement | |
CN108550126A (en) | Adaptive correlation filter target tracking method and system | |
CN117115210B (en) | Intelligent agricultural monitoring and adjusting method based on Internet of things | |
CN102340620B (en) | Mahalanobis-distance-based video image background detection method | |
CN103500454A (en) | Method for extracting moving targets from jittery video | |
CN101587590A (en) | Selective visual attention computation model based on pulse cosine transform | |
CN110111347A (en) | Logo extraction method, device and storage medium | |
CN106683043B (en) | Parallel image splicing method and device of multi-channel optical detection system | |
CN102510437B (en) | Method for detecting background of video image based on distribution of red, green and blue (RGB) components | |
CN111126508A (en) | HOPC-based improved heterogeneous image matching method | |
CN113705380B (en) | Target detection method and device for foggy days, electronic equipment and storage medium | |
CN108510510A (en) | Method for detecting image edge based on gradient direction | |
CN102509076B (en) | Principal-component-analysis-based video image background detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2018-06-29 |