
CN108520527A - A fast spatio-temporal context tracking method based on color attributes and PCA - Google Patents

A fast spatio-temporal context tracking method based on color attributes and PCA

Info

Publication number
CN108520527A
CN108520527A (application CN201810264378.0A)
Authority
CN
China
Prior art keywords
model
context
target
color attribute
prior probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810264378.0A
Other languages
Chinese (zh)
Inventor
张云洲
刘秀
刘一秀
史维东
王松
孙立波
刘双伟
李瑞龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN201810264378.0A priority Critical patent/CN108520527A/en
Publication of CN108520527A publication Critical patent/CN108520527A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fast spatio-temporal context tracking method based on color attributes and PCA. The method includes: establishing the spatial context model for target tracking; establishing the context prior probability model; establishing a target-location confidence map model to indicate the target location; obtaining the likelihood function model of the target location; transforming the constructed likelihood function model from the spatial domain to the frequency domain; training and learning the spatial context model based on the context prior probability model and the frequency-domain likelihood function model; extracting the multi-dimensional color attribute features of the target object to be identified; normalizing the color attribute features; applying the PCA algorithm to reduce the dimensionality of the extracted and normalized color attribute features to obtain the final feature vector; and obtaining a new confidence map model, where the variable corresponding to the maximum of the new confidence map model determines the most likely position of the target. The method improves tracking accuracy and robustness while increasing tracking speed.

Description

A fast spatio-temporal context tracking method based on color attributes and PCA
Technical field
The invention belongs to the field of image processing, and more particularly relates to a fast spatio-temporal context (Spatio-Temporal Context, hereinafter STC) tracking method based on color attributes and principal component analysis (principal component analysis, hereinafter PCA).
Background technology
In the field of computer vision, tracking is a popular problem whose main task is to obtain the trajectory of a target in a video. Trajectory tracking technology has been widely applied in research fields such as video surveillance, human-machine interfaces, and robot perception. At present, owing to factors such as illumination variation, pose variation, motion blur, and occlusion, effective pedestrian tracking remains an extremely challenging task.
Recent research shows that target tracking faces difficulties in three respects. Although the use of a color system can reduce the influence of illumination variation, it cannot eliminate that influence completely, so illumination variation remains an important problem. Tracking failure caused by occlusion from other objects is another thorny problem in target tracking. A moving target may also be lost because its posture changes during motion.
How to solve the tracking problems that arise from illumination variation, target pose variation, and target occlusion has become an urgent issue.
Summary of the invention
In view of the problems in the prior art, the present invention provides a fast spatio-temporal context tracking method based on color attributes and PCA, which improves tracking accuracy and robustness while increasing tracking speed.
In a first aspect, the present invention provides a fast spatio-temporal context tracking method based on color attributes and PCA, comprising:
101. Establish the spatial context model of target tracking;
102. Establish the context prior probability model for processing image frames;
103. Establish the confidence map model used to indicate the target location;
104. According to the spatial context model, the context prior probability model, and the confidence map model, obtain the likelihood function model of the target location (how these models combine is summarized after this list);
105. Transform the constructed likelihood function model from the spatial domain to the frequency domain using the fast Fourier transform;
106. Train and learn the spatial context model based on the context prior probability model and the frequency-domain likelihood function model;
107. Extract the multi-dimensional color attribute features of the target object to be identified;
108. Normalize the color attribute features;
109. Apply the PCA algorithm to reduce the dimensionality of the extracted and normalized color attribute features to obtain the final feature vector (that is, the x and z variables used in the formulas below are the feature vectors after dimensionality reduction, i.e. the final feature vector);
110. Obtain a new confidence map model from the learned spatial context model, the final feature vector, and the context prior probability model of the test image; the variable corresponding to the maximum of the new confidence map model determines the most likely position of the target.
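For orientation, the relations established in steps 101 to 104 combine as follows, written with the symbols already used in this description (Ω_c(x*) is introduced here only as shorthand for the local context region around the current target location x*):

$$
c(x) = P(x \mid o) = \sum_{z \in \Omega_c(x^*)} P(x \mid c(z), o)\, P(c(z) \mid o)
     = \sum_{z \in \Omega_c(x^*)} h^{sc}(x - z)\, I(z)\, \omega_{\sigma}(z - x^*)
     = h^{sc}(x) \otimes \big( I(x)\, \omega_{\sigma}(x - x^*) \big)
$$

Because the last expression is a convolution, transforming it to the frequency domain (step 105) turns it into an element-wise product, and the spatial context model h^sc can then be recovered by an element-wise division (step 106).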
Optionally, step 101 includes:
using h^sc(x − z) to represent the relationship between the target location x and a local context location z in terms of distance and direction, and obtaining the conditional probability relation between the target and its context:
P(x | c(z), o) = h^sc(x − z)    (1)
This conditional probability function serves as the established spatial context model.
Optionally, step 102 includes:
using the context prior probability function as the context prior probability model;
where the context prior probability function is: P(c(z) | o) = I(z) ω_σ(z − x*)    (2)
I(z) is the multi-dimensional color attribute feature intensity at z;
K is a normalization constant, σ is a scale parameter, and ω_σ(·) denotes a weighting function.
Optionally, step 103 includes:
letting: c(x) = P(x | o)    (4)
where o is the target appearing in the scene, x is a target location, and P(x | o) is the probability that the target is at that location;
combining the spatial context model and the context prior probability model to obtain the confidence map model of the target location:
α is a scale parameter, β is a shape factor, and s is a normalization parameter.
Optionally, step 104 includes:
The likelihood function model of the target location is
Optionally, step 105 includes:
The likelihood function model transformed from the spatial domain to the frequency domain is:
Optionally, step 106 includes:
taking the conditional probability function obtained by learning as the trained spatial context model;
The learned conditional probability function is
Optionally, step 107 includes:
extracting the 11-dimensional color-name features of the target object to be identified;
Step 108 includes: normalizing the 11-dimensional color features using formula (9);
x is the feature matrix of the extracted object after normalization, x^(i) denotes the i-th feature vector of the original data, μ_i is the mean of the original sample data, and σ is the standard deviation of the original sample data.
Optionally, step 109 includes:
calculating the covariance matrix:
Σ is the covariance matrix, m is the dimensionality, and x^(i) is the feature vector after normalization; by singular value decomposition, we have
Using formula (11), the projection matrix U is obtained, and the dimensionality of the feature matrix x is reduced from m × n to n × k;
The feature z is obtained using formula (12), where m is 11, k is 3, and n is the length of the color-name vector.
Optionally, step 110 includes:
The variable corresponding to the maximum of the new confidence map model determines the most likely position of the target;
where the new confidence map model is
t denotes the t-th frame image, and the new target location is the position at which the new confidence map attains its maximum.
The beneficial effects of the invention are as follows:
The present invention fully considers the influence of complex environments and the limitations of conventional methods. By extracting the 11-dimensional color-name features of the target through color attributes, it overcomes the tracking failures brought about by color change, target pose variation, and occlusion, and guarantees the accuracy of target tracking. To improve real-time performance, the invention adopts the idea of feature-matrix dimensionality reduction and uses the PCA algorithm to reduce the 11-dimensional feature vector to 3 dimensions. The use of the fast Fourier transform makes it possible to accelerate the calculation of the most probable position of the target. Cascaded HSV histograms ensure the robustness of the system and provide a reliable guarantee for the operation of the method of the invention.
Description of the drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of a specific embodiment of the invention;
Fig. 2 shows color names and their corresponding colors;
Fig. 3 is a graphical model of the spatial relationship between the object and its local context;
Fig. 4 compares the success rate of the proposed algorithm with other algorithms;
Fig. 5 compares the proposed algorithm with other algorithms in terms of center location error (pixels) and average frames per second (FPS);
Fig. 6 shows the tracking results of the invention when the illumination changes significantly;
Fig. 7 shows the tracking results of the invention under target pose variation;
Fig. 8 shows the tracking results of the invention when the target is occluded to different degrees;
Fig. 9 shows one-dimensional cross sections of the position confidence map for different values of the parameter β.
Detailed description of the embodiments
In order to better explain the present invention and facilitate understanding, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
In the following description, multiple different aspects of the present invention are described; however, those of ordinary skill in the art may implement the present invention using only some or all of the structures or processes described. For clarity of explanation, specific numbers, configurations, and orders are set forth, but it will be apparent that the present invention may also be implemented without these specific details. In other cases, well-known features are not described in detail so as not to obscure the present invention.
Embodiment 1
Pedestrian recognition and tracking technology must accelerate tracking speed while improving tracking accuracy in order to be of practical significance. The present invention provides a method that combines color attributes and PCA; as shown in Fig. 1, the method of this embodiment includes the following steps.
101. Establish the spatial context model of target tracking (i.e., the conditional probability function).
For example, use h^sc(x − z) to represent the relationship between the target location x and a local context location z in terms of distance and direction, and obtain the conditional probability relation between the target and its context:
P(x | c(z), o) = h^sc(x − z)    (1)
This conditional probability function serves as the established spatial context model.
102. Establish the context prior probability model (i.e., the prior probability function) for processing image frames.
For example, use the context prior probability function as the context prior probability model;
where the context prior probability function is: P(c(z) | o) = I(z) ω_σ(z − x*)    (2)
I(z) is the multi-dimensional color attribute feature intensity at z;
K is a normalization constant, σ is a scale parameter, and ω_σ(·) denotes a weighting function.
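For illustration, a minimal NumPy sketch of step 102 is given below. The description only states that ω_σ(·) is a weighting function with normalization constant K and scale parameter σ, so the Gaussian-shaped window used here is an assumption, and the function name and toy inputs are illustrative rather than taken from the patent.

```python
import numpy as np

def context_prior(intensity, target_pos, sigma):
    """Context prior P(c(z)|o) = I(z) * w_sigma(z - x*) over the local context region."""
    h, w = intensity.shape
    rows, cols = np.mgrid[0:h, 0:w]
    dist2 = (rows - target_pos[0]) ** 2 + (cols - target_pos[1]) ** 2
    weight = np.exp(-dist2 / sigma ** 2)   # assumed Gaussian-shaped weighting window
    weight /= weight.sum()                 # normalization constant K
    return intensity * weight

# toy usage: uniform intensity map, target at the centre of a 64 x 64 context region
prior = context_prior(np.ones((64, 64)), (32, 32), sigma=12.0)
```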
103. Establish the confidence map model used to indicate the target location.
For example, let: c(x) = P(x | o)    (4)
where o is the target appearing in the scene, x is a target location, and P(x | o) is the probability that the target is at that location;
combining the spatial context model and the context prior probability model yields the confidence map model of the target location:
α is a scale parameter, β is a shape factor, and s is a normalization parameter.
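The closed form of this confidence map is not reproduced in the text above. The sketch below therefore assumes the form commonly used for spatio-temporal context tracking, c(x) = s · exp(−|(x − x*)/α|^β), which is consistent with α being a scale parameter, β a shape factor, and s a normalization parameter, but it should be read as an assumption rather than the patent's exact expression.

```python
import numpy as np

def confidence_map(shape, target_pos, alpha, beta, s=1.0):
    """Assumed target-location confidence map c(x) = s * exp(-|(x - x*) / alpha| ** beta)."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    dist = np.sqrt((rows - target_pos[0]) ** 2 + (cols - target_pos[1]) ** 2)
    return s * np.exp(-np.abs(dist / alpha) ** beta)

# Fig. 9 examines cross sections of this map for different beta; beta = 1 is reported to work best.
c_map = confidence_map((64, 64), (32, 32), alpha=2.25, beta=1.0)
```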
104. According to the spatial context model, the context prior probability model, and the confidence map model, obtain the likelihood function model of the target location.
The likelihood function model of the target location is
105. Transform the constructed likelihood function model from the spatial domain to the frequency domain using the fast Fourier transform.
The likelihood function model transformed from the spatial domain to the frequency domain is:
106. Train and learn the spatial context model based on the context prior probability model and the frequency-domain likelihood function model.
The conditional probability function obtained by learning serves as the trained spatial context model;
The learned conditional probability function is
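A sketch of how steps 105 and 106 can be realized: since the confidence map is a convolution of the spatial context model with the context prior, dividing their Fourier transforms element-wise recovers the model. The small constant in the denominator is an implementation detail assumed here to avoid division by zero; it is not part of the description.

```python
import numpy as np

def learn_spatial_context(conf_map, prior, eps=1e-6):
    """Solve conf_map = h_sc (*) prior for h_sc by element-wise division in the frequency domain."""
    H = np.fft.fft2(conf_map) / (np.fft.fft2(prior) + eps)
    return np.real(np.fft.ifft2(H))   # learned spatial context model back in the spatial domain

# toy usage with random stand-ins for the confidence map and the context prior
rng = np.random.default_rng(0)
h_sc = learn_spatial_context(rng.random((64, 64)), rng.random((64, 64)))
```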
107. Extract the multi-dimensional color attribute features of the target object to be identified.
108. Normalize the color attribute features.
In this embodiment, the 11-dimensional color-name features of the target object to be identified are extracted;
step 108 includes: normalizing the 11-dimensional color features using formula (9);
x is the feature matrix of the extracted object after normalization, x^(i) denotes the i-th feature vector of the original data, μ_i is the mean of the original sample data, and σ is the standard deviation of the original sample data.
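Formula (9) itself is not reproduced above; the surrounding text describes a per-dimension standardization (subtracting the mean of the raw sample data and dividing by its standard deviation), and the sketch below follows that reading. The matrix orientation (11 feature dimensions by n context pixels) is an assumption.

```python
import numpy as np

def normalize_features(raw):
    """Standardize an m x n feature matrix (m feature dimensions, n context pixels)."""
    mu = raw.mean(axis=1, keepdims=True)            # mean of the raw sample data
    sigma = raw.std(axis=1, keepdims=True) + 1e-12  # standard deviation of the raw sample data
    return (raw - mu) / sigma

# toy usage: 11-dimensional color-name features for 500 context pixels
features = normalize_features(np.random.rand(11, 500))
```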
109. Apply the PCA algorithm to reduce the dimensionality of the extracted and normalized color attribute features to obtain the final feature vector.
110. Obtain a new confidence map model from the learned spatial context model, the final feature vector, and the context prior probability model of the test image; the variable corresponding to the maximum of the new confidence map model determines the most likely position of the target.
For step 109 above, for example, calculate the covariance matrix:
Σ is the covariance matrix, m is the dimensionality, and x^(i) is the feature vector after normalization; by singular value decomposition, we have
Using formula (11), the projection matrix U is obtained, and the dimensionality of the feature matrix x is reduced from m × n to n × k;
The feature z is obtained using formula (12), where m is 11, k is 3, and n is the length of the color-name vector.
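Formulas (10) to (12) are not reproduced above; the sketch below follows the description: compute the covariance of the normalized 11-dimensional features, obtain the projection matrix U by singular value decomposition, and keep the first k = 3 components. Averaging the covariance over the n samples, and the m x n orientation of the feature matrix, are assumptions consistent with the stated reduction from m × n to n × k.

```python
import numpy as np

def pca_reduce(x, k=3):
    """Reduce an m x n feature matrix to n x k using the covariance matrix and its SVD."""
    m, n = x.shape
    cov = (x @ x.T) / n              # m x m covariance matrix, averaged over the n samples
    U, S, _ = np.linalg.svd(cov)     # singular value decomposition of the covariance
    z = x.T @ U[:, :k]               # project onto the first k components: n x k features
    return z, U[:, :k]

# toy usage: project 11-dimensional normalized color features down to 3 dimensions
z, proj = pca_reduce(np.random.rand(11, 500), k=3)
```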
In addition, step 110 above includes:
The variable corresponding to the maximum of the new confidence map model determines the most likely position of the target;
where the new confidence map model is
t denotes the t-th frame image, and the new target location is the position at which the new confidence map attains its maximum.
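The expression for the new confidence map is not reproduced above. The sketch below assumes the standard frequency-domain form used in spatio-temporal context tracking: the learned spatial context model is multiplied element-wise in the Fourier domain with the context prior of the next frame, and the new target location is the arg-max of the resulting map. Function and variable names are illustrative.

```python
import numpy as np

def detect_target(h_stc, prior_next):
    """Assumed update c_{t+1} = IFFT( FFT(h_stc) * FFT(prior_{t+1}) ); return the arg-max location."""
    conf_next = np.real(np.fft.ifft2(np.fft.fft2(h_stc) * np.fft.fft2(prior_next)))
    new_pos = np.unravel_index(np.argmax(conf_next), conf_next.shape)
    return new_pos, conf_next

# toy usage with random stand-ins for the learned model and the next-frame context prior
rng = np.random.default_rng(1)
pos, conf = detect_target(rng.random((64, 64)), rng.random((64, 64)))
```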
In Fig. 1, the left side trains and learns the spatial context model from the prior probability model and the confidence map model; the right side uses the learned spatial context model and the prior probability model of the next frame image to obtain the confidence map model of the next frame, and determines the target location from the maximum of the confidence map function. The calculation process of the algorithm uses the fast Fourier transform (FFT) to speed up computation.
On the basis of the existing Bayesian framework, the correlation between the target region and its local background is fully exploited: the statistical relationship between the target and the features of its neighboring region is established, and the most probable target position is obtained by computing the maximum of its probability, so that the problem is converted into computing the maximum of a confidence map function. To improve the accuracy of target tracking, this embodiment proposes combining the STC algorithm with color attributes and extracts the 11-dimensional color attributes from the target, making full use of the target's color feature attributes. Considering that tracking must be real-time, the PCA method is used to reduce the dimensionality of the feature vector, which reduces the dimension and facilitates subsequent computation without losing key information. The algorithm uses the fast Fourier transform (FFT) for fast computation to achieve fast tracking. To ensure that the whole tracking system obtains better robustness, the invention uses cascaded HSV color histograms to achieve stable, accurate, and fast pedestrian tracking.
Embodiment 2
In this embodiment, a confidence function (i.e., a likelihood function) is constructed from the target location and the target's local context; the confidence map function model is expressed in terms of the spatial context function model and the context prior probability model, and the spatial context model is trained and learned. The target color feature attributes are extracted and normalized, and the PCA method is used to reduce the dimensionality of the feature data. The confidence map function of the next frame image is obtained from the learned conditional probability function (the spatial context model) and the prior probability function (the context prior probability model) of the next frame image. The target position at which the confidence map function attains its maximum is then calculated, which is the most likely position of the target. The fast Fourier transform is used in the calculation to speed up computation.
Specifically, the fast spatio-temporal context tracking method based on color attributes and PCA is implemented in the following steps:
Step 1: establish the spatial context model;
P(x | c(z), o) = h^sc(x − z)    (1)
The goal of this embodiment is to learn the function h^sc(x − z).
Step 2: establish the context prior probability model:
P(c(z) | o) = I(z) ω_σ(z − x*)    (2)
The context prior probability model is constructed from the color attribute intensity function I(·) of the first image, the weighting function ω_σ, and the positional configuration of the local context and the target, and the prior probability function is initialized.
Step 3: combine formulas (1) and (2) to construct the target confidence map expression:
Formula (3) can be rewritten as:
Step 4: using the fast Fourier transform, formula (4) is transformed from the spatial domain to the frequency domain,
so that the conditional probability model (the spatial context model) to be learned is obtained as follows:
Step 5: perform feature extraction on the image. To make the algorithm of the invention perform better, the color attribute algorithm is introduced on the basis of the STC algorithm in terms of accuracy, and the target features are expressed with color attribute features. Because differences in data dimensions can bias the result after dimensionality reduction, the extracted feature data need to be normalized:
x is the feature matrix of the extracted object after normalization, x^(i) denotes the i-th feature vector of the original data, μ_i is the mean of the original sample data, and σ is the standard deviation of the original sample data.
Step 6: reduce the dimensionality of the extracted and normalized feature data:
Calculate the covariance matrix:
Using the equations set forth above, the projection matrix U can be obtained, and the dimensionality of the feature matrix x is reduced from m × n to n × k.
After reducing the redundancy, the feature z can be obtained, where m is 11 and k is 3.
Step 7: use the learned spatial context model and the context prior probability model of the next frame (the image to be detected) to obtain the new confidence map model, and find the target location x corresponding to the maximum of the confidence map to determine the most likely position of the target.
t denotes the t-th frame image, and the new target location is the position at which the confidence map is maximized.
At each iteration, the prior probability function is obtained anew starting from step 2, and then steps 3 to 7 are executed, thereby achieving fast target tracking.
This embodiment fully considers the influence of complex environments and the limitations of conventional methods. By extracting the 11-dimensional color-name features of the target through color attributes, it overcomes the tracking failures caused by color change, target pose variation, and occlusion, and guarantees the accuracy of target tracking. To improve real-time performance, this embodiment adopts the idea of feature-matrix dimensionality reduction and uses PCA to reduce the 11-dimensional feature vector to 3 dimensions. The fast Fourier transform makes it possible to accelerate the calculation of the most probable position of the target. Cascaded HSV histograms ensure the robustness of the system and provide a reliable guarantee for the operation of the method of the invention.
That is, this embodiment extracts the 11-dimensional color attribute features, as in the color names of Fig. 2 (the image is partitioned according to color names: the target is divided using color attribute features, and each color name is represented by its corresponding color), and normalizes them. Next, PCA is used to reduce the dimensionality of the 11-dimensional color attribute features of the target. The graphical model of the spatial relationship between the target object and its local context is shown in Fig. 3, which illustrates the relationship between the target image and its context. The conditional probability function to be learned and the context prior probability function are established. Using the learned conditional probability function, the maximum of the new confidence function is computed, thereby determining the most probable position of the target. The calculation of the function uses the fast Fourier transform to speed up computation.
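For illustration, a minimal sketch of the 11-dimensional color attribute (color-name) extraction follows. Practical implementations use a probabilistic RGB-to-color-name mapping learned from labelled images, as in the cited work of Danelljan et al.; the nearest-prototype assignment and the prototype RGB values below are simplifications assumed purely for illustration.

```python
import numpy as np

# Rough RGB prototypes for the 11 English color names (illustrative values only).
COLOR_NAMES = {
    "black": (0, 0, 0), "blue": (0, 0, 255), "brown": (139, 69, 19),
    "grey": (128, 128, 128), "green": (0, 128, 0), "orange": (255, 165, 0),
    "pink": (255, 192, 203), "purple": (128, 0, 128), "red": (255, 0, 0),
    "white": (255, 255, 255), "yellow": (255, 255, 0),
}

def color_name_features(image):
    """Map an H x W x 3 RGB image to an 11 x (H*W) one-hot color-name feature matrix."""
    protos = np.array(list(COLOR_NAMES.values()), dtype=float)   # 11 x 3 prototypes
    pixels = image.reshape(-1, 3).astype(float)                  # (H*W) x 3 pixels
    dist = np.linalg.norm(pixels[:, None, :] - protos[None, :, :], axis=2)
    nearest = dist.argmin(axis=1)                                # nearest color name per pixel
    feats = np.zeros((protos.shape[0], pixels.shape[0]))
    feats[nearest, np.arange(pixels.shape[0])] = 1.0             # one-hot 11-dimensional feature
    return feats

# toy usage on a random 32 x 32 RGB patch
feats = color_name_features(np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8))
```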
The statistical relationship between the target and its local region is established and the likelihood function (formula (4)) is constructed; by computing the maximum of the likelihood function, the most probable position of the target is calculated.
In addition, the present invention is evaluated against five different methods on six different data sets, in terms of success rate (SR, %) and center location error (pixels). Since most trackers involve randomness, each video clip is run 5 times and the average result is recorded, so as to exclude chance. As can be seen from Figs. 4 and 5 (Fig. 4 compares the recognition accuracy of the invention and existing methods on different data sets, and Fig. 5 compares their center error rates on different data sets), compared with other existing algorithms the tracker of the invention improves in precision and decreases in center error.
Target tracking is carried out separately for illumination variation (in Fig. 6 the target object is detected under different illumination, as shown in the four panels a, b, c, d), target pose variation (in Fig. 7 the posture of the skier changes significantly, as shown in the four panels a, b, c, d), and occlusion (in Fig. 8 the target object is occluded to different degrees, as shown in the four panels a, b, c, d). The recognition results are shown in Figs. 6, 7 and 8. From the displayed results, the algorithm of the invention reduces the influence of light of varying intensity on target tracking. In terms of pose variation, different pose changes do not cause loss of tracking; on the contrary, the algorithm performs excellently in tracking the target. In terms of target occlusion, to verify that the invention retains high accuracy when the target is occluded, another person with features similar to the tracked target is chosen as the occluder to interfere with the tracking and recognition of the target; Fig. 8 shows that the algorithm of the invention performs well. The different curves of Fig. 9 show the confidence map for different values of β, i.e., the distribution of the confidence map function when β takes different values; it can be seen from the figure that the effect is better when β is 1.
In conclusion the space-time context fast track method based on colorful attribute and PCA, overcomes on target tracking Illumination variation, target carriage change block the adverse effect brought to tracking, solve the thorny problem of target tracking.Simultaneously The real-time that ensure that target tracking provides possibility for the practical application of the algorithm.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that modifications can still be made to the technical solutions recorded in the foregoing embodiments, or equivalent replacements can be made to some or all of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A fast spatio-temporal context tracking method based on color attributes and PCA, characterized in that it specifically comprises:
101. establishing the spatial context model of target tracking;
102. establishing the context prior probability model for processing image frames;
103. establishing the confidence map model used to indicate the target location;
104. according to the spatial context model, the context prior probability model, and the confidence map model, obtaining the likelihood function model of the target location;
105. transforming the constructed likelihood function model from the spatial domain to the frequency domain using the fast Fourier transform;
106. training and learning the spatial context model based on the context prior probability model and the frequency-domain likelihood function model;
107. extracting the multi-dimensional color attribute features of the target object to be identified;
108. normalizing the color attribute features;
109. applying the PCA algorithm to reduce the dimensionality of the extracted and normalized color attribute features to obtain the final feature vector;
110. obtaining a new confidence map model from the learned spatial context model, the final feature vector, and the context prior probability model of the test image, wherein the variable corresponding to the maximum of the new confidence map model determines the most likely position of the target.
2. The method according to claim 1, characterized in that step 101 comprises:
using h^sc(x − z) to represent the relationship between the target location x and a local context location z in terms of distance and direction, and obtaining the conditional probability relation between the target and its context:
P(x | c(z), o) = h^sc(x − z)    (1)
this conditional probability function serving as the established spatial context model.
3. The method according to claim 2, characterized in that step 102 comprises:
using the context prior probability function as the context prior probability model;
wherein the context prior probability function is: P(c(z) | o) = I(z) ω_σ(z − x*)    (2)
I(z) is the multi-dimensional color attribute feature intensity at z;
K is a normalization constant, σ is a scale parameter, and ω_σ(·) denotes a weighting function.
4. The method according to claim 3, characterized in that step 103 comprises:
letting: c(x) = P(x | o)    (4)
wherein o is the target appearing in the scene, x is a target location, and P(x | o) is the probability that the target is at that location;
combining the spatial context model and the context prior probability model to obtain the confidence map model of the target location:
α is a scale parameter, β is a shape factor, and s is a normalization coefficient.
5. The method according to claim 4, characterized in that step 104 comprises:
the likelihood function model of the target location being
6. The method according to claim 5, characterized in that step 105 comprises:
the likelihood function model transformed from the spatial domain to the frequency domain being:
7. The method according to claim 6, characterized in that step 106 comprises:
taking the conditional probability function obtained by learning as the trained spatial context model;
the learned conditional probability function being
8. The method according to claim 7, characterized in that step 107 comprises:
extracting the 11-dimensional color-name features of the target object to be identified;
and step 108 comprises: normalizing the 11-dimensional color features using formula (9);
wherein x is the feature matrix of the extracted object after normalization, x^(i) denotes the i-th feature vector of the original data, μ_i is the mean of the original sample data, and σ is the standard deviation of the original sample data.
9. The method according to claim 8, characterized in that step 109 comprises:
calculating the covariance matrix:
wherein Σ is the covariance matrix, m is the dimensionality, and x^(i) is the feature vector after normalization; by singular value decomposition, we have
using formula (11) to obtain the projection matrix U, the dimensionality of the feature matrix x being reduced from m × n to n × k;
using formula (12) to obtain the feature z, wherein m is 11, k is 3, and n is the length of the color-name vector.
10. The method according to claim 9, characterized in that step 110 comprises:
the variable corresponding to the maximum of the new confidence map model determining the most likely position of the target;
wherein the new confidence map model is
t denoting the t-th frame image, and the new target location being the position at which the new confidence map attains its maximum.
CN201810264378.0A 2018-03-28 2018-03-28 A kind of space-time context fast track method based on color attribute and PCA Pending CN108520527A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810264378.0A CN108520527A (en) 2018-03-28 2018-03-28 A kind of space-time context fast track method based on color attribute and PCA

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810264378.0A CN108520527A (en) 2018-03-28 2018-03-28 A kind of space-time context fast track method based on color attribute and PCA

Publications (1)

Publication Number Publication Date
CN108520527A true CN108520527A (en) 2018-09-11

Family

ID=63433039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810264378.0A Pending CN108520527A (en) 2018-03-28 2018-03-28 A kind of space-time context fast track method based on color attribute and PCA

Country Status (1)

Country Link
CN (1) CN108520527A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101653278B1 (en) * 2016-04-01 2016-09-01 수원대학교산학협력단 Face tracking system using colar-based face detection method
CN107093189A (en) * 2017-04-18 2017-08-25 山东大学 Method for tracking target and system based on adaptive color feature and space-time context
CN107507198A (en) * 2017-08-22 2017-12-22 中国民用航空总局第二研究所 Aircraft brake disc detects and method for tracing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MARTIN DANELLJAN et al.: "Adaptive color attributes for real-time visual tracking", IEEE *
陈国梅: "Research on target tracking algorithms fusing context information and color information" (上下文信息和颜色信息融合的目标跟踪算法研究), China Master's Theses Full-text Database, Information Science and Technology *
陈晓书: "Research on spatio-temporal context target tracking algorithms incorporating color features" (融合颜色特征的时空上下文目标跟踪算法研究), China Master's Theses Full-text Database, Information Science and Technology *

Similar Documents

Publication Publication Date Title
CN110363047B (en) Face recognition method and device, electronic equipment and storage medium
CN106845621B (en) Dense population number method of counting and system based on depth convolutional neural networks
US7957557B2 (en) Tracking apparatus and tracking method
US8873798B2 (en) Methods for tracking objects using random projections, distance learning and a hybrid template library and apparatuses thereof
CN109598684B (en) Correlation filtering tracking method combined with twin network
CN109685045B (en) Moving target video tracking method and system
US20030161500A1 (en) System and method for probabilistic exemplar-based pattern tracking
KR102132722B1 (en) Tracking method and system multi-object in video
CN112926410A (en) Target tracking method and device, storage medium and intelligent video system
CN111325051B (en) Face recognition method and device based on face image ROI selection
KR20060097074A (en) Apparatus and method of generating shape model of object and apparatus and method of automatically searching feature points of object employing the same
CN111199554A (en) Target tracking anti-blocking method and device
CN111739064B (en) Method for tracking target in video, storage device and control device
CN105760898A (en) Vision mapping method based on mixed group regression method
CN113763427A (en) Multi-target tracking method based on coarse-fine shielding processing
Yang et al. A method of pedestrians counting based on deep learning
CN115019241A (en) Pedestrian identification and tracking method and device, readable storage medium and equipment
CN109993090B (en) Iris center positioning method based on cascade regression forest and image gray scale features
CN111105436B (en) Target tracking method, computer device and storage medium
CN113450457B (en) Road reconstruction method, apparatus, computer device and storage medium
CN114022567A (en) Pose tracking method and device, electronic equipment and storage medium
CN113255549A (en) Intelligent recognition method and system for pennisseum hunting behavior state
CN106446832B (en) Video-based pedestrian real-time detection method
CN115018886B (en) Motion trajectory identification method, device, equipment and medium
Belmouhcine et al. Robust deep simple online real-time tracking

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180911)