
CN103390278A - Detecting system for video aberrant behavior - Google Patents


Info

Publication number
CN103390278A
CN103390278A (application CN201310311800.0A; granted as CN103390278B)
Authority
CN
China
Prior art keywords
model
sigma
video
constantly
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013103118000A
Other languages
Chinese (zh)
Other versions
CN103390278B (en
Inventor
郭立 (Guo Li)
刘鹏 (Liu Peng)
王成彰 (Wang Chengzhang)
于昊 (Yu Hao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201310311800.0A priority Critical patent/CN103390278B/en
Publication of CN103390278A publication Critical patent/CN103390278A/en
Application granted granted Critical
Publication of CN103390278B publication Critical patent/CN103390278B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

A detection system for abnormal behavior in video comprises a trajectory extraction module, a region division module, a conditional random field modeling module, and a detection module. The trajectory extraction module extracts silhouettes with a Gaussian mixture model to obtain trajectory sequences. The region division module divides the background of the video under test into regions, either manually or by algorithm depending on the requirements, and the background block sequence is labeled manually. The conditional random field modeling module combines the divided regions with the corresponding trajectories to construct feature vectors for CRF model training and parameter estimation. The detection module obtains a feature vector for the test sequence by the same method, uses the estimated parameters to compute the probability of each abnormal behavior class, and assigns the class with the highest probability. The system has good practicability and high classification accuracy.

Description

A video abnormal behavior detection system
Technical field
The present invention relates to the field of video trajectory anomaly analysis and detection, and in particular to a video trajectory anomaly detection system.
Background technology
Human behavior analysis has wide application in security monitoring, advanced human-computer interaction, video conferencing, behavior-based video retrieval, and medical diagnosis, and its potential economic value has made it a research hotspot in computer vision. The Intelligent Video Surveillance System (IVSS) is one of its most important applications.
With the development of information technology and the needs of public safety, demand for intelligent monitoring has grown sharply in recent years. Traditional passive monitoring systems rely on manual observation and analysis of massive monitoring data, which causes high labor costs, low recognition rates, and high miss rates; sifting through large amounts of video afterwards for usable evidence is also extremely time-consuming. This cannot meet the video surveillance requirements of security-sensitive sectors such as public security, banking, and transportation.
Abnormal behavior recognition is the main task of an intelligent monitoring system, whose chief requirements are real-time performance and robustness. Current research still concentrates mainly on recognizing a limited number of simple behavior classes, or on detecting abnormal behavior under simple rules in special scenes.
Two methods are commonly used to detect abnormal behavior. The first is based on the dissimilarity between abnormal and normal behavior, and subdivides into two sub-methods according to whether a behavior model is built:
(1) Without building a behavior model: first cluster the observed behavior patterns and label the small clusters as abnormal; at detection time, compute the likelihood between the behavior in the scene and the normal behaviors in the database, and judge a behavior abnormal when its likelihood deviates beyond a threshold.
(2) Or first build a database of normal behaviors; any behavior that cannot be represented by the data in the database is labeled abnormal.
These methods are mainly used to analyze single-person behavior, need a great deal of prior knowledge to build the model, and the resulting models have defects in both scene adaptability and real-time performance.
The second method is based on modeling the abnormal behavior itself.
First, image features are extracted from the video sequence, usually by detecting and tracking moving objects and computing their trajectory, speed, and shape descriptors. Then a "normal" behavior model is constructed manually or by supervised learning from those features. Behavior modeling usually chooses a hidden Markov model (HMM), a maximum-entropy Markov model (MEMM), or another graphical model.
These models discretize the image features into a series of states and model how the states change over time. To detect abnormal behavior, the video is matched against a set of normal models, and the segments that fit no model are regarded as abnormal. Model-based methods are quite effective in scenes where "normal behavior" can be clearly defined and constrained. In typical real life, however, defining and modeling "normal" behavior is harder than defining "abnormal" behavior.
Although both kinds of methods can build accurate behavior models for a fixed scene, they need a large number of manually labeled behavior sequences to obtain enough training samples, which wastes considerable human resources.
Summary of the invention
The technical problem solved by the present invention: to overcome the deficiencies of the prior art and provide a video abnormal behavior detection system that effectively detects global, long-duration behavior, with good applicability and high classification accuracy.
The technical solution of the present invention: a video abnormal behavior detection system, characterized by comprising:
A trajectory extraction module, for building the trajectory sequences of the targets in the training and test videos. Foreground and background are extracted from the video; the foreground is denoised, shadows are removed, a bounding box is constructed, and the bounding-box center is taken as the trajectory point.
A region division module: the background is divided into regions, either manually or automatically by an algorithm, according to the requirements. The region division is combined with the trajectory coordinates to obtain the feature sequences of the abnormal behaviors.
A conditional random field modeling module: a conditional random field (CRF) is adopted to detect anomalies in the video. Trajectory anomalies manifest mainly in particular regions of a particular background, such as loitering or staying at some position, moving in the wrong direction, or crossing a boundary. The abnormal behavior sequences constructed earlier are used for CRF model training and parameter estimation; the CRF parameters can be estimated by iterative scaling.
A detection module for the video under test: after modeling is complete, the test sequence undergoes the same processing to obtain its feature sequence; the previously built models are used to compute the conditional probability of each abnormal behavior class, the class with the highest probability is taken as the label, and whether the behavior is abnormal is thereby determined.
The trajectory extraction module is implemented as follows:
The idea of the GMM (Gaussian mixture model) is that each pixel of an image can be represented by a weighted sum of M Gaussian distributions; the pixel's distribution is:

p(X_t) = Σ_{k=1}^{M} w_{k,t} · η(X_t, μ_{k,t}, Σ_{k,t})

where M is the number of Gaussian components, X_t is the pixel's red, green, and blue color components at time t, w_{k,t} is the weight of the k-th Gaussian component (1 ≤ k ≤ M) at time t, μ_{k,t} and Σ_{k,t} are the mean and covariance matrix of the k-th component of the mixture at time t, and η is the Gaussian density function:

η(X_t, μ, Σ) = 1 / ((2π)^{n/2} |Σ_{k,t}|^{1/2}) · exp(−(1/2) (X_t − μ_{k,t})^T Σ_{k,t}^{−1} (X_t − μ_{k,t}))

(1) Assume the color channels are independently distributed, so the covariance simplifies to Σ_{k,t} = σ_{k,t}^2 I. Initialize the Gaussian mixture model.
(2) At time t, match each pixel X_t of the video against all Gaussian components. If the distance between X_t and the mean of the k-th Gaussian g_k is less than a threshold, X_t matches that component; the matched component's parameters are updated by the following formulas, which increase its weight:

w_{k,t} = (1 − α) w_{k,t−1} + α
μ_{k,t} = (1 − ρ) μ_{k,t−1} + ρ X_t
σ_{k,t}^2 = (1 − ρ) σ_{k,t−1}^2 + ρ (X_t − μ_{k,t})^2
ρ = α / w_{k,t}
(3) If no component matches, the Gaussian with the smallest weight is replaced by a new distribution, and the weights of the remaining components are updated by:

w_{k,t} = (1 − α) w_{k,t−1}
(4) Finally sort the Gaussian components by the priority w_{k,t}/σ_{k,t}; a larger value means a smaller variance and a higher probability of occurrence. The first C distributions after sorting are chosen as the background model and the rest as the foreground model, where C satisfies:

C = argmin_c ( Σ_{k=1}^{c} w_k > T )

Here T is a weight threshold, which can be understood as the proportion of the picture occupied by the background. If the threshold is set too large, complex environments (such as a slowly moving water surface or twigs swaying in the wind) increase the amount of computation; if it is set too small, the mixture may degenerate into a single Gaussian.
The region division module divides the background; the GMM model can also be used for this division.
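As an illustration of the per-pixel matching, update, and background-selection rules above, the following is a minimal single-channel (grayscale) sketch in Python; the class name, default constants, and the simplified replacement rule for unmatched pixels are assumptions, not part of the patent.

```python
import numpy as np

class PixelGMM:
    """Online Gaussian mixture for one grayscale pixel, a minimal sketch of
    the matching/update/sorting rules described above (hypothetical names)."""

    def __init__(self, m=3, alpha=0.01, match_sigmas=2.5, t_background=0.7):
        self.w = np.full(m, 1.0 / m)          # component weights w_k
        self.mu = np.linspace(0.0, 255.0, m)  # component means mu_k
        self.var = np.full(m, 225.0)          # component variances sigma_k^2
        self.alpha = alpha                    # learning rate alpha
        self.match_sigmas = match_sigmas      # match threshold in std. deviations
        self.t_background = t_background      # weight threshold T

    def update(self, x):
        """Fold observation x into the mixture; return True if x is background."""
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        k = int(np.argmin(d))
        matched = d[k] < self.match_sigmas
        if matched:
            rho = self.alpha / max(self.w[k], 1e-6)
            # w_k <- (1-a)w_k + a for the match, w_j <- (1-a)w_j otherwise
            self.w = (1.0 - self.alpha) * self.w
            self.w[k] += self.alpha
            self.mu[k] += rho * (x - self.mu[k])
            self.var[k] += rho * ((x - self.mu[k]) ** 2 - self.var[k])
        else:
            # replace the lowest-weight component with a new, wide Gaussian
            j = int(np.argmin(self.w))
            self.mu[j], self.var[j], self.w[j] = x, 225.0, 0.05
        self.w /= self.w.sum()
        # sort by priority w/sigma; the first C components explain fraction T
        order = np.argsort(-self.w / np.sqrt(self.var))
        n_bg = int(np.searchsorted(np.cumsum(self.w[order]), self.t_background)) + 1
        return bool(matched and k in order[:n_bg])
```

Feeding a long run of stable values makes them background, while a sudden outlier is flagged as foreground, mirroring the adaptation behavior the text describes.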
The idea of the GMM model here is to use M Gaussian distributions to divide the image into several parts, where the pixels of each part obey a Gaussian distribution with mean μ_k and variance σ_k^2. Determining the number of Gaussians M and the parameters θ = (w_1 … w_M, μ_1 … μ_M, σ_1^2 … σ_M^2) (the weights, means, and variances of the mixture) then becomes the main task. If a sample follows a known family of distributions whose parameters must be estimated, the results of many trials can be used to infer the most probable parameter values. This is the basic idea of maximum likelihood estimation: the parameter value that makes the observed sample most probable is taken as the estimate. The EM algorithm can be used to estimate the GMM parameters.
To construct the feature vectors needed to train the conditional random field model, the background is divided into several regions and each region is labeled. The division can be done by an algorithm (GMM) or manually; the procedure is as follows:
(1) Initialization:
Let the background image observations be the vectors x_i, i = 1, …, n, and initialize

θ^(0) = (w_1^(0) … w_M^(0), μ_1^(0) … μ_M^(0), σ_1^{2(0)} … σ_M^{2(0)})

(2) E-step (estimation):
For each pixel, the probability that it belongs to the k-th Gaussian is:

α_ik = p(k | x_i, θ^old) = w_k η_k(x_i; μ_k, σ_k) / Σ_{j=1}^{M} w_j η_j(x_i; μ_j, σ_j),  1 ≤ i ≤ n, 1 ≤ k ≤ M

(3) M-step (maximization):
Maximize the likelihood function to obtain the new parameter values.
First update the weights:  w_k = Σ_{i=1}^{n} α_ik / n
Update the means:  μ_k = Σ_{i=1}^{n} α_ik x_i / Σ_{i=1}^{n} α_ik
Update the variances:  σ_k^2 = Σ_{i=1}^{n} α_ik (x_i − μ_k)^2 / Σ_{i=1}^{n} α_ik
(4) Repeat steps (2) and (3) until convergence.
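The E-step and M-step above can be sketched for the one-dimensional case as follows; the function name and the initialization of the means (spread over the data range) are assumptions.

```python
import numpy as np

def em_gmm_1d(x, m=2, n_iter=50):
    """EM for a one-dimensional Gaussian mixture, implementing the E-step and
    M-step updates listed above."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    w = np.full(m, 1.0 / m)                # weights w_k
    mu = np.linspace(x.min(), x.max(), m)  # spread the initial means
    var = np.full(m, x.var())              # variances sigma_k^2
    for _ in range(n_iter):
        # E-step: alpha_ik = w_k N(x_i; mu_k, var_k) / sum_j w_j N(x_i; mu_j, var_j)
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        alpha = w * dens
        alpha /= alpha.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from responsibilities
        nk = alpha.sum(axis=0)
        w = nk / n
        mu = (alpha * x[:, None]).sum(axis=0) / nk
        var = (alpha * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```

On two well-separated clusters of equal size, the estimated means converge to the cluster centers and the weights to 0.5 each.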
The conditional random field modeling module builds a conditional random field model based on the trajectories and the region division. The feature vectors of the training videos are used to build the CRF model and estimate its parameters; iterative scaling is used to estimate the CRF parameters from the feature vectors of videos of the same anomaly class. The whole procedure is as follows:
When training the model, the following abnormal behaviors are defined: crossing a boundary; loitering; staying; moving in the wrong direction. All other behaviors are regarded as normal. The target trajectory obtained by the method above is combined with the regions segmented from the background into feature tuples T_i = (p_t, q_t, p_{t−1}, q_{t−1}, t, subarea_k, state), where (p_t, q_t) are the coordinates at time t, (p_{t−1}, q_{t−1}) the coordinates at time t−1, subarea_k is the label of the region the target is in, and state marks whether the target has already passed through the current region. For the training videos belonging to the same abnormal behavior class, their feature vectors are obtained together (e.g. the feature vectors of all the "wrong direction" videos).
After all feature vectors are obtained, the parameters λ = (λ_1, λ_2, …, λ_s, …, λ_m) must be estimated from the training data:
(1) In practical application, the feature functions and the potential function must first be constructed. From the feature tuple T_i = (p_t, q_t, p_{t−1}, q_{t−1}, t, subarea_k, state) obtained above, a feature function is built from the background region subarea_k occupied by the target at time t:

[feature function f_1 — given only as an equation image in the original]

where x_t denotes the observed features, here the time t and the background region subarea_k, and y_{t−1} and y_t are the manual labels at times t−1 and t.
From the coordinates p_t, q_t and p_{t−1}, q_{t−1} at times t and t−1, the direction of motion of the target can be obtained, and another feature function is constructed:

[feature function f_2 — given only as an equation image in the original]

The other feature functions are constructed analogously, and finally the potential function is formed as the exponential of a linear combination of the feature functions:

[potential function — given only as an equation image in the original]

Their coefficients are λ_a, a = 1, 2, 3, 4, initialized to λ_a = 1 (a = 1, 2, 3, 4).
Compute the empirical expectation Ẽ_a = Σ_{x,y} f_a(x_t, y_{t−1}, y_t).
(2) Compute the normalization factor: Z(x) = Σ_{y=1}^{5} exp( Σ_a Σ_{t=1}^{n} λ_a f_a(x_t, y_{t−1}, y_t) ).
(3) Compute the conditional distribution (following the article Hanna M. published in 2004):

p^(k)(y_t | x_t) = (1/Z(x)) exp{ Σ_a Σ_{t=1}^{n} λ_a f_a(x_t, y_{t−1}, y_t) }

Using the current λ_a, compute the model expectation E_a^(k) = Σ_{x_t, y_t} p^(k)(y_t | x_t) f_a(x_t, y_{t−1}, y_t).
(4) Update the parameter values [update formula given only as an equation image in the original; in iterative scaling it takes the form λ_a ← λ_a + (1/C) log(Ẽ_a / E_a^(k))]; C is taken as 4 here.
(5) Repeat (2) to (4) until the λ_a converge.
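The expectation-matching loop of steps (2) to (4) can be illustrated on a toy conditional log-linear model. This is a generic iterative-scaling sketch under the classic constant-feature-sum condition, not the patent's exact CRF; all names and the example features are illustrative assumptions.

```python
import numpy as np

def train_iterative_scaling(data, feats, n_feats, n_labels, C=1.0, n_iter=50):
    """Toy iterative-scaling trainer for p(y|x) ∝ exp(sum_a lambda_a f_a(x, y)).
    Assumes the feature vector of every (x, y) pair sums to C."""
    lam = np.zeros(n_feats)
    # empirical expectations E~_a over the labeled training pairs
    emp = np.mean([feats(x, y) for x, y in data], axis=0)
    for _ in range(n_iter):
        mod = np.zeros(n_feats)  # model expectations E_a under current lambda
        for x, _ in data:
            f = np.array([feats(x, y) for y in range(n_labels)])
            p = np.exp(f @ lam)
            p /= p.sum()
            mod += p @ f
        mod /= len(data)
        lam += (1.0 / C) * np.log(emp / mod)  # lambda_a += (1/C) log(E~_a / E_a)
    return lam

# usage: two complementary indicator features, so they always sum to C = 1
feats = lambda x, y: np.array([float(y == x), float(y != x)])
data = [(0, 0)] * 8 + [(0, 1)] * 2 + [(1, 1)] * 8 + [(1, 0)] * 2
lam = train_iterative_scaling(data, feats, n_feats=2, n_labels=2)
```

After training, the model probability of the label agreeing with the input matches the empirical 80% agreement rate in the data, which is exactly the fixed point of the update.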
The video detection module detects which class of abnormal behavior the video under test belongs to. Using the feature vector of the test video and the previously constructed conditional random field model, it computes the probability that the video belongs to each abnormal behavior class and takes the class with the highest probability, thereby performing abnormal behavior detection on the video trajectory. The implementation is as follows:
First define δ_t(i): given that the first t observations are x_1 x_2 … x_t, δ_t(i) is the maximum probability that the node at time t is labeled i, where i is the behavior label index (normal, crossing, loitering, staying, and wrong direction are numbered 1 to 5). Also define a backtracking array W_t(i), which stores the optimal label at the previous step on the best path ending in label i at time t. From δ_t(i) and W_t(i) at time t, δ_{t+1}(i) at time t+1 can be obtained recursively:
(1) Initialization: δ_1(i) = p(y_1 = i | x_1), 1 ≤ i ≤ 5, where p(y_1 = i | x_1) is the conditional distribution constructed in the modeling step; λ_a are the parameters estimated when building the model, and f_a(x_1, y_0, y_1) are the feature functions obtained from the test video's feature vector.
(2) Recursively find the local optimum:

δ_t(j) = max_{1≤i≤5} [ δ_{t−1}(i) p(y_t = j, y_{t−1} = i | x) ] · p(y_t | x_t),  1 ≤ j ≤ 5, 2 ≤ t ≤ n.

Here p(y_t | x_t) represents the probability that the behavior is labeled j at time t given the observed data (j is the behavior class index, numbered 1 to 5), and p(y_t = j, y_{t−1} = i | x) is the probability that the behavior label transfers to j at time t given that it is i at time t−1.
(3) Update the backtracking array: W_t(j) = argmax_{1≤i≤5} [ δ_{t−1}(i) p(y_t = j, y_{t−1} = i | x) ], 2 ≤ t ≤ n; the label i that maximizes the product in brackets is stored in W_t(j).
(4) Termination: compute P* = max_{1≤i≤5} δ_n(i), the maximum of δ_n(i), and y_n* = argmax_{1≤i≤5} δ_n(i), the label i that maximizes δ_n(i).
(5) Backtrack through the array to recover the label with the maximum probability at each time t:

y_t* = W_{t+1}(y_{t+1}*),  t = n−1, n−2, …, 1.

Starting from y_n*, the backtracking array yields y_{n−1}*, and so on, until the states y_t* at all times are obtained.
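The five decoding steps above can be sketched compactly; the factorization into per-step emission and transition tables is an illustrative simplification, not the patent's exact probabilities.

```python
import numpy as np

def viterbi(emission, transition, n_labels):
    """Viterbi decoding over a chain of labels, following steps (1)-(5) above:
    delta holds the running maxima and W the backtracking array.
    emission[t, j] stands in for p(y_t = j | x_t) and transition[i, j]
    for p(y_t = j | y_{t-1} = i, x); both are illustrative inputs."""
    n = emission.shape[0]
    delta = np.zeros((n, n_labels))
    W = np.zeros((n, n_labels), dtype=int)
    delta[0] = emission[0]                           # (1) initialization
    for t in range(1, n):                            # (2)-(3) recursion
        scores = delta[t - 1][:, None] * transition  # delta_{t-1}(i) p(j | i)
        W[t] = scores.argmax(axis=0)                 # best previous label i
        delta[t] = scores.max(axis=0) * emission[t]
    path = np.zeros(n, dtype=int)
    path[-1] = int(delta[-1].argmax())               # (4) termination
    for t in range(n - 2, -1, -1):                   # (5) backtracking
        path[t] = W[t + 1, path[t + 1]]
    return path, float(delta[-1].max())
```

With two labels, a sticky transition table, and evidence that switches after the first step, the decoder returns the expected label switch.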
The invention has the following advantages:
(1) The video anomaly detection method based on the conditional random field model can detect several specific classes of unexpected abnormal events; its anomaly analysis algorithm has good applicability and high classification accuracy.
(2) The scheme proposed by the present invention requires very little change to existing systems, does not affect system compatibility, and is simple and efficient to implement.
Description of drawings
The above and/or additional aspects and advantages of the present invention will become apparent and easily understood from the following description of the invention in conjunction with the accompanying drawings, in which:
Fig. 1 is a block diagram of the video anomaly detection flow according to the present invention;
Fig. 2 shows the implementation procedure of the abnormal behavior detection modules according to the present invention;
Fig. 3 is a flowchart of foreground extraction with the GMM model according to the present invention;
Fig. 4 is a schematic diagram of extracted trajectory sequences according to the present invention, where (a) is trajectory sequence 1 and (b) is trajectory sequence 2;
Fig. 5 is a schematic diagram of background region division according to the present invention, where (a) is the original background and (b) shows the regions partitioned with the GMM model;
Fig. 6 is a first-order conditional random field model diagram according to the present invention;
Fig. 7 is a first example of abnormal behavior detection results according to the present invention, where (a) is normal, (b) is normal, and (c) is staying;
Fig. 8 is a second example of abnormal behavior detection results according to the present invention, where (a) is loitering, (b) is crossing, and (c) is crossing;
Fig. 9 is an example of device detection results according to the present invention, where (a) select the video, (b) extract the target, (c) obtain the trajectory, (d) select the number of background segmentation regions, (e) obtain the background segmentation result, and (f) detect.
Embodiment
The present invention is described in detail below; examples of the embodiments are shown in the drawings, where the same or similar labels throughout denote the same or similar elements, or elements with the same or similar functions. The embodiments described with reference to the drawings are exemplary, intended only to explain the present invention, and are not to be interpreted as limiting it.
To achieve the purpose of the present invention, the invention discloses a video trajectory anomaly detection method based on conditional random fields and background region division. With reference to Fig. 1, the whole method comprises the following steps:
(1) Abnormal behavior detection consists of a training part and a testing part. Both first use the trajectory extraction module to obtain the target trajectory. There are many current methods for extracting targets; the present invention mainly adopts the Gaussian mixture model (GMM), which can update the background in time and fully takes small changes of the target into account.
Because the extracted target contains noise, shadows, and so on, further processing is needed. First a morphological operation removes isolated noise points from the target image; connected components are computed at the same time, and components with small areas are filtered out; shadow processing follows. Finally the target bounding box is obtained, and the center of the bounding box is computed as the trajectory point.
(2) After the trajectory is obtained, the region division module segments the background into regions.
(3) The conditional random field modeling module uses the previously obtained trajectories and the segmented background regions to construct the feature vectors, builds the conditional random field model, and estimates its parameters.
(4) The detection part for the video under test constructs feature vectors by the same method as for the training videos and performs model inference. At detection time, the probability that the feature vector belongs to each abnormal behavior class is computed, and the class with the highest probability is taken as the result.
In terms of the overall module flow of the present invention, with reference to Fig. 2: first the trajectory sequence is obtained by the trajectory extraction module; then the background is divided by the region division module to obtain the regions; next the model is constructed by the conditional random field modeling module; finally the abnormal behavior class is detected by the detection module. The specific implementation of each module is as follows:
1. Trajectory extraction module
Foreground and background are extracted from the training video with the GMM model, as shown in Fig. 3.
GMM is used to separate foreground and background mainly because: in a scene observed over a long time, the background is present most of the time; even a moving object of relatively uniform color produces more variation than the background, and objects generally differ in color from it. When an object is added to the scene, an adaptation period lets a new background model replace the old one; and when the object is removed, the original background model still exists, so the background model is quickly recovered.
The idea of the GMM (Gaussian mixture) model is that the color each pixel presents is represented by M states, where M is usually 3-5 and each state is approximated by a Gaussian distribution. The color of the pixel is represented by a random variable X, and the pixel value obtained from the video image at each time t is a sample of X. The pixel's distribution is:

p(X_t) = Σ_{k=1}^{M} w_{k,t} · η(X_t, μ_{k,t}, Σ_{k,t})

where X_t is the pixel's red, green, and blue color components at time t, w_{k,t} is the weight of the k-th Gaussian component (1 ≤ k ≤ M) at time t, μ_{k,t} and Σ_{k,t} are the mean and covariance matrix of the k-th component of the mixture at time t, and η is the Gaussian density function:

η(X_t, μ, Σ) = 1 / ((2π)^{n/2} |Σ_{k,t}|^{1/2}) · exp(−(1/2) (X_t − μ_{k,t})^T Σ_{k,t}^{−1} (X_t − μ_{k,t}))

(1) Assume the color channels are independently distributed, so the covariance simplifies to Σ_{k,t} = σ_{k,t}^2 I.
After initializing the mixture model, sort the Gaussian components by the priority w_{k,t}/σ_{k,t}.
(2) At time t, match each pixel X_t of the video against all Gaussian components. If the distance between X_t and the mean of the k-th Gaussian g_k is less than a threshold (2.5 standard deviations), that Gaussian and the pixel X_t are defined to match, and the parameters are updated by:

w_{k,t} = (1 − α) w_{k,t−1} + α
μ_{k,t} = (1 − ρ) μ_{k,t−1} + ρ X_t
σ_{k,t}^2 = (1 − ρ) σ_{k,t−1}^2 + ρ (X_t − μ_{k,t})^2
ρ = α / w_{k,t}

In the formulas, α is the learning rate, reflecting how fast the Gaussian model updates its parameters; it is a small decimal close to zero, with initial value 0.001. w_{k,t} is the weight of the k-th Gaussian (1 ≤ k ≤ M) at time t, μ_{k,t} and Σ_{k,t} are the mean and covariance matrix of the k-th component of the GMM at time t, the covariance simplifies to Σ_{k,t} = σ_{k,t}^2 I, and σ_{k,t} is the standard deviation of the k-th Gaussian.
(3) If no component matches, the Gaussian g_l with the lowest priority is re-initialized. Whether or not a match occurs, the weights, means, and variances are all updated according to the above policy.
The first C distributions after sorting are chosen as the background model, and the rest as the foreground model.
(4) After the target is extracted with the GMM model, its pixel coordinates (x, y) ∈ {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)} are obtained. The extreme values of the horizontal and vertical coordinates are then found: x_min = min(x_1, x_2, …, x_n), x_max = max(x_1, x_2, …, x_n), y_min = min(y_1, y_2, …, y_n), y_max = max(y_1, y_2, …, y_n). Lines are drawn through these extreme points: x = x_min, x = x_max, y = y_min, y = y_max, yielding the rectangular bounding box (x, y) ∈ [x_min, x_max] × [y_min, y_max] surrounding the target. Given the bounding box, its center ((x_min + x_max)/2, (y_min + y_max)/2) is computed and used as the trajectory point of the target. Combining all the trajectory points in the video in time order gives the trajectory sequence, as shown in Fig. 4.
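The bounding-box and trajectory-point computation above translates directly into code; the function name is mine.

```python
import numpy as np

def bounding_box_centroid(points):
    """Given the foreground pixel coordinates of a target, return the
    rectangular bounding box [x_min, x_max] x [y_min, y_max] and its
    center, used as the trajectory point."""
    pts = np.asarray(points, dtype=float)
    x_min, y_min = pts.min(axis=0)   # minima of the x and y coordinates
    x_max, y_max = pts.max(axis=0)   # maxima of the x and y coordinates
    centroid = ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
    return (x_min, x_max, y_min, y_max), centroid
```

Applying this per frame and concatenating the centroids in time order yields the trajectory sequence used by the later modules.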
2. Region division module
To better reveal the meaning of a trajectory in the current scene and to reduce computational complexity, the present invention divides the background extracted with the GMM into L regions, each labeled subarea_k (k = 1, 2, …, L), as shown in Fig. 5.
The background can be divided manually or by an algorithm; Fig. 5 uses the GMM algorithm to partition the regions.
The idea of using the GMM algorithm for division is mainly this: suppose there are M Gaussian components in total, so the image can be divided into M regions, where the pixels of each region obey the k-th Gaussian distribution with mean μ_k and variance σ_k^2. The model parameters are θ = (w_1 … w_M, μ_1 … μ_M, σ_1^2 … σ_M^2) (the weights, means, and variances of the mixture); θ can be estimated by maximum likelihood, solved with the EM algorithm.
The EM algorithm steps:
1. Initialize θ.
2. Estimate (E-step):
Compute the posterior probability associated with weight w_k:

α_k = w_k η_k(x_i; μ_k, σ_k) / Σ_{j=1}^{M} w_j η_j(x_i; μ_j, σ_j),  1 ≤ i ≤ n, 1 ≤ k ≤ M

where x_i is the i-th pixel in the video frame, w_k is the weight of the k-th Gaussian, and μ_k and σ_k are the mean and standard deviation of the k-th Gaussian;
3. Maximize (M-step):
Update the weights, means, and variances.
Repeat the E-step and M-step until convergence; finally, compute from the parameters θ which Gaussian each pixel belongs to, and partition the image accordingly.
3. Conditional random field modeling module
The chain-structured undirected graph of the conditional random field is shown in Fig. 6. In the present invention Y represents the abnormal behavior class, and X represents the feature values obtained by observation. When training the model, as described earlier, the following abnormal behaviors are defined: crossing a boundary; loitering; staying; moving in the wrong direction. All other behaviors are regarded as normal. The target trajectory obtained by the method above is combined with the regions segmented from the background into feature tuples T_i = (p_t, q_t, p_{t−1}, q_{t−1}, t, subarea_k, state), where (p_t, q_t) are the coordinates at time t, (p_{t−1}, q_{t−1}) the coordinates at time t−1, subarea_k is the label of the region the target is in, and state marks whether the target has already passed through the current region. For the training videos belonging to the same abnormal behavior class, their feature vectors are obtained together (e.g. the feature vectors of all the "wrong direction" videos).
After all feature vectors are obtained, the parameters λ = (λ_1, λ_2, …, λ_s, …, λ_m) must be estimated from the training data:
(1) In practical application, the feature functions and the potential function must first be constructed. In the invention, from the feature tuple T_i = (p_t, q_t, p_{t−1}, q_{t−1}, t, subarea_k, state) obtained above, a feature function is built from the background region subarea_k occupied by the target at time t:

[feature function f_1 — given only as an equation image in the original]

where x_t denotes the observed features, here the time t and the background region subarea_k, and y_{t−1} and y_t are the manual labels at times t−1 and t.
From the coordinates p_t, q_t and p_{t−1}, q_{t−1} at times t and t−1, the direction of motion of the target can be obtained, and a feature function is constructed:

[feature function f_2 — given only as an equation image in the original]

From the region subarea_k the target currently occupies and the mark state, a feature function distinguishing repeated passage through a region is constructed:

[feature function f_3 — given only as an equation image in the original]

From the region subarea_k the target currently occupies and the time t, the time of entering subarea_k is counted and a feature function is constructed:

[feature function f_4 — given only as an equation image in the original]
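The four feature functions are given only as equation images in the original, so the following Python sketch shows plausible indicator-style definitions of the four kinds described (region occupancy, motion direction, repeated passage, dwell time). Every definition, name, and threshold here is an assumption for illustration, not the patent's formula.

```python
import math

def make_feature_functions(target_region, target_direction, dwell_threshold):
    """Illustrative indicator feature functions over a trajectory tuple
    T = {p, q, p_prev, q_prev, t, subarea, state}, in the spirit of the
    four features described in the text (all definitions are assumptions)."""

    def f_region(T, y_prev, y):
        # fires when the target is inside a particular background region
        return 1.0 if T["subarea"] == target_region else 0.0

    def f_direction(T, y_prev, y):
        # fires when the motion direction (from t-1 to t) matches a given one
        dx, dy = T["p"] - T["p_prev"], T["q"] - T["q_prev"]
        return 1.0 if abs(math.atan2(dy, dx) - target_direction) < math.pi / 4 else 0.0

    def f_repeat(T, y_prev, y):
        # fires when the target has already passed through the current region
        return 1.0 if T["state"] else 0.0

    def f_dwell(T, y_prev, y):
        # fires when the time spent so far exceeds a threshold
        return 1.0 if T["t"] >= dwell_threshold else 0.0

    return [f_region, f_direction, f_repeat, f_dwell]
```

Evaluating the four functions on a trajectory tuple yields the binary feature vector that the potential function then combines linearly.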
Finally the potential function is constructed as the exponential of a linear combination of the feature functions:

[potential function — given only as an equation image in the original]

Their coefficients are λ_a, a = 1, 2, 3, 4, initialized to λ_a = 1 (a = 1, 2, 3, 4).
Compute the empirical expectation Ẽ_a = Σ_{x,y} f_a(x_t, y_{t−1}, y_t).
(2) Compute the normalization factor: Z(x) = Σ_{y=1}^{5} exp( Σ_a Σ_{t=1}^{n} λ_a f_a(x_t, y_{t−1}, y_t) ).
(3) Compute the conditional distribution (following the article Hanna M. published in 2004):

p^(k)(y_t | x_t) = (1/Z(x)) exp{ Σ_a Σ_{t=1}^{n} λ_a f_a(x_t, y_{t−1}, y_t) }

Using the current λ_a, compute the model expectation E_a^(k) = Σ_{x_t, y_t} p^(k)(y_t | x_t) f_a(x_t, y_{t−1}, y_t).
(4) Update the parameter values [update formula given only as an equation image in the original; in iterative scaling it takes the form λ_a ← λ_a + (1/C) log(Ẽ_a / E_a^(k))]; C is taken as 4 here.
(5) Repeat (2) to (4) until the λ_a converge.
4. Detection module for the video under test
An inference algorithm computes the labels of the test data from the trained CRF model. The test data first undergo the same preprocessing to obtain feature vectors; then the CRF models just built (normal, crossing, wandering, staying, reverse), with the estimated parameters λ = (λ_1, λ_2, ..., λ_a), are used to compute the probability that the test video belongs to each abnormal behavior, and the maximum is taken as the classification basis. Fig. 7 shows a detected normal behavior and Fig. 8 a detected abnormal behavior. For a simple chain-structured undirected graph, the Viterbi algorithm can be referenced: it finds locally optimal solutions and finally assembles them into a complete solution, with the following steps:
First define δ_t(i): given that the first t observations of the sequence are x_1 x_2 ... x_t, δ_t(i) is the maximum probability that the node at time t is labeled i, where i is the behavior label number (normal, crossing, wandering, staying, reverse are numbered 1 to 5). Also define a rollback array W_t(i), which stores the optimal label preceding label i in the recursion at time t. From δ_t(i) and W_t(i) at time t, δ_{t+1}(i) at time t+1 is obtained recursively:
(1) Initialization: δ_1(i) = p(y_1 = i | x_1), 1 ≤ i ≤ 5, where

p(y_1 = i | x_1) = (1/Z(x)) exp(Σ_a λ_a f_a(x_1, y_0, y_1)),

as seen in the model construction of the previous step. λ_a are the parameters estimated when the model was built, f_a(x_1, y_0, y_1) is the feature function obtained from the test-video feature vector, and i is the behavior label number (normal, crossing, wandering, staying, reverse are numbered 1 to 5).
(2) Recursively find the locally optimal solutions:

δ_t(j) = max_{1≤i≤5} [δ_{t-1}(i) p(y_t = j, y_{t-1} = i | x)] · p(y_t | x_t), 1 ≤ i ≤ 5, 2 ≤ t ≤ n.
where

p(y_t = j | x_t) = (1/Z(x)) exp(Σ_a λ_a f_a(x_t, y_{t-1}, y_t))

denotes the probability, at time t and given the observed data, that the behavior is labeled j (j is the behavior class number, numbered 1 to 5), and p(y_t = j, y_{t-1} = i | x) denotes the probability of transferring to label j at time t given that the behavior at time t-1 is labeled i.
To compute p(y_t = j, y_{t-1} = i | x), first let the nodes be X_t (t = 1, ..., n) (a node can be understood here as the feature information of a video frame); each corresponds to 5 possible labels y_t = (1, ..., 5), representing the 5 behavior classes normal, crossing, wandering, reverse, and staying. For the computations at the front and at the end, two extra nodes are added, defined as the 'start node' and the 'end node'.
For the labeling of the whole node sequence, p(y|x) is then equivalent to the probability of selecting one path from the first node to the final node.
For each node define a 5 × 5 matrix M_t(x) whose elements are m(y_{t-1} = i, y_t = j | x_t) = exp(Σ_a λ_a f_a(x_t, y_{t-1}, y_t)) (i, j are behavior labels from 1 to 5, and f_a(x_t, y_{t-1}, y_t) is the corresponding feature-function value). With the elements of this matrix the marginal distribution p(y_t = j, y_{t-1} = i | x_t) can be computed:
p(y_t = j, y_{t-1} = i | x_t) = α_{t-1}(y_{t-1} = i | x_t) · m(y_{t-1} = i, y_t = j | x_t) · β_t(y_t | x_t) / Z(x), where α_{t-1}(y_{t-1} = i | x_t) is a forward vector; for the start node α_0(y_0 | x_1) = 1, and the forward vectors are produced by iteration: α_t(y_t | x_{t+1}) = α_{t-1}(y_{t-1} | x_t) M_t(x), with M_t(x) the matrix constructed above for each node.
Similarly, the backward vectors satisfy β_t(y_t | x_t) = M_t(x) β_{t+1}(y_t | x_t), with β_{n+1}(y_{n+1} | x_{n+1}) = 1.
Afterwards, compute max_{1≤i≤5} [δ_{t-1}(i) p(y_t = j, y_{t-1} = i | x)]. For example, for t = 8 and j = 2, compute the maximum of the products in the following bracket:

max[δ_7(1) p(y_8 = 2, y_7 = 1 | x_8), δ_7(2) p(y_8 = 2, y_7 = 2 | x_8), ..., δ_7(5) p(y_8 = 2, y_7 = 5 | x_8)].

Finally choose the largest product in the bracket and multiply it by p(y_t | x_t) to obtain δ_t(j).
(3) Update the elements of the rollback array: W_t(j) = arg max_{1≤i≤5} [δ_{t-1}(i) p(y_t = j, y_{t-1} = i | x)], 1 ≤ i ≤ 5, 2 ≤ t ≤ n; the label i that maximizes the product term in the bracket is stored in W_t(j).
(4) Let P* be the maximum probability of the final complete label sequence and y_n* the last state of that sequence; both are obtained after δ_n(1), δ_n(2), ..., δ_n(5) have been computed:

P* = max_{1≤i≤5} δ_n(i), the maximum of the δ_n(i);

y_n* = arg max_{1≤i≤5} δ_n(i), i.e. the value of y_n* is the label i that maximizes δ_n(i).
(5) According to the values stored in the rollback array, backtrack to the label with the maximum probability at each time t:

y_t* = W_{t+1}(y_{t+1}*), t = n-1, n-2, ..., 1.

Through the rollback array, starting from y_n*, y_{n-1}* is found, and so on until the states y_t* at all times are obtained.
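The Viterbi recursion and backtrace above can be sketched as follows, working with additive scores standing in for log p(y_t = j | x_t) and log p(y_t = j, y_{t-1} = i | x); the function and array names are illustrative assumptions, not the patent's implementation:

```python
# Illustrative sketch of the Viterbi steps (1)-(5) over a label chain.

def viterbi(emit, trans):
    """emit[t][j]: node score of label j at time t;
    trans[t][i][j]: transition score from label i at t-1 to label j at t.
    Returns the highest-scoring label sequence (0-based labels)."""
    n_labels = len(emit[0])
    delta = list(emit[0])                 # step (1): initialise delta_1
    back = []                             # rollback arrays W_t
    for t in range(1, len(emit)):
        new_delta, ptr = [], []
        for j in range(n_labels):
            # step (2): best predecessor label i for label j at time t
            best_i = max(range(n_labels),
                         key=lambda i: delta[i] + trans[t][i][j])
            ptr.append(best_i)            # step (3): store it for backtracking
            new_delta.append(delta[best_i] + trans[t][best_i][j] + emit[t][j])
        delta = new_delta
        back.append(ptr)
    # step (4): best final label; step (5): follow the rollback arrays
    path = [max(range(n_labels), key=lambda j: delta[j])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

With 5 labels (1 normal through 5 reverse) this recovers the y_t* = W_{t+1}(y_{t+1}*) backtrace described above.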
The experimental data used in the present invention are a self-collected database and the 3D PES database. The experiments cover the following behaviors: Normal; Wander; Cross; Stay; Reverse. Each abnormal behavior has 25-30 training videos, each lasting 8-30 seconds. In total 30% of the data are used for training and 70% for testing. For comparison, an HMM model was also applied to the same video data; the detection rates of the two are contrasted in Table 1 and Table 2.
Table 1: HMM detection results
Figure BDA000035558032001610
Table 2: CRF model detection results
Figure BDA00003555803200171
Fig. 9 shows the experimental prototype that was implemented, including the modules described above in operation. Experiments show that, on the databases used by the present invention, the detection rates for the defined abnormal behaviors reach more than 90%.
Parts of the present invention that are not elaborated belong to techniques well known to those skilled in the art.

Claims (5)

1. A video abnormal-behavior detection system, characterized by comprising: a trajectory extraction module, a region division module, a conditional random field modeling module, and a detection module for the video under test; wherein:
Trajectory extraction module: a mixture-of-Gaussians (GMM) model first detects the trajectory of the target to be detected in the video and delivers it to the conditional random field modeling module;
Region division module: classifies the background into regions, either manually or automatically by an algorithm according to different requirements, then labels the divided regions and delivers them to the conditional random field modeling module;
Conditional random field modeling module: combines the divided regions with the trajectory coordinates to obtain the feature vectors of the abnormal behaviors; using the feature vectors belonging to one class of abnormal behavior, constructs the feature functions, performs conditional random field model training and parameter estimation, obtains the weight coefficients of the feature functions in the conditional random field, and delivers them to the detection module for the video under test;
Detection module for the video under test: extracts the trajectory coordinates of the test sequence, combines them with the divided regions to obtain the feature vector of the test sequence, and judges with the parameters estimated by the conditional random field modeling module, computing the probability of belonging to each abnormal behavior; the abnormal behavior with the maximum probability is taken as the classification.
2. The video abnormal-behavior detection system according to claim 1, characterized in that the specific implementation of the trajectory extraction module is as follows:
(1) Assume the color channels are independently distributed, so that the covariance simplifies to

Σ_{k,t} = σ_{k,t}² I.

Initialize the mixture of Gaussians: the mean and variance of each Gaussian are initialized to zero, and the weight of each Gaussian is initialized to 1/M, where M is the number of Gaussians;
(2) At time t, each pixel X_t of the video is matched against all Gaussians. If the distance between the value of pixel X_t and the mean of the k-th Gaussian g_k is less than a threshold, pixel X_t matches this Gaussian, whose parameters are updated according to the following formulas, increasing the weight of the matched Gaussian. From the weight, mean, and variance of each Gaussian at the initialization of step (1) and the RGB three-channel value of each pixel in a frame, with the pixel matching an existing model, the output is the weight, mean, and variance of each Gaussian after the matching at time 2; by analogy, given the Gaussian parameters at time t-1, the update yields the weight, mean, and variance of each Gaussian at time t;
w_{k,t} = (1-α) w_{k,t-1} + α
μ_{k,t} = (1-ρ) μ_{k,t-1} + ρ X_t
σ²_{k,t} = (1-ρ) σ²_{k,t-1} + ρ (X_t - μ_{k,t})²
ρ = α / w_{k,t};
In the formulas, α is the learning rate, reflecting the speed at which the Gaussian parameters are updated; it is a small number close to zero, with initial value 0.001. w_{k,t} is the weight of the k-th Gaussian (1 ≤ k ≤ M) at time t, and μ_{k,t} and Σ_{k,t} are the mean and covariance matrix of the k-th Gaussian in the GMM at time t; with the simplified covariance, σ_{k,t} is the standard deviation of the k-th Gaussian;
(3) If no Gaussian matches, the Gaussian with the smallest weight is replaced by a new distribution and the remaining Gaussians are updated according to the following formula. From the weight, mean, and variance of each Gaussian at the initialization of step (1) and the RGB three-channel value of each pixel in a frame, with the pixel matching no existing model, the output is the weight, mean, and variance of each Gaussian at time 2; by analogy, given the parameters at time t-1, the update yields the weight, mean, and variance of each Gaussian at time t:

w_{k,t} = (1-α) w_{k,t-1}    (2)
(4) Finally the Gaussians are sorted by the priority w_{k,t}/σ_{k,t}, where a larger value indicates a smaller variance and a larger probability of occurrence. The first C distributions after sorting are chosen as the background model and the rest as the foreground model, where C satisfies:

C = arg min(Σ_{k=1}^{c} w_k > T)
where T is a weight threshold in the range 0.65 to 0.75. Finally it is judged whether the pixel belongs to the background model; if not, it belongs to the foreground model. Once every pixel belonging to the foreground model has been determined, the pixels belonging to the foreground target in each frame can be identified; all foreground pixels together give the foreground target. With the weight, mean, and variance of each Gaussian at time k obtained in the previous step, this step outputs the set of pixels satisfying the foreground condition, i.e. the foreground target;
After the foreground target is obtained, the centroid of the foreground bounding box serves as the trajectory point. The implementation is as follows: find the extreme points of the foreground target's maximum and minimum horizontal and vertical coordinates, construct parallel lines through these extreme points to obtain the rectangular bounding box surrounding the target, and take the centroid of the bounding box, thereby detecting the trajectory of the target to be detected.
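As an illustration of the per-pixel mixture update in claim 2, a single-channel sketch follows. The 2.5-sigma match test, the replacement weight and variance, and the reading ρ = α / w_{k,t} of the update formula are assumptions of this sketch rather than requirements of the claim:

```python
# Hypothetical single-pixel (grey-value) sketch of the adaptive mixture
# update; ALPHA, the match test, and the replacement values are assumed.

ALPHA, MATCH_SIGMA = 0.001, 2.5

def update_pixel(models, x):
    """models: list of {'w','mu','var'} dicts; x: pixel value at time t.
    Returns True if the pixel matched an existing Gaussian."""
    matched = next((g for g in models
                    if abs(x - g["mu"]) <= MATCH_SIGMA * g["var"] ** 0.5), None)
    for g in models:
        if g is matched:
            rho = ALPHA / max(g["w"], 1e-9)
            g["mu"] = (1 - rho) * g["mu"] + rho * x
            g["var"] = (1 - rho) * g["var"] + rho * (x - g["mu"]) ** 2
            g["w"] = (1 - ALPHA) * g["w"] + ALPHA   # matched: weight grows
        else:
            g["w"] = (1 - ALPHA) * g["w"]           # unmatched: formula (2)
    if matched is None:   # replace the lowest-weight Gaussian with a new one
        worst = min(models, key=lambda g: g["w"])
        worst.update(w=0.05, mu=float(x), var=225.0)
    # sort by priority w/sigma; the first C models then form the background
    models.sort(key=lambda g: g["w"] / g["var"] ** 0.5, reverse=True)
    return matched is not None
```

Running this per pixel over a frame sequence yields a foreground mask, from which the bounding-box centroid described above can be taken as the trajectory point.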
3. The video abnormal-behavior detection system according to claim 1, characterized in that the specific implementation of the region division module is as follows:
(1) Initialization: set the initial parameters of each Gaussian:

θ^{(0)} = (w_1^{(0)} ... w_M^{(0)}, μ_1^{(0)} ... μ_M^{(0)}, σ_1^{2(0)} ... σ_M^{2(0)}),

where w_k^{(0)}, μ_k^{(0)}, σ_k^{2(0)} denote the weight, mean, and variance of the 1st to M-th Gaussians, and the zero in the superscript brackets denotes the value after the 0th update. At initialization, the weight of every Gaussian is set to 1/M, M being the number of Gaussians, and the means and variances are initialized to zero;
(2) E-step estimation: for each pixel, the probability that it belongs to the k-th Gaussian is:

p(k | x_i, θ_old) = α_{ik} = w_k η_k(x_i; μ_k, σ_k) / Σ_{j=1}^{M} w_j η_j(x_i; μ_j, σ_j), 1 ≤ i ≤ n, 1 ≤ k ≤ M,

where x_i is the i-th pixel of the video frame, w_k is the weight of the k-th Gaussian, and μ_k and σ_k are its mean and standard deviation. From the weight, mean, and variance of the k-th Gaussian obtained at initialization or in step (3) and the RGB three-channel value of the i-th pixel of the image, the output is the probability that the i-th pixel belongs to the k-th Gaussian;
(3) M-step: new parameter values are obtained by maximizing the likelihood function. First update the weights:

w_k = (Σ_{i=1}^{n} α_{ik}) / n

Update the means:

μ_k = Σ_{i=1}^{n} α_{ik} x_i / Σ_{i=1}^{n} α_{ik}

Update the variances:

σ_k² = Σ_{i=1}^{n} α_{ik} (x_i - μ_k)² / Σ_{i=1}^{n} α_{ik}

From the probabilities that each pixel belongs to the different Gaussians obtained in step (2), the iteration yields the new weights, means, and variances;
(4) Repeat steps (2) and (3) until the updated likelihood function converges. After convergence, compute which Gaussian each pixel belongs to, classify all pixels of the video frame by Gaussian number, and obtain the segmented image.
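The E-step/M-step iteration of claim 3 can be sketched in one dimension as follows. The mean initialization (spread across the value range instead of zero) and the variance floor are practical assumptions added so the sketch runs, not part of the claim:

```python
import math

# Toy 1-D EM sketch of the E-step / M-step updates above.

def em_gmm(xs, M=2, iters=50):
    w = [1.0 / M] * M                       # weights start at 1/M
    lo, hi = min(xs), max(xs)
    mu = [lo + (k + 0.5) * (hi - lo) / M for k in range(M)]  # assumed init
    var = [1.0] * M
    for _ in range(iters):
        # E-step: alpha_ik = w_k N(x_i; mu_k, var_k) / sum_j w_j N(x_i; ...)
        resp = []
        for x in xs:
            dens = [w[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(M)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weight, mean, and variance of each Gaussian
        for k in range(M):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    # hard assignment: label each sample with its most responsible Gaussian
    return [max(range(M), key=lambda k: r[k]) for r in resp]
```

Applied to pixel values, the hard assignment in the last line corresponds to classifying every pixel by Gaussian number to obtain the segmented image.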
4. The video abnormal-behavior detection system according to claim 1, characterized in that the implementation of the conditional random field modeling module is as follows:
After all feature vectors have been obtained, the parameters must be estimated from the training data:

λ = (λ_1, λ_2, ..., λ_s, ..., λ_m)
(1) First the feature functions and the potential function must be constructed. From the feature vector T_i = (p_t, q_t, p_{t-1}, q_{t-1}, t, subarea_k, state) obtained from background segmentation and region division, a feature function is built from the background region subarea_k occupied by the target at time t:

f_1(x_t, y_{t-1}, y_t): a binary indicator that equals 1 when the observation x_t and the labels y_{t-1}, y_t take the specified values, and 0 otherwise,

where x_t denotes the obtained features, here the time t and the background region subarea_k, and y_{t-1} and y_t are the manual labels at times t-1 and t. From the coordinates p_t, q_t at time t and p_{t-1}, q_{t-1} at time t-1, the motion direction of the target is obtained and a feature function is constructed:

f_2(x_t, y_{t-1}, y_t): a binary indicator on the target's motion direction, computed from the consecutive coordinates;
The other feature functions are constructed by analogy, and finally the potential function is formed as an exponential of a linear combination of the feature functions: exp(Σ_a λ_a f_a(x_t, y_{t-1}, y_t)). Their coefficients are λ_a, a = 1, 2, 3, 4, initialized to λ_a = 1 (a = 1, 2, 3, 4). Compute the empirical expectation Ẽ_a = Σ_{x,y} f_a(x_t, y_{t-1}, y_t), where f_a(x_t, y_{t-1}, y_t) is the feature function constructed in step (1);
This step defines the feature functions and computes the empirical expectation;
(2) Compute the normalization factor: Z(x) = Σ_{y=1}^{5} exp(Σ_a Σ_{t=1}^{n} λ_a f_a(x_t, y_{t-1}, y_t)); n is the number of nodes of the video, taken as its number of frames. The normalization factor is computed from the feature functions constructed in step (1);
(3) Compute the conditional distribution p^{(k)}(y_t | x_t) = (1/Z(x)) exp{Σ_a Σ_{t=1}^{n} λ_a f_a(x_t, y_{t-1}, y_t)}; using the current λ_a, compute the model expectation E_a^k = Σ_{x_t, y_t} p^{(k)}(y_t | x_t) f_a(x_t, y_{t-1}, y_t). The model expectation is obtained from the normalization factor computed in the previous step and the feature functions constructed before;
(4) Update the parameter values: λ_a^{k+1} = λ_a^k + (1/C) log(Ẽ_a / E_a^k), with C taken as 4. The parameter value λ_a is updated from the empirical and model expectations obtained in steps (2) and (3) and the parameter value of the previous iteration;
(5) Repeat steps (2) to (4) until λ_a converges.
5. The video abnormal-behavior detection system according to claim 1, characterized in that the implementation of the detection module for the video under test is as follows:
(1) Initialization: δ_1(i) = p(y_1 = i | x_1), 1 ≤ i ≤ 5, where λ_a are the parameters estimated when the model was built, f_a(x_1, y_0, y_1) is the feature function obtained from the test-video feature vector, and i is the behavior label number (normal, crossing, wandering, staying, reverse, numbered 1 to 5); δ_1(i), 1 ≤ i ≤ 5, is initialized as δ_1(i) = 1;
(2) Recursively find the locally optimal solutions: from the δ_1(i) initialized in step (1), compute δ_2(i), i.e. the probability of the node label at time 2; similarly, with the conditional probability that, given the observations, the behavior at time t-1 is labeled i, and the conditional probability of transferring from label i at time t-1 to label j at time t, the probability δ_t(j) that the node at time t is labeled j is obtained, until the probabilities δ_n(i), 1 ≤ i ≤ 5, of the last moment are obtained:

δ_t(j) = max_{1≤i≤5} [δ_{t-1}(i) p(y_t = j, y_{t-1} = i | x)] · p(y_t | x_t), 1 ≤ i ≤ 5, 2 ≤ t ≤ n;
where p(y_t = j | x_t) denotes the probability, at time t and given the observed data, that the behavior is labeled j, j being the behavior class number, numbered 1 to 5; p(y_t = j, y_{t-1} = i | x) denotes the probability of transferring to label j at time t given that the behavior at time t-1 is labeled i;
(3) Update the elements of the rollback array: W_t(j) = arg max_{1≤i≤5} [δ_{t-1}(i) p(y_t = j, y_{t-1} = i | x)], 1 ≤ i ≤ 5, 2 ≤ t ≤ n; from the probability δ_{t-1}(i) that the behavior at time t-1 is labeled i, computed in step (2), the elements of the rollback array at time t are updated, storing in W_t(j) the label that maximizes the product term in the bracket;
(4) Compute P* = max_{1≤i≤5} δ_n(i), the maximum of the δ_n(i); the input is the probability that each node belongs to the different abnormal behaviors, and the output is the label with the maximum probability:

y_n* = arg max_{1≤i≤5} δ_n(i),

i.e. the value of y_n* is the label i that maximizes δ_n(i); from the δ_n(i) obtained in step (2), the label i maximizing it is obtained;
(5) According to the values in the rollback array, backtrack to the label with the maximum probability at each time t:

y_t* = W_{t+1}(y_{t+1}*), t = n-1, n-2, ..., 1;

with the y_n* obtained in step (4) and the rollback array obtained in step (3), starting from y_n*, y_{n-1}* is found, and so on until the status labels y_t* of all times are obtained.
CN201310311800.0A 2013-07-23 2013-07-23 A video abnormal behavior detection system Expired - Fee Related CN103390278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310311800.0A CN103390278B (en) A video abnormal behavior detection system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310311800.0A CN103390278B (en) A video abnormal behavior detection system

Publications (2)

Publication Number Publication Date
CN103390278A true CN103390278A (en) 2013-11-13
CN103390278B CN103390278B (en) 2016-03-09

Family

ID=49534537

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310311800.0A Expired - Fee Related CN103390278B (en) A video abnormal behavior detection system

Country Status (1)

Country Link
CN (1) CN103390278B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000261712A (en) * 1999-03-05 2000-09-22 Matsushita Electric Ind Co Ltd Device for correcting image movement
US7567704B2 (en) * 2005-11-30 2009-07-28 Honeywell International Inc. Method and apparatus for identifying physical features in video
CN102831442A (en) * 2011-06-13 2012-12-19 索尼公司 Abnormal behavior detection method and equipment and method and equipment for generating abnormal behavior detection equipment
CN102930250A (en) * 2012-10-23 2013-02-13 西安理工大学 Motion recognition method for multi-scale conditional random field model
CN102938070A (en) * 2012-09-11 2013-02-20 广西工学院 Behavior recognition method based on action subspace and weight behavior recognition model


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhao Long: "Description of abnormal targets in multi-view scenes" (多视场景异常目标描述), China Doctoral Dissertations Full-text Database, Information Science and Technology, 15 March 2013 (2013-03-15) *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103631917A (en) * 2013-11-28 2014-03-12 中国科学院软件研究所 Emergency event detecting method based on mobile object data stream
CN103631917B (en) * 2013-11-28 2017-01-11 中国科学院软件研究所 Emergency event detecting method based on mobile object data stream
CN104318244A (en) * 2014-10-16 2015-01-28 深圳锐取信息技术股份有限公司 Behavior detection method and behavior detection device based on teaching video
CN105718857A (en) * 2016-01-13 2016-06-29 兴唐通信科技有限公司 Human body abnormal behavior detection method and system
CN105718857B (en) * 2016-01-13 2019-06-04 兴唐通信科技有限公司 A kind of human body anomaly detection method and system
CN106446820A (en) * 2016-09-19 2017-02-22 清华大学 Background feature point identification method and device in dynamic video editing
CN106446820B (en) * 2016-09-19 2019-05-14 清华大学 Background characteristics point recognition methods and device in dynamic video editor
CN107067649A (en) * 2017-05-23 2017-08-18 重庆邮电大学 A kind of typical behaviour real-time identification method based on wireless wearable aware platform
CN107067649B (en) * 2017-05-23 2019-08-13 重庆邮电大学 A kind of typical behaviour real-time identification method based on wireless wearable aware platform
CN107335220A (en) * 2017-06-06 2017-11-10 广州华多网络科技有限公司 A kind of recognition methods of passive user, device and server
CN107451595A (en) * 2017-08-04 2017-12-08 河海大学 Infrared image salient region detection method based on hybrid algorithm
CN107832716A (en) * 2017-11-15 2018-03-23 中国科学技术大学 Method for detecting abnormality based on active-passive Gauss on-line study
CN109859251A (en) * 2017-11-30 2019-06-07 安讯士有限公司 For tracking the method and system of multiple objects in image sequence
CN109859251B (en) * 2017-11-30 2021-03-26 安讯士有限公司 Method and system for tracking multiple objects in a sequence of images
CN108805002A (en) * 2018-04-11 2018-11-13 杭州电子科技大学 Monitor video accident detection method based on deep learning and dynamic clustering
CN108805002B (en) * 2018-04-11 2022-03-01 杭州电子科技大学 Monitoring video abnormal event detection method based on deep learning and dynamic clustering
CN109472484A (en) * 2018-11-01 2019-03-15 凌云光技术集团有限责任公司 A kind of production process exception record method based on flow chart
CN109472484B (en) * 2018-11-01 2021-08-03 凌云光技术股份有限公司 Production process abnormity recording method based on flow chart
CN109784175A (en) * 2018-12-14 2019-05-21 深圳壹账通智能科技有限公司 Abnormal behaviour people recognition methods, equipment and storage medium based on micro- Expression Recognition
CN114067314A (en) * 2022-01-17 2022-02-18 泗水县锦川花生食品有限公司 Neural network-based peanut mildew identification method and system
CN114067314B (en) * 2022-01-17 2022-04-26 泗水县锦川花生食品有限公司 Neural network-based peanut mildew identification method and system
CN117333929A (en) * 2023-12-01 2024-01-02 贵州省公路建设养护集团有限公司 Method and system for identifying abnormal personnel under road construction based on deep learning
CN117333929B (en) * 2023-12-01 2024-02-09 贵州省公路建设养护集团有限公司 Method and system for identifying abnormal personnel under road construction based on deep learning
CN117975566A (en) * 2024-02-26 2024-05-03 淄博市农业科学研究院 Milk cow behavior recognition and prediction system
CN117911930A (en) * 2024-03-15 2024-04-19 释普信息科技(上海)有限公司 Data security early warning method and device based on intelligent video monitoring
CN117911930B (en) * 2024-03-15 2024-06-04 释普信息科技(上海)有限公司 Data security early warning method and device based on intelligent video monitoring

Also Published As

Publication number Publication date
CN103390278B (en) 2016-03-09

Similar Documents

Publication Publication Date Title
CN103390278B (en) A video abnormal behavior detection system
CN110111340B (en) Weak supervision example segmentation method based on multi-path segmentation
CN109344736B (en) Static image crowd counting method based on joint learning
Zhou et al. Understanding collective crowd behaviors: Learning a mixture model of dynamic pedestrian-agents
CN109460023A (en) Driver's lane-changing intention recognition methods based on Hidden Markov Model
CN106446922B (en) A kind of crowd's abnormal behaviour analysis method
CN109471436A (en) Based on mixed Gaussian-Hidden Markov Model lane-change Model Parameter Optimization method
CN107230267B (en) Intelligence In Baogang Kindergarten based on face recognition algorithms is registered method
CN108985380B (en) Point switch fault identification method based on cluster integration
CN102054176B (en) Method used for establishing semantic scene models for scene images of moving targets by utilizing computer
CN110232319A (en) A kind of ship Activity recognition method based on deep learning
CN104680557A (en) Intelligent detection method for abnormal behavior in video sequence image
CN106355602A (en) Multi-target locating and tracking video monitoring method
CN103854027A (en) Crowd behavior identification method
CN105389550A (en) Remote sensing target detection method based on sparse guidance and significant drive
CN103578119A (en) Target detection method in Codebook dynamic scene based on superpixels
CN105654139A (en) Real-time online multi-target tracking method adopting temporal dynamic appearance model
CN104331716A (en) SVM active learning classification algorithm for large-scale training data
CN101719220A (en) Method of trajectory clustering based on directional trimmed mean distance
CN115131618B (en) Semi-supervised image classification method based on causal reasoning
CN109902564A (en) A kind of accident detection method based on the sparse autoencoder network of structural similarity
CN106127812A (en) A kind of passenger flow statistical method of non-gate area, passenger station based on video monitoring
CN110728694A (en) Long-term visual target tracking method based on continuous learning
CN105046714A (en) Unsupervised image segmentation method based on super pixels and target discovering mechanism
CN110334584A (en) A kind of gesture identification method based on the full convolutional network in region

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160309

Termination date: 20210723

CF01 Termination of patent right due to non-payment of annual fee