
CN103440668B - Method and device for tracing online video target - Google Patents

Method and device for tracing online video target

Info

Publication number
CN103440668B
CN103440668B (application CN201310390529.4A)
Authority
CN
China
Prior art keywords
target
unit
image
field picture
foreground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310390529.4A
Other languages
Chinese (zh)
Other versions
CN103440668A (en)
Inventor
葛仕明
文辉
陈水仙
秦伟俊
孙利民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Information Engineering of CAS
Original Assignee
Institute of Information Engineering of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Information Engineering of CAS filed Critical Institute of Information Engineering of CAS
Priority to CN201310390529.4A priority Critical patent/CN103440668B/en
Publication of CN103440668A publication Critical patent/CN103440668A/en
Application granted granted Critical
Publication of CN103440668B publication Critical patent/CN103440668B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method and device for tracking a target in an online video. The method comprises the following steps: image features of the starting frame of the online video are extracted and an initial background model is established; the next frame image is obtained; the image features of the initial background model are compared with those of the next frame to obtain a comparison result, and the background model is updated according to that result; a foreground image is obtained and the foreground target is extracted; target features are obtained through an online learning method and the foreground target is located to obtain its position information; the position of the foreground target is marked and the marked frame is output; all output frames are combined to obtain the motion trajectory of the foreground target. Because the method processes the surveillance video in real time, the target is tracked at the first moment rather than only after all original video frames have been collected, which guarantees the timeliness and validity of the data and avoids the loss of accuracy that existing tracking approaches suffer when multiple targets cross and occlude each other.

Description

An online video target tracking method and device
Technical field
The present invention relates to the field of video stream analysis and processing, and in particular to an online video target tracking method and device.
Background technology
In recent years, with the rapid development of digital media technology and intelligent video surveillance technology, public safety has attracted wide attention from society and the public, and multimedia and security video data have grown explosively. The traditional approach of relying solely on time-consuming manual browsing falls far short of people's demand for analyzing and processing video information. There is therefore an urgent need for an online video target tracking method and system that processes quickly, tracks targets accurately, and has good robustness.
Target tracking is the task of finding a moving target of interest in an image sequence in real time, including kinematic parameters such as its position, velocity, and acceleration. Target tracking is a hot topic in computer vision research; it has developed rapidly along with computer technology, and tracking techniques have made significant progress. In the last century, image processing mainly concentrated on single images; even when moving targets were tracked in dynamic image sequences, the work still bore the character of static image processing. It was not until the 1980s, when B.K.P. Horn et al. proposed the optical flow method ("Determining Optical Flow", B.K.P. Horn and B.G. Schunck, Artificial Intelligence, 1981, Elsevier), that target tracking research truly entered the field of dynamic image sequences. However, the optical flow method places high demands on computing power, making it difficult to satisfy real-time requirements in practical applications. In addition, noise in a video sequence greatly disturbs optical flow tracking, so the method remains very difficult to apply in practice at this stage.
Tracking algorithms emerge endlessly in the target tracking domain, and some meet the requirements of particular application backgrounds, but they lack generality. In 1975, Fukunaga et al. first proposed the concept of mean shift in a paper on probability density gradient estimation ("The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition"), and Yizong Cheng expanded the scope of application of mean shift in the 1995 paper "Mean Shift, Mode Seeking, and Clustering". Although mean shift tracking is fast and has strong resistance to disturbance, when it tracks targets in different environments and with different motion characteristics, several factors affect its tracking stability: for example, tracking a target against a complex background, or long-term tracking of a target that deforms, changes scale, or is occluded during motion. These problems can be addressed through reasonable selection of target features and kernel functions (on kernel functions see Huang Jibin, "The concept, properties and applications of kernel functions", Journal of Hubei Normal University, 2007), adaptive bandwidth update, template update, and occlusion detection mechanisms, but achieving all four under many different application environments is not easy. Many scholars have studied these issues and solved the above problems to varying degrees, yet either the algorithmic complexity makes real-time operation impossible, or so many preconditions are required that the actual tracking effect is unsatisfactory. Directly matching all targets in the scene to find the best match position requires processing a large amount of redundant information; the computation is large and unnecessary. A common class of methods predicts where the moving object is likely to appear in the next frame and searches for the optimum within the relevant region. The Kalman filter ("A New Approach to Linear Filtering and Prediction Problems", R.E. Kalman, Journal of Basic Engineering, 1960) performs linear minimum-variance estimation on the state sequence of a dynamic system: it describes the system with a state equation and an observation equation, and makes an optimal estimate of the next state based on the system's previous states. It is unbiased, stable, and optimal, requires little computation, can run in real time, and can accurately predict the position and velocity of a target, but it is only suitable for linear systems with Gaussian distributions.
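For illustration of the predict/update cycle just described, a minimal constant-velocity Kalman filter sketch in Python; the state layout, matrices, and noise levels are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for a 2-D target position.
# State x = [px, py, vx, vy]; all matrices and noise levels are illustrative.
dt = 1.0
F = np.array([[1, 0, dt, 0],     # state transition (constant velocity)
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],      # observation: position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)             # process noise covariance
R = 1.0 * np.eye(2)              # measurement noise covariance

def kalman_step(x, P, z):
    """One predict/update cycle given a position measurement z = [px, py]."""
    x_pred = F @ x                        # predict next state
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred) # correct with the measurement
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(4), np.eye(4)             # initial state and covariance
x, P = kalman_step(x, P, np.array([2.0, 3.0]))
```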
Content of the invention
The technical problem to be solved by the present invention is to provide an online video target tracking method and device that acquire an online video in real time and track the target in the online video online.
The technical scheme of the present invention is an online video target tracking method comprising the following steps:
Step 1: obtain the initial frame image in the online video, extract image features, and establish an initial background model from the image features;
Step 2: obtain the next frame image, and proceed to step 3 and step 4 simultaneously;
Step 3: compare the image features of the initial background model with the image features of the next frame image to obtain a comparison result, and update the initial background model according to the comparison result;
Step 4: obtain the foreground image in the next frame image, and extract the foreground target from the foreground image;
Step 5: obtain the target features of the foreground target using an online learning method, locate the foreground target in the next frame image according to the target features, and obtain the position information of the foreground target;
Step 6: mark the position of the foreground target in the next frame image according to its position information, and output the marked frame;
Step 7: repeat steps 2 to 6 until the online video input is finished, then combine all output frames to obtain the motion trajectory of the foreground target (a code sketch of this loop follows).
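A minimal sketch of the step 1-7 loop, for orientation only: OpenCV's MOG2 background subtractor and a foreground-centroid locator stand in for the patent's texture-based background model and learned target locator, which this code does not implement.

```python
import cv2
import numpy as np

def track_online(video_source=0):
    """Sketch of steps 1-7. MOG2 stands in for the patent's texture-based
    background model; the foreground centroid stands in for the learned
    target locator. Illustrative only."""
    cap = cv2.VideoCapture(video_source)
    bg = cv2.createBackgroundSubtractorMOG2()   # steps 1/3: background model
    trajectory = []
    while True:
        ok, frame = cap.read()                  # step 2: next frame
        if not ok:
            break                               # video input finished
        mask = bg.apply(frame)                  # steps 3/4: update model, get foreground
        ys, xs = np.nonzero(mask > 200)
        if len(xs):                             # step 5 stand-in: locate the target
            cx, cy = int(xs.mean()), int(ys.mean())
            trajectory.append((cx, cy))         # accumulate the trajectory
            cv2.circle(frame, (cx, cy), 8, (0, 0, 255), 2)  # step 6: mark
        cv2.imshow("tracking", frame)           # step 6: output the marked frame
        if cv2.waitKey(1) == 27:                # Esc stops the loop
            break
    cap.release()
    cv2.destroyAllWindows()
    return trajectory                           # step 7: motion trajectory

if __name__ == "__main__":
    print(track_online(0))
```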
The beneficial effects of the invention are as follows: the present invention processes the real-time surveillance video and tracks the target at the first moment, without needing to wait until all original video frames have been obtained before tracking, which guarantees the real-time validity of the data and avoids the loss of accuracy that existing tracking approaches suffer after multiple targets cross and occlude each other; the algorithm adopted by the present invention has high soundness and operational efficiency, reduced complexity, and improved accuracy.
On the basis of the above technical scheme, the present invention can also be improved as follows.
Further, the online learning method specifically uses a boosting learning algorithm and a manifold learning algorithm to obtain the target features of the foreground target separately, yielding a first feature and a second feature respectively, and integrates the first feature and the second feature using weight coefficients to obtain the final target feature.
Further, the image features include texture features.
Further, step 3 further comprises the following sub-steps (a code sketch follows step 3.3):
Step 3.1: perform matching computation between the texture features of the initial background model and the texture features of the next frame image;
Step 3.2: if the image features of the initial background model match the image features of the next frame image, label the pixels of the matching part as background and proceed to step 3.3; otherwise, label the pixels of the non-matching part as foreground and proceed to step 3.3;
Step 3.3: update the initial background model according to the labeled foreground and background, and return to step 3.1.
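A minimal sketch of steps 3.1-3.3 under stated assumptions: per-block grayscale histograms stand in for the texture features, and the block size, similarity threshold, and learning rate are illustrative values, not taken from the patent.

```python
import numpy as np

BLOCK, THRESH, ALPHA = 16, 0.7, 0.05   # illustrative block size / threshold / learning rate

def block_hist(gray, y, x):
    """Grayscale histogram of one block -- a stand-in for the texture feature."""
    patch = gray[y:y + BLOCK, x:x + BLOCK]
    h, _ = np.histogram(patch, bins=16, range=(0, 256))
    return h / max(h.sum(), 1)

def init_model(gray):
    """Step 1: build the initial per-block background model from the first frame."""
    return {(y, x): block_hist(gray, y, x)
            for y in range(0, gray.shape[0] - BLOCK, BLOCK)
            for x in range(0, gray.shape[1] - BLOCK, BLOCK)}

def update_background(model, gray):
    """Steps 3.1-3.3: match each block's texture against the model, label
    foreground/background pixels, and blend matched blocks into the model."""
    fg = np.zeros(gray.shape, dtype=np.uint8)
    for (y, x), hist in model.items():
        h = block_hist(gray, y, x)
        sim = np.minimum(hist, h).sum()            # step 3.1: histogram intersection
        if sim >= THRESH:                          # step 3.2: match -> background
            model[(y, x)] = (1 - ALPHA) * hist + ALPHA * h   # step 3.3: update
        else:                                      # step 3.2: no match -> foreground
            fg[y:y + BLOCK, x:x + BLOCK] = 255
    return fg
```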
Further, an online video target tracking device comprises a background modeling unit, a target extraction unit, a target feature online learning unit, a target positioning unit, and a sequence marking unit;
the background modeling unit obtains the initial frame image in the online video, extracts image features, establishes an initial background model from the image features, obtains the next frame image, compares the image features of the initial background model with those of the next frame image to obtain a comparison result, updates the initial background model according to the comparison result, and sends the information of the next frame image to the target extraction unit;
the target extraction unit obtains the foreground image in the next frame image, extracts the foreground target from the foreground image, and sends the information of the foreground target to the target feature online learning unit;
the target feature online learning unit receives the information of the foreground target, obtains the target features of the foreground target using the online learning method, and sends the information of the target features to the target positioning unit;
the target positioning unit receives the information of the target features, locates the foreground target in the next frame image according to the target features, obtains the position information of the foreground target, and sends it to the sequence marking unit;
the sequence marking unit marks the position of the foreground target in the next frame image according to its position information, outputs the marked frame, and repeats the target extraction unit, target feature online learning unit, and target positioning unit until the online video input is finished, then combines all output frames to obtain the motion trajectory of the foreground target.
Further, the target feature online learning unit includes a boosting feature learning unit, a manifold feature learning unit, and a weighted synthesis unit;
the boosting feature learning unit obtains the target features of the foreground target using the boosting learning algorithm, obtains the first feature, and sends the first feature to the weighted synthesis unit;
the manifold feature learning unit obtains the target features of the foreground target using the manifold learning algorithm, obtains the second feature, and sends the second feature to the weighted synthesis unit;
the weighted synthesis unit receives the first feature and the second feature, integrates them using weight coefficients, and obtains the final target feature (a fusion sketch follows).
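A minimal sketch of the weighted synthesis step, assuming the two learners emit fixed-length feature vectors; the weight coefficients and the norm scaling are illustrative assumptions.

```python
import numpy as np

def fuse_features(first, second, w_boost=0.6, w_manifold=0.4):
    """Integrate the boosting feature ('first feature') and the manifold
    feature ('second feature') with weight coefficients; the weights and
    the normalization are illustrative choices."""
    first = first / (np.linalg.norm(first) + 1e-12)    # put both on one scale
    second = second / (np.linalg.norm(second) + 1e-12)
    return w_boost * first + w_manifold * second

final_target_feature = fuse_features(np.random.rand(64), np.random.rand(64))
```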
Further, the image features include texture features.
Further, the background modeling unit further comprises an acquisition unit, a matching unit, a marking unit, and an update unit;
the acquisition unit obtains the initial frame image in the online video, extracts image features, establishes an initial background model from the image features, obtains the next frame image, and sends the information of the initial background model and of the next frame image to the matching unit;
the matching unit receives the information of the initial background model and of the next frame image, performs matching computation between the texture features of the initial background model and the texture features of the next frame image, and sends the result of the matching computation to the marking unit;
the marking unit receives the result of the matching computation; if the image features of the initial background model match the image features of the next frame image, it labels the pixels of the matching part as background and invokes the update unit; otherwise, it labels the pixels of the non-matching part as foreground and invokes the update unit;
the update unit updates the initial background model according to the labeled foreground and background and invokes the matching unit.
Further, the online video target tracking device also includes a storage device, a display device, and an image acquisition device;
the storage device stores the motion trajectory of the foreground target generated by the sequence marking unit;
the display device displays the motion trajectory of the foreground target generated by the sequence marking unit;
the image acquisition device obtains the online video in real time and sends the online video to the background modeling unit.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of the method of the present invention;
Fig. 2 is a structural diagram of the device of the present invention;
Fig. 3 is a schematic diagram of the input and output effect of the present invention.
In the accompanying drawings, the parts represented by each reference numeral are as follows:
1, background modeling unit; 1-1, acquisition unit; 1-2, matching unit; 1-3, marking unit; 1-4, update unit; 2, target extraction unit; 3, target feature online learning unit; 3-1, boosting feature learning unit; 3-2, manifold feature learning unit; 3-3, weighted synthesis unit; 4, target positioning unit; 5, sequence marking unit; 6, storage device; 7, display device; 8, image acquisition device.
Specific embodiment
The principles and features of the present invention are described below in conjunction with the accompanying drawings. The examples are given only to explain the present invention and are not intended to limit its scope.
As shown in the figures, Fig. 1 is the flow chart of the steps of the method of the present invention; Fig. 2 is the structural diagram of the device of the present invention; Fig. 3 is the schematic diagram of the input and output effect of the present invention.
Embodiment 1
An online video target tracking method comprises the following steps:
Step 1: obtain the initial frame image in the online video, extract image features, and establish an initial background model from the image features;
Step 2: obtain the next frame image, and proceed to step 3 and step 4 simultaneously;
Step 3: compare the image features of the initial background model with the image features of the next frame image to obtain a comparison result, and update the initial background model according to the comparison result;
Step 4: obtain the foreground image in the next frame image, and extract the foreground target from the foreground image;
Step 5: obtain the target features of the foreground target using the online learning method, locate the foreground target in the next frame image according to the target features, and obtain the position information of the foreground target;
Step 6: mark the position of the foreground target in the next frame image according to its position information, and output the marked frame;
Step 7: repeat steps 2 to 6 until the online video input is finished, then combine all output frames to obtain the motion trajectory of the foreground target.
The online learning method specifically uses the boosting learning algorithm and the manifold learning algorithm to obtain the target features of the foreground target separately, yielding a first feature and a second feature respectively, and integrates the first feature and the second feature using weight coefficients to obtain the final target feature.
The image features include texture features.
Step 3 further comprises:
Step 3.1: perform matching computation between the texture features of the initial background model and the texture features of the next frame image;
Step 3.2: if the image features of the initial background model match the image features of the next frame image, label the pixels of the matching part as background and proceed to step 3.3; otherwise, label the pixels of the non-matching part as foreground and proceed to step 3.3;
Step 3.3: update the initial background model according to the labeled foreground and background, and return to step 3.1.
An online video target tracking device comprises a background modeling unit 1, a target extraction unit 2, a target feature online learning unit 3, a target positioning unit 4, and a sequence marking unit 5;
the background modeling unit 1 obtains the initial frame image in the online video, extracts image features, establishes an initial background model from the image features, obtains the next frame image, compares the image features of the initial background model with those of the next frame image to obtain a comparison result, updates the initial background model according to the comparison result, and sends the information of the next frame image to the target extraction unit 2;
the target extraction unit 2 obtains the foreground image in the next frame image, extracts the foreground target from the foreground image, and sends the information of the foreground target to the target feature online learning unit 3;
the target feature online learning unit 3 receives the information of the foreground target, obtains the target features of the foreground target using the online learning method, and sends the information of the target features to the target positioning unit 4;
the target positioning unit 4 receives the information of the target features, locates the foreground target in the next frame image according to the target features, obtains the position information of the foreground target, and sends it to the sequence marking unit 5;
the sequence marking unit 5 marks the position of the foreground target in the next frame image according to its position information, outputs the marked frame, and repeats the target extraction unit 2, target feature online learning unit 3, and target positioning unit 4 until the online video input is finished, then combines all output frames to obtain the motion trajectory of the foreground target.
The target feature online learning unit 3 includes a boosting feature learning unit 3-1, a manifold feature learning unit 3-2, and a weighted synthesis unit 3-3;
the boosting feature learning unit 3-1 obtains the target features of the foreground target using the boosting learning algorithm, obtains the first feature, and sends the first feature to the weighted synthesis unit 3-3;
the manifold feature learning unit 3-2 obtains the target features of the foreground target using the manifold learning algorithm, obtains the second feature, and sends the second feature to the weighted synthesis unit 3-3;
the weighted synthesis unit 3-3 receives the first feature and the second feature, integrates them using weight coefficients, and obtains the final target feature.
The image features include texture features.
The background modeling unit 1 further comprises an acquisition unit 1-1, a matching unit 1-2, a marking unit 1-3, and an update unit 1-4;
the acquisition unit 1-1 obtains the initial frame image in the online video, extracts image features, establishes an initial background model from the image features, obtains the next frame image, and sends the information of the initial background model and of the next frame image to the matching unit 1-2;
the matching unit 1-2 receives the information of the initial background model and of the next frame image, performs matching computation between the texture features of the initial background model and the texture features of the next frame image, and sends the result of the matching computation to the marking unit 1-3;
the marking unit 1-3 receives the result of the matching computation; if the image features of the initial background model match the image features of the next frame image, it labels the pixels of the matching part as background and invokes the update unit 1-4; otherwise, it labels the pixels of the non-matching part as foreground and invokes the update unit 1-4;
the update unit 1-4 updates the initial background model according to the labeled foreground and background and invokes the matching unit 1-2.
The online video target tracking device also includes a storage device 6, a display device 7, and an image acquisition device 8;
the storage device 6 stores the motion trajectory of the foreground target generated by the sequence marking unit 5;
the display device 7 displays the motion trajectory of the foreground target generated by the sequence marking unit 5;
the image acquisition device 8 obtains the online video in real time and sends the online video to the background modeling unit 1; the image acquisition device 8 is used to acquire video images in real time and may be, for example, a surveillance camera.
The present invention labels and tracks the moving target in the video image, and can perform stable long-term tracking of a target that deforms, scales, or is occluded during motion; moreover, the present invention places low demands on hardware and has low algorithmic complexity.
The online video target tracking device of the present invention processes each currently acquired frame image online in real time. That is, image acquisition and video target tracking proceed synchronously, rather than target tracking starting only after all video has been stored. The online video target tracking device may be provided on a board, a graphics processing unit (GPU), or an embedded processing box.
The video target tracking of the present invention covers both single-target and multi-target tracking. The background modeling unit 1 accepts images from the image acquisition device 8 and performs foreground/background segmentation on each received frame.
The background modeling unit 1 may perform background modeling on the input video images using texture-based background modeling (see Marko Heikkilä and Matti Pietikäinen, "A texture-based method for modeling the background and detecting moving objects", IEEE Trans. Pattern Anal. Machine Intell., 2006), obtaining the background image of each frame and passing it to the target extraction unit 2.
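The cited Heikkilä-Pietikäinen method describes the background with per-block histograms of local binary patterns (LBP). A minimal LBP computation sketch follows; the plain 8-neighbor variant here is a simplification of the paper's operator.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbor LBP code for each interior pixel; a simplified
    version of the texture operator used in texture-based background
    modeling."""
    g = gray.astype(np.int16)
    center = g[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.uint8)
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                 (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(neighbors):
        shifted = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((shifted >= center).astype(np.uint8) << bit)
    return code

# Per-block histograms of these LBP codes would then serve as the
# texture features compared and updated by the background model.
```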
The target extraction unit 2 subtracts the corresponding background image from each frame image, then uses a prior-art graph cut algorithm (see J. Sun, W. Zhang, X. Tang, H. Shum, "Background Cut", ECCV, 2006) to obtain an accurate foreground image. The possible positions of targets are then marked using the obtained foreground image.
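A minimal foreground extraction sketch under stated assumptions: simple differencing, thresholding, and morphology stand in for the graph-cut refinement cited above, and the threshold and minimum contour area are illustrative values.

```python
import cv2
import numpy as np

def extract_foreground(frame, background, thresh=30):
    """Background subtraction plus morphological cleanup; thresholding and
    morphology stand in here for the cited graph-cut refinement."""
    diff = cv2.absdiff(frame, background)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    # Candidate target positions: bounding boxes of connected components
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
    return mask, boxes
```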
The target feature online learning unit 3 learns the features of the target so that the target can be located accurately. Online boosting feature learning (on boosting see Y. Freund, "A Short Introduction to Boosting", Journal of Japanese Society for Artificial Intelligence, 14(5):771-780, September 1999) performs well on feature learning problems, and the boosting algorithm can not only regress and classify but also performs feature selection. Online boosting feature learning focuses mainly on the discriminating factors between the target and the background or other foreground objects, without attending to the characteristics of the target itself. Considering the target features from only one perspective makes tracking easily corrupted by noise and prone to failure, so we consider cooperative learning from two perspectives to achieve correct learning of the target features. We use online target manifold learning (see Zhenyue Zhang, "Adaptive Manifold Learning", Pattern Analysis and Machine Intelligence, 2012) together with boosting feature learning to cooperatively learn a representation of the target features. The target manifold is approximated by a linear combination of its local subspaces; this learning method concentrates on the characteristics of the target itself, updates and learns the target feature manifold online, and has excellent feature learning performance.
After the target feature online learning unit 3 has learned the target features, the target positioning unit 4 uses the target features to locate the target accurately.
The sequence marking unit 5 marks the located target and simultaneously marks its motion trajectory.
The storage device 6 stores the video generated by the sequence marking unit 5.
The display device 7 may be a display screen for playing back the processed video for the user to watch.
The online video target tracking device may also include a user interface for exporting video. The moving object referred to in the present invention is the recorded image of the color information of some real moving target appearing in successive frames, for example a moving person, pet, or vehicle. The moving target passes through the region captured by the image acquisition device 8 and is usually captured by the image acquisition device 8 in multiple successive frames.
That is, for one frame image, its foreground image and the current background model are processed simultaneously.
Target feature online learning is another important step: the features of the target are obtained through online learning so that the target can be located accurately. In this embodiment we use the method of cooperative learning between the online target manifold and boosting features to learn a representation of the target features.
A manifold is a well-defined mathematical concept; in essence it is a nonlinear space, the simplest example being the sphere. Manifold learning algorithms assume that the relations among data points are nonlinear, as if the data were distributed on a manifold. We attempt to reduce the dimensionality of the data with certain methods, while preserving the nonlinear relations among the data during dimensionality reduction.
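To make the local-subspace idea concrete, a small sketch that re-expresses one sample in the PCA subspace of its k nearest neighbors, echoing the "linear combination of local subspaces" approximation mentioned earlier; k and the subspace dimension are illustrative, and this is not the adaptive manifold learning algorithm the patent cites.

```python
import numpy as np

def local_subspace_project(X, i, k=10, dim=2):
    """Approximate sample X[i] in the PCA subspace of its k nearest
    neighbors -- the 'linear combination of local subspaces' idea."""
    d = np.linalg.norm(X - X[i], axis=1)
    nbrs = X[np.argsort(d)[1:k + 1]]          # k nearest neighbors (excluding i)
    mean = nbrs.mean(axis=0)
    _, _, Vt = np.linalg.svd(nbrs - mean, full_matrices=False)
    basis = Vt[:dim]                          # top-dim local principal directions
    coords = (X[i] - mean) @ basis.T          # low-dimensional local coordinates
    recon = mean + coords @ basis             # point re-expressed on the subspace
    return coords, recon

X = np.random.rand(200, 16)                   # 200 synthetic 16-D feature vectors
coords, recon = local_subspace_project(X, 0)
```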
The boosting method is a method for improving the accuracy of weak classification algorithms. It constructs a series of prediction functions and then combines them in some way into one prediction function. It is a framework algorithm: it operates on the sample set to obtain sample subsets, then trains a series of base classifiers on those subsets with a weak classification algorithm. It can be used to raise the recognition rate of other weak classification algorithms; that is, another weak classification algorithm is placed in the boosting framework as the base classification algorithm, and the boosting framework operates on the training sample set to obtain different training subsets on which base classifiers are trained. Each sample subset yields one base classifier, so after a given number of training rounds n, n base classifiers are produced; the boosting framework algorithm then merges these n base classifiers by weighting to produce the final result classifier. Among the n base classifiers, the recognition rate of each individual classifier is not necessarily high, but their combination achieves a very high recognition rate, thereby improving the weak classification algorithm. In short, the core idea of the boosting algorithm is to produce the desired strong learner by combining a series of weak learners.
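As a concrete instance of combining weak classifiers into a strong one, a short batch AdaBoost sketch on synthetic data using scikit-learn (the `estimator` keyword assumes scikit-learn 1.2 or later); note that the patent uses an online boosting variant that updates its weak classifiers incrementally, which this batch sketch does not show.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))                  # synthetic feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic target/background labels

# n weak learners (depth-1 stumps), weighted into one strong classifier
clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),  # the weak base classifier
    n_estimators=50,                                # n training rounds
).fit(X, y)
print("training accuracy:", clf.score(X, y))
```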
Fig. 3 shows the input of the present invention and the effect image of the output. As shown in the figure, from the moment t-Δt when the target (a pedestrian) enters the monitored area, the system tracks it up to the current moment t and displays its current position and motion trajectory.
The online video target tracking mode of the present invention processes the extracted moving object sequence in real time, guaranteeing that target tracking can be performed on the raw video images at the first moment, so that target tracking meets the demand for real-time operation.
The present invention can perform stable long-term tracking of a target that deforms, scales, or is occluded during motion, so that target tracking meets the demand for high accuracy.
The algorithm of the present invention has high soundness and operational efficiency and reduced complexity.
The foregoing is only the preferred embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (7)

1. An online video target tracking method, characterized in that it comprises the following steps:
Step 1: obtain the initial frame image in the online video, extract image features, and establish an initial background model from the image features;
Step 2: obtain the next frame image, and proceed to step 3 and step 4 simultaneously;
Step 3: compare the image features of the initial background model with the image features of the next frame image to obtain a comparison result, and update the initial background model according to the comparison result;
Step 4: obtain the foreground image in the next frame image, and extract the foreground target from the foreground image;
Step 5: obtain the target features of the foreground target using an online learning method, locate the foreground target in the next frame image according to the target features, and obtain the position information of the foreground target; the online learning method specifically uses a boosting learning algorithm and a manifold learning algorithm to obtain the target features of the foreground target separately, yielding a first feature and a second feature respectively, and integrates the first feature and the second feature using weight coefficients to obtain the final target feature;
Step 6: mark the position of the foreground target in the next frame image according to its position information, and output the marked frame;
Step 7: repeat steps 2 to 6 until the online video input is finished, then combine all output frames to obtain the motion trajectory of the foreground target.
2. The online video target tracking method according to claim 1, characterized in that the image features include texture features.
3. The online video target tracking method according to claim 2, characterized in that step 3 further comprises:
Step 3.1: perform matching computation between the texture features of the initial background model and the texture features of the next frame image;
Step 3.2: if the image features of the initial background model match the image features of the next frame image, label the pixels of the matching part as background and proceed to step 3.3; otherwise, label the pixels of the non-matching part as foreground and proceed to step 3.3;
Step 3.3: update the initial background model according to the labeled foreground and background, and return to step 3.1.
4. An online video target tracking device, characterized in that it comprises a background modeling unit (1), a target extraction unit (2), a target feature online learning unit (3), a target positioning unit (4), and a sequence marking unit (5);
the background modeling unit (1) obtains the initial frame image in the online video, extracts image features, establishes an initial background model from the image features, obtains the next frame image, compares the image features of the initial background model with those of the next frame image to obtain a comparison result, updates the initial background model according to the comparison result, and sends the information of the next frame image to the target extraction unit (2);
the target extraction unit (2) obtains the foreground image in the next frame image, extracts the foreground target from the foreground image, and sends the information of the foreground target to the target feature online learning unit (3);
the target feature online learning unit (3) receives the information of the foreground target, obtains the target features of the foreground target using the online learning method, and sends the information of the target features to the target positioning unit (4); the target feature online learning unit (3) includes a boosting feature learning unit (3-1), a manifold feature learning unit (3-2), and a weighted synthesis unit (3-3);
the boosting feature learning unit (3-1) obtains the target features of the foreground target using the boosting learning algorithm, obtains the first feature, and sends the first feature to the weighted synthesis unit (3-3);
the manifold feature learning unit (3-2) obtains the target features of the foreground target using the manifold learning algorithm, obtains the second feature, and sends the second feature to the weighted synthesis unit (3-3);
the weighted synthesis unit (3-3) receives the first feature and the second feature, integrates them using weight coefficients, and obtains the final target feature;
the target positioning unit (4) receives the information of the target features, locates the foreground target in the next frame image according to the target features, obtains the position information of the foreground target, and sends it to the sequence marking unit (5);
the sequence marking unit (5) marks the position of the foreground target in the next frame image according to its position information, outputs the marked frame, and repeats the target extraction unit (2), target feature online learning unit (3), and target positioning unit (4) until the online video input is finished, then combines all output frames to obtain the motion trajectory of the foreground target.
5. The online video target tracking device according to claim 4, characterized in that the image features include texture features.
6. The online video target tracking device according to claim 4, characterized in that the background modeling unit (1) further comprises an acquisition unit (1-1), a matching unit (1-2), a marking unit (1-3), and an update unit (1-4);
the acquisition unit (1-1) obtains the initial frame image in the online video, extracts image features, establishes an initial background model from the image features, obtains the next frame image, and sends the information of the initial background model and of the next frame image to the matching unit (1-2);
the matching unit (1-2) receives the information of the initial background model and of the next frame image, performs matching computation between the texture features of the initial background model and the texture features of the next frame image, and sends the result of the matching computation to the marking unit (1-3);
the marking unit (1-3) receives the result of the matching computation; if the image features of the initial background model match the image features of the next frame image, it labels the pixels of the matching part as background and invokes the update unit (1-4); otherwise, it labels the pixels of the non-matching part as foreground and invokes the update unit (1-4);
the update unit (1-4) updates the initial background model according to the labeled foreground and background and invokes the matching unit (1-2).
7. The online video target tracking device according to claim 4, characterized in that the online video target tracking device also includes a storage device (6), a display device (7), and an image acquisition device (8);
the storage device (6) stores the motion trajectory of the foreground target generated by the sequence marking unit (5);
the display device (7) displays the motion trajectory of the foreground target generated by the sequence marking unit (5);
the image acquisition device (8) obtains the online video in real time and sends the online video to the background modeling unit (1).
CN201310390529.4A 2013-08-30 2013-08-30 Method and device for tracing online video target Active CN103440668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310390529.4A CN103440668B (en) 2013-08-30 2013-08-30 Method and device for tracing online video target

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310390529.4A CN103440668B (en) 2013-08-30 2013-08-30 Method and device for tracing online video target

Publications (2)

Publication Number Publication Date
CN103440668A CN103440668A (en) 2013-12-11
CN103440668B true CN103440668B (en) 2017-01-25

Family

ID=49694361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310390529.4A Active CN103440668B (en) 2013-08-30 2013-08-30 Method and device for tracing online video target

Country Status (1)

Country Link
CN (1) CN103440668B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123732B (en) * 2014-07-14 2017-06-16 中国科学院信息工程研究所 A kind of online method for tracking target and system based on multi-cam
CN104217221A (en) * 2014-08-27 2014-12-17 重庆大学 Method for detecting calligraphy and paintings based on textural features
CN105282496B (en) * 2014-12-02 2018-03-23 四川浩特通信有限公司 A kind of method for tracking target video object
CN106022279A (en) * 2016-05-26 2016-10-12 天津艾思科尔科技有限公司 Method and system for detecting people wearing a hijab in video images
US10140508B2 (en) * 2016-08-26 2018-11-27 Huawei Technologies Co. Ltd. Method and apparatus for annotating a video stream comprising a sequence of frames
CN106815844A (en) * 2016-12-06 2017-06-09 中国科学院西安光学精密机械研究所 Matting method based on manifold learning
CN106934757B (en) * 2017-01-26 2020-05-19 北京中科神探科技有限公司 Monitoring video foreground extraction acceleration method based on CUDA
CN107368188B (en) * 2017-07-13 2020-05-26 河北中科恒运软件科技股份有限公司 Foreground extraction method and system based on multiple spatial positioning in mediated reality
CN109215057B (en) * 2018-07-31 2021-08-20 中国科学院信息工程研究所 High-performance visual tracking method and device
CN109785356B (en) * 2018-12-18 2021-02-05 北京中科晶上超媒体信息技术有限公司 Background modeling method for video image
CN113468916A (en) * 2020-03-31 2021-10-01 顺丰科技有限公司 Model training method, throwing track detection method, device and storage medium
CN112449160A (en) * 2020-11-13 2021-03-05 珠海大横琴科技发展有限公司 Video monitoring method and device and readable storage medium
CN113283279B (en) * 2021-01-25 2024-01-19 广东技术师范大学 Multi-target tracking method and device in video based on deep learning
CN112950676A (en) * 2021-03-25 2021-06-11 长春理工大学 Intelligent robot loop detection method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102054170A (en) * 2011-01-19 2011-05-11 中国科学院自动化研究所 Visual tracking method based on minimized upper bound error
CN103116987A (en) * 2013-01-22 2013-05-22 华中科技大学 Traffic flow statistic and violation detection method based on surveillance video processing

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0425937D0 (en) * 2004-11-25 2004-12-29 British Telecomm Method and system for initialising a background model
CN101216943B (en) * 2008-01-16 2010-07-14 湖北莲花山计算机视觉和信息科学研究院 A method for video moving object subdivision

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102054170A (en) * 2011-01-19 2011-05-11 中国科学院自动化研究所 Visual tracking method based on minimized upper bound error
CN103116987A (en) * 2013-01-22 2013-05-22 华中科技大学 Traffic flow statistic and violation detection method based on surveillance video processing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on pedestrian tracking algorithms based on HOG and Haar features; Lu Xingjia et al.; Computer Science; 30 June 2013; Vol. 40, No. 6A; pp. 199-203 *

Also Published As

Publication number Publication date
CN103440668A (en) 2013-12-11

Similar Documents

Publication Publication Date Title
CN103440668B (en) Method and device for tracing online video target
Jiao et al. New generation deep learning for video object detection: A survey
Ke et al. Multi-dimensional traffic congestion detection based on fusion of visual features and convolutional neural network
Bi et al. Dynamic mode decomposition based video shot detection
Tang et al. Weakly supervised salient object detection with spatiotemporal cascade neural networks
Wang et al. Supervised class-specific dictionary learning for sparse modeling in action recognition
Mo et al. Background noise filtering and distribution dividing for crowd counting
Wei et al. End-to-end video saliency detection via a deep contextual spatiotemporal network
Yang et al. Bottom-up foreground-aware feature fusion for practical person search
Zhou Feature extraction of human motion video based on virtual reality technology
Ehsan et al. An accurate violence detection framework using unsupervised spatial–temporal action translation network
Chen et al. MICPL: Motion-Inspired Cross-Pattern Learning for Small-Object Detection in Satellite Videos
Sun et al. Flying Bird Object Detection Algorithm in Surveillance Video Based on Motion Information
Zhong et al. Key frame extraction algorithm of motion video based on priori
Wang et al. Intelligent design and optimization of exercise equipment based on fusion algorithm of yolov5-resnet 50
Ren et al. Student behavior detection based on YOLOv4-Bi
Zhang [Retracted] Sports Action Recognition Based on Particle Swarm Optimization Neural Networks
Gong et al. Research on an improved KCF target tracking algorithm based on CNN feature extraction
Kumar et al. Light-Weight Deep Learning Model for Human Action Recognition in Videos
Yang et al. An end-to-end noise-weakened person re-identification and tracking with adaptive partial information
Song et al. Traffic sign recognition with binarized multi-scale neural networks
Xu et al. DTA: Double LSTM with temporal-wise attention network for action recognition
Cheng et al. Weighted multiple instance-based deep correlation filter for video tracking processing
Peng Computer Information Technology and Network Security Analysis of Intelligent Image Recognition
Mao Real-time small-size pixel target perception algorithm based on embedded system for smart city

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant