CN106295532B - Human motion recognition method in video images - Google Patents
Human motion recognition method in video images
- Publication number
- CN106295532B CN106295532B CN201610621491.0A CN201610621491A CN106295532B CN 106295532 B CN106295532 B CN 106295532B CN 201610621491 A CN201610621491 A CN 201610621491A CN 106295532 B CN106295532 B CN 106295532B
- Authority
- CN
- China
- Prior art keywords
- histogram
- classification
- profile energy
- behavior
- energy variation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Links
- 238000000034 method Methods 0.000 title claims abstract description 73
- 230000033001 locomotion Effects 0.000 title claims abstract description 16
- 238000012549 training Methods 0.000 claims abstract description 15
- 230000001149 cognitive effect Effects 0.000 claims abstract description 4
- 230000009471 action Effects 0.000 claims description 14
- 239000012535 impurity Substances 0.000 claims description 12
- 230000008569 process Effects 0.000 claims description 11
- 230000008859 change Effects 0.000 claims description 5
- 238000003064 k means clustering Methods 0.000 claims description 5
- 238000013507 mapping Methods 0.000 claims description 3
- 238000012545 processing Methods 0.000 claims description 3
- 238000011410 subtraction method Methods 0.000 claims description 3
- 238000012216 screening Methods 0.000 claims description 2
- 230000011218 segmentation Effects 0.000 abstract description 4
- 238000012360 testing method Methods 0.000 description 4
- 230000000694 effects Effects 0.000 description 3
- 230000004044 response Effects 0.000 description 3
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000012544 monitoring process Methods 0.000 description 2
- 230000003542 behavioural effect Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 239000000969 carrier Substances 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 238000002790 cross-validation Methods 0.000 description 1
- 230000007812 deficiency Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a human motion recognition method for video images, comprising the following steps: (1) pre-process each input frame to obtain the foreground region, and screen the foreground region to obtain the target region; (2) obtain the target contour from the target region; (3) obtain the contour energy variation histograms in the X and Y directions; (4) normalize the contour energy variation histograms; (5) training stage: perform action classification on the training set formed by the contour energy variation histograms, obtain the human behavior models, and assign weights; (6) recognition stage: match the contour energy histogram of the frame under test against the human behavior models obtained in the training stage and perform recognition. The invention obtains the contour energy variation histogram by computing the change of the target contour between consecutive frames and performs unsupervised classification on these histograms, improving accuracy and robustness while guaranteeing real-time operation.
Description
Technical field
The present invention relates to a human motion recognition method for video images, and belongs to the technical field of image processing and pattern recognition.
Background technique
With the rapid development of video-capture equipment and broadband networks, video has become a main carrier of information. Most videos record human activity, so whether from the perspective of security, surveillance, and entertainment or of personal information storage, research on recognizing human actions in video has great academic value and broad application prospects. In essence, human behavior recognition extracts features of interest from a segmented pedestrian target and then classifies the extracted feature data. Current human behavior recognition methods fall into two categories: methods based on template matching and methods based on state spaces. Template-matching methods store reference sequence-image templates of human behaviors in a database, then match the image under test against the stored reference sequences to find the most similar one and thereby determine the behavior class under test. Template-based methods have low complexity, but they do not account for the dynamic characteristics of human behavior in a video sequence and are very sensitive to noise. State-space methods describe human motion by treating each basic pose of a human behavior as a state and linking the states through probabilistic relations; the hidden Markov model is the most widely used example. Human behavior recognition still faces many open problems: the human body is a non-rigid target, and different people perform the same action differently, which hinders the generality of behavior recognition; moreover, some human actions resemble one another and the variety of actions is large, all of which must be considered when designing a recognition method.
Currently, intelligent surveillance places ever higher demands on real-time performance and accuracy, and conventional methods struggle to meet the needs of practical applications.
Summary of the invention
It is an object of the invention to overcome deficiency in the prior art, the human action provided in a kind of video image is known
Other method, it is low based on model generalization ability in model matching method to solve tradition, noise resisting ability difference and empty based on state
Between in method the technical issues of action classification similitude.
To solve the above technical problems, the present invention provides a human motion recognition method for video images, characterized by comprising the following steps:
Step 1: pre-process each input frame to obtain the foreground region, and screen the foreground region to obtain the target region;
Step 2: obtain the target contour from the target region;
Step 3: obtain the contour energy variation histograms in the X and Y directions;
Step 4: normalize the contour energy variation histograms;
Step 5 (training stage): perform action classification on the training set formed by the contour energy variation histograms, obtain the human behavior models, and assign weights;
Step 6 (recognition stage): match the contour energy histogram of the frame under test against the human behavior models obtained in the training stage and perform recognition.
Further, in step 1, the pre-processing uses the background subtraction method, and the screening uses the minimum circumscribed rectangle method.
Further, in step 3, the contour energy variation histogram is obtained as follows:
31) Obtain the edge images I_edge and I_last_edge of two adjacent frames, and traverse the edge image I_edge column by column with a 10 × 10 window;
32) During traversal, whenever the window contains an edge pixel, find the edge pixel with the smallest Euclidean distance to it in the same region of the previous-frame image I_last_edge and match the two; the Euclidean distance is taken as the energy change value of that edge pixel;
33) After the traversal completes, take the column index as the abscissa of the histogram and the accumulated energy change value of each column as the ordinate, yielding the contour energy variation histogram.
Further, in step 4, the normalization first normalizes the histogram ordinates so that their values lie between 0 and 1, and then maps the histogram onto an abscissa of fixed size.
Further, in step 5, the classification method is:
51) Obtain the set of cluster centroids with the k-means clustering method and divide the behaviors into major classes, obtaining the class C_i of each division, where 1 ≤ i ≤ n and n is the number of behavior classes;
52) Compare each pair of classes C_i using the Euclidean distance to obtain the Gini impurity G_i; the Gini impurity G_i serves as the weight of class C_i.
Further, in step 6, the recognition of the behavior under test proceeds as follows:
61) For the behavior under test S_q = {K_1, K_2, K_3, ..., K_l}, perform steps 1 to 3 to obtain its contour energy variation histograms; compute the Euclidean distance between the histogram of image K_t and the centroid of each class, and choose the class with the smallest Euclidean distance as the class C_i of image K_t, where 1 ≤ t ≤ l;
62) Let the possibility that S_q belongs to each template action behavior be A_q = {A_1, A_2, A_3, ..., A_n}, where the possibility A_i is obtained by optimizing C_i according to the Gini impurity G_i: A_i = G_i / C_i;
63) According to the class possibility A_i of each frame image, select the maximum value A_max and thereby determine the action type of S_q.
Compared with the prior art, the beneficial effects of the present invention are as follows: the present invention computes the change of the target contour between consecutive frames via the Euclidean distance to obtain the contour energy variation histogram, performs unsupervised classification on the per-frame histograms with the k-means clustering method, and assigns weights to the classification results through the Gini impurity, improving accuracy and robustness while guaranteeing real-time operation and solving the problems of low model generalization and poor noise resistance in traditional template-matching methods and of action-class similarity in state-space methods.
Brief description of the drawings
Fig. 1 is the flow chart of the method for the present invention.
Fig. 2 is an image of the boxing behavior in the KTH database used in the embodiment of the present invention.
Fig. 3 is an image of the handclapping behavior in the KTH database used in the embodiment of the present invention.
Fig. 4 is an image of the handwaving behavior in the KTH database used in the embodiment of the present invention.
Fig. 5 is an image of the jogging behavior in the KTH database used in the embodiment of the present invention.
Fig. 6 is an image of the running behavior in the KTH database used in the embodiment of the present invention.
Fig. 7 is an image of the walking behavior in the KTH database used in the embodiment of the present invention.
Specific embodiment
The invention is further described below in conjunction with the accompanying drawings. The following embodiments are only used to clearly illustrate the technical solution of the present invention and are not intended to limit its protection scope.
As shown in Fig. 1, the human motion recognition method in video images of the present invention comprises the following steps:
Step 1: pre-process each input frame to obtain the foreground region, and screen the foreground region to obtain the target region.
Let the behavior training set be S = {S_1, S_2, S_3, ..., S_n} (n is the number of behavior classes), where each behavior S_i (1 ≤ i ≤ n) is S_i = {K_1, K_2, K_3, ..., K_m} (m is the number of image frames) and K_j (1 ≤ j ≤ m) is a frame image composing behavior S_i. For each input frame of a video, the foreground region is obtained with the background subtraction method (see the prior art for the detailed procedure); the foreground region is then enclosed by its minimum circumscribed rectangle to judge whether it is a human target region, thereby screening out the target region.
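The foreground extraction and screening of step 1 can be sketched in Python as follows. Since the patent defers the detailed background-subtraction procedure to the prior art, the difference threshold, the minimum-area screen, and the use of plain frame-versus-background differencing here are illustrative assumptions rather than the patented procedure.

```python
import numpy as np

def extract_target_region(frame, background, thresh=25, min_area=200):
    """Background subtraction followed by bounding-box screening.

    Pixels differing from the background model by more than `thresh`
    form the foreground mask; the minimum circumscribed rectangle of
    the mask is kept only if its area suggests a human-sized target.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    ys, xs = np.nonzero(diff > thresh)
    if ys.size == 0:
        return None
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    if (x1 - x0 + 1) * (y1 - y0 + 1) < min_area:
        return None  # too small to be a human target region
    return (int(x0), int(y0), int(x1), int(y1))

# toy example: a bright 20x20 blob on a dark background
bg = np.zeros((100, 100), dtype=np.uint8)
fr = bg.copy()
fr[40:60, 30:50] = 255
print(extract_target_region(fr, bg))  # -> (30, 40, 49, 59)
```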
Step 2: obtain the target contour from the target region.
The target contour is obtained by first filtering the input image with a 2D Gaussian filter template, then extracting the binary contour of the human pose frame by frame with the Canny operator, and finally computing the gradient magnitude and direction of each edge pixel in the image with the Sobel operator.
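The Sobel gradient computation at the end of step 2 can be illustrated with plain NumPy; a real pipeline would first apply the Gaussian smoothing and Canny extraction named above (e.g. via an image-processing library), which this sketch omits.

```python
import numpy as np

def sobel_gradient(img):
    """Per-pixel gradient magnitude and direction via 3x3 Sobel
    kernels, as used to characterise each edge pixel of the contour."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 3, x:x + 3]   # 3x3 neighbourhood
            gx[y, x] = np.sum(win * kx)
            gy[y, x] = np.sum(win * ky)
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```

On a vertical step edge the horizontal kernel responds with magnitude 4 at the boundary, while a constant image yields zero gradient everywhere.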
Step 3: obtain the contour energy variation histograms in the X and Y directions.
The change of the target contour between consecutive frames is computed via the Euclidean distance to obtain the contour energy variation histogram; the detailed procedure is:
31) Obtain the edge images I_edge and I_last_edge of two adjacent frames, and traverse the edge image I_edge column by column with a 10 × 10 window;
32) During traversal, whenever the window contains an edge pixel, find the edge pixel with the smallest Euclidean distance to it in the same region of the previous-frame image I_last_edge and match the two; the Euclidean distance is taken as the energy change value of that edge pixel;
33) After the traversal completes, take the column index as the abscissa of the histogram and the accumulated energy change value of each column as the ordinate, yielding the contour energy variation histogram.
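Steps 31)-33) can be sketched as follows. Accumulating the matched distances per column is one reading of the patent's description, and the boolean edge-image representation is an assumption of this sketch.

```python
import numpy as np

WIN = 10  # the 10x10 traversal window from step 31

def energy_histogram(edge, last_edge):
    """Column-wise contour energy variation histogram (steps 31-33).

    For each 10x10 window of the current edge image, every edge pixel
    is matched with the nearest edge pixel (Euclidean distance) in the
    same region of the previous edge image; that distance is the
    pixel's energy change, and the per-column sums form the histogram.
    """
    h, w = edge.shape
    hist = np.zeros(w)
    prev_pts = np.argwhere(last_edge)  # (y, x) of previous-frame edges
    for x0 in range(0, w, WIN):
        for y0 in range(0, h, WIN):
            for y, x in np.argwhere(edge[y0:y0 + WIN, x0:x0 + WIN]):
                py, px = y0 + y, x0 + x
                # previous-frame edge pixels in the same window region
                in_win = prev_pts[
                    (prev_pts[:, 0] >= y0) & (prev_pts[:, 0] < y0 + WIN) &
                    (prev_pts[:, 1] >= x0) & (prev_pts[:, 1] < x0 + WIN)]
                if in_win.size:
                    d = np.hypot(in_win[:, 0] - py, in_win[:, 1] - px)
                    hist[px] += d.min()  # smallest distance = energy change
    return hist
```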
Step 4: normalize the contour energy variation histograms.
The normalization first normalizes the histogram ordinates so that their values lie between 0 and 1, and then maps the histogram onto an abscissa of fixed size.
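A minimal sketch of the normalization in step 4, assuming linear interpolation for the abscissa mapping and 64 bins as the fixed size; the patent specifies neither, so both are illustrative choices.

```python
import numpy as np

def normalize_histogram(hist, size=64):
    """Step 4: scale ordinates into [0, 1], then map the abscissa onto
    a fixed number of bins by linear interpolation so histograms of
    different widths become comparable."""
    hist = np.asarray(hist, dtype=float)
    if hist.max() > 0:
        hist = hist / hist.max()      # ordinates in [0, 1]
    src = np.linspace(0, 1, len(hist))
    dst = np.linspace(0, 1, size)
    return np.interp(dst, src, hist)  # fixed-size abscissa
```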
Step 5 (training stage): perform action classification on the training set formed by the contour energy variation histograms, obtain the human behavior models, and assign weights.
Unsupervised classification is performed on the per-frame contour energy variation histograms with the k-means clustering method; the detailed procedure is:
51) Randomly select k objects from the training set composed of contour variation energy histograms, each object representing the centroid of a cluster, where the value of k is chosen empirically with 3 ≤ k ≤ n;
52) Assign each remaining object to the cluster it most resembles according to its distance to each cluster centroid;
53) Compute the new centroid of each cluster;
54) Repeat steps 51)-53) until the criterion function converges;
55) According to the obtained cluster centroid set R_n, divide the behavior set S into major classes, each division being a class C_i;
56) Compare the classes C_i pairwise using the Euclidean distance to obtain the Gini impurity G_i, which serves as the weight of class C_i (see the prior art for the procedure of computing the Gini impurity).
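Steps 51)-56) can be sketched with a plain k-means loop plus a Gini-impurity weight. The patent does not fully specify how the pairwise Euclidean comparison yields G_i, so computing the impurity of the cluster labels that one behavior class receives is an illustrative assumption.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (steps 51-54): random centroids, assign, update,
    stop when the centroids no longer move (criterion converged)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

def gini_impurity(labels_in_class):
    """Gini impurity 1 - sum(p^2) of the cluster labels received by
    one behavior class; used here as the class weight G_i."""
    _, counts = np.unique(labels_in_class, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))
```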
Step 6 (recognition stage): match the contour energy histogram of the frame under test against the human behavior models obtained in the training stage and perform recognition.
The detailed recognition procedure for the behavior under test is:
61) For the behavior under test S_q = {K_1, K_2, K_3, ..., K_l}, perform steps 1 to 3 to obtain its contour energy variation histograms; compute the Euclidean distance between the histogram of image K_t (1 ≤ t ≤ l) and the centroid of each class, and choose the class with the smallest Euclidean distance as the class C_i of image K_t;
62) Let the possibility that S_q belongs to each template action behavior be A_q = {A_1, A_2, A_3, ..., A_n}, where the possibility A_i is obtained by optimizing C_i according to the Gini impurity G_i: A_i = G_i / C_i. The larger A_i is, the more likely S_q belongs to the i-th action class; optimizing with this ratio improves the discrimination between different classes;
63) According to the possibility A_i of each frame image's class, select the maximum value A_max and thereby determine the action type of S_q.
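The recognition stage (steps 61-63) can be sketched as a nearest-centroid vote over the frames of the sequence. The patent writes A_i = G_i / C_i without defining the division precisely; reading it as a Gini-weighted vote share is an assumption of this sketch, not the patented formula.

```python
import numpy as np

def recognize(frame_hists, centroids, gini_weights):
    """Assign each frame histogram to the class whose centroid is
    nearest in Euclidean distance (step 61), then score each class by
    its Gini-weighted share of frames (one reading of steps 62-63)."""
    counts = np.zeros(len(centroids))
    for h in frame_hists:
        d = np.linalg.norm(centroids - np.asarray(h, dtype=float), axis=1)
        counts[d.argmin()] += 1
    scores = np.asarray(gini_weights) * counts / len(frame_hists)
    return int(np.argmax(scores)), scores  # A_max picks the action type
```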
Embodiment 1
The present invention is cross-validated with the leave-one-out method (given N samples, each sample in turn serves as the test sample while the other N-1 samples serve as training samples). The test samples come from the KTH human behavior database, which contains 6 behavior classes: boxing, handclapping, handwaving, jogging, running, and walking, each performed by 25 different people under four scenes (outdoor background, camera zoom in and out, slight camera motion, and indoor background), for a total of 599 video clips. Figs. 2 to 7 show images of the boxing, handclapping, handwaving, jogging, running, and walking behaviors in the KTH database, respectively. Prior-art human action recognition methods include those of Schindler, Ahmad, Jhuang, Rodriguez, and Mikolajczyk. Based on the 6 behavior classes of the KTH human behavior database, this embodiment tests the method of the present invention against the prior-art methods, where the Schindler, Ahmad, Jhuang, and Rodriguez methods use the split method, and the method of the present invention and the Mikolajczyk method use the leave-one-out method. The test results of each method are shown in Table 1: the average recognition rate of the method of the present invention over all behaviors reaches 93.3%, exceeding the recognition rates of the other methods.
Table 1: Recognition rate of each method on the KTH database

| Method | Evaluation scheme | Recognition rate (%) |
| --- | --- | --- |
| The present method | Leave-one-out | 93.3 |
| Schindler | Split method | 90.73 |
| Ahmad | Split method | 87.63 |
| Jhuang | Split method | 91.68 |
| Rodriguez | Split method | 88.66 |
| Mikolajczyk | Leave-one-out | 93.17 |
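The leave-one-out protocol described in the embodiment can be sketched generically; `train_fn` and `predict_fn` are placeholder hooks, shown below with a trivial nearest-neighbour classifier for illustration.

```python
def leave_one_out(samples, train_fn, predict_fn):
    """Leave-one-out cross-validation as described: each of the N
    labelled samples is the test sample once, while the other N-1
    samples train the model; returns the overall accuracy."""
    correct = 0
    for i, (x, y) in enumerate(samples):
        train = [s for j, s in enumerate(samples) if j != i]
        model = train_fn(train)
        correct += (predict_fn(model, x) == y)
    return correct / len(samples)

# illustrative hooks: the "model" is just the training list and
# prediction is the label of the nearest 1-D neighbour
samples = [(0.0, "a"), (1.0, "a"), (10.0, "b"), (11.0, "b")]
train = lambda tr: tr
predict = lambda model, x: min(model, key=lambda s: abs(s[0] - x))[1]
print(leave_one_out(samples, train, predict))  # -> 1.0
```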
In conclusion the invention has the following advantages: calculating objective contour in consecutive frame by Euclidean distance
Variation obtains profile energy variation histogram, and the profile energy variation obtained using k-means clustering method to each frame image is straight
Side's figure carries out unsupervised segmentation, can be improved the accuracy rate of identification by the method that Geordie impurity level assigns weight to classification results
And robustness, while can guarantee real-time, solve traditional low based on model generalization ability in model matching method, antinoise
The problem of ability is poor and is based on action classification similitude in state-space method.
The above is only a preferred embodiment of the present invention, it is noted that for the ordinary skill people of the art
For member, without departing from the technical principles of the invention, several improvements and modifications, these improvements and modifications can also be made
Also it should be regarded as protection scope of the present invention.
Claims (5)
1. A human motion recognition method in video images, characterized by comprising the following steps:
Step 1: pre-process each input frame to obtain the foreground region, and screen the foreground region to obtain the target region;
Step 2: obtain the target contour from the target region;
Step 3: obtain the contour energy variation histograms in the X and Y directions;
Step 4: normalize the contour energy variation histograms;
Step 5 (training stage): perform action classification on the training set formed by the contour energy variation histograms, obtain the human behavior models, and assign weights;
Step 6 (recognition stage): match the contour energy variation histogram of the frame under test against the human behavior models obtained in the training stage and perform recognition;
In step 3, the contour energy variation histogram is obtained as follows:
31) Obtain the edge images I_edge and I_last_edge of two adjacent frames, and traverse the edge image I_edge column by column with a 10 × 10 window;
32) During traversal, whenever the window contains an edge pixel, find the edge pixel with the smallest Euclidean distance to it in the same region of the previous-frame image I_last_edge and match the two; the Euclidean distance is taken as the energy change value of that edge pixel;
33) After the traversal completes, take the column index as the abscissa of the histogram and the accumulated energy change value of each column as the ordinate, yielding the contour energy variation histogram.
2. The human motion recognition method in video images according to claim 1, characterized in that in step 1, the pre-processing uses the background subtraction method, and the screening uses the minimum circumscribed rectangle method.
3. The human motion recognition method in video images according to claim 1, characterized in that in step 4, the normalization first normalizes the histogram ordinates so that their values lie between 0 and 1, and then maps the histogram onto an abscissa of fixed size.
4. The human motion recognition method in video images according to claim 1, characterized in that in step 5, the classification method is:
51) Obtain the set of cluster centroids with the k-means clustering method and divide the behaviors into major classes, obtaining the class C_i of each division, where 1 ≤ i ≤ n and n is the number of behavior classes;
52) Compare each pair of classes C_i using the Euclidean distance to obtain the Gini impurity G_i; the Gini impurity G_i serves as the weight of class C_i.
5. The human motion recognition method in video images according to claim 4, characterized in that in step 6, the recognition of the behavior under test proceeds as follows:
61) For the behavior under test S_q = {K_1, K_2, K_3, ..., K_l}, perform steps 1 to 3 to obtain its contour energy variation histograms; compute the Euclidean distance between the histogram of image K_t and the centroid of each class, and choose the class with the smallest Euclidean distance as the class C_i of image K_t, where 1 ≤ t ≤ l;
62) Let the possibility that S_q belongs to each template action behavior be A_q = {A_1, A_2, A_3, ..., A_n}, where the possibility A_i is obtained by optimizing C_i according to the Gini impurity G_i: A_i = G_i / C_i;
63) According to the class possibility A_i of each frame image, select the maximum value A_max and thereby determine the action type of S_q.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610621491.0A CN106295532B (en) | 2016-08-01 | 2016-08-01 | A kind of human motion recognition method in video image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610621491.0A CN106295532B (en) | 2016-08-01 | 2016-08-01 | A kind of human motion recognition method in video image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106295532A CN106295532A (en) | 2017-01-04 |
CN106295532B true CN106295532B (en) | 2019-09-24 |
Family
ID=57664039
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610621491.0A Expired - Fee Related CN106295532B (en) | 2016-08-01 | 2016-08-01 | A kind of human motion recognition method in video image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106295532B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107392945B (en) * | 2017-06-11 | 2019-10-25 | 杭州巨梦科技有限公司 | A kind of two-dimensional silhouette matching process |
CN107272468A (en) * | 2017-08-01 | 2017-10-20 | 刘太龙 | Electronic security(ELSEC) based on communication ensures platform |
CN108182416A (en) * | 2017-12-30 | 2018-06-19 | 广州海昇计算机科技有限公司 | A kind of Human bodys' response method, system and device under monitoring unmanned scene |
CN109867186B (en) * | 2019-03-18 | 2020-11-10 | 浙江新再灵科技股份有限公司 | Elevator trapping detection method and system based on intelligent video analysis technology |
CN110866435B (en) * | 2019-08-13 | 2023-09-12 | 广州三木智能科技有限公司 | Far infrared pedestrian training method for self-similarity gradient orientation histogram |
CN110597251B (en) * | 2019-09-03 | 2022-10-25 | 三星电子(中国)研发中心 | Method and device for controlling intelligent mobile equipment |
CN112070016B (en) * | 2020-09-08 | 2023-12-26 | 浙江铂视科技有限公司 | Detection method for identifying child behavior and action |
CN113221880B * | 2021-04-29 | 2022-08-05 | 上海勃池信息技术有限公司 | OCR layout analysis method based on Gini purity |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101576953A (en) * | 2009-06-10 | 2009-11-11 | 北京中星微电子有限公司 | Classification method and device of human body posture |
CN101882217A (en) * | 2010-02-26 | 2010-11-10 | 杭州海康威视软件有限公司 | Target classification method of video image and device |
CN102136066A (en) * | 2011-04-29 | 2011-07-27 | 电子科技大学 | Method for recognizing human motion in video sequence |
CN102682302A (en) * | 2012-03-12 | 2012-09-19 | 浙江工业大学 | Human body posture identification method based on multi-characteristic fusion of key frame |
CN103310233A (en) * | 2013-06-28 | 2013-09-18 | 青岛科技大学 | Similarity mining method of similar behaviors between multiple views and behavior recognition method |
CN103400391A (en) * | 2013-08-09 | 2013-11-20 | 北京博思廷科技有限公司 | Multiple-target tracking method and device based on improved random forest |
CN105139417A (en) * | 2015-07-27 | 2015-12-09 | 河海大学 | Method for real-time multi-target tracking under video surveillance |
-
2016
- 2016-08-01 CN CN201610621491.0A patent/CN106295532B/en not_active Expired - Fee Related
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101576953A (en) * | 2009-06-10 | 2009-11-11 | 北京中星微电子有限公司 | Classification method and device of human body posture |
CN101882217A (en) * | 2010-02-26 | 2010-11-10 | 杭州海康威视软件有限公司 | Target classification method of video image and device |
CN102136066A (en) * | 2011-04-29 | 2011-07-27 | 电子科技大学 | Method for recognizing human motion in video sequence |
CN102682302A (en) * | 2012-03-12 | 2012-09-19 | 浙江工业大学 | Human body posture identification method based on multi-characteristic fusion of key frame |
CN103310233A (en) * | 2013-06-28 | 2013-09-18 | 青岛科技大学 | Similarity mining method of similar behaviors between multiple views and behavior recognition method |
CN103400391A (en) * | 2013-08-09 | 2013-11-20 | 北京博思廷科技有限公司 | Multiple-target tracking method and device based on improved random forest |
CN105139417A (en) * | 2015-07-27 | 2015-12-09 | 河海大学 | Method for real-time multi-target tracking under video surveillance |
Also Published As
Publication number | Publication date |
---|---|
CN106295532A (en) | 2017-01-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106295532B (en) | A kind of human motion recognition method in video image | |
CN109740413B (en) | Pedestrian re-identification method, device, computer equipment and computer storage medium | |
CN108921083B (en) | Illegal mobile vendor identification method based on deep learning target detection | |
WO2017190574A1 (en) | Fast pedestrian detection method based on aggregation channel features | |
Pan et al. | A robust system to detect and localize texts in natural scene images | |
CN110929593B (en) | Real-time significance pedestrian detection method based on detail discrimination | |
US20230289979A1 (en) | A method for video moving object detection based on relative statistical characteristics of image pixels | |
CN103279768B (en) | A kind of video face identification method based on incremental learning face piecemeal visual characteristic | |
CN107491749B (en) | Method for detecting global and local abnormal behaviors in crowd scene | |
CN105718866B (en) | A kind of detection of sensation target and recognition methods | |
KR101697161B1 (en) | Device and method for tracking pedestrian in thermal image using an online random fern learning | |
CN106384345B (en) | A kind of image detection and flow statistical method based on RCNN | |
CN110119726A (en) | A kind of vehicle brand multi-angle recognition methods based on YOLOv3 model | |
CN110991397B (en) | Travel direction determining method and related equipment | |
Rahman Ahad et al. | Action recognition based on binary patterns of action-history and histogram of oriented gradient | |
CN102663411A (en) | Recognition method for target human body | |
CN113850221A (en) | Attitude tracking method based on key point screening | |
Nabi et al. | Temporal poselets for collective activity detection and recognition | |
CN106874825A (en) | The training method of Face datection, detection method and device | |
CN110008899B (en) | Method for extracting and classifying candidate targets of visible light remote sensing image | |
CN104200218B (en) | A kind of across visual angle action identification method and system based on timing information | |
CN108509861A (en) | A kind of method for tracking target and device combined based on sample learning and target detection | |
Najibi et al. | Towards the success rate of one: Real-time unconstrained salient object detection | |
CN108509825A (en) | A kind of Face tracking and recognition method based on video flowing | |
CN104200202B (en) | A kind of upper half of human body detection method based on cumulative perceptron |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20190924 |
CF01 | Termination of patent right due to non-payment of annual fee |