
CN111191667B - Crowd counting method based on multiscale generation countermeasure network - Google Patents

Crowd counting method based on multiscale generation countermeasure network

Info

Publication number
CN111191667B
CN111191667B (application CN201811356818.1A)
Authority
CN
China
Prior art keywords
crowd
density
generator
density map
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811356818.1A
Other languages
Chinese (zh)
Other versions
CN111191667A (en)
Inventor
咸良
杨建兴
周圆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University Marine Technology Research Institute
Original Assignee
Tianjin University Marine Technology Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University Marine Technology Research Institute filed Critical Tianjin University Marine Technology Research Institute
Priority to CN201811356818.1A priority Critical patent/CN111191667B/en
Publication of CN111191667A publication Critical patent/CN111191667A/en
Application granted granted Critical
Publication of CN111191667B publication Critical patent/CN111191667B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A crowd counting method based on a multi-scale generative adversarial network performs crowd density prediction in an adversarial training manner. The generative model and the discriminative model are trained by joint alternating iteration to optimize the minimax problem. The generator network is trained to produce accurate crowd density maps that fool the discriminator; conversely, the discriminator is trained to distinguish the generated density maps from the real density map labels. Meanwhile, the output of the discriminator provides the generator with feedback on density map localization and prediction accuracy. The two networks are trained in competition to enhance the quality of generation until the samples produced by the generator can no longer be correctly judged by the discriminator. After the adversarial loss is introduced, the crowd density detection algorithm proposed in this patent uses adversarial training to make the convolutional neural network generate higher-quality density maps, thereby improving the accuracy of crowd counting.

Description

Crowd counting method based on multiscale generation countermeasure network
Technical Field
The invention relates to the fields of image processing and computer vision, and in particular to a crowd counting algorithm based on a multi-scale generative adversarial network.
Background
With the continuous growth of China's population, large-scale crowd gatherings are becoming more and more common. In order to effectively control the number of people in public places and prevent accidents caused by crowd-density overload, video monitoring has become the main current means. In the fields of video surveillance and security, crowd analysis attracts more and more researchers' attention and has become a very hot research topic in computer vision. The crowd counting task is to accurately estimate the total number of people in a picture while also giving the distribution of crowd density. Picture-based crowd counting can be used in many areas such as accident prevention, space planning, consumer habit analysis, and traffic scheduling.
Currently, the mainstream crowd counting algorithms applied to intelligent monitoring fall into two categories: detection-based crowd counting algorithms and regression-based crowd counting algorithms. A detection-based crowd counting algorithm assumes that, in the surveillance video, all pedestrians in each frame can be accurately detected and localized by a manually designed visual object detector, and the people-count estimate is obtained by accumulating all detected targets. As early as 1998, Papageorgiou et al. proposed training SVM classifiers on wavelet features extracted at different scales in images for the pedestrian detection task. In 2001, Lin et al. proposed an improved method: the image is first processed with histogram equalization and a Haar wavelet transform, multi-scale statistical features of head contours are then extracted, and finally an SVM is trained as the detector. This algorithm can obtain fairly accurate crowd detection counts when the video definition is high, but it is strongly affected by environmental changes and the viewing angle of the surveillance camera. In 2005, Dalal et al. proposed a pedestrian detection algorithm based on Histogram of Oriented Gradients (HOG) features, combined with a linear SVM to classify and count the people in the image, further improving the accuracy of pedestrian detection.
However, when the crowd density in the monitored scene is high, occlusion within the crowd prevents detection-based crowd counting algorithms from accurately detecting and tracking a large portion of the pedestrians.
Disclosure of Invention
In order to solve the problems in the prior art, the crowd counting method based on the multi-scale generative adversarial network fuses features from different depths of a single-column convolutional neural network, addressing scale change, occlusion and other problems in crowd images. At the same time, an adversarial loss from the discriminator is added to the network model, and crowd density prediction is performed in an adversarial training manner, so that a higher-quality density map is generated.
The crowd counting method based on the multi-scale generative adversarial network comprises the following specific steps:
1. Crowd scene Gaussian kernel density map
Existing crowd counting training datasets provide crowd images together with annotated head coordinates. The head coordinates cannot be used directly as training labels, so the invention converts the given head coordinate data into the form of a crowd density distribution map. In a crowd image of the dataset, a head annotated at pixel $x_i$ is represented by a discrete delta function $\delta(x - x_i)$, so the positions of the $N$ heads in each image may be written as:

$$H(x) = \sum_{i=1}^{N} \delta(x - x_i)$$
To convert the head position function into a continuous density function, it is convolved with a Gaussian kernel $G_{\sigma}(x)$ to obtain the density function:

$$F(x) = H(x) * G_{\sigma}(x) = \sum_{i=1}^{N} \delta(x - x_i) * G_{\sigma}(x)$$
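For illustration, the following is a minimal sketch of this density-map construction in Python (NumPy/SciPy); the image size, head coordinates, and the fixed kernel width sigma are assumptions for the example, not values from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_density_map(head_coords, height, width, sigma=4.0):
    """Convert annotated head coordinates into a crowd density map F(x).

    head_coords: iterable of (row, col) head positions.
    sigma: Gaussian kernel width (assumed fixed here).
    The resulting map sums (approximately) to the number of heads.
    """
    # H(x): sum of delta functions at the annotated head positions.
    density = np.zeros((height, width), dtype=np.float32)
    for r, c in head_coords:
        r, c = int(round(r)), int(round(c))
        if 0 <= r < height and 0 <= c < width:
            density[r, c] += 1.0
    # F(x) = H(x) * G_sigma(x): convolve with the Gaussian kernel.
    return gaussian_filter(density, sigma=sigma, mode='constant')

# Example: three annotated heads in a 256x256 image.
dmap = gaussian_density_map([(50, 60), (120, 130), (200, 40)], 256, 256)
print(dmap.sum())  # ~3.0, i.e. the head count is preserved
```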
2. Constructing the multi-scale generative adversarial network
Crowd counting methods based on deep convolutional neural networks do not predict crowd density maps of ideal quality in complex and high-density crowd scenes, mainly because pedestrians and background are highly similar in such scenes and a plain convolutional network tends to misdetect and misclassify. At the same time, the quality of the predicted density map greatly affects the accuracy of crowd counting. The invention therefore proposes a crowd counting method based on a multi-scale generative adversarial network (Multi-Scale Generative Adversarial Networks, MS-GAN) built on multi-scale convolution, and introduces an adversarial loss function to improve prediction accuracy.
The structure of the multi-scale generative adversarial network model is shown in Figure 1 and mainly comprises two parts: a generator and a discriminator. The generator takes a crowd image as input and outputs the predicted crowd density map; the obtained density map and the crowd image are then stacked and input into the discriminator simultaneously, and the discriminator is trained to distinguish whether its input is a generated density map or a real density map. Meanwhile, because the crowd image is stacked into the input, the discriminator must also judge whether the generated density map matches the crowd image.
3. Content loss function design
In the proposed network model, the generator learns a mapping from a crowd image to the corresponding crowd density map, and its output is the predicted crowd density map. A pixel-level loss function is used herein: the Euclidean distance between the predicted and real density maps is computed as the network loss, using the pixel-level mean square error (Mean Square Error, MSE):

$$L_E(\theta_G) = \frac{1}{2N} \sum_{i=1}^{N} \left\| G(X_i; \theta_G) - Y_i \right\|_2^2$$
where $G(X_i; \theta_G)$ denotes the density map generated by the generator and $\theta_G$ denotes the parameters of the generator network model. In addition, $X_i$ denotes the $i$-th crowd image, $Y_i$ denotes the real label density map of crowd image $X_i$, and $N$ is the number of all training images.
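A minimal TensorFlow sketch of this pixel-level content loss (tensor shapes, names, and the 1/(2N) normalization follow the reconstruction above and are assumptions, not the patent's code):

```python
import tensorflow as tf

def content_loss(pred_density, gt_density):
    """Pixel-level MSE term L_E between predicted and ground-truth density maps.

    pred_density, gt_density: float tensors of shape [batch, H, W, 1].
    Returns 1/(2N) * sum_i ||G(X_i) - Y_i||_2^2, averaged over the batch.
    """
    # Squared Euclidean distance per image, then mean over the batch.
    per_image = tf.reduce_sum(tf.square(pred_density - gt_density), axis=[1, 2, 3])
    return 0.5 * tf.reduce_mean(per_image)
```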
4. Adversarial loss function design
The purpose of the discriminator is to distinguish the generated density map from the real label density map. Thus, the generated density map is labeled 0 and the real density map is labeled 1 herein, and the output of the discriminator represents the probability that the input density map is a real density map. An additional adversarial loss function is employed in the method to enhance the quality of the generated density map. The adversarial loss (Adversarial Loss) function is given by:

$$L_A(\theta_G) = -\frac{1}{N} \sum_{i=1}^{N} \log D\big(X_i, G(X_i; \theta_G)\big)$$
where $D(X_i, G(X_i; \theta_G))$ represents the degree to which the predicted density map $G(X_i; \theta_G)$ matches the corresponding crowd image. The input of the discriminator is a tensor in which the crowd image $X_i$ is stacked with either the generated density map $G(X_i; \theta_G)$ or the real density map label $Y_i$ along the third (channel) dimension. Finally, the loss function for the generator is a weighted sum of the mean square error and the adversarial loss:

$$L(\theta_G) = L_E(\theta_G) + \lambda L_A(\theta_G)$$
Based on a large number of experiments, the weight $\lambda$ is set to balance the relative contributions of the two loss values. In the actual training process, combining the two loss functions makes network training more stable and density map prediction more accurate.
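A sketch of the adversarial term and the weighted generator loss, reusing `content_loss` from the sketch above; the non-saturating −log D form and the default weight `lambda_adv` are illustrative assumptions (the patent's own value of λ is not reproduced here):

```python
import tensorflow as tf

def adversarial_loss(d_fake):
    """Adversarial term L_A = -mean(log D(X, G(X))) for the generator.

    d_fake: discriminator output probabilities on (image, generated map) pairs.
    """
    eps = 1e-8  # numerical stability
    return -tf.reduce_mean(tf.math.log(d_fake + eps))

def generator_loss(pred_density, gt_density, d_fake, lambda_adv=1e-2):
    """Weighted sum L = L_E + lambda * L_A (lambda_adv is a placeholder value)."""
    return content_loss(pred_density, gt_density) + lambda_adv * adversarial_loss(d_fake)
```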
5. Adversarial network joint training
The crowd density map prediction model based on the adversarial network differs from the original purpose of generative adversarial networks. First, the goal of crowd density estimation is to generate an accurate density map rather than a realistic natural image; therefore, the input of the crowd density estimation model presented herein is no longer random noise drawn from a normal distribution, but a crowd image. Second, because the crowd image contains the distribution information of the crowd scene, it is used as the conditioning information for the crowd density map in the proposed crowd density prediction model, and the crowd density map and the crowd image are input into the discriminator simultaneously. In the actual training process, a conditional adversarial network model is used herein, and the purpose of joint training is to estimate a high-quality crowd density map. The joint training objective of the generator and the discriminator is as follows:

$$\min_{\theta_G} \max_{\theta_D} \; \mathbb{E}_{X,Y}\big[\log D(X, Y; \theta_D)\big] + \mathbb{E}_{X}\big[\log\big(1 - D(X, G(X; \theta_G); \theta_D)\big)\big]$$
where $G$ denotes the generator: the generator network takes a crowd image $X$ as input, and the output of the network is the predicted crowd density map $G(X; \theta_G)$. $D$ denotes the discriminator, whose output is the probability that the input crowd density map is a real density map. The purpose of the discriminator is to distinguish the density map $G(X; \theta_G)$ generated by the generator from the real label density map $Y$, while the generator is trained to produce high-quality density maps that the discriminator cannot distinguish from real ones.
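One alternating update of this minimax objective could look like the following TensorFlow 2 sketch; the `generator`, `discriminator`, and optimizer objects are assumed Keras models/optimizers, and `generator_loss` comes from the sketch above.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()

@tf.function
def train_step(generator, discriminator, g_opt, d_opt, image, gt_density, lambda_adv=1e-2):
    # Discriminator step: real (image, label map) pairs -> 1, generated pairs -> 0.
    with tf.GradientTape() as d_tape:
        fake_density = generator(image, training=True)
        d_real = discriminator(tf.concat([image, gt_density], axis=-1), training=True)
        d_fake = discriminator(tf.concat([image, fake_density], axis=-1), training=True)
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))

    # Generator step: content loss plus adversarial loss that tries to fool D.
    with tf.GradientTape() as g_tape:
        fake_density = generator(image, training=True)
        d_fake = discriminator(tf.concat([image, fake_density], axis=-1), training=True)
        g_loss = generator_loss(fake_density, gt_density, d_fake, lambda_adv)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))
    return d_loss, g_loss
```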
The crowd counting model based on the generative adversarial network performs crowd density prediction in an adversarial training manner. The generative model and the discriminative model are trained by joint alternating iteration to optimize the minimax problem. The generator network is trained to produce accurate crowd density maps that fool the discriminator; conversely, the discriminator is trained to distinguish the generated density maps from the real density map labels. Meanwhile, the output of the discriminator provides the generator with feedback on density map localization and prediction accuracy. The two networks are trained in competition to enhance the quality of generation until the samples produced by the generator can no longer be correctly judged by the discriminator. After the adversarial loss is introduced, the proposed crowd density detection algorithm uses adversarial training to make the convolutional neural network generate higher-quality density maps, thereby improving the accuracy of crowd counting.
Drawings
Figure 1 is a structure diagram of the multi-scale generative adversarial network.
Detailed Description
The problem the invention addresses is: given a frame from a crowd image or video, estimate the crowd density in each region of the image and the total number of people.
The structure of the multi-scale convolutional neural network is shown in the generator part of Figure 1. In the first three convolution blocks of the network, multi-scale feature extraction is performed on Conv-1, Conv-2 and Conv-3 by a multi-scale convolution module (Inception-style) that uses convolution kernels of three different sizes to extract deep features, so that each multi-scale convolution module produces a multi-scale representation of the deep features. To make the feature maps of the different scales consistent in size, feature maps of different sizes are pooled to a uniform size: Conv-1 uses two pooling layers and Conv-2 uses one pooling layer, so that their outputs finally match the size of Conv-3. The features of different scales from the different levels are then fed into the Conv-4 convolution layer, where feature fusion is performed with a 1×1 convolution kernel. Finally, the three features of different scales are fused in the network, and density map regression is performed on the fused feature map. The network greatly improves the detection of small-scale pedestrians in high-density crowd scenes and thus improves the crowd density map prediction.
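As an illustration only, the following Keras sketch mirrors the structure described above; the kernel sizes (3×3, 5×5, 7×7), the channel widths, and the final upsampling back to input resolution are assumptions, since the patent text does not reproduce these values.

```python
import tensorflow as tf
from tensorflow.keras import layers

def multi_scale_block(x, filters):
    """Inception-style multi-scale module: parallel convolutions with kernels of
    three different sizes, concatenated along the channel dimension."""
    branches = [layers.Conv2D(filters, k, padding='same', activation='relu')(x)
                for k in (3, 5, 7)]  # assumed kernel sizes
    return layers.Concatenate()(branches)

def build_generator(filters=(16, 32, 64)):
    """Sketch of the multi-scale generator: Conv-1/2/3 blocks with multi-scale
    modules, pooling to a common size, 1x1 fusion (Conv-4), density regression."""
    inp = layers.Input(shape=(None, None, 3))
    c1 = multi_scale_block(inp, filters[0])                      # Conv-1
    c2 = multi_scale_block(layers.MaxPool2D(2)(c1), filters[1])  # Conv-2
    c3 = multi_scale_block(layers.MaxPool2D(2)(c2), filters[2])  # Conv-3
    # Pool Conv-1 twice and Conv-2 once so all maps match Conv-3's size.
    f1 = layers.MaxPool2D(4)(c1)
    f2 = layers.MaxPool2D(2)(c2)
    fused = layers.Concatenate()([f1, f2, c3])
    fused = layers.Conv2D(64, 1, activation='relu')(fused)       # Conv-4: 1x1 fusion
    dmap = layers.Conv2D(1, 1, activation='relu')(fused)         # density map regression
    # Assumed: upsample back to input resolution so the map can be stacked with the image.
    dmap = layers.UpSampling2D(4, interpolation='bilinear')(dmap)
    return tf.keras.Model(inp, dmap, name='ms_generator')
```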
The multi-scale generative adversarial network model incorporating the discriminator is shown in Figure 1. The model mainly comprises two parts, a generator and a discriminator, where the generator is the multi-scale convolutional neural network introduced above. The generator takes a crowd image as input and outputs the predicted crowd density map; the obtained density map and the crowd image are then stacked and input into the discriminator simultaneously, and the discriminator is trained to distinguish whether its input is a generated density map or a real density map. Meanwhile, because the crowd image is stacked into the input, the discriminator must also judge whether the generated density map matches the crowd image.
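A matching discriminator sketch that takes the four-channel image/density-map stack and outputs a single matching probability; the layer widths are assumptions, as the patent does not reproduce the discriminator's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_discriminator():
    """Sketch: the input stacks the RGB crowd image with a single-channel density
    map along the channel axis (4 channels total); the output is P(real pair)."""
    inp = layers.Input(shape=(None, None, 4))
    x = inp
    for width in (32, 64, 128):  # assumed widths
        x = layers.Conv2D(width, 3, strides=2, padding='same', activation='relu')(x)
    x = layers.GlobalAveragePooling2D()(x)
    prob = layers.Dense(1, activation='sigmoid')(x)
    return tf.keras.Model(inp, prob, name='discriminator')
```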
In the experiments, training is performed on an NVIDIA GeForce GTX TITAN X graphics card using the TensorFlow deep learning framework. The whole network is trained with stochastic gradient descent (SGD), and the Adam algorithm is used to optimize the network parameters, with the learning rate set to a fixed value and the momentum set to 0.9. The parameters of both the generator and the discriminator are initialized with normal distribution functions. Because the training dataset used here is relatively small, the batch size is set to 1. During training, the generator and the discriminator are optimized by alternating iterations: the generator is first trained for 20 epochs on the mean square loss alone, then the discriminator is added and the two networks are alternately optimized for 100 epochs. The input of the discriminator is given in tensor form; the tensor mainly consists of the RGB three-channel original image and the single-channel density map, giving a four-channel tensor.
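The two-stage schedule described above (20 epochs of MSE-only pre-training, then 100 epochs of alternating optimization, batch size 1) could be tied together as follows; `train_ds`, the learning rate, and the helper functions from the earlier sketches are assumed placeholders rather than the patent's actual code.

```python
import tensorflow as tf

# Assumed: train_ds yields (image, gt_density) pairs with batch size 1, and
# build_generator / build_discriminator / content_loss / train_step come from
# the earlier sketches.
generator, discriminator = build_generator(), build_discriminator()
g_opt = tf.keras.optimizers.Adam(beta_1=0.9)  # momentum term 0.9; learning rate not reproduced
d_opt = tf.keras.optimizers.Adam(beta_1=0.9)

# Stage 1: pre-train the generator on the pixel-level MSE alone for 20 epochs.
for epoch in range(20):
    for image, gt_density in train_ds:
        with tf.GradientTape() as tape:
            loss = content_loss(generator(image, training=True), gt_density)
        grads = tape.gradient(loss, generator.trainable_variables)
        g_opt.apply_gradients(zip(grads, generator.trainable_variables))

# Stage 2: alternate generator and discriminator updates for 100 epochs.
for epoch in range(100):
    for image, gt_density in train_ds:
        train_step(generator, discriminator, g_opt, d_opt, image, gt_density)
```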
The invention is compared with other methods on the UCF_CC_50 dataset. The evaluation criteria are the Mean Absolute Error (MAE):

$$\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \lvert z_i - \hat{z}_i \rvert$$

and the Mean Square Error (MSE):

$$\mathrm{MSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (z_i - \hat{z}_i)^2}$$

where $N$ is the number of test images, $z_i$ is the actual number of people in the $i$-th image, and $\hat{z}_i$ is the number of people output for the $i$-th image by the proposed network; these metrics measure the accuracy of the algorithm. On the UCF_CC_50 dataset, the invention is compared with existing algorithms, as shown in the comparison table (MS-GAN denotes the algorithm of the invention):
the experimental performance comparison in the table shows that the method is better than MCNN and CrowdNet in accuracy and stability, and MS-GAN is the best in a plurality of crowd counting algorithms based on convolutional neural networks on MSE and MAE, so that the number of people estimation errors of the MS-GAN in a plurality of scenes are relatively average, and the method has relatively higher stability. The quality of the predicted crowd density map is obviously superior to other crowd counting methods based on convolutional neural networks.

Claims (1)

1. A crowd counting method based on a multi-scale generative adversarial network, characterized by comprising the following specific steps:
1) Crowd scene Gaussian kernel density map:
converting the given head coordinate data into the form of a crowd density distribution map, where, in a crowd image of the dataset, a head annotated at pixel $x_i$ is represented by a discrete delta function $\delta(x - x_i)$, so that the positions of the $N$ heads in each image may be written as:

$$H(x) = \sum_{i=1}^{N} \delta(x - x_i)$$
to convert the head position function into a continuous density function, it is convolved with a Gaussian kernel $G_{\sigma}(x)$ to obtain the density function:

$$F(x) = H(x) * G_{\sigma}(x) = \sum_{i=1}^{N} \delta(x - x_i) * G_{\sigma}(x)$$
2) Constructing the multi-scale generative adversarial network:
the multiscale generation countermeasure network model structure mainly comprises two parts: the system comprises a generator and a discriminator, wherein the generator is a multi-scale convolutional neural network, the generator takes crowd images as input and outputs the crowd images as predicted crowd density images, then the obtained density images and the crowd images are overlapped and input into the discriminator at the same time, the discriminator is trained to discriminate whether the generated density images or real density images are input, and meanwhile, the discriminator also needs to discriminate whether the generated density images are matched with the crowd images or not due to the overlapped input of the crowd images;
3) Content loss function design:
the Euclidean distance between the predicted density map and the real density map is calculated as the loss function of the network by adopting a loss function based on the pixel level, the loss function adopts the average square error Mean Square Error of the pixel level, and MSE:
where $G(X_i; \theta_G)$ denotes the density map generated by the generator, the parameter $\theta_G$ denotes the parameters of the generator network model, $X_i$ denotes the $i$-th crowd image, $Y_i$ denotes the real label density map of crowd image $X_i$, and $N$ is the number of all training images;
4) Adversarial loss function design:
an additional contrast Loss function is used to improve the quality of the generated density map, and the contrast Loss, the generalized Loss function, is expressed as follows:
where $D(X_i, G(X_i; \theta_G))$ represents the degree to which the predicted density map $G(X_i; \theta_G)$ matches the corresponding crowd image; the input of the discriminator is a tensor in which the crowd image $X_i$ is stacked with either the generated density map $G(X_i; \theta_G)$ or the real density map label $Y_i$ along the third (channel) dimension; finally, the loss function for the generator is a weighted sum of the mean square error and the adversarial loss, as follows:

$$L(\theta_G) = L_E(\theta_G) + \lambda L_A(\theta_G)$$
based on a large number of experiments, the weight $\lambda$ is set to balance the two loss values; combining the two loss functions makes the training of the network more stable and the prediction of the density map more accurate;
5) Adversarial network joint training:
the condition countermeasure network model is adopted, so that in order to estimate a high-quality crowd density map, a joint training formula of a generator and a discriminator is as follows:
where $G$ denotes the generator: the generator network takes a crowd image $X$ as input, and the output of the network is the predicted crowd density map $G(X; \theta_G)$; $D$ denotes the discriminator, whose output is the probability that the input crowd density image is a real density map; the purpose of the discriminator is to distinguish the density map $G(X; \theta_G)$ generated by the generator from the real label density map $Y$, while the generator is trained to produce high-quality density maps that the discriminator cannot distinguish from real ones.
CN201811356818.1A 2018-11-15 2018-11-15 Crowd counting method based on multiscale generation countermeasure network Active CN111191667B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811356818.1A CN111191667B (en) 2018-11-15 2018-11-15 Crowd counting method based on multiscale generation countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811356818.1A CN111191667B (en) 2018-11-15 2018-11-15 Crowd counting method based on multiscale generation countermeasure network

Publications (2)

Publication Number Publication Date
CN111191667A CN111191667A (en) 2020-05-22
CN111191667B true CN111191667B (en) 2023-08-18

Family

ID=70707024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811356818.1A Active CN111191667B (en) 2018-11-15 2018-11-15 Crowd counting method based on multiscale generation countermeasure network

Country Status (1)

Country Link
CN (1) CN111191667B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832413B (en) * 2020-06-09 2021-04-02 天津大学 People flow density map estimation, positioning and tracking method based on space-time multi-scale network
CN111898903A (en) * 2020-07-28 2020-11-06 北京科技大学 Method and system for evaluating uniformity and comprehensive quality of steel product
CN112818945A (en) * 2021-03-08 2021-05-18 北方工业大学 Convolutional network construction method suitable for subway station crowd counting
CN112818944A (en) * 2021-03-08 2021-05-18 北方工业大学 Dense crowd counting method for subway station scene
CN113392779A (en) * 2021-06-17 2021-09-14 中国工商银行股份有限公司 Crowd monitoring method, device, equipment and medium based on generation of confrontation network
CN113313118A (en) * 2021-06-25 2021-08-27 哈尔滨工程大学 Self-adaptive variable-proportion target detection method based on multi-scale feature fusion
CN114463694B (en) * 2022-01-06 2024-04-05 中山大学 Pseudo-label-based semi-supervised crowd counting method and device
CN114648724B (en) * 2022-05-18 2022-08-12 成都航空职业技术学院 Lightweight efficient target segmentation and counting method based on generation countermeasure network
CN114972111B (en) * 2022-06-16 2023-01-10 慧之安信息技术股份有限公司 Dense crowd counting method based on GAN image restoration
CN115983142B (en) * 2023-03-21 2023-08-29 之江实验室 Regional population evolution model construction method based on depth generation countermeasure network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740945A (en) * 2016-02-04 2016-07-06 中山大学 People counting method based on video analysis
WO2016183766A1 (en) * 2015-05-18 2016-11-24 Xiaogang Wang Method and apparatus for generating predictive models
CN107862261A (en) * 2017-10-25 2018-03-30 天津大学 Image people counting method based on multiple dimensioned convolutional neural networks
CN108764085A (en) * 2018-05-17 2018-11-06 上海交通大学 Based on the people counting method for generating confrontation network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11024009B2 (en) * 2016-09-15 2021-06-01 Twitter, Inc. Super resolution using a generative adversarial network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016183766A1 (en) * 2015-05-18 2016-11-24 Xiaogang Wang Method and apparatus for generating predictive models
CN105740945A (en) * 2016-02-04 2016-07-06 中山大学 People counting method based on video analysis
CN107862261A (en) * 2017-10-25 2018-03-30 天津大学 Image people counting method based on multiple dimensioned convolutional neural networks
CN108764085A (en) * 2018-05-17 2018-11-06 上海交通大学 Based on the people counting method for generating confrontation network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of Crowd Counting Based on Convolutional Neural Networks; 吴淑窈; 刘希庚; 胡昌振; 王忠策; 科教导刊(上旬刊) (09); full text *

Also Published As

Publication number Publication date
CN111191667A (en) 2020-05-22

Similar Documents

Publication Publication Date Title
CN111191667B (en) Crowd counting method based on multiscale generation countermeasure network
Pervaiz et al. Hybrid algorithm for multi people counting and tracking for smart surveillance
CN108830252B (en) Convolutional neural network human body action recognition method fusing global space-time characteristics
Sun et al. Benchmark data and method for real-time people counting in cluttered scenes using depth sensors
Chung et al. Image-based learning to measure traffic density using a deep convolutional neural network
Shahzad et al. A smart surveillance system for pedestrian tracking and counting using template matching
CN105022982A (en) Hand motion identifying method and apparatus
US20120219213A1 (en) Embedded Optical Flow Features
CN110765833A (en) Crowd density estimation method based on deep learning
CN104050685B (en) Moving target detecting method based on particle filter visual attention model
CN107818307B (en) Multi-label video event detection method based on LSTM network
CN105138982A (en) Crowd abnormity detection and evaluation method based on multi-characteristic cluster and classification
CN107301376B (en) Pedestrian detection method based on deep learning multi-layer stimulation
CN101866429A (en) Training method of multi-moving object action identification and multi-moving object action identification method
Luo et al. Traffic analytics with low-frame-rate videos
CN110909672A (en) Smoking action recognition method based on double-current convolutional neural network and SVM
Ma et al. Scene invariant crowd counting using multi‐scales head detection in video surveillance
Saif et al. Moment features based violence action detection using optical flow
CN104123569B (en) Video person number information statistics method based on supervised learning
Pathak et al. Applying transfer learning to traffic surveillance videos for accident detection
CN105893967B (en) Human behavior classification detection method and system based on time sequence retention space-time characteristics
Lamba et al. A large scale crowd density classification using spatio-temporal local binary pattern
Trung Estimation of Crowd Density Using Image Processing Techniques with Background Pixel Model and Visual Geometry Group
Khan et al. Multiple moving vehicle speed estimation using Blob analysis
Ma et al. Crowd estimation using multi-scale local texture analysis and confidence-based soft classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant