CN111126152A - Video-based multi-target pedestrian detection and tracking method - Google Patents
- Publication number: CN111126152A
- Application number: CN201911165287.2A
- Authority: CN (China)
- Prior art keywords: target, detection, tracking, pedestrian, frame
- Prior art date: 2019-11-25
- Legal status: Granted (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/22—Matching criteria, e.g. proximity measures
- G06F18/23213—Non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions, with fixed number of clusters, e.g. K-means clustering
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
- Y02T10/40—Engine management systems
Abstract
The invention discloses a video-based multi-target pedestrian detection and tracking method. It uses the YOLOv3 target detection algorithm, which offers good speed and accuracy, and overcomes the influence of illumination and viewing-angle changes by collecting video images from different scenes and training a detection model, so that multiple pedestrian targets are detected efficiently. Multi-target pedestrians are then tracked effectively with a method based on the Kalman filtering algorithm and the Hungarian algorithm, which avoids the frequent repeated detection of targets in multi-target detection, thereby realizing a multi-target pedestrian tracking method with the Deep-SORT algorithm at its core. The method provides both efficient multi-target pedestrian detection and efficient multi-target pedestrian tracking.
Description
Technical Field
The present invention relates to a computer vision system, and more particularly, to a computer vision based detection and tracking method.
Background
Pedestrian detection and tracking refers to the process of detecting the positions of pedestrians in a video sequence, continuously tracking the moving pedestrian targets, and determining their motion trajectories. Pedestrian detection is the basis and premise of pedestrian tracking, so a good pedestrian detection algorithm provides strong support and a guarantee for the pedestrian tracking algorithm. Pedestrian detection belongs to the field of target detection, and in recent years, thanks to the development of target detection technology, detection-based multi-target tracking has become the main approach to multi-target tracking. The tracking process of a detection-based multi-target tracking method can be converted into a data association problem. Such methods depend heavily on the detection results: occlusion between a complex background and the targets strongly affects target detection and, in turn, the data association, while the target model used to establish an accurate correspondence between multiple detections and multiple tracks also has a large influence on the multi-target tracking effect. Therefore, providing a more robust and more accurate pedestrian detection and tracking algorithm is an urgent problem for those skilled in the art.
Disclosure of Invention
In order to solve the above problems, the present invention provides a video-based multi-target pedestrian detection and tracking method that overcomes the problem of repeated detection and realizes efficient pedestrian detection and efficient pedestrian tracking.
In order to achieve the purpose, the invention provides the following technical scheme: a video-based multi-target pedestrian detection and tracking method is characterized by comprising the following steps:
Step one: train a pedestrian detection model with the collected video images. Each frame of the video is first separated, then the trained detection model directly detects, at the image level, the confidence and bounding-box information of all pedestrian targets; when the confidence exceeds a set threshold, the detection is regarded as a pedestrian target and the target box is kept. Redundant boxes are removed with a non-maximum suppression algorithm to obtain the final detection target candidate boxes.
Step two: extract, from the pedestrian detection network, the features of the region corresponding to each pedestrian target candidate box obtained by the pedestrian detection algorithm.
Step three: based on the Kalman filtering algorithm, compute the distance between the position of each tracked target predicted by the Kalman filter from its mean track and the detection target candidate boxes obtained in steps one and two. The region with the smallest distance is the predicted position region of the target, which yields the set of predicted positions for each target.
Step four: match the tracked targets with the detected targets using the Hungarian algorithm. For targets matched in the current video frame, the Kalman tracker is updated with the associated detection box, the state is updated, and the state-update value is output as the tracking box of the current frame. For targets not matched in the current video frame, the tracker is re-initialized. The tracking state of each target is updated continuously, thereby realizing target tracking.
Step five: continuously detect new targets; among the candidate images produced by the pedestrian detection network, find the one with the highest matching degree to a disappeared target, and if no match is found, assign a new ID. The feature matrix is updated in time to facilitate the computation for the next frame.
Step six: repeat the above steps to realize the multi-target pedestrian tracking method with the Deep-SORT algorithm at its core, and finally output the motion trajectory of each detected target.
Through the above scheme, pedestrian positions and information are detected from the video sequence with the YOLOv3 algorithm, the moving pedestrian targets are continuously tracked with a multi-target pedestrian tracking method built around the Deep-SORT algorithm, and their motion trajectories are determined, so that the targets are finally tracked accurately.
Further, in the training stage, the YOLOv3 algorithm starts from model parameters pre-trained on ImageNet, obtains the number and values of anchors suitable for pedestrian detection through k-means clustering, and then trains the YOLOv3 model on the constructed pedestrian data set.
This mainly provides a good network initialization, which prevents the subsequent training from getting trapped in a local minimum and also speeds up the convergence of the network.
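For illustration only, the anchor selection described above can be sketched as follows; the use of 1 - IoU as the clustering distance, the value k = 9 and the synthetic box sizes are assumptions made for the example and are not prescribed by the invention.

```python
# Illustrative sketch: k-means clustering of ground-truth box widths/heights to pick anchors.
# Assumptions: distance = 1 - IoU between boxes centred at the origin, k = 9, synthetic data.
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (N, 2) boxes and (k, 2) anchors using widths/heights only."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes_wh, k=9, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    anchors = boxes_wh[rng.choice(len(boxes_wh), k, replace=False)].copy()
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes_wh, anchors), axis=1)   # nearest anchor by IoU
        for j in range(k):
            members = boxes_wh[assign == j]
            if len(members):
                anchors[j] = members.mean(axis=0)               # move cluster centre
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]   # sorted by area

# stand-in for the widths/heights of labelled pedestrian boxes in the constructed data set
boxes_wh = np.abs(np.random.default_rng(1).normal([40.0, 100.0], [15.0, 30.0], size=(500, 2)))
print(kmeans_anchors(boxes_wh))   # nine anchor (width, height) pairs
```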
Further, the Kalman filtering algorithm uses the historical track of each tracked target to model it and predict its position state in the next video frame; a cost matrix is established by fusing the spatial position information and the appearance depth features of the detected targets to compute the degree of association between the detections and the current predictions, and the tracking result of each target in the current frame is solved on the basis of this cost matrix.
Through this scheme, the Deep-SORT algorithm is introduced to keep tracking the detected targets over time, which overcomes the drawback that YOLOv3 ignores the correlation between consecutive video frames when detecting targets, alleviates missed detections of targets across frames in video-based YOLOv3 detection, and also mitigates problems such as target occlusion to a certain extent.
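As a non-limiting sketch of the fused cost matrix described above, each entry below combines the Mahalanobis distance between a Kalman prediction and a detection with the cosine distance between their appearance depth features; the box format, feature dimension, weight lam and chi-square gate are assumptions, and the Hungarian algorithm (sketched under B2 below) then solves this matrix.

```python
# Illustrative fused cost matrix: spatial term (Mahalanobis) + appearance term (cosine).
import numpy as np

def mahalanobis_sq(pred_mean, pred_cov, det_box):
    d = det_box - pred_mean
    return float(d @ np.linalg.inv(pred_cov) @ d)

def cosine_distance(a, b):
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def fused_cost_matrix(tracks, detections, lam=0.5, gate=9.4877):   # gate ~ chi2(0.95, 4 dof)
    cost = np.zeros((len(tracks), len(detections)))
    for i, t in enumerate(tracks):
        for j, d in enumerate(detections):
            dm = mahalanobis_sq(t["mean"], t["cov"], d["box"])      # spatial association
            da = cosine_distance(t["feature"], d["feature"])        # appearance association
            cost[i, j] = 1e5 if dm > gate else lam * dm + (1 - lam) * da
    return cost

# toy example: boxes as [centre_x, centre_y, aspect_ratio, height], 128-d appearance features
tracks = [{"mean": np.array([100., 200., 0.4, 120.]), "cov": np.eye(4) * 5.0,
           "feature": np.ones(128) / np.sqrt(128)}]
detections = [{"box": np.array([102., 198., 0.4, 118.]), "feature": np.ones(128) / np.sqrt(128)},
              {"box": np.array([400., 50., 0.4, 90.]), "feature": -np.ones(128) / np.sqrt(128)}]
print(fused_cost_matrix(tracks, detections))   # the first detection gets a much lower cost
```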
Further, the bounding boxes in the above steps are centered on each pixel of the target, and a plurality of bounding boxes with different sizes and aspect ratios are generated.
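A minimal sketch of this bounding-box generation is given below; it lays boxes over a regular grid of centres, and the stride, box sizes and aspect ratios are illustrative assumptions.

```python
# Illustrative generation of candidate boxes with several sizes and aspect ratios per centre.
import numpy as np

def generate_boxes(img_w, img_h, stride=32, sizes=(64, 128), ratios=(0.5, 1.0, 2.0)):
    boxes = []
    for cy in range(stride // 2, img_h, stride):
        for cx in range(stride // 2, img_w, stride):
            for s in sizes:
                for r in ratios:
                    w, h = s * np.sqrt(r), s / np.sqrt(r)     # same area, different shape
                    boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(boxes)

print(generate_boxes(416, 416).shape)   # (13 * 13 * 2 * 3, 4) candidate boxes
```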
Further, the information propagation involved in the above steps uses fast forward propagation through a neural network, a mathematical model for information processing that applies a structure similar to the synaptic connections of the brain.
The invention discloses a video-based multi-target pedestrian detection and tracking method which, by combining several algorithms, provides a robust and more accurate multi-target pedestrian detection and tracking algorithm, overcomes the repeated-detection problem common in multi-target detection, and offers efficient multi-target pedestrian detection and efficient multi-target pedestrian tracking.
Drawings
Fig. 1 is a schematic diagram of the principle of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The invention discloses a video-based multi-target pedestrian detection and tracking method, which is characterized by comprising the following steps of:
Step one: train a pedestrian detection model with the collected video images. Each frame of the video is first separated, then the trained detection model directly detects, at the image level, the confidence and bounding-box information of all pedestrian targets; when the confidence exceeds a set threshold, the detection is regarded as a pedestrian target and the target box is kept. Redundant boxes are removed with a non-maximum suppression algorithm to obtain the final detection target candidate boxes.
Step two: extract, from the pedestrian detection network, the features of the region corresponding to each pedestrian target candidate box obtained by pedestrian detection.
Step three: based on the Kalman filtering algorithm, compute the distance between the position of each tracked target predicted by the Kalman filter from its mean track and the detection target candidate boxes obtained in steps one and two. The region with the smallest distance is the predicted position region of the target, which yields the set of predicted positions for each target.
Step four: match the tracked targets with the detected targets using the Hungarian algorithm. For targets matched in the current frame, the Kalman tracker is updated with the associated detection box, the state is updated, and the state-update value is output as the tracking box of the current frame. For targets not matched in the current frame, the tracker is re-initialized. The tracking state of each target is updated continuously, thereby realizing target tracking.
Step five: continuously detect new targets; among the candidate images produced by the pedestrian detection network, find the one with the highest matching degree to a disappeared target, and if no match is found, assign a new ID. The feature matrix is updated in time to facilitate the computation for the next frame.
Step six: repeat the above steps to realize the multi-target pedestrian tracking method with the Deep-SORT algorithm at its core, and finally output the motion trajectory of each detected target.
In the training stage, the YOLOv3 algorithm starts from model parameters pre-trained on ImageNet, obtains the number and values of anchors suitable for pedestrian detection through k-means clustering, and then trains the YOLOv3 model on the constructed pedestrian data set.
The Kalman filtering algorithm uses the historical track of each tracked target to model it and predict its position in the next video frame, and establishes a cost matrix by combining the spatial position information and the appearance depth features of the detected targets to compute the degree of association between the detections and the current observations.
The Hungarian algorithm solves the cost matrix established for the Kalman filtering algorithm to obtain the tracking result of each target in the current frame.
The bounding boxes in the above steps are centered on each pixel of the target, and a plurality of bounding boxes with different sizes and aspect ratios are generated.
The information propagation involved in the above steps uses fast forward propagation through a neural network, a mathematical model for information processing that applies a structure similar to the synaptic connections of the brain.
Based on the above scheme, the system of the present invention involves at least the following situations:
starting detection;
inputting a video image;
tracking target loss;
receiving an instruction;
Pedestrian detection method: the method adopts a YOLO-series target detection algorithm, which treats detection as a regression task. Instead of extracting candidate target regions in advance, it directly regresses the bounding boxes and classification probabilities of the detection targets at the image level through a single fast forward pass of the neural network over the input image, so the detection speed is very high and the requirements of video-based multi-target pedestrian detection are met. The YOLOv3 algorithm, which offers a good overall balance of speed and accuracy, is therefore selected; surveillance video images from different scenes are collected, the number and values of anchors suitable for pedestrian detection are obtained by the k-means clustering method, and a YOLOv3-based target detection model is trained, so that pedestrians in surveillance video can be detected efficiently and the subsequent pedestrian tracking can be realized:
A1. constructing pedestrian data sets with labeled information
Surveillance video images from different scenes are collected, covering various pedestrian postures, various illumination conditions, and different times of day; a data set constructed in this way reduces the influence of factors such as illumination and complex backgrounds. The data set is further expanded by image mirroring, angle rotation, size scaling, cropping, random noise addition and similar methods.
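For illustration, the listed expansion methods could be realized as in the following OpenCV/NumPy sketch; the rotation angle, scaling factor, crop margin and noise level are assumptions, and in practice the box labels must be transformed together with the images.

```python
# Illustrative data-set expansion: mirror, rotate, scale, crop and add random noise.
import cv2
import numpy as np

def augment(image):
    out = {}
    h, w = image.shape[:2]
    out["mirror"] = cv2.flip(image, 1)                                 # horizontal mirror
    M = cv2.getRotationMatrix2D((w / 2, h / 2), 10, 1.0)               # rotate by 10 degrees
    out["rotate"] = cv2.warpAffine(image, M, (w, h))
    out["scale"] = cv2.resize(image, None, fx=0.8, fy=0.8)             # size scaling
    out["crop"] = image[h // 10: h - h // 10, w // 10: w - w // 10]    # central crop
    noise = np.random.normal(0, 10, image.shape).astype(np.float32)
    out["noise"] = np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    return out

samples = augment(np.zeros((480, 640, 3), dtype=np.uint8))
print({k: v.shape for k, v in samples.items()})
```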
A2. Training model
Training continues from the model parameters pre-trained on ImageNet. The fully connected layer of the network model is first modified so that two output classes, pedestrian and non-pedestrian, are produced; the number and values of anchors suitable for pedestrian detection are obtained by the k-means clustering method; and a YOLOv3-based model is trained on the constructed pedestrian data set. The pre-training is used mainly to obtain a good network initialization, so that the subsequent training avoids getting trapped in a local minimum and the network converges faster.
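The following sketch only illustrates the idea of continuing from pre-trained parameters and rebuilding the prediction layer for the two output classes; the placeholder backbone, layer sizes and optimizer settings are assumptions and do not correspond to any particular YOLOv3 code base.

```python
# Hedged sketch: pre-trained backbone + re-initialised prediction head for 2 classes.
import torch
import torch.nn as nn

NUM_CLASSES = 2          # pedestrian, non-pedestrian
ANCHORS_PER_SCALE = 3    # anchors assigned to each detection scale by k-means clustering

def rebuild_head(in_channels):
    # Each YOLOv3-style scale predicts anchors * (4 box offsets + 1 objectness + classes) channels.
    return nn.Conv2d(in_channels, ANCHORS_PER_SCALE * (5 + NUM_CLASSES), kernel_size=1)

class TinyDetector(nn.Module):
    """Stand-in for a YOLOv3-style network: pre-trained backbone plus new head."""
    def __init__(self, backbone, head_in_channels=1024):
        super().__init__()
        self.backbone = backbone            # weights assumed to come from ImageNet pre-training
        self.head = rebuild_head(head_in_channels)

    def forward(self, x):
        return self.head(self.backbone(x))

backbone = nn.Sequential(nn.Conv2d(3, 1024, 3, stride=32, padding=1), nn.ReLU())  # placeholder
model = TinyDetector(backbone)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
print(model(torch.zeros(1, 3, 416, 416)).shape)   # (1, 21, 13, 13)
```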
A3. Testing
Pedestrian detection is performed on the test data set using the trained model;
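An illustrative test-time sketch is given below: a hypothetical run_detector stand-in returns boxes and scores for a frame, detections below the confidence threshold are discarded, and redundant boxes are removed by non-maximum suppression; the thresholds are assumptions.

```python
# Illustrative inference step: confidence filtering followed by non-maximum suppression.
import numpy as np

def nms(boxes, scores, iou_thresh=0.45):
    """boxes: (N, 4) as x1, y1, x2, y2; returns indices of kept boxes."""
    order, keep = scores.argsort()[::-1], []
    while order.size:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter + 1e-12)
        order = order[1:][iou <= iou_thresh]          # drop boxes overlapping the kept one
    return keep

def detect_frame(frame, run_detector, conf_thresh=0.5):
    boxes, scores = run_detector(frame)               # hypothetical: (N, 4) boxes, (N,) scores
    mask = scores > conf_thresh                       # keep only confident pedestrian boxes
    boxes, scores = boxes[mask], scores[mask]
    return boxes[nms(boxes, scores)]                  # final detection target candidate boxes
```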
the method comprises the following steps of pedestrian tracking: the invention provides a multi-target pedestrian tracking method taking a Deep-SORT algorithm as a core, wherein the specific algorithm structure of the Deep-SORT algorithm can be divided into two parts:
Kalman filtering and the Hungarian algorithm:
B1. Kalman filtering
Based on the position, size and other information of the targets in the previous frame obtained by the target detection algorithm, Kalman filtering predicts these quantities forward, yielding the tracked position, size and other information of the targets in the next frame. With this tracking part added, each frame provides both target detection information from the detection algorithm and target tracking information from the tracking algorithm. Because the objects described by the two kinds of information may be the same, outputting all of them as target information would report the same target repeatedly and degrade the performance of the whole algorithm;
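A minimal constant-velocity Kalman filter sketch for this prediction step is shown below; the state layout (box plus velocities) and the noise magnitudes are assumptions for illustration.

```python
# Illustrative constant-velocity Kalman filter over a bounding box state.
import numpy as np

class BoxKalman:
    def __init__(self, box):                                   # box = [cx, cy, w, h]
        self.x = np.r_[np.asarray(box, float), np.zeros(4)]    # state: box + box velocities
        self.P = np.eye(8) * 10.0
        self.F = np.eye(8); self.F[:4, 4:] = np.eye(4)         # constant-velocity motion model
        self.H = np.eye(4, 8)                                  # only the box is observed
        self.Q = np.eye(8) * 1e-2
        self.R = np.eye(4) * 1.0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4]                                      # predicted box for the next frame

    def update(self, box):
        y = np.asarray(box, float) - self.H @ self.x           # innovation from the matched detection
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)               # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(8) - K @ self.H) @ self.P

kf = BoxKalman([100, 200, 40, 120])
print(kf.predict()); kf.update([104, 202, 41, 121]); print(kf.predict())
```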
B2. Hungarian algorithm
The Hungarian algorithm performs data-association matching on these two kinds of information: through a chosen measurement rule, the relation between them is converted into a data representation, from which a data association matrix is constructed. The goal of the Hungarian algorithm is to find the optimal matching of the multiple targets in two consecutive frames and obtain the final detection and tracking result.
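For illustration, the association step can be sketched as follows, assuming 1 - IoU between tracking boxes and detection boxes as the measurement rule and an assumed cost threshold; pairs above the threshold remain unmatched so the same target is not reported twice.

```python
# Illustrative data association between tracking boxes and detection boxes.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):                                      # boxes as x1, y1, x2, y2
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (area + 1e-12)

def associate(track_boxes, det_boxes, max_cost=0.7):
    cost = np.array([[1.0 - iou(t, d) for d in det_boxes] for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)                   # Hungarian algorithm
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
    unmatched_tracks = set(range(len(track_boxes))) - {r for r, _ in matches}
    unmatched_dets = set(range(len(det_boxes))) - {c for _, c in matches}
    return matches, unmatched_tracks, unmatched_dets

tracks = [[100, 100, 150, 220], [300, 80, 340, 180]]
dets = [[102, 98, 152, 221], [500, 50, 540, 160]]
print(associate(tracks, dets))   # track 0 matches detection 0; track 1 and detection 1 stay unmatched
```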
The multi-target pedestrian tracking method introduces the Deep-SORT algorithm: it keeps tracking the detected targets over time by computing the Mahalanobis distance between the positions predicted by Kalman filtering and the detected positions, together with the cosine distance between the appearance features of the target boxes, and finally matches the multiple targets with the Hungarian algorithm. This overcomes the drawback that YOLOv3 ignores the correlation between consecutive video frames when detecting targets, alleviates missed detections across frames in video-based YOLOv3 detection, and mitigates target occlusion to a certain extent. The Hungarian algorithm associates the detection boxes with the tracking boxes and removes repeatedly marked target regions, solving the problem that detection and tracking may both mark the same target and thereby cause repeated detection.
Practical examples of efficient detection and tracking with repeated detection avoided are as follows:
If a newly detected target A is still detected at frame T, it is judged to be a newly appeared target: each such unassociated target is added to the tracking list and allocated a new ID; otherwise, target A is considered a false detection and is deleted from the tracking list;
When the detected multi-target pedestrian information contains an unassociated tracked target B (because target B has left the scene or the detector has failed to detect it), its state continues to be predicted and updated by Kalman filtering; if target B is not detected in T consecutive frames, it is deleted from the tracking list.
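The two examples above can be sketched as the following track-management rules; the values of T (MAX_MISSES) and the confirmation count are assumptions made for illustration.

```python
# Illustrative track lifecycle: new IDs for new targets, deletion of false detections
# and of tracks that stay unmatched for T consecutive frames.
from dataclasses import dataclass, field
from itertools import count

MAX_MISSES = 30      # "T" consecutive frames without a detection before deletion
CONFIRM_HITS = 3     # detections needed before a new track is considered a real target

_ids = count(1)

@dataclass
class Track:
    track_id: int = field(default_factory=lambda: next(_ids))
    hits: int = 1
    misses: int = 0
    confirmed: bool = False

def step(tracks, matched_ids, unmatched_detections):
    for t in tracks:
        if t.track_id in matched_ids:
            t.hits += 1; t.misses = 0
            t.confirmed = t.confirmed or t.hits >= CONFIRM_HITS
        else:
            t.misses += 1              # the Kalman prediction still runs while unmatched
    # drop unconfirmed tracks that lost their detection (likely false detections)
    # and confirmed tracks missing for more than MAX_MISSES frames (target left the scene)
    tracks = [t for t in tracks
              if not (t.misses > 0 and not t.confirmed) and t.misses <= MAX_MISSES]
    tracks += [Track() for _ in unmatched_detections]   # new IDs for newly appeared targets
    return tracks

tracks = step([], matched_ids=set(), unmatched_detections=[0, 1])
print([t.track_id for t in tracks])   # two freshly assigned IDs
```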
The above description only illustrates preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalents, improvements and the like made within the design concept of the present invention shall be included in the scope of the present invention.
Claims (6)
1. A video-based multi-target pedestrian detection and tracking method is characterized by comprising the following steps:
step one: training a pedestrian detection model with the acquired video images, firstly separating each frame of the video images, then directly detecting, at the image level, the confidence and bounding-box information of all pedestrian targets with the trained detection model, and, when the confidence is greater than a set threshold, regarding the detection as a pedestrian target and keeping the target box; removing redundant boxes with a non-maximum suppression algorithm to obtain the final detection target candidate boxes;
step two: extracting, in the pedestrian detection network, the features of the region corresponding to each pedestrian target candidate box obtained by the pedestrian detection algorithm;
step three: based on a Kalman filtering algorithm, calculating the distance between the position of each tracked target predicted by the Kalman filter from its mean track and the detection target candidate boxes obtained in steps one and two; the region with the smallest distance is the predicted position region of the target, thereby obtaining the set of predicted positions of each target;
step four: matching the tracked targets with the detected targets using the Hungarian algorithm; updating the Kalman tracker with the target detection box matched in the current video frame, updating the state, and outputting the state-update value as the tracking box of the current frame; re-initializing the tracker for targets not matched in the current video frame; continuously updating the tracking state of each target so as to realize target tracking;
step five: continuously detecting new targets, finding, among the candidate images obtained by the pedestrian detection network, the image with the highest matching degree to a disappeared target, and, if no match is found, assigning a new ID; updating the feature matrix in time to facilitate the computation of the next frame;
step six: repeatedly executing the above steps to realize the multi-target pedestrian tracking method with the Deep-SORT algorithm at its core, and finally outputting the motion trajectory of each detected target.
2. The video-based multi-target pedestrian detection and tracking method according to claim 1, wherein, in the training stage, the YOLOv3 algorithm starts from model parameters pre-trained on ImageNet, obtains the number and values of anchors suitable for pedestrian detection through k-means clustering, and trains a YOLOv3-based model on the constructed pedestrian data set.
3. The video-based multi-target pedestrian detection and tracking method according to claim 1, wherein the Kalman filtering algorithm models each tracked target and predicts its position state in the next video frame using its historical track, and establishes the cost matrix by fusing the spatial position information and the appearance depth features of the detected targets to calculate the association between the detections and the current observations.
4. The video-based multi-target pedestrian detection and tracking method according to claim 3, wherein the Hungarian algorithm performs data association between the detection boxes and the tracking boxes, and uses the established cost matrix to calculate the optimal matching of the tracked targets in the current video frame, so as to track the targets accurately.
5. The video-based multi-target pedestrian detection and tracking method of claim 1, wherein the information propagation involved in the steps is fast forward propagation through a neural network, a mathematical model for information processing that applies a structure similar to the synaptic connections of the brain.
6. The video-based multi-target pedestrian detection and tracking method of claim 1, wherein the bounding boxes of the step are centered around each pixel of the target, and a plurality of bounding boxes with different sizes and aspect ratios are generated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911165287.2A CN111126152B (en) | 2019-11-25 | 2019-11-25 | Multi-target pedestrian detection and tracking method based on video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111126152A true CN111126152A (en) | 2020-05-08 |
CN111126152B CN111126152B (en) | 2023-04-11 |
Family
ID=70496629
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911165287.2A Active CN111126152B (en) | 2019-11-25 | 2019-11-25 | Multi-target pedestrian detection and tracking method based on video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111126152B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170286774A1 (en) * | 2016-04-04 | 2017-10-05 | Xerox Corporation | Deep data association for online multi-class multi-object tracking |
CN108009473A (en) * | 2017-10-31 | 2018-05-08 | 深圳大学 | Based on goal behavior attribute video structural processing method, system and storage device |
CN110378259A (en) * | 2019-07-05 | 2019-10-25 | 桂林电子科技大学 | A kind of multiple target Activity recognition method and system towards monitor video |
CN110443210A (en) * | 2019-08-08 | 2019-11-12 | 北京百度网讯科技有限公司 | A kind of pedestrian tracting method, device and terminal |
Non-Patent Citations (1)
Title |
---|
Chen Zhihong et al., "Online multi-target tracking algorithm based on Kalman filtering and multiple information fusion", Information & Communications *
Cited By (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111738075A (en) * | 2020-05-18 | 2020-10-02 | 深圳奥比中光科技有限公司 | Joint point tracking method and system based on pedestrian detection |
CN112270827A (en) * | 2020-06-23 | 2021-01-26 | 北京航空航天大学 | Vehicle-road cooperative system and road pedestrian detection method |
CN113850839A (en) * | 2020-06-28 | 2021-12-28 | 中国电子科技网络信息安全有限公司 | Real-time multi-target tracking method |
CN111767847A (en) * | 2020-06-29 | 2020-10-13 | 佛山市南海区广工大数控装备协同创新研究院 | Pedestrian multi-target tracking method integrating target detection and association |
CN111860282A (en) * | 2020-07-15 | 2020-10-30 | 中国电子科技集团公司第三十八研究所 | Subway section passenger flow volume statistics and pedestrian retrograde motion detection method and system |
CN111598066A (en) * | 2020-07-24 | 2020-08-28 | 之江实验室 | Helmet wearing identification method based on cascade prediction |
CN114066931A (en) * | 2020-07-31 | 2022-02-18 | 复旦大学 | Image enhancement method using target tracking sequence |
CN111932588A (en) * | 2020-08-07 | 2020-11-13 | 浙江大学 | Tracking method of airborne unmanned aerial vehicle multi-target tracking system based on deep learning |
CN111932588B (en) * | 2020-08-07 | 2024-01-30 | 浙江大学 | Tracking method of airborne unmanned aerial vehicle multi-target tracking system based on deep learning |
CN112417946A (en) * | 2020-09-17 | 2021-02-26 | 国网天津静海供电有限公司 | Boundary crossing detection method and system for designated area of power construction site |
CN112200021B (en) * | 2020-09-22 | 2022-07-01 | 燕山大学 | Target crowd tracking and monitoring method based on limited range scene |
CN112200021A (en) * | 2020-09-22 | 2021-01-08 | 燕山大学 | Target crowd tracking and monitoring method based on limited range scene |
CN112528730A (en) * | 2020-10-20 | 2021-03-19 | 福州大学 | Cost matrix optimization method based on space constraint under Hungary algorithm |
CN112528730B (en) * | 2020-10-20 | 2022-06-10 | 福州大学 | Cost matrix optimization method based on space constraint under Hungary algorithm |
CN113205108A (en) * | 2020-11-02 | 2021-08-03 | 哈尔滨理工大学 | YOLOv 4-based multi-target vehicle detection and tracking method |
CN112418118A (en) * | 2020-11-27 | 2021-02-26 | 招商新智科技有限公司 | Method and device for detecting pedestrian intrusion under unsupervised bridge |
CN112488042A (en) * | 2020-12-15 | 2021-03-12 | 东南大学 | Pedestrian traffic bottleneck discrimination method and system based on video analysis |
CN112489090B (en) * | 2020-12-16 | 2024-06-04 | 影石创新科技股份有限公司 | Method for tracking target, computer readable storage medium and computer device |
CN112489090A (en) * | 2020-12-16 | 2021-03-12 | 影石创新科技股份有限公司 | Target tracking method, computer-readable storage medium and computer device |
CN112651994A (en) * | 2020-12-18 | 2021-04-13 | 零八一电子集团有限公司 | Ground multi-target tracking method |
CN112734800A (en) * | 2020-12-18 | 2021-04-30 | 上海交通大学 | Multi-target tracking system and method based on joint detection and characterization extraction |
CN112597877A (en) * | 2020-12-21 | 2021-04-02 | 中船重工(武汉)凌久高科有限公司 | Factory personnel abnormal behavior detection method based on deep learning |
CN112528925A (en) * | 2020-12-21 | 2021-03-19 | 深圳云天励飞技术股份有限公司 | Pedestrian tracking and image matching method and related equipment |
CN112528925B (en) * | 2020-12-21 | 2024-05-07 | 深圳云天励飞技术股份有限公司 | Pedestrian tracking and image matching method and related equipment |
CN112633162B (en) * | 2020-12-22 | 2024-03-22 | 重庆大学 | Pedestrian rapid detection and tracking method suitable for expressway external field shielding condition |
CN112633162A (en) * | 2020-12-22 | 2021-04-09 | 重庆大学 | Rapid pedestrian detection and tracking method suitable for expressway outfield shielding condition |
CN112669349B (en) * | 2020-12-25 | 2023-12-05 | 北京竞业达数码科技股份有限公司 | Passenger flow statistics method, electronic equipment and storage medium |
CN112669349A (en) * | 2020-12-25 | 2021-04-16 | 北京竞业达数码科技股份有限公司 | Passenger flow statistical method, electronic equipment and storage medium |
CN112633205A (en) * | 2020-12-28 | 2021-04-09 | 北京眼神智能科技有限公司 | Pedestrian tracking method and device based on head and shoulder detection, electronic equipment and storage medium |
CN112866643A (en) * | 2021-01-08 | 2021-05-28 | 中国船舶重工集团公司第七0七研究所 | Multi-target visual management system and method for key areas in ship |
CN112767711A (en) * | 2021-01-27 | 2021-05-07 | 湖南优美科技发展有限公司 | Multi-class multi-scale multi-target snapshot method and system |
CN112767711B (en) * | 2021-01-27 | 2022-05-27 | 湖南优美科技发展有限公司 | Multi-class multi-scale multi-target snapshot method and system |
CN112926474A (en) * | 2021-03-08 | 2021-06-08 | 商汤集团有限公司 | Behavior recognition and feature extraction method, device, equipment and medium |
CN112884810A (en) * | 2021-03-18 | 2021-06-01 | 沈阳理工大学 | Pedestrian tracking method based on YOLOv3 |
CN112884810B (en) * | 2021-03-18 | 2024-02-02 | 沈阳理工大学 | Pedestrian tracking method based on YOLOv3 |
CN113077496A (en) * | 2021-04-16 | 2021-07-06 | 中国科学技术大学 | Real-time vehicle detection and tracking method and system based on lightweight YOLOv3 and medium |
CN113158897A (en) * | 2021-04-21 | 2021-07-23 | 新疆大学 | Pedestrian detection system based on embedded YOLOv3 algorithm |
CN113259630A (en) * | 2021-06-03 | 2021-08-13 | 南京北斗创新应用科技研究院有限公司 | Multi-camera pedestrian track aggregation system and method |
CN113259630B (en) * | 2021-06-03 | 2021-09-28 | 南京北斗创新应用科技研究院有限公司 | Multi-camera pedestrian track aggregation system and method |
CN113536915A (en) * | 2021-06-09 | 2021-10-22 | 苏州数智源信息技术有限公司 | Multi-node target tracking method based on visible light camera |
CN113256690B (en) * | 2021-06-16 | 2021-09-17 | 中国人民解放军国防科技大学 | Pedestrian multi-target tracking method based on video monitoring |
CN113256690A (en) * | 2021-06-16 | 2021-08-13 | 中国人民解放军国防科技大学 | Pedestrian multi-target tracking method based on video monitoring |
CN113628165A (en) * | 2021-07-12 | 2021-11-09 | 杨龙 | Livestock rotating fence checking method, device and storage medium |
CN113538513A (en) * | 2021-07-13 | 2021-10-22 | 中国工商银行股份有限公司 | Method, device and equipment for controlling access of monitored object and storage medium |
CN113674317B (en) * | 2021-08-10 | 2024-04-26 | 深圳市捷顺科技实业股份有限公司 | Vehicle tracking method and device for high-level video |
CN113674317A (en) * | 2021-08-10 | 2021-11-19 | 深圳市捷顺科技实业股份有限公司 | Vehicle tracking method and device of high-order video |
CN113838091B (en) * | 2021-09-23 | 2023-12-12 | 哈尔滨工程大学 | Sparse target tracking method |
CN113838091A (en) * | 2021-09-23 | 2021-12-24 | 哈尔滨工程大学 | Sparse target tracking method |
CN114299428A (en) * | 2021-12-24 | 2022-04-08 | 空间视创(重庆)科技股份有限公司 | Cross-media video character recognition method and system |
CN114863364B (en) * | 2022-05-20 | 2023-03-07 | 碧桂园生活服务集团股份有限公司 | Security detection method and system based on intelligent video monitoring |
CN114863364A (en) * | 2022-05-20 | 2022-08-05 | 碧桂园生活服务集团股份有限公司 | Security detection method and system based on intelligent video monitoring |
CN116912882A (en) * | 2023-07-13 | 2023-10-20 | 广西民族大学 | Enhanced deep single-lens pedestrian tracking algorithm based on head and trunk detection |
CN117455955A (en) * | 2023-12-14 | 2024-01-26 | 武汉纺织大学 | Pedestrian multi-target tracking method based on unmanned aerial vehicle visual angle |
CN117455955B (en) * | 2023-12-14 | 2024-03-08 | 武汉纺织大学 | Pedestrian multi-target tracking method based on unmanned aerial vehicle visual angle |
CN117953015A (en) * | 2024-03-26 | 2024-04-30 | 武汉工程大学 | Multi-row person tracking method, system, equipment and medium based on video super-resolution |
Also Published As
Publication number | Publication date |
---|---|
CN111126152B (en) | 2023-04-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111126152B (en) | Multi-target pedestrian detection and tracking method based on video | |
CN111797716B (en) | Single target tracking method based on Siamese network | |
CN107563313B (en) | Multi-target pedestrian detection and tracking method based on deep learning | |
US11836931B2 (en) | Target detection method, apparatus and device for continuous images, and storage medium | |
CN103259962B (en) | A kind of target tracking method and relevant apparatus | |
Yang et al. | Spatio-temporal action detection with cascade proposal and location anticipation | |
CN112395957B (en) | Online learning method for video target detection | |
US11748896B2 (en) | Object tracking method and apparatus, storage medium, and electronic device | |
US10388022B2 (en) | Image target tracking method and system thereof | |
CN111476817A (en) | Multi-target pedestrian detection tracking method based on yolov3 | |
CN105741319B (en) | Improvement visual background extracting method based on blindly more new strategy and foreground model | |
KR20160044316A (en) | Device and method for tracking people based depth information | |
KR20180070258A (en) | Method for detecting and learning of objects simultaneous during vehicle driving | |
Zhang et al. | An optical flow based moving objects detection algorithm for the UAV | |
CN112116629A (en) | End-to-end multi-target tracking method using global response graph | |
CN112884835A (en) | Visual SLAM method for target detection based on deep learning | |
CN113052136B (en) | Pedestrian detection method based on improved Faster RCNN | |
Jin et al. | Fusing Canny operator with vibe algorithm for target detection | |
Cai et al. | A target tracking method based on KCF for omnidirectional vision | |
CN104616323A (en) | Space-time significance detecting method based on slow characteristic analysis | |
CN115188081B (en) | Complex scene-oriented detection and tracking integrated method | |
Singh et al. | Human activity tracking using star skeleton and activity recognition using hmms and neural network | |
CN114898287A (en) | Method and device for dinner plate detection early warning, electronic equipment and storage medium | |
CN108510517B (en) | Self-adaptive visual background extraction method and device | |
Nateghinia et al. | Video-based multiple vehicle tracking at intersections |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
TR01 | Transfer of patent right | |
Effective date of registration: 20231109 Address after: Room 402, 36 guanri Road, phase II, Xiamen Software Park, Fujian Province Patentee after: STATE GRID INFO-TELECOM GREAT POWER SCIENCE AND TECHNOLOGY Co.,Ltd. Patentee after: State Grid Siji Location Service Co.,Ltd. Address before: Room 402, 36 guanri Road, phase II, Xiamen Software Park, Fujian Province Patentee before: STATE GRID INFO-TELECOM GREAT POWER SCIENCE AND TECHNOLOGY Co.,Ltd. |