CN108009473B - Video structuralization processing method, system and storage device based on target behavior attribute
- Publication number
- CN108009473B (application CN201711055281.0A, published as CN201711055281A)
- Authority
- CN
- China
- Prior art keywords
- target
- frame
- detection
- targets
- tracking
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/71—Indexing; Data structures therefor; Storage structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Abstract
The invention discloses a video structuring processing method based on target behavior attributes, which comprises the following steps: acquiring basic target attributes by using a YOLO target detection algorithm; acquiring trajectory information of the detected targets by using a multi-target tracking algorithm; extracting abnormal video frames by using an abnormal behavior analysis algorithm based on motion optical flow features; according to a custom metadata structure, acquiring the corresponding feature information, such as target category attributes and target trajectories, with the above methods; correcting false detection data in the extracted metadata by a weighted decision method; and uploading the acquired data to a back-end server for further processing. In this way, unstructured video data can be converted into structured data with practical value, the network transmission efficiency of the video monitoring system is improved, and the load on the back-end server is reduced. The invention also provides a real-time processing system and a real-time processing device based on target behavior attributes.
Description
Technical Field
The invention relates to the field of computer vision, and in particular to a video structuring processing method and system based on target behavior attributes, and a storage device.
Background
With the development of intelligent monitoring technology, video processing has become especially important. In the prior art, image feature detection methods are mostly adopted for video processing; however, the dimensionality of video data is very high and contains a large number of redundant and irrelevant features, which burdens video processing, prevents videos from being processed rapidly, and reduces the accuracy of the extracted target features. Therefore, to meet the needs of the development of intelligent monitoring technology, a video processing method and system with high real-time performance and high accuracy are needed.
Disclosure of Invention
The invention mainly solves the technical problem of providing a video structuring processing method, system and storage device based on target behavior attributes that can process video with high real-time performance and high accuracy.
In order to solve the above technical problem, one technical solution adopted by the present invention is to provide a video structuring processing method based on target behavior attributes, comprising the following steps:
carrying out target detection and identification on the single-frame picture;
tracking the target to obtain a tracking result; and/or
And detecting abnormal behaviors of the target.
In order to solve the above technical problem, the invention adopts another technical solution: a video structuring processing system based on target behavior attributes, comprising a processor and a memory coupled to each other; in operation, the processor executes instructions to implement the above video processing method and stores the processing results generated by executing the instructions in the memory.
In order to solve the above technical problem, another technical solution of the present invention is to provide a device having a storage function, which stores program data that, when executed, implements the above video processing method. The beneficial effects of these technical solutions are: different from the prior art, the method provided by the invention cuts the video into single-frame pictures, performs target detection and identification on the single-frame pictures, tracks the identified targets to obtain tracking results, and performs abnormal behavior detection on the identified targets.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating one embodiment of a method for video structuring based on target behavior attributes;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a method for video structuring based on target behavior attributes according to the present application;
FIG. 3 is a schematic flow chart diagram illustrating a method for video structuring based on target behavior attributes according to another embodiment of the present application;
FIG. 4 is a schematic flow chart diagram illustrating a method for video structuring based on target behavior attributes according to another embodiment of the present application;
FIG. 5 is a schematic flow chart diagram illustrating a method for video structuring based on target behavior attributes according to yet another embodiment of the present application;
FIG. 6 is a schematic flow chart diagram illustrating a method for video structuring based on target behavior attributes according to yet another embodiment of the present application;
FIG. 7 is a schematic flow chart diagram illustrating a method for video structuring based on target behavior attributes according to yet another embodiment of the present application;
FIG. 8 is a schematic flow chart diagram illustrating one embodiment of step S243 in the embodiment provided in FIG. 7;
FIG. 9 is a schematic diagram of a motion spatiotemporal container in an embodiment of the method of structured processing of video based on target behavior attributes of the present application;
FIG. 10 is a schematic diagram of one embodiment of an apparatus with storage functionality of the present application;
fig. 11 is a schematic structural diagram of an embodiment of a video structured processing system based on target behavior attributes according to the present application.
Detailed Description
Hereinafter, exemplary embodiments of the present application will be described with reference to the accompanying drawings. Well-known functions or constructions are not described in detail for clarity or conciseness. Terms described below, which are defined in consideration of functions in the present application, may be different according to intentions or implementations of users and operators. Therefore, the terms should be defined based on the disclosure of the entire specification.
Fig. 1 is a schematic flow chart of a video monitoring method based on video structured data and deep learning according to a first embodiment of the present invention. The method comprises the following steps:
S10: The video is read.
Optionally, reading the video includes reading a real-time video captured by a camera and/or pre-recorded and saved video data. The camera collecting the real-time video may be a USB camera, a network camera based on an RTSP protocol stream, or another type of camera.
In one embodiment, the read video is a video captured in real time by a USB camera or a network camera based on an RTSP protocol stream.
In another embodiment, the read video is a pre-recorded video, which is read from local storage or an external storage device such as a USB drive or a hard disk, or called from a network, which is not described in detail herein.
S20: and carrying out structuring processing on the video to obtain structured data.
Optionally, the step of performing structuring processing on the video to obtain structured data specifically means converting the unstructured video data read in step S10 into structured data; here, structured data refers to data important for subsequent analysis. Optionally, the structured data comprises at least one of the location of the target, the target category, the target attributes, the target motion state, the target motion trajectory and the target dwell time; it can be understood that the structured data may also comprise other categories of information required by the user (the person using the method or system described in the invention). Other data is either not particularly important or can be mined from related information such as the structured data. The specific information included in the structured data depends on the requirements. How the video is processed to obtain the structured data is described in detail below.
S30: and uploading the structured data to a cloud server, and carrying out deep analysis on the structured data to obtain a preset result.
Optionally, after the video is structured in step S20, the resulting structured data is uploaded to the cloud server and stored in the storage area of the cloud server.
In one embodiment, the data obtained by the video structuring processing is directly saved in a storage area of a cloud server to retain files and also used as a database for perfecting the system.
Optionally, after the video is processed in step S20, the obtained structured data is uploaded to a cloud server, and the cloud server performs further deep analysis on the structured data.
Optionally, the cloud server performs further in-depth analysis on the structured data uploaded from each monitoring node, wherein the in-depth analysis includes target trajectory analysis and target traffic analysis or other required analysis, and the target includes at least one of a person, a vehicle, an animal and the like.
In an embodiment, the cloud server further deeply analyzes the structured data uploaded from each monitoring node, namely trajectory analysis: according to the pattern of the uploaded target trajectory and the dwell time in the scene, it further determines whether the target is suspicious, whether the target stays in a certain area for a long time, and whether abnormal behaviors such as area intrusion occur.
In another embodiment, the cloud server further deeply analyzes the structured data uploaded from each monitoring node to perform target traffic analysis: the targets appearing at a certain monitoring point are counted according to the structured data uploaded from each monitoring point, and the traffic of the targets at that monitoring node in each time period is obtained through the statistics. The target may be a pedestrian or a vehicle, and the peak or off-peak periods of the target flow can be obtained. The computed flow data can be used to reasonably prompt pedestrians and drivers, avoid traffic rush hours, and provide a reference basis for public resources such as illumination.
According to the method, the video is structured to obtain the structured data that is critical for deep analysis, and only the structured data, rather than the whole video, is uploaded to the cloud, which alleviates the problems of high network transmission pressure and high data traffic cost.
In an embodiment, according to a preset setting, when each monitoring node uploads structured data obtained by processing through a video structured processing system (hereinafter, referred to as a video processing system) based on a target behavior attribute to a cloud server, the cloud server performs deep analysis on the structured data after storing the structured data.
In another embodiment, when each monitoring node uploads the structured data processed by the video processing system to the cloud server, the server asks the user to choose whether to perform deep analysis after saving the structured data.
In yet another embodiment, when the user needs it, structured data that already underwent one round of deep analysis at the time of the initial upload can be re-analyzed with the configured deep analysis.
Optionally, the deep analysis of the structured data uploaded by each monitoring node further includes: performing statistics and analysis on the structured data to obtain the behavior types and abnormal behaviors of one or more targets, raising alarms for the abnormal behaviors, and the like, or analyzing and processing other contents required by users.
Regarding how the video is processed to obtain the structured data, the following elaborates: the present application also provides a method for video structuring processing based on target behavior attributes. In one embodiment, the video structuring processing is performed by an intelligent analysis module that embeds a deep-learning target detection and recognition algorithm, a multi-target tracking algorithm, an abnormal behavior recognition algorithm based on motion optical flow features, and other algorithms, and converts the unstructured video data read in step S10 into structured data.
Referring to fig. 2, a flowchart of an embodiment of the method for video structuring processing based on target behavior attributes is provided; the method corresponds to steps S22 to S23 within step S20 of the above embodiment.
S22: and carrying out target detection and identification on the single-frame picture.
Optionally, in step S22, target detection and identification are performed on the single-frame picture. The objects of target detection and identification include pedestrian detection and identification, vehicle detection and identification, animal detection and identification, and the like.
Optionally, the step S22 of performing target detection and identification on the single-frame picture includes: extracting the feature information of the targets in the single-frame picture. The feature information of all targets, the categories of the targets, the position information of the targets and the like are extracted from the single-frame picture, where the targets may be pedestrians, vehicles, animals and the like.
In an embodiment, when only pedestrians are contained in a single-frame picture, the target detection identification is detection identification of the pedestrians, that is, feature information of all the pedestrians in the picture is extracted.
In another embodiment, when the single-frame picture contains multiple types of targets such as pedestrians and vehicles, the target detection and identification covers those multiple types, that is, the feature information of the pedestrians, vehicles and so on in the single-frame picture is extracted. It can be understood that the types of targets to identify can be specified by the user.
Optionally, the algorithm used in step S22 for performing target detection and identification on the single-frame picture is an optimized target detection algorithm based on deep learning. Specifically, the YOLOv2 deep-learning target detection framework can be used for target detection and identification; the core of the algorithm is to use the whole image as the network input and directly regress, in the output layer, the positions of the bounding boxes and the categories to which they belong.
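For illustration, a minimal sketch of this whole-image detection step is given below, assuming OpenCV's DNN module and locally available Darknet configuration and weight files (the file names, input size and confidence threshold are placeholders, not values prescribed by the patent):

```python
import cv2
import numpy as np

# Sketch: whole image in, bounding boxes and classes out (YOLOv2-style).
# "yolov2.cfg" / "yolov2.weights" are placeholder file names.
net = cv2.dnn.readNetFromDarknet("yolov2.cfg", "yolov2.weights")

def detect_targets(frame, conf_threshold=0.5):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    rows = net.forward()                      # one row per candidate box
    detections = []
    for row in rows:
        cx, cy, bw, bh = row[:4]              # box centre and size, relative to the image
        scores = row[5:]
        class_id = int(np.argmax(scores))
        confidence = float(row[4] * scores[class_id])
        if confidence > conf_threshold:
            detections.append((class_id, confidence,
                               int((cx - bw / 2) * w), int((cy - bh / 2) * h),
                               int(bw * w), int(bh * h)))
    return detections
```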
Optionally, the target detection consists of two parts: model training and model testing.
In one embodiment, for model training, 50% of the pedestrian or vehicle images are taken from the VOC and COCO data sets, and the remaining 50% of the data are taken from real monitoring data of streets, indoor aisles, squares and the like. It can be understood that the ratio between data from the public data sets (VOC and COCO) and data from the real monitoring data set used in model training can be adjusted as required: when the proportion of public data is higher, the accuracy of the obtained model in real monitoring scenes is relatively poor; conversely, when the proportion of real monitoring data is higher, the accuracy is relatively improved.
Optionally, in an embodiment, after the target is detected in the single-frame picture in step S22, the pedestrian target is placed in a tracking queue (hereinafter also referred to as a tracking chain), and then a target tracking algorithm is further used to perform preset tracking and analysis on the target.
Optionally, the step of extracting the feature information of the targets in the single-frame picture further includes: constructing a metadata structure. Optionally, the feature information of the target is extracted according to the metadata structure, that is, the feature information of the targets in the single-frame picture is extracted according to the metadata structure. In one embodiment, the metadata structure includes basic attribute units for pedestrians, such as at least one of: the camera address, the time at which the target enters and exits the camera, the trajectory information of the target at the current monitoring node, the colors worn by the target, or a screenshot of the target. For example, a pedestrian's metadata structure may be seen in Table 1 below; the metadata structure may also include other information desired by users but not included in the table.
Optionally, in an embodiment, in order to save resources of network transmission, the metadata structure only includes some basic attribute information, and other attributes may be obtained by mining and calculating related information such as a target trajectory.
TABLE 1 Pedestrian metadata structure

| Attribute name | Type | Description |
| --- | --- | --- |
| Camera ID | short | Camera node number |
| Target appearance time | long | Time the target enters the monitoring node |
| Target departure time | long | Time the target leaves the monitoring node |
| Target motion trajectory | point | Motion trail of the target at the current node |
| Target ID | short | Target ID identification number |
| Target jacket color | short | One of 10 predefined colors |
| Target pants color | short | One of 5 predefined colors |
| Target whole-body screenshot | image | Screenshot of the whole target |
| Target head-and-shoulder screenshot | image | Screenshot of the target's head and shoulders |
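For illustration, the pedestrian metadata structure of Table 1 could be represented as follows; the field names and Python types are assumptions made for this sketch, not mandated by the patent:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PedestrianMetadata:
    camera_id: int                                   # camera node number
    target_id: int                                   # target ID identification number
    appear_time_ms: int = 0                          # time the target enters the monitoring node
    leave_time_ms: int = 0                           # time the target leaves the monitoring node
    trajectory: List[Tuple[int, int]] = field(default_factory=list)  # motion trail at the current node
    jacket_color: int = 0                            # index into the 10 predefined colors
    pants_color: int = 0                             # index into the 5 predefined colors
    whole_body_png: bytes = b""                      # whole-target screenshot
    head_shoulder_png: bytes = b""                   # head-and-shoulder screenshot
```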
In another embodiment, the metadata structure may further include basic attribute information of the vehicle, such as: the camera address, the time of the target entering and exiting the camera, the track information of the target at the current monitoring node, the appearance color of the target, the license plate number of the target or the screenshot of the target.
It is understood that the information specifically included in the metadata structure and its data types may be initially configured as needed, or, after the initial configuration, the user may specify, among the configured items, the specific attribute information that actually needs to be acquired.
In an embodiment, the metadata structure initially sets categories such as the camera address, the time at which the target enters and exits the camera, the trajectory information of the target at the current monitoring node, the colors worn by the target, or a screenshot of the target; when identifying a target, the user can specifically request, for example, only the time at which the target enters and exits the camera according to his or her needs.
In an embodiment, when the target in the single-frame picture is a pedestrian, the feature information of the pedestrian is extracted according to the preset pedestrian metadata structure, that is, at least one of the current camera address, the time at which the pedestrian enters or exits the camera, the trajectory information of the pedestrian at the current monitoring node, the colors worn by the pedestrian, or a current screenshot of the pedestrian is extracted, or the extraction follows other target attribute information specifically designated by the user, such as the time the pedestrian enters or exits the camera and the colors the pedestrian wears.
Alternatively, when a target is detected and recognized in a single-frame picture, while the feature information of the target is acquired, an image of the target is cut out from the original video frame, and model training is performed using a YOLOv2-based framework (YOLOv2 is a deep-learning-based object detection and recognition method proposed by Joseph Redmon in 2016).
In one embodiment, when target detection is performed on the single-frame picture and the detected target is a pedestrian, the image of the detected pedestrian is cut out from the original video frame, the pedestrian is then segmented into parts using a model trained with the YOLOv2-based framework, the clothing colors of the upper and lower body are determined, and the head-and-shoulder picture of the pedestrian is cut out.
In another embodiment, when the detected target is a vehicle, an image of the detected vehicle is cut out from the original video frame, a vehicle detection model trained with the YOLOv2-based framework is then used to detect and identify the vehicle, determine the body color, recognize the license plate information, and cut out the picture of the vehicle. It is understood that, since the identified target categories can be selected through user settings, whether to perform vehicle detection and identification is decided by the administrator.
In another embodiment, when the detected target is an animal, an image of the detected animal is cut out from the original video frame, an animal detection model trained with the YOLOv2-based framework is then used to detect and identify the animal, determine information such as its appearance color and breed, and cut out the picture of the animal. It is understood that, since the identified target categories can be selected through user settings, whether to perform animal detection is decided by the user.
Optionally, target detection and identification may be performed on one single-frame picture at a time, or on multiple single-frame pictures simultaneously.
In an embodiment, one single-frame picture is processed at a time, that is, target detection and identification are performed only on the targets in one single-frame picture each time.
In another embodiment, target detection and identification can be performed on multiple pictures at a time, that is, on the targets in multiple single-frame pictures simultaneously.
Optionally, ID (identity) labeling is performed on the targets detected by the YOLOv2-based model to facilitate association during subsequent tracking. The ID numbers of different target categories may be preset, and the upper limit of the ID numbers is set by the user.
Alternatively, the ID labeling may be performed automatically on the detected and identified object, or may be performed manually.
In one embodiment, the detected and identified targets are labeled, and the format of the ID number differs according to the category of the detected target; for example, pedestrian IDs may be set as digits only, vehicle IDs as a capital letter plus digits, and animal IDs as a lowercase letter plus digits, which facilitates association during subsequent tracking. The rule can be set according to the habits and preferences of the user and is not described in detail herein.
In another embodiment, the detected and identified targets are labeled, and the interval to which a target's ID number belongs differs according to the category of the detected target. For example, the ID numbers of detected pedestrian targets are set in the interval 1 to 1000000, and the ID numbers of detected vehicle targets in the interval 1000001 to 2000000. The specific setting can be decided by the person performing the initial configuration and adjusted as needed.
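A minimal sketch of such interval-based ID labeling, using the example ranges above (the animal range and the counter-based scheme are assumptions added for illustration):

```python
from itertools import count

# Per-category ID counters; pedestrians and vehicles use the example intervals
# above, while the animal interval is an assumed extension of the same scheme.
ID_COUNTERS = {
    "pedestrian": count(1),          # 1 .. 1000000
    "vehicle":    count(1000001),    # 1000001 .. 2000000
    "animal":     count(2000001),    # assumed range
}

def next_id(category: str) -> int:
    """Return the next unused ID number for the given target category."""
    return next(ID_COUNTERS[category])
```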
Alternatively, ID labeling of the detected target may be performed automatically by the system by presetting, or may be performed manually by the user.
In one embodiment, when a target such as a pedestrian or a vehicle is detected and identified in a single-frame picture, the system automatically labels the detected target by its category and then automatically assigns an ID number of the preset type.
In another embodiment, the user manually labels the IDs of targets in the picture. ID labeling can be performed on single-frame-picture targets that did not go through the system's automatic ID labeling, or the user can independently label missed targets or other targets outside the preset detection categories.
Optionally, referring to fig. 3, in an embodiment, before performing the target detection and identification on the single-frame picture in step S22, the method further includes:
S21: The video is sliced into single-frame pictures.
Alternatively, the step of cutting the video into single-frame pictures cuts the video read in step S10 into single-frame pictures, in preparation for step S22.
Optionally, in an embodiment, the step of cutting the video into single-frame pictures cuts the video read in step S10 with equal-interval or unequal-interval frame skipping.
In one embodiment, the video read in step S10 is cut with equal-interval frame skipping, where the same number of frames is skipped each time; that is, frames are taken at equal intervals, and the skipped frames are those that contain no important information, i.e. frames that can be ignored. For example, with one frame skipped at each interval, the t-th, (t+2)-th and (t+4)-th frames are taken, and the (t+1)-th and (t+3)-th frames are skipped; the skipped frames are those judged to contain no important information, or frames that coincide (or largely coincide) with the taken frames.
In another embodiment, the video read in step S10 is cut with unequal-interval frame skipping, that is, the number of skipped frames may differ each time; again, the skipped frames are those judged to contain no important information and can be ignored. For example, in unequal-interval frame skipping, the t-th frame is taken, then 2 frames are skipped and the (t+3)-th frame is taken, then 1 frame is skipped and the (t+5)-th frame is taken, then 3 frames are skipped and the (t+9)-th frame is taken; the skipped frames include the (t+1)-th, (t+2)-th, (t+4)-th, (t+6)-th, (t+7)-th and (t+8)-th frames, which are judged not to contain information required by the analysis.
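A minimal sketch of equal-interval frame skipping, assuming OpenCV is used to read the video (the skip value is illustrative):

```python
import cv2

def split_video(path: str, skip: int = 1):
    """Yield (frame_index, frame), keeping every (skip + 1)-th frame, e.g. t, t+2, t+4 for skip=1."""
    cap = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % (skip + 1) == 0:
            yield index, frame
        index += 1
    cap.release()
```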
In different embodiments, the step of cutting the video into single-frame pictures may be performed automatically by the system on the read video, or the user may choose whether to cut the video into single-frame pictures, or the user may manually input single-frame pictures that have been cut in advance.
Optionally, in an embodiment, after the step of cutting the video into single-frame pictures is completed, step S22 is automatically performed on the resulting single-frame pictures, that is, target detection and identification are performed on them; alternatively, the user selects and decides whether to perform the target detection and identification described in step S22 on the cut single-frame pictures.
Optionally, in the process of detecting and identifying the targets, the detection and identification values of each target are statistically accumulated according to a certain rule.
In one embodiment, after step S22, for a detected target, the total number of frames in which it appears at the current monitoring node is counted, including the number of frames in which value A is detected, the number of frames in which value B is detected, and so on (there may be more detected values or only one, depending on the detection results), and the statistical results are saved for later use.
Alternatively, the correction method is mainly divided into trajectory correction and target attribute correction.
Optionally, after the structured data of each target is obtained through target detection, the obtained structured data is corrected. That is, the false detection data in the structured data is corrected: the correction is performed according to weight ratios, the value supported by the majority of frames is taken as the accurate value, and the minority values are treated as false detections.
In an embodiment, after the statistical calculation (calling the statistical result), it is found that the target appears at the current monitoring node in 200 frames detected and identified in step S22, among which 180 frames detect that the jacket color of the target is red and 20 frames detect that it is black; voting is performed according to the weight ratio, the accurate value is finally corrected to red, the corresponding value in the structured data is modified to red, and the correction is completed.
Optionally, the trajectory correction is specifically as follows: assuming that a target appears for a period of T frames in a certain monitoring scene, a set of trajectory points G = {p1, p2, ……, pN} is obtained; the mean and the deviation of the trajectory points on the X axis and the Y axis are calculated, and abnormal and noisy trajectory points are then eliminated.
in one embodiment, track points with small deviation or average value are eliminated in the track correction, and noise point interference is reduced.
Optionally, the target attribute correction is specifically as follows: the attribute values of the same target are corrected based on a weighted decision method. Let the jacket color label set of a certain target be label = {"red", "black", "white", ……}, i.e. the attribute has T classes. First, the observations are converted into a numeric code vector L = [m1, m2, m3, ……, mT]; then the code value m_x with the highest frequency and its frequency F are obtained; finally, the attribute value Y (the accurate value) of the target is output directly. The specific expressions are as follows:

F = T - ||M - m_x||_0

Y = label[m_x]
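A minimal sketch of this weighted-decision correction, assuming the observed per-frame attribute codes index into the label list:

```python
from collections import Counter

def correct_attribute(labels, observed_codes):
    """Return (Y, F): the majority attribute value and its frequency among the T observations."""
    counts = Counter(observed_codes)         # observed per-frame attribute codes M
    m_x, F = counts.most_common(1)[0]        # most frequent code and F = T - ||M - m_x||_0
    return labels[m_x], F                    # Y = label[m_x]
```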
Optionally, in an embodiment, the invention combines the YOLO target detection framework to perform target recognition and positioning, and uses the GoogLeNet network to extract the feature vector of each target to facilitate subsequent target matching. GoogLeNet is a 22-layer deep convolutional neural network proposed by Google in 2014 and is widely used in fields such as image classification and recognition. Feature vectors extracted by such a deep learning network are more robust and discriminative, so this step can better improve the accuracy of subsequent target tracking.
S23: and tracking the target to obtain a tracking result.
Optionally, in the step of tracking the detected targets to obtain tracking results, the tracked targets are the targets detected in step S22 or other targets specifically designated by the user, and step S23 further includes: tracking the target and recording the time at which the target enters or leaves the monitoring node and every position the target passes, so as to obtain the motion trajectory of the target. The application provides an improved multi-target tracking method based on KCF and Kalman filtering, described in detail below.
In another embodiment, the video processing method provided by the application further includes step S24 in addition to steps S21, S22 and S23 of the above embodiment, or the embodiment includes only steps S21, S22 and S24, see fig. 4 and 5. It can be understood that the video structuring processing method based on target behavior attributes (video structuring processing for short) converts video data into structured data, where the specific conversion process includes: target detection and identification, target trajectory tracking and extraction, and target abnormal behavior detection. In one embodiment, the video structuring process comprises target detection and identification and target trajectory extraction. In another embodiment, the video structuring process comprises target detection, target trajectory extraction and target abnormal behavior detection.
S24: and detecting abnormal behaviors of the target.
Alternatively, step S24 performs abnormal behavior detection on the targets detected and identified in the preceding steps.
Optionally, the abnormal behavior detection includes pedestrian abnormal behavior detection and vehicle abnormal behavior detection, where the abnormal behaviors of pedestrians include running, fighting, crowd disturbance and the like, and the abnormal behaviors of vehicles include collision, speeding and the like.
Processing the video with this method extracts the important data, avoids an excessive data volume, and greatly reduces the pressure on network transmission.
In one embodiment, when abnormal behavior detection is performed on the detected pedestrian targets and it is determined that a preset number or more of people at a monitoring node are running, a crowd disturbance may be determined to have occurred. For example, it may be set that when running abnormality is determined for 10 or more persons in step S24, a crowd disturbance is determined; in other embodiments, the threshold number of people for determining a disturbance is set according to the specific situation.
In another embodiment, it may be set that when collision abnormality of 2 vehicles is determined in step S24, a traffic accident is determined, and when collision abnormality of more than 3 vehicles is determined, a major traffic accident is determined. It is understood that the numbers of vehicles may be adjusted as desired.
In another embodiment, when the speed of a vehicle detected in step S24 exceeds a preset speed value, the vehicle may be determined to be speeding, and the corresponding video frames of the vehicle may be stored as screenshots to identify the vehicle, where the vehicle information includes the license plate number.
Optionally, in an embodiment, when the abnormal behavior is detected in step S24, the monitoring node performs an audible and visual alarm process.
In one embodiment, the content of the audible and visual alarm includes broadcasting a voice prompt, for example, "Please do not crowd, pay attention to safety!" or other preset voice prompt content; the alarm content also includes turning on the warning light of the corresponding monitoring node to remind passing people and vehicles to pay attention to safety.
Optionally, the severity level of abnormal behavior is set according to the number of people exhibiting abnormal behavior, and different severity levels correspond to different emergency measures. The severity levels may be divided into yellow, orange and red. The emergency measure for a yellow-level abnormality is an audible and visual alarm; for an orange-level abnormality, an audible and visual alarm is given and the security personnel responsible for the monitoring point are contacted; for a red-level abnormality, an audible and visual alarm is given, the responsible security personnel are contacted, and an online police report is made.
In one embodiment, when the number of people exhibiting abnormal behavior is 3 or fewer, the crowd abnormal behavior is set to yellow level; when it is more than 3 and no more than 5, to orange level; and when it exceeds 5, to red level. The specific numbers can be adjusted according to actual needs and are not described in detail herein.
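A minimal sketch of this severity mapping, using the example thresholds above (which, as stated, are adjustable):

```python
def severity_level(num_abnormal_people: int) -> str:
    """Map the number of people exhibiting abnormal behavior to a severity level."""
    if num_abnormal_people <= 3:
        return "yellow"   # audible and visual alarm only
    if num_abnormal_people <= 5:
        return "orange"   # alarm + notify the responsible security personnel
    return "red"          # alarm + notify security + online police report
```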
Optionally, in an embodiment, the step of detecting the abnormal behavior of the target further includes the following steps: and if the abnormal behavior is detected, storing the screenshot of the current video frame image, packaging the screenshot and the characteristic information of the target with the detected abnormal behavior, and sending the characteristic information to the cloud server.
Optionally, the corresponding feature information of the target exhibiting the abnormal behavior may include: the camera ID, the type of the abnormal event, the time at which the abnormal behavior occurred, a screenshot of the abnormal behavior, etc., and may also include other types of information as needed. The information contained in the abnormal-behavior metadata structure sent to the cloud server includes the fields in Table 2 below, and may also include other types of information.
TABLE 2 Abnormal behavior metadata structure

| Attribute name | Data type | Description |
| --- | --- | --- |
| Camera ID | short | Unique ID of the camera |
| Abnormal event type | short | One of the predefined abnormal behavior types |
| Abnormality occurrence time | long | Time at which the abnormal situation occurred |
| Abnormal situation screenshot | image | Screenshot recording the abnormal behavior |
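A minimal sketch of packaging the Table 2 fields for upload; the dictionary layout, JSON encoding and the send callback are assumptions for illustration, not a prescribed transport format:

```python
import json
import time

def package_abnormal_event(camera_id: int, event_type: int, screenshot_png: bytes) -> dict:
    """Build a record mirroring the Table 2 fields for one abnormal event."""
    return {
        "camera_id": camera_id,                     # unique camera ID
        "event_type": event_type,                   # predefined abnormal behavior type
        "occurred_at_ms": int(time.time() * 1000),  # abnormality occurrence time
        "screenshot_bytes": len(screenshot_png),    # the screenshot itself is sent alongside
    }

def upload_to_cloud(event: dict, screenshot_png: bytes, send) -> None:
    # `send` stands in for whatever transport the monitoring node actually uses.
    send(json.dumps(event).encode("utf-8"), screenshot_png)
```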
In one embodiment, when abnormal behavior detection is performed on the targets and an abnormal behavior such as pedestrians fighting is detected, the corresponding screenshot of the current video frame is stored, and the screenshot and the structured data corresponding to the target exhibiting the abnormal behavior are packaged and sent to the cloud server. When the screenshot of the detected abnormal behavior is sent to the cloud server, the monitoring node gives an audible and visual alarm and starts the emergency measures corresponding to the level of the abnormal behavior.
In another embodiment, when abnormal behavior detection is performed and a crowd disturbance is detected, the screenshot of the current video frame is stored and sent to the cloud server for further processing, and at the same time the monitoring node gives an audible and visual alarm and starts the emergency measures corresponding to the level of the abnormal behavior.
Specifically, in an embodiment, the step of detecting the abnormal behaviors of the targets includes: extracting the optical flow motion information of a plurality of feature points of one or more targets, and performing clustering and abnormal behavior detection according to the optical flow motion information. On this basis, the application also provides an abnormal behavior detection method based on clustered optical flow features, described in detail below.
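A minimal sketch of extracting optical-flow motion information between two consecutive grayscale frames, assuming OpenCV's dense Farneback optical flow; the magnitude threshold used to flag fast-moving points is an illustrative stand-in for the clustering-based analysis described below:

```python
import cv2
import numpy as np

def optical_flow_features(prev_gray, curr_gray, mag_threshold: float = 8.0):
    """Return per-pixel flow magnitude/angle and the pixels moving faster than the threshold."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    fast_points = np.column_stack(np.nonzero(mag > mag_threshold))  # candidates for abnormal motion
    return mag, ang, fast_points
```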
According to the above method for video structuring processing based on target behavior attributes, unstructured video data can be converted into structured data, and the real-time performance of video processing and analysis is improved.
Referring to fig. 6, a flow chart of an embodiment of the improved multi-target tracking method based on KCF and Kalman filtering provided by the application is shown; the method corresponds to step S23 in the above embodiment and specifically includes steps S231 to S234, as follows:
S231: Predicting a tracking frame of each of the first plurality of targets in the current frame by combining the tracking chain and the detection frames corresponding to the first plurality of targets in the previous frame picture.
Optionally, the tracking chain is computed from the tracking of multiple targets in all, or in a portion of the consecutive, single-frame pictures cut from the video before the current frame picture, and gathers the trajectory information and empirical values of the targets in those pictures.
In one embodiment, the tracking chain is computed from the target tracking of all pictures before the current frame picture and includes the information of all targets in all of those pictures.
In another embodiment, the tracking chain is computed from the target tracking of a portion of the consecutive pictures preceding the current frame picture. The more consecutive pictures used in the tracking calculation, the higher the accuracy of the prediction.
Optionally, combining the feature information of the targets in the tracking chain with the detection frames corresponding to the first plurality of targets in the previous frame picture, the tracking frames of the tracked first plurality of targets in the current frame picture are predicted, i.e. the positions where the first plurality of targets may appear in the current frame.
In an embodiment, the above step predicts the positions of the tracking frames of the first plurality of targets in the current frame, that is, obtains the predicted values of the first plurality of targets.
In another embodiment, the above step predicts the positions of the tracking frames of the first plurality of targets in the frame following the current frame. The error of the positions predicted for the frame following the current frame is slightly larger than that of the positions predicted for the current frame.
Optionally, the first plurality of targets refers to all targets detected in the previous frame picture.
S232: and acquiring a tracking frame corresponding to the first plurality of targets in the previous frame of picture in the current frame and a detection frame of the second plurality of targets in the current frame of picture.
Specifically, the second plurality of targets refers to all detected targets in the current frame picture.
Optionally, the tracking frames of the first plurality of targets of the previous frame picture in the current frame and the detection frames of the second plurality of targets in the current frame picture are obtained. A tracking frame is a rectangular frame (or a frame of another shape) that encloses one or more targets and predicts where the first plurality of targets will appear in the current frame.
Optionally, when the tracking frames corresponding to the first plurality of targets of the previous frame picture in the current frame and the detection frames of the second plurality of targets in the current frame picture are obtained, the obtained tracking frames and detection frames respectively include the feature information of the corresponding targets, such as the position information, color features and texture features of the targets. Optionally, the corresponding feature information may be set by the user as needed.
S233: And establishing a target association matrix of the tracking frames of the first plurality of targets in the current frame and the detection frames of the second plurality of targets in the current frame.
Optionally, the target association matrix is established according to the tracking frames, obtained in step S232, of the first plurality of targets of the previous frame picture in the current frame, and the detection frames corresponding to the second plurality of targets detected in the current frame picture.
In one embodiment, for example, if the number of the first plurality of targets in the previous frame picture is N and the number of targets detected in the current frame is M, a target association matrix W of size M × N is established, where each element A(i, j) (0 < i ≤ M, 0 < j ≤ N) is determined jointly by dist(i, j), IOU(i, j) and m(i, j).
Here, IW and Ih are the width and height of the image frame; dist(i, j) is the centroid distance between the tracking frame predicted in the current frame by the j-th target of the tracking chain obtained from the previous frame and the detection frame of the i-th target detected and identified in the current frame; d(i, j) is this centroid distance normalized by half of the image-frame diagonal; m(i, j) is the Euclidean distance between the feature vectors FMi and FNj of the two targets, which are extracted with the GoogLeNet network; compared with traditional hand-crafted feature extraction, feature extraction with a CNN framework model is more robust and discriminative. The purpose of the normalization is to ensure that d(i, j) and IOU(i, j) have a consistent influence on A(i, j). IOU(i, j) represents the overlap rate between the tracking frame predicted in the current frame by the j-th target of the previous frame's tracking chain and the detection frame of the i-th target detected and identified in the current frame, i.e. the intersection of the tracking frame and the detection frame divided by their union:
IOU(i, j) = area(Tj ∩ Di) / area(Tj ∪ Di), where Tj is the predicted tracking frame and Di is the detection frame.
optionally, the value range of the IOU (i, j) is 0 ≦ IOU (i, j) ≦ 1, and the larger the value, the larger the overlapping rate of the tracking frame and the detection frame is.
In one embodiment, when the target is stationary, the centroid positions detected for the same target in two consecutive frames should coincide or deviate only slightly, so the IOU value approaches 1, d(i, j) tends to 0, and, when the targets match, m(i, j) is small; therefore the probability that the target with ID j in the tracking chain and the detection target with ID i are successfully matched is high. Conversely, if the detection frames of the same target in two consecutive frames are far apart and do not overlap, the IOU is 0 and both m(i, j) and d(i, j) are large, so the probability that the target with ID j in the tracking chain and the detection target with ID i are successfully matched is small.
Optionally, establishing the target association matrix refers to the centroid distance, the IOU and the Euclidean distance of the targets' feature vectors, and may also refer to other feature information of the targets, such as color features and texture features. It is understood that accuracy is higher when more reference features are used, but real-time performance drops slightly because the amount of calculation increases.
Optionally, in an embodiment, when better real-time performance must be guaranteed, the target association matrix is in most cases established with reference only to the position information of the targets in the two pictures.
In one embodiment, the target association matrix between the tracking frames corresponding to the first plurality of targets and the detection frames of the second plurality of targets in the current frame is established with reference to the position information of the targets and the colors worn by the targets (or the appearance colors of the targets).
S234: and correcting by using a target matching algorithm to obtain the actual position corresponding to the first part of targets of the current frame.
Optionally, the target value is corrected by using a target matching algorithm according to the observed value of the actually detected target and the predicted value corresponding to the target detection frame in step S231, so as to obtain the actual positions of the first multiple targets in the current frame, that is, the actual positions of the second multiple targets, which are simultaneously present in the current frame, in the first multiple targets in the previous frame. It can be understood that, since the observed values of the second plurality of targets in the current frame have a certain error due to factors such as the sharpness of the split picture, the predicted positions of the first plurality of targets in the current frame are corrected by using the detection frame in which the tracking chain and the first plurality of targets in the previous frame are combined in the previous frame picture.
Optionally, the target matching algorithm is Hungarian algorithm (Hungarian), the observed value is feature information of the target obtained when the target is detected and identified in step S22, the observed value includes a category of the target and position information of the target, and the predicted value of the target is a position value of the target in the current frame, predicted by combining the tracking chain and the position of the target in the previous frame in step S231, and other feature information. The position information of the target is used as a primary judgment basis, and other characteristic information is used as a secondary judgment basis.
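For concreteness, the matching step could be realized with SciPy's implementation of the Hungarian algorithm over a precomputed association-cost matrix, as in the sketch below; the cost gate and the return structure are illustrative assumptions rather than the patent's exact procedure.

```python
# Illustrative matching with the Hungarian algorithm over an association-cost matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks_to_detections(cost, max_cost=0.7):
    """cost[i, j]: association cost between detection i and tracked target j."""
    det_idx, trk_idx = linear_sum_assignment(cost)
    matches = []
    unmatched_dets = set(range(cost.shape[0]))
    unmatched_trks = set(range(cost.shape[1]))
    for d, t in zip(det_idx, trk_idx):
        if cost[d, t] <= max_cost:          # reject implausible pairs (assumed gate)
            matches.append((d, t))
            unmatched_dets.discard(d)
            unmatched_trks.discard(t)
    return matches, sorted(unmatched_dets), sorted(unmatched_trks)
```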
Optionally, in an embodiment, a target in the second plurality of targets whose detection frame is successfully matched with a tracking frame of the first plurality of targets in the current frame is defined as a first partial target, and likewise a target in the first plurality of targets whose tracking frame in the current frame is successfully matched with a detection frame of the second plurality of targets is defined as a first partial target; that is, each successfully matched pair of tracking frame and detection frame comes from the same target. It can be understood that a detection frame in the second plurality of targets is successfully matched with a tracking frame of the first plurality of targets in the current frame when the position information and other feature information correspond one-to-one, or when the number of corresponding items is large, i.e. the matching succeeds when the proportion of corresponding items is high.
In another embodiment, the number of the first part of the objects is smaller than that of the first plurality of objects, that is, only part of the tracking frames of the first plurality of objects in the current frame can be successfully matched with the detection frames of the second plurality of objects, and another part of the tracking frames of the first plurality of objects in the current frame cannot be successfully matched according to the feature information of the matching basis.
Optionally, in a different implementation, the step of successfully matching the detection frame of the second plurality of objects in the current frame with the tracking frame of the first plurality of objects in the previous frame in the current frame includes: and judging whether the matching is successful according to the centroid distance and/or the overlapping rate of the detection frame of the second plurality of targets in the current frame and the tracking frame of the first plurality of targets in the previous frame in the current frame.
In an embodiment, when the centroid distance between the detection frame of one or more of the second plurality of targets in the current frame and the detection frame of one or more of the first plurality of targets in the previous frame in the tracking frame in the current frame is very close, and the overlap ratio is very high, it is determined that the target matching is successful. It can be understood that the time interval of the segmentation of the two adjacent frames of pictures is very short, that is, the distance that the target moves in the time interval is very small, so that it can be determined that the target in the two frames of pictures is successfully matched at this time.
Optionally, the second plurality of targets includes a first part of targets and a second part of targets, wherein, as can be seen from the above, the first part of targets are: the targets in the second plurality whose detection frames are successfully matched with the tracking frames of the first plurality of targets in the current frame. The second part of targets are: the targets in the second plurality whose detection frames are not successfully matched with the tracking frames of the first plurality of targets in the current frame; the targets in the second part that are not recorded in the tracking chain are defined as new targets. It will be appreciated that, in the second part of targets, there may be another type of target in addition to the new targets: targets that do not match any of the first plurality successfully but have already appeared in the tracking chain.
In an embodiment, the number of the second partial targets may be 0, that is, the detection frame of the second plurality of targets in the current frame and the tracking frame of the first plurality of targets in the current frame may both be successfully matched, so that the number of the second partial targets at this time is 0.
Optionally, after the step of performing a correction analysis by using a target matching algorithm to obtain the actual position corresponding to the first part of the targets in the current frame, the method includes: screening out the new targets in the second part of targets; and adding the newly added targets to the tracking chain. Another embodiment further comprises: initializing the corresponding filter tracker according to the initial position and/or feature information of the newly added target. In one embodiment the filter tracker includes a Kalman filter (kalman), a kernelized correlation filter (KCF), and a filter that combines the Kalman filter and the kernelized correlation filter. The Kalman filter, the kernelized correlation filter, and the combined filter are all multi-target tracking algorithms implemented in software. The combined filter is a filter structure implemented by an algorithm that combines the structures of the Kalman filter and the kernelized correlation filter. In other embodiments, the filter tracker may be another type of filter, as long as the same function can be achieved.
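As a minimal sketch of initializing a tracker for a newly added target, the following assumes a constant-velocity Kalman model seeded at the target's centroid; the noise covariances and the OpenCV-based implementation are assumptions, not the patent's definitive design.

```python
# Assumed constant-velocity Kalman tracker seeded at a new target's centroid.
# A KCF tracker (e.g. cv2.TrackerKCF_create() from opencv-contrib) could be
# initialized alongside it from the same detection box.
import cv2
import numpy as np

def init_kalman_tracker(cx, cy):
    kf = cv2.KalmanFilter(4, 2)             # state (x, y, vx, vy), measurement (x, y)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.statePost = np.array([[cx], [cy], [0], [0]], dtype=np.float32)
    return kf
```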
Optionally, the data of the tracking chain is calculated by training data of the previous frame and all frames before the previous frame, and the targets in the tracking chain include the first partial target and the third partial target described above. Specifically, the first part of the targets refers to: the tracking frame in the current frame of the first plurality of objects matches the successfully detected object of the second plurality of objects. The third part of the target is: the target in the tracking chain is not matched with the target in the second plurality of targets successfully.
It will be appreciated that the third portion of targets is substantially all targets in the tracking chain except for the first portion of targets that successfully match the second plurality of targets.
Optionally, the step of performing a correction analysis by using a target matching algorithm in step S234 to obtain an actual position corresponding to the first part of the targets in the current frame includes: and adding 1 to a target lost frame number count value corresponding to the third part of targets, and removing the corresponding target from the tracking chain when the target lost frame number count value is greater than or equal to a preset threshold value. It can be understood that the preset threshold of the count value of the number of lost frames is preset and can be adjusted as required.
In an embodiment, when the count value of the number of lost frames corresponding to a certain target in the third part of targets is greater than or equal to a preset threshold, the certain target is removed from the current tracking chain.
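A minimal sketch of this lost-frame bookkeeping is given below; the threshold value, the dictionary-based layout of the tracking chain, and the on_removed callback (which could, for example, trigger the cloud upload) are assumptions.

```python
# Assumed bookkeeping: tracking_chain maps track_id -> record with a 'lost' counter.
LOST_FRAME_THRESHOLD = 10   # preset threshold, illustrative value

def update_lost_counts(tracking_chain, matched_track_ids, on_removed):
    for track_id in list(tracking_chain.keys()):
        if track_id in matched_track_ids:
            tracking_chain[track_id]["lost"] = 0        # matched this frame
            continue
        tracking_chain[track_id]["lost"] += 1           # third-part target: +1 lost frame
        if tracking_chain[track_id]["lost"] >= LOST_FRAME_THRESHOLD:
            # removed from the chain; e.g. upload its structured data afterwards
            on_removed(track_id, tracking_chain.pop(track_id))
```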
Optionally, when a certain target is removed from the current tracking chain, the structured data corresponding to the target is uploaded to the cloud server, and the cloud server may perform in-depth analysis on the track or the abnormal behavior of the target again with respect to the structured data of the target or the empirical value in the database.
It can be understood that, when the structured data corresponding to the target removed from the tracking chain is sent to the cloud server, the system executing the method can choose whether to trust the local result and may interrupt the cloud server's further analysis of that target.
Optionally, the step of performing a correction analysis by using a target matching algorithm in step S234 to obtain an actual position corresponding to the first part of the targets in the current frame includes: and adding 1 to the target lost frame number count value corresponding to the third part of targets, and locally tracking the third part of targets to obtain a current tracking value when the count value is smaller than a preset threshold value.
Further, in an embodiment, the current tracking value of the third part of targets and the predicted value corresponding to the third part of targets are combined and corrected to obtain the actual position of the third part of targets. Specifically, in an embodiment, the current tracking value is obtained when the third part of targets is locally tracked by a kernelized correlation filter, or by a filter combining a Kalman filter with the kernelized correlation filter, and the predicted value is the position value of the third part of targets predicted by the Kalman filter (kalman).
Alternatively, the targets detected in the above step S22 are tracked by a filter that combines a Kalman filter tracker (kalman) and a kernelized correlation filter tracker (KCF).
In one embodiment, when the tracked targets can all be matched, that is, when there is no suspected lost target, only the Kalman filter tracker (kalman) is called to complete the tracking of the targets.
In another embodiment, when a suspected lost target exists among the tracked targets, a filter combining the Kalman filter tracker (kalman) and the kernelized correlation filter tracker (KCF) is called to complete the tracking of the target, or the Kalman filter tracker (kalman) and the kernelized correlation filter tracker (KCF) cooperate in sequence.
Optionally, in an embodiment, the step S234 of performing correction by using a target matching algorithm to obtain an actual position corresponding to the first part target of the current frame includes: and correcting each target in the first part of targets according to the predicted value corresponding to the current frame tracking frame corresponding to each target and the observed value corresponding to the current frame detection frame to obtain the actual position of each target in the first part of targets.
In an embodiment, the predicted value corresponding to the tracking frame of each first-part target in the current frame can be understood as the position information of that target in the current frame, predicted by combining the empirical values in the tracking chain with its position information in the previous frame; the observed actual position (i.e. the observed value) of the first part of targets in the current frame is then combined with it to correct and obtain the actual position of each target in the first part of targets. This operation is performed to reduce the inaccuracy of the estimated actual positions caused by errors in the measured or observed values.
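One way to realize this predict-then-correct step with the OpenCV Kalman API is sketched below; the centroid-only measurement is an assumption.

```python
# One predict-then-correct cycle per matched target with the OpenCV Kalman API.
import numpy as np

def refine_position(kf, observed_cx, observed_cy):
    kf.predict()                                              # predicted value (tracking frame)
    measurement = np.array([[observed_cx], [observed_cy]], dtype=np.float32)
    corrected = kf.correct(measurement)                       # fuse with the observed value
    return float(corrected[0, 0]), float(corrected[1, 0])     # estimated actual position
```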
Optionally, in an embodiment, the improved multi-target tracking method based on KCF and Kalman may implement tracking analysis on multiple targets, record the time of the target entering the monitoring node and each movement position in the monitoring scene, thereby generating a trajectory chain, and may specifically and clearly reflect the movement information of the target at the current monitoring node.
Referring to fig. 7, a schematic flowchart of an embodiment of an abnormal behavior detection method based on clustered optical flow features is also provided in the present application, and the method is also step 24 of the above embodiment, and includes steps S241 to S245. The method comprises the following specific steps:
S241: and carrying out optical flow detection on the area where the detection frame of the one or more targets is located.
Optionally, before the abnormal behavior detection is performed on the targets, the detection and identification of the targets are completed based on a preset algorithm, and a detection frame corresponding to each target and a position where the detection frame is located when the targets in the single-frame picture are subjected to the target detection are acquired, and then the optical flow detection is performed on the detection frames of one or more targets. The optical flow contains motion information of the object. Alternatively, the preset algorithm may be yolov2 algorithm, or may be other algorithms with similar functions.
It can be understood that the center of the detection frame and the center of gravity of the target are approximately coincident with each other, so that the position information of each pedestrian target or other types of targets in each frame of image can be obtained.
In one embodiment, the essence of performing optical flow detection on one or more detection frames of the target is to acquire motion information of optical flow points in the detection frames corresponding to the target, including the speed magnitude and the motion direction of the motion of the optical flow points.
Alternatively, the optical flow detection is to obtain the motion characteristic information of each optical flow point, and is performed by LK (Lucas-Kanade) pyramid optical flow method or other optical flow methods with the same or similar functions.
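A possible realization of sparse LK pyramid optical flow restricted to a detection box, using OpenCV, is sketched below; the corner-detection parameters and the (x1, y1, x2, y2) box format are assumptions.

```python
# Sparse LK pyramid flow restricted to one detection box (x1, y1, x2, y2).
import cv2
import numpy as np

def detection_box_flow(prev_gray, curr_gray, box):
    x1, y1, x2, y2 = box
    roi_prev = prev_gray[y1:y2, x1:x2]
    pts = cv2.goodFeaturesToTrack(roi_prev, maxCorners=200, qualityLevel=0.01, minDistance=5)
    if pts is None:
        return np.empty((0, 2), np.float32), np.empty((0, 2), np.float32)
    pts = pts.astype(np.float32) + np.array([[x1, y1]], dtype=np.float32)  # full-frame coords
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.reshape(-1) == 1
    return pts.reshape(-1, 2)[good], nxt.reshape(-1, 2)[good]
```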
Alternatively, optical flow detection may be performed on one detection frame of an object in each frame of picture, or optical flow detection may be performed on a plurality of detection frames of objects in each frame of picture at the same time, and the number of objects subjected to optical flow detection per time generally depends on the system initial setting. It is understood that this setting can be adjusted as needed, and when rapid optical flow detection is needed, the setting can be set to detect the detection frames of multiple targets in each frame of picture at the same time. When very fine optical flow detection is required, it is possible to adjust the detection frame set to perform optical flow detection on one object at a time in each frame of picture.
Alternatively, in an embodiment, optical flow detection is performed on the detection frame of one object in consecutive multi-frame pictures at a time, or the detection frame of one object in a single-frame picture may be detected.
Optionally, in another embodiment, optical flow detection is performed on detection frames of a plurality of or all of the objects in consecutive multi-frame pictures at a time, or optical flow detection may be performed on detection frames of a plurality of or all of the objects in a single-frame picture at a time.
Alternatively, in an embodiment, before performing optical flow detection on the target, in the above step, an approximate position area of the target is detected, and then optical flow detection is directly performed on an area where the target appears (which may be understood as a target detection area) in two consecutive frame images. Two consecutive frames of images subjected to optical flow detection are images having the same size.
Optionally, in an embodiment, performing optical flow detection on the area where the detection frame of the target is located may perform optical flow detection on the area where the detection frame of the target is located in one frame of the picture, then store the obtained data and information in the local memory, and then perform optical flow detection on the area where the detection frame of the target is located in the picture in the next frame or in a preset frame.
In one embodiment, optical flow detection is performed on the detection frame and the area of one object at a time, and optical flow detection is performed on the detection frames of all the objects in the picture one by one.
In another embodiment, optical flow detection is performed on multiple objects in one picture at a time, that is, it can be understood that optical flow detection is performed on all or part of the detection frames of the objects in one single-frame picture at a time.
In yet another embodiment, optical flow detection is performed on detection frames of all objects in a plurality of single-frame pictures at a time.
In still another embodiment, optical flow detection is performed on target detection frames of the same category specified in a plurality of single-frame pictures at a time.
Alternatively, the optical flow information obtained after step S241 is added to the spatio-temporal model, so that the optical flow vector information of the preceding and following multi-frame images is obtained through statistical calculation.
S242: and extracting the optical flow motion information of the feature points corresponding to the detection frame in at least two continuous frames of images, and calculating the information entropy of the area where the detection frame is located.
Optionally, step S242 extracts the optical flow motion information of the feature points corresponding to the detection frame in at least two consecutive images and calculates the information entropy of the area where the detection frame is located. The optical flow motion information refers to the motion direction and the motion speed of the optical flow points: the motion direction and motion distance of each optical flow point are extracted, and its motion speed is then calculated. A feature point is a set of one or more pixel points that can represent the object's feature information.
Alternatively, after extracting the optical flow motion information of the feature points corresponding to the detection frame in the two consecutive frames of images, and calculating the information entropy of the area where the detection frame is located according to the extracted optical flow motion information, it can be understood that the information entropy is calculated based on the optical flow information of all the optical flow points in the target detection area.
Optionally, in step S242, the optical flow motion information of the feature points corresponding to the detection frames in at least two consecutive frames of images is extracted and the information entropy of the area where the detection frames are located is calculated. Pixel optical flow feature information is extracted, using the LK (Lucas-Kanade) pyramid optical flow method (hereinafter referred to as the LK optical flow method), in the rectangular frame area of adjacent frames that contains only the pedestrian object, and the LK optical flow extraction algorithm is accelerated with a GPU (Graphics Processing Unit), so that the optical flow feature information of the pixels is extracted online in real time. The optical flow feature information is optical flow vector information, referred to as an optical flow vector for short.
Alternatively, the optical flow vector extracted by the optical flow algorithm is formed by a two-dimensional matrix of vectors, i.e.

V(x, y) = (V_x(x, y), V_y(x, y))

wherein each point in the matrix corresponds to a pixel position in the image; V_x represents the pixel interval by which the same pixel point moves along the X axis between adjacent frames, and V_y represents the pixel interval by which the same pixel point moves along the Y axis between adjacent frames.
Alternatively, the pixel interval refers to the distance that the feature point moves in the two adjacent frame images, and can be directly extracted by the LK optical flow extraction algorithm.
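Given the per-point pixel intervals (V_x, V_y) and the inter-frame time, the motion speed and direction can be computed as in the following sketch; the array shapes and function names are assumptions.

```python
# Motion speed and direction from the per-point pixel intervals (Vx, Vy).
import numpy as np

def flow_speed_and_direction(prev_pts, next_pts, frame_interval_s):
    v = np.asarray(next_pts) - np.asarray(prev_pts)   # (N, 2): (Vx, Vy) pixel intervals
    speed = np.linalg.norm(v, axis=1) / frame_interval_s
    direction = np.arctan2(v[:, 1], v[:, 0])          # radians, relative to the X axis
    return v, speed, direction
```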
In an embodiment, step S242 calculates the optical flow motion information of the feature points corresponding to the detection frame of each target, using the single-frame image on which target detection has been completed and the detection frames obtained during that detection. A feature point can also be interpreted as a point where the image grey value changes drastically, or a point of larger curvature on an image edge (i.e. the intersection of two edges). This operation reduces the amount of calculation and improves calculation efficiency.
Alternatively, in step S242, the optical flow information of the feature points corresponding to all or part of the detection frames in two consecutive images may be calculated at the same time, or the optical flow information of the feature points corresponding to all the detection frames in more than two consecutive images may be calculated at the same time, and the number of images calculated at each time is set in advance in the system and may be set as needed.
In one embodiment, step S242 calculates optical flow information of feature points corresponding to all detection frames in two consecutive images at the same time.
In another embodiment, step S242 calculates optical flow information of feature points corresponding to all detection frames in more than two consecutive images at the same time.
Alternatively, step S242 may simultaneously calculate optical flow information of detection frames corresponding to all objects in at least two consecutive images, or may simultaneously calculate optical flow information of detection frames of objects specifically specified and corresponding in at least two consecutive images.
In one embodiment, step S242 is to calculate optical flow information of detection frames corresponding to all objects in at least two consecutive images, such as: and optical flow information of detection frames corresponding to all the targets in the t frame and the t +1 frame images.
In another embodiment, step S242 calculates the detection frames of specifically specified and corresponding targets in at least two consecutive images, for example: the optical flow information of the detection frames corresponding to the class-A objects in the t-th frame and the class-A' objects in the (t+1)-th frame with ID numbers 1 to 3, i.e. simultaneously extracting and calculating the optical flow information of the detection frames of objects A1, A2, A3 and their corresponding objects A1', A2', A3'.
S243: and establishing clustering points according to the optical flow motion information and the information entropy.
Alternatively, the clustering points are established based on the optical flow motion information extracted in step S242 and the calculated information entropy. The optical flow motion information is information reflecting motion characteristics of an optical flow, and comprises the motion direction and the motion speed, and can also comprise other related motion characteristic information, and the information entropy is obtained by calculation according to the optical flow motion information.
In one embodiment, the optical flow motion information extracted in step S242 includes at least one of a direction of motion, a distance of motion, a speed of motion, and other related motion characteristic information.
Optionally, before step S243 establishes the clustering points according to the optical flow motion information and the calculated information entropy, the optical flows are clustered by using the K-means algorithm. The number of the clustering points can be determined according to the number of detection frames during target detection, and the clustering of the optical flow is based on: establishing the optical flow points with the same motion direction and motion speed as clustering points. Optionally, in an embodiment, the value range of K is 6 to 9; of course, K may take other values, which are not described here.
Optionally, the cluster point is a set of optical flow points with the same or approximately the same magnitude of motion direction and motion speed.
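A minimal K-means clustering of the flow vectors with OpenCV is sketched below, grouping points by their (V_x, V_y) motion, i.e. by direction and speed; the choice of k and the termination criteria are illustrative.

```python
# K-means clustering of the flow vectors (Vx, Vy), i.e. by motion direction and speed.
import cv2
import numpy as np

def cluster_flow_points(flow_vectors, k=6):
    data = np.asarray(flow_vectors, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(data, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    return labels.reshape(-1), centers
```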
S244: and calculating the kinetic energy of the clustering points or the kinetic energy of the area where the target detection frame is located. Specifically, the kinetic energy of the clustering points established in step S245 is calculated in units of the clustering points established in step S243, or the kinetic energy of the region where the target detection box is located is calculated at the same time.
In one embodiment, at least one of the kinetic energy of the cluster point or the kinetic energy of the region where the target is located, which is established in step S243, is calculated. It is understood that, in different embodiments, one of the required calculation modes may be configured according to specific requirements, or two calculation modes of calculating the kinetic energy of the clustering point or the kinetic energy of the region where the target is located may be configured at the same time, and when only one of the calculation modes needs to be calculated, the other calculation mode may be manually selected and not calculated. Optionally, a motion space-time container is established by using motion vectors of N frames before and after the cluster point according to the position of the cluster point, and an information entropy of an optical flow Histogram (HOF) of a detection region where each cluster point is located and an average kinetic energy of a cluster point set are calculated.
Optionally, the kinetic energy of the region where the target detection frame is located is calculated as

E = (1/k) · Σ_{i=0}^{k-1} (1/2) · m · |v_i|²

where i = 0, …, k-1 indexes the optical flows in the area where the single object detection frame is located, v_i is the motion speed of the ith optical flow, k is the total number of optical flows after clustering of the single object area, and, for convenience of calculation, m = 1. Optionally, in an embodiment, the value range of K is 6 to 9; of course, K may take other values, which are not described here.
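The region kinetic energy can then be computed as in the following sketch, with m = 1 as stated above; averaging over the k flows follows the "average kinetic energy" wording and is otherwise an assumption.

```python
# Average kinetic energy of the k optical flows in one detection region, with m = 1.
import numpy as np

def region_kinetic_energy(flow_vectors, m=1.0):
    v = np.asarray(flow_vectors, dtype=np.float64)
    if v.size == 0:
        return 0.0
    return float(np.mean(0.5 * m * np.sum(v ** 2, axis=1)))
```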
S245: and judging abnormal behaviors according to the kinetic energy and/or the information entropy of the clustering points.
Optionally, it is determined whether an abnormal behavior occurs in the target corresponding to the cluster point according to the kinetic energy of the cluster point or the kinetic energy of the area where the target detection frame is located, where the abnormal behavior includes running, fighting and harassment when the target is a pedestrian, and includes collision and overspeed when the target is a vehicle.
Specifically, the two abnormal behaviors of fighting and running are related to the information entropy of the region where the target detection frame is located and the kinetic energy of the clustering point. That is, when the abnormal behavior is fighting, the entropy of the optical flow information of the area where the target detection frame is located is large, and the kinetic energy of the clustering point corresponding to the target or the kinetic energy of the area where the target is located is also large. When the abnormal behavior is running, the kinetic energy of the clustering point corresponding to the target or the kinetic energy of the area where the target is located is larger, and the entropy of the optical flow information of the area where the target detection frame is located is smaller. When no abnormal behavior occurs, the entropy of the optical flow information of the area where the detection frame corresponding to the target is located is small, and the kinetic energy of the clustering point corresponding to the target or the kinetic energy of the area where the target is located is also small.
Optionally, in an embodiment, the step of S245 determining the abnormal behavior according to the kinetic energy and/or the information entropy of the cluster point further includes: and if the entropy of the optical flow information of the area where the detection frame corresponding to the target is located is larger than or equal to a first threshold value, and the kinetic energy of the clustering point corresponding to the target or the kinetic energy of the area where the target detection frame is located is larger than or equal to a second threshold value, judging that the abnormal behavior is fighting.
Optionally, in another embodiment, the step of determining the abnormal behavior according to the kinetic energy and/or the information entropy of the cluster point further includes: if the information entropy of the area where the detection frame corresponding to the target is located is greater than or equal to a third threshold and smaller than the first threshold, and the kinetic energy of the clustering point corresponding to the target or the kinetic energy of the area where the target detection frame is located is greater than the second threshold, the abnormal behavior is judged to be running.
In one embodiment, for example, the entropy of information is represented by H and the kinetic energy is represented by E.
Optionally, the formula for determining the target running behavior compares the kinetic energy E of the region where the target detection frame is located with a preset kinetic energy value λ1, and the ratio H/E of the optical flow information entropy H of that region to its kinetic energy E with a trained value range.
In one embodiment, the value of λ1 trained for running behavior in the present invention is 3000, where H/E is the ratio of the optical flow information entropy H of the region where the target detection frame is located to the kinetic energy E of that region, and λ1 is a preset kinetic energy value.
Optionally, the formula for determining the target fighting behavior compares the information entropy H with a preset information entropy value λ2 and the ratio H/E with a trained value range.
In one embodiment, the value of λ2 trained for fighting behavior in the present invention is 3.0, where H/E is the ratio of the information entropy H to the kinetic energy E, and λ2 is a preset information entropy value.
Alternatively, the judgment of normal behavior uses the thresholds λ3 and λ4. In one embodiment, the normal-behavior thresholds obtained by training in the present invention are λ3 = 1500 and λ4 = 1.85, where λ3 is a preset kinetic energy value smaller than λ1, and λ4 is a preset information entropy value smaller than λ2.
In an embodiment, when a certain pedestrian object runs, the optical flow kinetic energy of the clustering point corresponding to the pedestrian object is larger, and the optical flow information entropy is smaller.
Optionally, when crowd disturbance occurs, firstly, multiple pedestrian targets are detected in one single-frame picture, then when abnormal behavior detection is performed on the detected multiple pedestrian targets, it is found that running abnormality occurs on all the multiple targets, and at this time, the crowd disturbance can be determined to occur.
In one embodiment, when abnormal behavior detection is performed on a plurality of targets detected in a single-frame picture, the motion kinetic energy of cluster points corresponding to the targets exceeding a preset threshold number is larger, and the entropy of optical flow information is smaller; at this time, it can be judged that crowd disturbance may occur.
Alternatively, when the target is a vehicle, whether a collision has occurred is determined based on the distance between the detected vehicles (which can be calculated from the position information) and on the dominant optical flow directions in the detection frames corresponding to the targets. It is understood that when the dominant optical flow directions of the detection frames of two vehicle targets are opposite and the distance between the two vehicles is small, it can be determined that a suspected collision event has occurred.
Optionally, the result of the abnormal behavior determined in step S245 is saved and sent to the cloud server.
The method described in the above steps S241 to S245 can effectively improve the efficiency and real-time performance of detecting abnormal behavior.
Optionally, in an embodiment, the step S242 of extracting optical flow motion information of feature points corresponding to the detection frame in at least two consecutive images, and the step of calculating the information entropy of the area where the detection frame is located further includes: and extracting the characteristic points of at least two continuous frames of images.
Optionally, the feature points of at least two consecutive frames of images are extracted, the feature points of the target detection frame in the two consecutive frames of images may be extracted each time, or the feature points of the target detection frame in multiple frames (more than two frames) of consecutive images may be extracted each time, where the number of images extracted each time is set by initializing the system, and may be adjusted as needed. The feature point refers to a point where the image gray value changes drastically or a point where the curvature is large on the edge of the image (i.e., the intersection of two edges).
Optionally, in an embodiment, in step S242, extracting optical flow motion information of feature points corresponding to the detection frame in at least two consecutive images, and the step of calculating the information entropy of the area where the detection frame is located further includes: and calculating matched feature points of the targets in the two continuous frames of images by adopting a preset algorithm, and removing unmatched feature points in the two continuous frames of images.
Optionally, first, the image processing function goodFeaturesToTrack() is called to extract feature points (also called Shi-Tomasi corner points) in the target area that has already been detected in the previous frame image; then the function calcOpticalFlowPyrLK() of the LK pyramid optical flow extraction algorithm is called to compute the feature points of the target detected in the current frame that match the previous frame, and the feature points that are not matched between the previous and current frames are removed, so that the optical flow motion information of the pixel points is obtained. The feature points in this embodiment may be Shi-Tomasi corner points, or simply corner points.
Optionally, in an embodiment, the step S243 of establishing cluster points according to the optical flow motion information further includes: drawing the optical flow motion direction of the feature points in the image.
In an embodiment, the step of establishing the clustering points according to the optical flow motion information further includes, before the step of establishing the clustering points according to the optical flow motion information, drawing the optical flow motion direction of each feature point in each frame of image.
Optionally, referring to fig. 8, in an embodiment, after the step of establishing the cluster point according to the optical flow motion information in step S243, step S2431 and step S2432 are further included:
s2431: a spatiotemporal container is established based on the position and motion vectors of the target detection region.
Optionally, a space-time container is established based on the position information of the target detection area, i.e. the target detection frame, and the motion vector relationship of the clustering points in the detection frame between the previous frame and the next frame.
Optionally, referring to FIG. 9, a diagram of a motion spatiotemporal container in an embodiment, where AB is the two-dimensional height of the spatiotemporal container, BC is the two-dimensional width of the spatiotemporal container, and CE is the depth of the spatiotemporal container. The depth CE of the space-time container is the video frame number, ABCD represents the two-dimensional size of the space-time container, and the two-dimensional size represents the size of a target detection frame during target detection. It is understood that the model of the spatiotemporal container may be other graphics, and when the graphics of the target detection box change, the model of the spatiotemporal container changes accordingly.
Optionally, in an embodiment, when the graph of the target detection box changes, the corresponding created spatiotemporal container changes according to the graph change of the target detection box.
S2432: and calculating the average information entropy and the average motion kinetic energy of the optical flow histogram of the detection frame corresponding to each clustering point.
Optionally, the average information entropy and the average kinetic energy of the optical flow histogram of the detection frame corresponding to each cluster point are calculated. The optical flow histogram HOF (Histogram of Oriented Optical Flow) is used to count the probability of the optical flow points being distributed in specific directions.
Optionally, the basic idea of the HOF is to project each optical flow point into the corresponding histogram bin according to its direction value and weight it according to the magnitude of the optical flow; in the present invention the number of bins is 12. The magnitude and direction of the motion speed of each optical flow point are calculated as

v = sqrt(V_x² + V_y²) / T,   θ = arctan(V_y / V_x)

where T refers to the time between two adjacent frames of images.
In this case, the optical flow histogram is used to reduce the influence of factors such as the size of the target, the motion direction of the target, and noise in the video on the optical flow characteristics of the target pixels.
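A 12-bin, magnitude-weighted HOF and its information entropy can be computed as in the following sketch; the bin range and the base-2 logarithm are assumptions.

```python
# 12-bin HOF weighted by flow magnitude, plus its information entropy.
import numpy as np

def hof_and_entropy(speed, direction, bins=12):
    hist, _ = np.histogram(direction, bins=bins, range=(-np.pi, np.pi), weights=speed)
    total = hist.sum()
    p = hist / total if total > 0 else np.full(bins, 1.0 / bins)
    p = np.where(p > 0, p, 1e-12)             # avoid log(0)
    entropy = float(-np.sum(p * np.log2(p)))
    return hist, entropy
```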
Optionally, the category of abnormal behavior in the different embodiments includes one of fighting, running, harassment, or traffic abnormality.
In one embodiment, when the target is a pedestrian, the abnormal behavior comprises: fighting, running, and harassment.
In another embodiment, when the target is a vehicle, the abnormal behavior is, for example: impact and overspeed.
Optionally, in an embodiment, the average information entropy and the average kinetic energy of the optical flow histogram of the detection frame corresponding to each cluster point are calculated, which are substantially the average information entropy and the average kinetic energy of the optical flow of each cluster center in the previous and next N frames of images.
The abnormal behavior detection method can effectively improve the intelligence of the existing security, can also effectively reduce the calculated amount in the abnormal behavior detection process, and improves the efficiency, the real-time performance and the accuracy of the system for detecting the abnormal behavior of the target.
Optionally, the step of tracking the target to obtain the tracking result further includes: and sending the structured data of the target object which leaves the current monitoring node to the cloud server.
Optionally, when the target is tracked, when the feature information, particularly the position information, of a certain target is not updated within a preset time, it can be determined that the target has left the current monitoring node, and the structured data of the target is sent to the cloud server. The preset time may be set by a user, for example, 5 minutes or 10 minutes, and is not described herein.
In an embodiment, when the target is tracked, when it is found that the position information, i.e., the coordinate value, of a certain pedestrian is not updated within a certain preset time, it can be determined that the pedestrian has left the current monitoring node, and the structured data corresponding to the pedestrian is sent to the cloud server.
In another embodiment, when the target is tracked, when the position coordinate of a certain pedestrian or a certain vehicle is found to stay at the view angle edge of the monitoring node all the time, it can be determined that the pedestrian or the vehicle has left the current monitoring node, and the structured data of the pedestrian or the vehicle is sent to the cloud server.
Optionally, preset feature information (such as a target attribute value, a motion trajectory, a target screenshot, and other required information) of a target determined to leave the current monitoring node is packaged into a preset metadata structure, and then is encoded into a preset format and sent to the cloud server, and the cloud server analyzes the received packaged data, extracts metadata of the target, and stores the metadata in the database.
In one embodiment, the preset feature information of the target which is determined to leave the current node is packaged into a preset metadata structure, then the preset feature information is encoded into a JSON data format and sent to a cloud server through a network, the cloud server analyzes the received JSON data packet, the metadata structure is extracted, and the metadata structure is stored in a database of the cloud server. It can be understood that the preset feature information can be adjusted and set as needed, which is not described herein any more.
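As an illustration of this packaging step, the sketch below encodes a hypothetical metadata structure as JSON and posts it to a cloud endpoint; the field names and the URL are assumptions, not the patent's actual format.

```python
# Hypothetical packaging/upload of a departed target's metadata as JSON.
import requests

def upload_target_metadata(target, server_url="http://cloud-server.example/api/targets"):
    payload = {
        "target_id": target.get("id"),
        "category": target.get("category"),
        "attributes": target.get("attributes", {}),
        "trajectory": target.get("trajectory", []),
        "snapshot": target.get("snapshot_path"),
    }
    resp = requests.post(server_url, json=payload, timeout=5)
    resp.raise_for_status()
    return resp.status_code
```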
Referring to fig. 10, the present invention further provides a device 400 having a storage function, which stores program data, and when the program data is executed, the method for implementing video structuring processing based on target behavior attributes as described above and the method described in the embodiment are implemented. Specifically, the apparatus with a storage function may be one of a memory, a personal computer, a server, a network device, or a usb disk.
Referring to fig. 11, fig. 11 is a schematic diagram of an embodiment of a video structured processing system based on target behavior attributes according to the present invention. In this embodiment, a video processing system 400 includes a processor 402 and a memory 404 coupled to the processor 402; when operating, the processor 402 executes instructions to implement a method such as one of the video processing methods and embodiments described above, and stores the processing results generated by executing the instructions in the memory 404.
Optionally, the step S23 tracks the target to obtain a tracking result and the step S24 detects abnormal behavior of the target, both based on the step S22 performing target detection and identification on the single frame picture, so that the target can be tracked and the abnormal behavior of the target can be detected.
Alternatively, the abnormal behavior detection of the target in step S24 may be performed directly after step S22 is completed, or simultaneously with step S23, or after step S23 and based on the tracking result in step S23.
Alternatively, when the abnormal behavior detection of the target in the step S24 is performed based on the tracking of the target in the step S23 to obtain the tracking result, the detection of the abnormal behavior of the target may be more accurate.
The method for video structuring processing based on the target behavior attribute in steps S21 to S24 can effectively reduce the pressure of network transmission of the monitoring video, effectively improve the real-time performance of the monitoring system, and greatly reduce the data traffic fee.
Optionally, the step of performing target detection and identification on the single-frame picture further includes extracting feature information of a target in the single-frame picture. It can be understood that after the read video is divided into a plurality of single-frame pictures, the target detection and identification are performed on the single-frame pictures after the division.
Optionally, feature information of an object in a single frame picture obtained by cutting the video is extracted, wherein the object includes pedestrians, vehicles and animals, and feature information of a building or a road and bridge can also be extracted according to needs.
In one embodiment, when the object is a pedestrian, the extracted feature information includes: the position of the pedestrian, the clothing color of the pedestrian, the sex of the pedestrian, the motion state, the motion track, the dwell time and other available information.
In another embodiment, when the target is a vehicle, the extracted feature information includes: the type of the vehicle, the color of the vehicle body, the running speed of the vehicle, the license plate number of the vehicle and the like.
In yet another embodiment, when the object is a building, the extracted feature information includes: basic information of the building: such as building story height, building appearance color, etc.
In still another embodiment, when the target is a road bridge, the extracted feature information includes: the width of the road, the name of the road, the speed limit value of the road and the like.
Optionally, the step of detecting abnormal behavior of the target includes: and extracting motion vectors of multiple pixel points of one or more targets, and detecting abnormal behaviors according to the relation between the motion vectors.
In one embodiment, for more details, reference is made to a method of abnormal behavior detection as described above.
In an embodiment, the structured data acquired in the video processing stage is initially set to include at least one of a position of the target, a category of the target, a property of the target, a motion state of the target, a motion trajectory of the target, and a residence time of the target. The method can be adjusted according to the needs of the user, and only the position information of the target is acquired in the video processing stage, or the position and the target category of the target are acquired simultaneously. It will be appreciated that the video processing stage obtains information and the user may select the type of information that needs to be obtained during the video processing stage.
Optionally, after the video structuring is finished, the obtained structured data is uploaded to a cloud server, and the cloud server stores the structured data uploaded by each monitoring node and deeply analyzes the structured data uploaded by each monitoring node to obtain a preset result.
Optionally, the step of deeply analyzing the structured data uploaded by each monitoring node by the cloud server may be set to be performed automatically by the system, or may be performed manually by the user.
In an embodiment, basic analysis contents included in the in-depth analysis of the cloud server are preset, such as the number of statistical pedestrians, target trajectory analysis, whether abnormal behaviors occur in the target, and the number of targets in which the abnormal behaviors occur, and other contents that need to be specially selected by the user, such as the proportion of each time period of the target, the speed of the target, and the like.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (6)
1. A method for video structuring processing based on target behavior attributes is characterized by comprising the following steps:
carrying out target detection and identification on the single-frame picture, and obtaining the structured data of each target;
tracking the target to obtain a tracking result; and/or
Detecting abnormal behaviors of the target;
when abnormal behaviors are detected, starting corresponding emergency measures according to the grade of the abnormal behaviors, storing the screenshot of the single-frame picture, packaging the screenshot of the single-frame picture and the characteristic information of the target with the detected abnormal behaviors, and sending the screenshot of the single-frame picture and the characteristic information of the target to a cloud server;
wherein, the target detection and identification of the single-frame picture further comprises: extracting the characteristic information of the target in the single-frame picture, and performing target detection identification by adopting a YOLOV2 deep learning target detection framework;
the step of extracting the feature information of the target in the single-frame picture further comprises: constructing a metadata structure;
extracting feature information of the target according to a metadata structure, wherein the metadata comprises part of basic attribute information of the target;
the tracking the target to obtain a tracking result further comprises:
predicting a tracking frame of each target in the first plurality of targets in the current frame by combining the tracking chain and a detection frame corresponding to the first plurality of targets in the previous frame picture; the tracking chain is obtained by tracking and calculating a plurality of targets in all single-frame pictures or partial continuous single-frame pictures which are obtained by segmenting from the video before the current frame picture, and gathering track information and empirical values of the plurality of targets in all the previous pictures;
acquiring a tracking frame corresponding to a first plurality of targets in a previous frame of picture in a current frame and a detection frame of a second plurality of targets in the current frame of picture;
establishing a target incidence matrix of a tracking frame of a first plurality of targets in a current frame and a detection frame of a second plurality of targets in the current frame;
correcting by using a target matching algorithm to obtain the actual position corresponding to the first part of targets of the current frame; the first part of targets are targets which are successfully matched with a tracking frame in the current frame and a detection frame in a second plurality of targets in the first plurality of targets;
recording the time when the target enters or leaves the monitoring node and each position where the target passes to obtain the motion track of the target;
after structured data of each target are obtained through detection, correcting false detection data in the structured data, wherein the correction comprises track correction and target attribute correction, voting is carried out according to a weight ratio through the correction, a data value of a majority probability is an accurate value, and a data value of a minority result is a false detection value.
2. The method for video structuring processing based on object behavior attribute according to claim 1, wherein the step of tracking the object to obtain the tracking result further comprises: and sending the structured data of the target object which leaves the current monitoring node to a cloud server.
3. The method for video structuring processing based on object behavior attribute according to claim 1, wherein the step of detecting abnormal behavior of the object comprises:
and extracting optical flow motion information of a plurality of feature points of one or more targets, and carrying out clustering and abnormal behavior detection according to the optical flow motion information.
4. The method for video structuring processing based on target behavior attributes according to claim 1, wherein the abnormal behavior further comprises: at least one of running, fighting, harassment, or traffic anomalies.
5. A video structured processing system based on target behavior attributes, comprising a processor and a memory electrically connected to each other, wherein the processor is coupled to the memory, and when in operation, executes instructions to implement the method according to any one of claims 1 to 4, and stores processing results generated by the execution instructions in the memory.
6. An apparatus having a storage function, characterized in that program data are stored which, when executed, implement the method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711055281.0A CN108009473B (en) | 2017-10-31 | 2017-10-31 | Video structuralization processing method, system and storage device based on target behavior attribute |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711055281.0A CN108009473B (en) | 2017-10-31 | 2017-10-31 | Video structuralization processing method, system and storage device based on target behavior attribute |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108009473A CN108009473A (en) | 2018-05-08 |
CN108009473B true CN108009473B (en) | 2021-08-24 |
Family
ID=62051189
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711055281.0A Expired - Fee Related CN108009473B (en) | 2017-10-31 | 2017-10-31 | Video structuralization processing method, system and storage device based on target behavior attribute |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108009473B (en) |
Families Citing this family (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108810616B (en) * | 2018-05-31 | 2019-06-14 | 广州虎牙信息科技有限公司 | Object localization method, image display method, device, equipment and storage medium |
CN110706247B (en) * | 2018-06-22 | 2023-03-07 | 杭州海康威视数字技术股份有限公司 | Target tracking method, device and system |
CN109188390B (en) * | 2018-08-14 | 2023-05-23 | 苏州大学张家港工业技术研究院 | High-precision detection and tracking method for moving target |
CN108900898A (en) * | 2018-08-21 | 2018-11-27 | 北京深瞐科技有限公司 | Video structural method, apparatus and system |
CN108984799A (en) * | 2018-08-21 | 2018-12-11 | 北京深瞐科技有限公司 | A kind of video data handling procedure and device |
CN109146910B (en) * | 2018-08-27 | 2021-07-06 | 公安部第一研究所 | Video content analysis index evaluation method based on target positioning |
CN109124782B (en) * | 2018-08-29 | 2020-09-22 | 合肥工业大学 | Intelligent integrated endoscope system |
CN109171606B (en) * | 2018-08-29 | 2020-09-01 | 合肥德易电子有限公司 | Intelligent integrated robot endoscope system |
CN109241898B (en) * | 2018-08-29 | 2020-09-22 | 合肥工业大学 | Method and system for positioning target of endoscopic video and storage medium |
CN109171605B (en) * | 2018-08-29 | 2020-09-01 | 合肥工业大学 | Intelligent edge computing system with target positioning and endoscope video enhancement processing functions |
CN109350239A (en) * | 2018-08-29 | 2019-02-19 | 合肥德易电子有限公司 | Intelligent integral robot cavity mirror system with target positioning function |
CN109363614B (en) * | 2018-08-29 | 2020-09-01 | 合肥德易电子有限公司 | Intelligent integrated robot cavity mirror system with high-definition video enhancement processing function |
CN109124702B (en) * | 2018-08-29 | 2020-09-01 | 合肥工业大学 | Intelligent endoscope system with pneumoperitoneum control and central control module |
CN109034124A (en) * | 2018-08-30 | 2018-12-18 | 成都考拉悠然科技有限公司 | A kind of intelligent control method and system |
CN109389543B (en) * | 2018-09-11 | 2022-03-04 | 深圳大学 | Bus operation data statistical method, system, computing device and storage medium |
CN109308469B (en) * | 2018-09-21 | 2019-12-10 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating information |
CN109658128A (en) * | 2018-11-19 | 2019-04-19 | 浙江工业大学 | A kind of shops based on yolo and centroid tracking enters shop rate statistical method |
CN109859250B (en) * | 2018-11-20 | 2023-08-18 | 北京悦图遥感科技发展有限公司 | Aviation infrared video multi-target detection and tracking method and device |
CN111277816B (en) * | 2018-12-05 | 2024-05-14 | 北京奇虎科技有限公司 | Method and device for testing video detection system |
CN109784173A (en) * | 2018-12-14 | 2019-05-21 | 合肥阿巴赛信息科技有限公司 | A kind of shop guest's on-line tracking of single camera |
CN109743497B (en) * | 2018-12-21 | 2020-06-30 | 创新奇智(重庆)科技有限公司 | Data set acquisition method and system and electronic device |
CN109903312B (en) * | 2019-01-25 | 2021-04-30 | 北京工业大学 | Football player running distance statistical method based on video multi-target tracking |
CN111753587B (en) * | 2019-03-28 | 2023-09-29 | 杭州海康威视数字技术股份有限公司 | Ground falling detection method and device |
CN110163103B (en) * | 2019-04-18 | 2021-07-30 | 中国农业大学 | Live pig behavior identification method and device based on video image |
CN110135317A (en) * | 2019-05-08 | 2019-08-16 | 深圳达实智能股份有限公司 | Behavior monitoring and management system and method based on cooperated computing system |
CN110378200A (en) * | 2019-06-03 | 2019-10-25 | 特斯联(北京)科技有限公司 | A kind of intelligent security guard prompt apparatus and method for of Behavior-based control feature clustering |
CN112347301B (en) * | 2019-08-09 | 2024-07-23 | 北京字节跳动网络技术有限公司 | Image special effect processing method, device, electronic equipment and computer readable storage medium |
CN110795595B (en) * | 2019-09-10 | 2024-03-05 | 安徽南瑞继远电网技术有限公司 | Video structured storage method, device, equipment and medium based on edge computing |
CN111126152B (en) * | 2019-11-25 | 2023-04-11 | 国网信通亿力科技有限责任公司 | Multi-target pedestrian detection and tracking method based on video |
CN110706266B (en) * | 2019-12-11 | 2020-09-15 | 北京中星时代科技有限公司 | Aerial target tracking method based on YOLOv3 |
CN111209807A (en) * | 2019-12-25 | 2020-05-29 | 航天信息股份有限公司 | YOLOv3-based video structuring method and system |
CN111524113A (en) * | 2020-04-17 | 2020-08-11 | 中冶赛迪重庆信息技术有限公司 | Lifting chain abnormality identification method, system, equipment and medium |
CN111814783B (en) * | 2020-06-08 | 2024-05-24 | 深圳市富浩鹏电子有限公司 | Accurate license plate positioning method based on license plate vertex offset estimation |
CN111800507A (en) * | 2020-07-06 | 2020-10-20 | 湖北经济学院 | Traffic monitoring method and traffic monitoring system |
CN111914839B (en) * | 2020-07-28 | 2024-03-19 | 特微乐行(广州)技术有限公司 | Synchronous end-to-end license plate positioning and identifying method based on YOLOv3 |
CN113705643B (en) * | 2021-08-17 | 2022-10-28 | 荣耀终端有限公司 | Target detection method and device and electronic equipment |
CN118656516A (en) * | 2024-08-13 | 2024-09-17 | 浩神科技(北京)有限公司 | Video data structuring processing method and system for intelligent video generation |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101902617A (en) * | 2010-06-11 | 2010-12-01 | 公安部第三研究所 | Device and method for realizing video structural description by using DSP and FPGA |
CN103941866A (en) * | 2014-04-08 | 2014-07-23 | 河海大学常州校区 | Three-dimensional gesture recognizing method based on Kinect depth image |
CN104301697A (en) * | 2014-07-15 | 2015-01-21 | 广州大学 | Automatic public place violence incident detection system and method thereof |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101719216B (en) * | 2009-12-21 | 2012-01-04 | 西安电子科技大学 | Moving human abnormal behavior recognition method based on template matching |
CN101859436B (en) * | 2010-06-09 | 2011-12-14 | 王巍 | Large-amplitude regular movement background intelligent analysis and control system |
US9894324B2 (en) * | 2014-07-15 | 2018-02-13 | Alcatel-Lucent Usa Inc. | Method and system for modifying compressive sensing block sizes for video monitoring using distance information |
CN104915655A (en) * | 2015-06-15 | 2015-09-16 | 西安电子科技大学 | Multi-path monitor video management method and device |
CN106683121A (en) * | 2016-11-29 | 2017-05-17 | 广东工业大学 | Robust object tracking method in fusion detection process |
2017
- 2017-10-31 CN CN201711055281.0A patent/CN108009473B/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN108009473A (en) | 2018-05-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108009473B (en) | Video structuralization processing method, system and storage device based on target behavior attribute | |
CN108053427B (en) | Improved multi-target tracking method, system and device based on KCF and Kalman | |
CN108052859B (en) | Abnormal behavior detection method, system and device based on clustering optical flow characteristics | |
CN108062349B (en) | Video monitoring method and system based on video structured data and deep learning | |
CN110738127B (en) | Helmet identification method based on unsupervised deep learning neural network algorithm | |
TWI749113B (en) | Methods, systems and computer program products for generating alerts in a video surveillance system | |
CN104303193B (en) | Target classification based on clustering |
CN112001339A (en) | Pedestrian social distance real-time monitoring method based on YOLO v4 | |
CN102163290B (en) | Method for modeling abnormal events in multi-view video monitoring based on temporal-spatial correlation information |
Singh et al. | Visual big data analytics for traffic monitoring in smart city | |
CN111814638B (en) | Security scene flame detection method based on deep learning | |
CN103986910A (en) | Method and system for passenger flow statistics based on cameras with intelligent analysis function | |
CN108376246A (en) | A multi-face recognition and tracking system and method |
CN110633643A (en) | Abnormal behavior detection method and system for smart community | |
KR102122850B1 (en) | Solution for road analysis and vehicle license plate recognition employing deep learning |
CN111353338B (en) | Energy efficiency improvement method based on business hall video monitoring | |
CN113743256A (en) | Construction site safety intelligent early warning method and device | |
CA3196344A1 (en) | Rail feature identification system | |
CN107330414A (en) | Act of violence monitoring method | |
KR20210062256A (en) | Method, program and system to judge abnormal behavior based on behavior sequence | |
CN104463232A (en) | Dense crowd counting method based on HOG features and color histogram features |
CN114092877A (en) | Design method for an unattended garbage can system based on machine vision |
CN116311166A (en) | Traffic obstacle recognition method and device and electronic equipment | |
CN113657250B (en) | Flame detection method and system based on monitoring video | |
CN113920585A (en) | Behavior recognition method and device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20210824 |