
CN111626275B - Abnormal parking detection method based on intelligent video analysis - Google Patents

Abnormal parking detection method based on intelligent video analysis Download PDF

Info

Publication number
CN111626275B
Authority
CN
China
Prior art keywords
vehicle
target
time
video frame
vehicle target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010748484.3A
Other languages
Chinese (zh)
Other versions
CN111626275A (en)
Inventor
马小骏
贺安鹰
华漪
刘超
邬志烨
白波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Jinzhian Technology Co ltd
Original Assignee
Jiangsu Jinzhian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Jinzhian Technology Co ltd filed Critical Jiangsu Jinzhian Technology Co ltd
Priority to CN202010748484.3A priority Critical patent/CN111626275B/en
Publication of CN111626275A publication Critical patent/CN111626275A/en
Application granted granted Critical
Publication of CN111626275B publication Critical patent/CN111626275B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an abnormal parking detection method based on intelligent video analysis. The method performs target detection on the current video frame with a convolutional neural network model and obtains the attribute information of each detected target in the current video frame. It then determines whether an unmoved vehicle exists in the current video frame by combining this information with a historical tracking queue containing the attribute information of different vehicle targets within a preset time period measured backwards in time from the moment corresponding to the video frame preceding the current one. By additionally detecting pedestrians and checking whether vehicle targets other than the unmoved vehicle target are moving, the method judges the abnormal parking condition comprehensively from these detection results. The method greatly reduces both the false alarm rate and the missed alarm rate of abnormal parking.

Description

Abnormal parking detection method based on intelligent video analysis
Technical Field
The invention relates to the technical field of intelligent video analysis, in particular to an abnormal parking detection method based on intelligent video analysis.
Background
In road traffic, abnormal parking is a traffic violation that reduces traffic efficiency and often causes accidents. Detecting abnormal parking behavior accurately and in a timely manner is therefore a key function of a traffic monitoring system. Intelligent video analysis is widely applied to abnormal parking detection, but because of the limitations of the prior art, false alarms and missed alarms are common; they reduce the accuracy of abnormal parking judgments and greatly limit the practicability of the abnormal parking detection function. In abnormal parking detection based on intelligent video analysis, target detection is the key step: the corresponding functions can be realized only if moving targets such as people and vehicles in the surveillance video are detected correctly.
Target detection algorithms can generally be divided into two broad categories: target detection based on traditional image algorithms, and target detection based on deep learning. Research and practice over the last decade have demonstrated that deep-learning-based target detection is far superior to traditional target detection algorithms in both speed and accuracy.
Disclosure of Invention
The purpose of the invention is as follows: to provide an abnormal parking detection method based on intelligent video analysis that has high accuracy.
The technical scheme is as follows: the invention provides an abnormal parking detection method based on intelligent video analysis, which realizes abnormal parking detection based on a video image captured by a fixed angle monitoring device, and comprises the following steps:
step 1, acquiring a current video frame and detection time corresponding to the video frame;
step 2, performing target detection on the current video frame by using a convolutional neural network model, acquiring each vehicle target in the current video frame, and further acquiring attribute information of the vehicle target in the current video frame; the attribute information of the vehicle target comprises the position of the vehicle in the corresponding video frame, the feature vector of the vehicle, the first time when the vehicle is detected, the last time when the vehicle is detected, and the target ID of the unique identification of the vehicle;
step 3, judging whether a vehicle target which does not move exists in the current video frame or not according to the attribute information of the vehicle target in the current video frame and the tracking queue, and if so, executing step 4; otherwise, correcting the judgment result according to the steps A to C; the tracking queue is a set of attribute information of different vehicle targets in a preset time period from the moment corresponding to the last video frame of the current video frame to the historical moment direction;
step A, for each vehicle target i in the tracking queue, where i is the value of the target ID of that vehicle target in the tracking queue: the survival time Life_Time_IDi of vehicle target i is calculated using equation (8):
Life_Time_IDi = Now_Time - Update_time_IDi    (8)
wherein Now_Time is the detection time corresponding to the current video frame and Update_time_IDi is the detection time at which vehicle target i was last detected;
compare Life_Time_IDi with a preset tracked-vehicle-target expiration threshold Track_Expiration_Threshold, and if
Life_Time_IDi > Track_Expiration_Threshold, delete vehicle target i, including its attribute information, from the tracking queue; add the vehicle targets that appear in the current video frame but do not appear in the tracking queue, including their attribute information, into the tracking queue, and obtain the updated tracking queue;
step B, compare the updated data length Track_Q_Len of the tracking queue with a preset tracking queue length threshold Track_Q_Threshold; if Track_Q_Len < Track_Q_Threshold, execute step 1; otherwise, execute step C;
step C, judging whether a pedestrian target exists in the current video frame, if so, judging that a vehicle target with abnormal parking exists in the current video frame, otherwise, executing the step 1;
step 4, respectively executing the following steps aiming at each vehicle which does not move in the current video frame:
step 401, in combination with the tracking queue, judging whether the staying time of the vehicle target which does not move in the current video frame is larger than a preset abnormal parking time threshold, if so, executing step 402, otherwise, executing step 5;
step 402, judging whether other moving vehicles exist in the current video frame, if so, judging that the vehicle which does not move in the step 401 and has the stay time longer than a preset abnormal parking time threshold value is an abnormal parking, and executing the step 5; otherwise, go to step 403;
step 403, judging whether a pedestrian target exists in the current video frame, if so, judging that the vehicle which does not move and has the stay time longer than a preset abnormal parking time threshold value in the step 401 is abnormal parking, and executing the step 5; otherwise, judging that the vehicle which does not move in the step 401 and has the stay time longer than the preset abnormal parking time threshold value is in non-abnormal parking, and executing the step 5;
and 5, updating the tracking queue information.
In step 2, in the process of performing target detection on the video frame by using the convolutional neural network model: and marking the vehicle target by using the surrounding rectangular frame, acquiring the position coordinates of the surrounding rectangular frame in the corresponding video frame, and marking the acquired position coordinates of the surrounding rectangular frame as the position of the corresponding vehicle in the video frame.
In step 3, the method for determining whether there is a vehicle that does not move in the current video frame specifically includes the following steps:
for each vehicle target j in the current video frame, where j is the value of the target ID of that vehicle target in the current video frame, the following operations are executed:
step 301A, for each vehicle target i in the tracking queue, where i is the value of the target ID of that vehicle target in the tracking queue: referring to equation (1), calculate the intersection-over-union IoU_ij of the bounding rectangular frame of vehicle target i and the bounding rectangular frame of vehicle target j:
IoU_ij = Inter_ij / Union_ij    (1)
wherein Inter_ij is the area of the intersection of the bounding rectangular frame of vehicle target i and the bounding rectangular frame of vehicle target j, and Union_ij is the area of their union; Inter_ij and Union_ij are calculated as follows:
Inter_ij = Inter_W_ij * Inter_H_ij    (2)
Union_ij = (right_i - left_i) * (top_i - bottom_i) + (right_j - left_j) * (top_j - bottom_j) - Inter_ij    (3)
Inter_W_ij = max(0, min(right_i, right_j) - max(left_i, left_j))    (4)
Inter_H_ij = max(0, min(top_i, top_j) - max(bottom_i, bottom_j))    (5)
wherein Inter_W_ij is the width of the intersection of the bounding rectangular frame of vehicle target i and the bounding rectangular frame of vehicle target j, and Inter_H_ij is the height of that intersection; left_i and top_i are the abscissa and ordinate of an arbitrary corner m_i of the bounding rectangular frame of vehicle target i, and right_i and bottom_i are the abscissa and ordinate of the corner diagonally opposite m_i; left_j and top_j are the abscissa and ordinate of an arbitrary corner m_j of the bounding rectangular frame of vehicle target j, and right_j and bottom_j are the abscissa and ordinate of the corner diagonally opposite m_j;
step 302A, compare IoU_ij with a preset rectangular-frame intersection-over-union threshold IoU_Threshold; if IoU_ij < IoU_Threshold, it is judged that vehicle target i and vehicle target j are not at the same spatial position on the road; otherwise, it is judged that vehicle target i and vehicle target j are at the same spatial position on the road, and step 303A is executed;
step 303A, calculate the cosine distance Dist(ID_j, ID_i) between the feature vector of vehicle target i and the feature vector of vehicle target j according to equation (6):
Dist(ID_j, ID_i) = 1 - r_IDi · r_IDj^T    (6)
wherein r_IDi is the normalized feature vector obtained by applying the convolutional neural network to vehicle target i, and r_IDj^T is the transpose of the normalized feature vector obtained by applying the convolutional neural network to vehicle target j;
step 304A, compare Dist(ID_j, ID_i) with a preset feature vector threshold Dist_Threshold; if Dist(ID_j, ID_i) ≤ Dist_Threshold, it is judged that vehicle target i and vehicle target j are the same vehicle and vehicle target j has not moved; otherwise, it is judged that vehicle target j has moved.
In step 3, the method for determining whether there is a vehicle that does not move in the current video frame specifically includes the following steps:
for each vehicle target j in the current video frame, where j is the value of the target ID of that vehicle target in the current video frame, the following operations are executed:
step 301B, for each vehicle target i in the tracking queue, where i is the value of the target ID of that vehicle target in the tracking queue: referring to equation (1), calculate the intersection-over-union IoU_ij of the bounding rectangular frame of vehicle target i and the bounding rectangular frame of vehicle target j:
IoU_ij = Inter_ij / Union_ij    (1)
wherein Inter_ij is the area of the intersection of the bounding rectangular frame of vehicle target i and the bounding rectangular frame of vehicle target j, and Union_ij is the area of their union; Inter_ij and Union_ij are calculated as follows:
Inter_ij = Inter_W_ij * Inter_H_ij    (2)
Union_ij = (right_i - left_i) * (top_i - bottom_i) + (right_j - left_j) * (top_j - bottom_j) - Inter_ij    (3)
Inter_W_ij = max(0, min(right_i, right_j) - max(left_i, left_j))    (4)
Inter_H_ij = max(0, min(top_i, top_j) - max(bottom_i, bottom_j))    (5)
wherein Inter_W_ij is the width of the intersection of the bounding rectangular frame of vehicle target i and the bounding rectangular frame of vehicle target j, and Inter_H_ij is the height of that intersection; left_i and top_i are the abscissa and ordinate of an arbitrary corner m_i of the bounding rectangular frame of vehicle target i, and right_i and bottom_i are the abscissa and ordinate of the corner diagonally opposite m_i; left_j and top_j are the abscissa and ordinate of an arbitrary corner m_j of the bounding rectangular frame of vehicle target j, and right_j and bottom_j are the abscissa and ordinate of the corner diagonally opposite m_j;
step 302B, compare IoU_ij with a preset rectangular-frame intersection-over-union threshold IoU_Threshold; if IoU_ij < IoU_Threshold, it is judged that vehicle target i and vehicle target j are not at the same spatial position on the road; otherwise, it is judged that vehicle target i and vehicle target j are at the same spatial position on the road, and step 303B is executed;
step 303B, calculate the cosine distance Dist'(ID_j, ID_i) between the feature vectors of vehicle target i and the feature vector of vehicle target j according to equation (7):
Dist'(ID_j, ID_i) = 1 - (1/M) * Σ_{n=1..M} r_IDi^(n) · r_IDj^T    (7)
wherein r_IDi^(n) is the normalized feature vector obtained by applying the convolutional neural network to vehicle target i in the n-th historical video frame in which vehicle target i appears, M is the number of historical video frames in which vehicle target i appears, and r_IDj^T is the transpose of the normalized feature vector obtained by applying the convolutional neural network to vehicle target j;
step 304B, compare Dist'(ID_j, ID_i) with a preset feature vector threshold Dist_Threshold; if Dist'(ID_j, ID_i) ≤ Dist_Threshold, it is judged that vehicle target i and vehicle target j are the same vehicle and vehicle target j has not moved; otherwise, it is judged that vehicle target j has moved.
In step C, after it is determined that there is a vehicle target with abnormal parking in the current video frame, the method further includes reporting an abnormal parking event.
In step 401, the method further comprises: calculating the dwell time Parking_time_s of the vehicle target s that has not moved in the current video frame using equation (9):
Parking_time_s = Update_time_IDs - Init_time_IDd    (9)
wherein Update_time_IDs is the time at which the vehicle target s that has not moved in the current video frame was last detected, and Init_time_IDd is the time at which the vehicle target d in the tracking queue that corresponds to the vehicle target s that has not moved in the current video frame was first detected; s is the value of the target ID of the vehicle target that has not moved in the current video frame, and d is the value of the target ID of the vehicle target in the tracking queue that corresponds to the vehicle target s that has not moved in the current video frame.
In step 402 and in step 403, after it is determined that the vehicle which does not move in step 401 and has the stay time longer than the preset abnormal parking time threshold is abnormally parked, the method further includes reporting an abnormal parking event.
In step 5, the method for updating the tracking queue information comprises the following steps:
step 501, replacing the vehicle position, the vehicle feature vector and the time at which the vehicle was last detected of the corresponding vehicle target in the tracking queue with the vehicle position, the vehicle feature vector and the detection time of the vehicle target that has not moved in the current video frame;
step 502, for each vehicle target k in the tracking queue other than the vehicle targets involved in the operation of step 501, the survival time Life_Time_IDk of vehicle target k is calculated using equation (10):
Life_Time_IDk = Now_Time - Update_time_IDk    (10)
wherein Now_Time is the detection time corresponding to the current video frame and Update_time_IDk is the detection time at which vehicle target k was last detected;
compare Life_Time_IDk with the preset tracked-vehicle-target expiration threshold Track_Expiration_Threshold: if
Life_Time_IDk > Track_Expiration_Threshold, remove vehicle target k, including its attribute information, from the tracking queue; otherwise, retain vehicle target k without changing its attribute information; and add the attribute information of the vehicle targets that appear in the current video frame but do not appear in the tracking queue into the tracking queue.
The convolutional neural network model is a YoloV3 convolutional neural network model.
Advantageous effects: compared with the prior art, the method provided by the invention has the following advantages:
(1) for the situation in which the video image is not clear enough, or occlusion is so severe that a vehicle cannot be tracked correctly, adding pedestrian detection effectively reduces the missed alarm rate of abnormal parking;
(2) by adding pedestrian detection, detecting whether vehicle targets other than the unmoved vehicle target are moving, and judging the abnormal parking condition comprehensively from these detection results, the false alarm rate and the missed alarm rate of abnormal parking are greatly reduced.
Drawings
Fig. 1 is a flowchart of an abnormal parking detection method provided according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of object detection provided in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of abnormal parking detection at time t1 according to an embodiment of the present invention;
fig. 4 is a schematic diagram of abnormal parking detection at time t2 according to the embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Referring to fig. 1, the present invention performs abnormal parking detection based on a video image captured by a fixed-angle monitoring device; the method provided by the invention comprises the following steps:
step S1, inputting a first video frame, recording the detection time t1 corresponding to the video frame, and marking the detection time t1 as the time stamp of the video frame; the process advances to step S2.
Step S2, carrying out target detection on the video frame obtained in step S1 by using a pre-trained deep convolutional neural network model, and entering step S3; the deep convolutional neural network model used in this embodiment is the YoloV3 convolutional neural network model; the method specifically comprises the following steps:
step S201, loading the YoloV3 convolutional neural network model onto the GPU graphics card and setting the corresponding weight parameters;
step S202, carrying out target detection on the input video frame by using a YoloV3 convolutional neural network model, and marking the detected target by using a surrounding rectangular frame; when the target detection is carried out, the detected targets comprise vehicle targets and pedestrian targets, and other targets comprising traffic lights, traffic signs and the like are ignored and are not processed;
the rectangular box of each marker has six attributes, which are: the target ID, the target category, the target initial time, the target updating time, the feature vector of the target, the target ID of the unique target identification and the position coordinates of the bounding rectangular frame of the target in the corresponding video frame are used for identifying the detected target;
wherein the object class comprises a vehicle or a pedestrian; the target initial time is the time of the first video frame where the target appears, namely the time when the target is detected for the first time; the target update time is the time of the latest video frame appearing latest by the target, namely the time when the target is detected last time; the characteristic vector of the target is an image characteristic describing the target, the characteristic vectors of different targets are different, and the position coordinates of the target enclosing rectangular frame in the corresponding video frame are used for identifying the position of the corresponding vehicle in the video frame.
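As a concrete illustration of the six attributes listed above, the following is a minimal Python sketch of a data structure for a detected target; the class name, field names and example values are illustrative choices, not identifiers from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedTarget:
    """One detected target with the six attributes described above."""
    target_id: int              # unique identifier of the target
    category: str               # "vehicle" or "pedestrian"
    init_time: float            # time the target was first detected (target initial time)
    update_time: float          # time the target was last detected (target update time)
    feature: List[float]        # feature vector describing the target's appearance
    box: Tuple[int, int, int, int]   # (left, top, right, bottom) of the bounding rectangle

# Example: a vehicle first seen at t = 10.0 s and last seen at t = 12.5 s.
car = DetectedTarget(target_id=7, category="vehicle",
                     init_time=10.0, update_time=12.5,
                     feature=[0.0] * 128, box=(100, 50, 260, 170))
```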
Referring to the schematic diagram of object detection shown in fig. 2: each detected target is surrounded by a rectangular frame, and a digital ID is arranged above each surrounding rectangular frame and used for distinguishing different targets; in practical applications, the surrounding rectangular frames of the vehicle object and the pedestrian object can be distinguished by different colors, in this embodiment, the surrounding rectangular frame of the vehicle object is green, and the surrounding rectangular frame of the pedestrian object is red.
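For illustration, the sketch below shows one way to run a YoloV3 detector on a video frame with OpenCV's DNN module and keep only vehicle and pedestrian detections, as step S202 describes. The configuration and weight file paths, the input size, and the confidence and NMS thresholds are assumptions made for this example, not values specified in the patent.

```python
import cv2
import numpy as np

# Assumed file locations and detection parameters (not from the patent).
CFG, WEIGHTS = "yolov3.cfg", "yolov3.weights"
CONF_THR, NMS_THR, INPUT_SIZE = 0.5, 0.4, (416, 416)
VEHICLE_CLASSES = {2, 5, 7}   # COCO class ids: car, bus, truck
PERSON_CLASS = 0              # COCO class id: person

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
# Optionally move inference to the GPU, as step S201 does (requires a CUDA-enabled OpenCV build):
# net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

def detect_vehicles_and_pedestrians(frame):
    """Return (vehicle_boxes, pedestrian_boxes); each box is (left, top, right, bottom)."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, INPUT_SIZE, swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, scores, class_ids = [], [], []
    for out in outputs:
        for row in out:                       # row = [cx, cy, bw, bh, objectness, class scores...]
            class_scores = row[5:]
            cls = int(np.argmax(class_scores))
            score = float(class_scores[cls])
            if score < CONF_THR or cls not in VEHICLE_CLASSES | {PERSON_CLASS}:
                continue                      # ignore traffic lights, signs and other classes
            cx, cy, bw, bh = row[0] * w, row[1] * h, row[2] * w, row[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(score)
            class_ids.append(cls)

    keep = cv2.dnn.NMSBoxes(boxes, scores, CONF_THR, NMS_THR)
    vehicles, pedestrians = [], []
    for i in np.array(keep).flatten():
        x, y, bw, bh = boxes[i]
        box = (x, y, x + bw, y + bh)
        (vehicles if class_ids[i] in VEHICLE_CLASSES else pedestrians).append(box)
    return vehicles, pedestrians
```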
Step S3, placing the vehicle targets detected in steps S1 and S2 into a tracking queue Q, and entering step 1;
each vehicle target in the tracking queue Q is called a tracked vehicle target; each vehicle target includes six attributes: target ID, target category, target initial time, target update time, feature vector of the target, and position coordinates of the target's bounding rectangular frame; the tracking queue Q is the set of attribute information of different vehicle targets within a preset time period measured backwards in time from the moment corresponding to the video frame preceding the current video frame.
In this embodiment, the video frame at the next time instant of the time instant t1 is the current video frame, the detection time mark corresponding to the video frame is t2, and the abnormal parking detection at the time instant t2 is performed according to the video frame at the time instant t2 and the tracking queue Q;
step 1, acquiring a video frame at the time t2, and entering step 2;
step 2, referring to step S202, performing target detection on the input video frame by using a YoloV3 convolutional neural network model, marking the detected target by using a bounding rectangle frame, acquiring attribute information of the detected target, referring the detected vehicle target as a detected vehicle target, and entering step 3;
step 3, comparing each detected vehicle target in the video frame at the time t2 with each tracked vehicle target in the tracking queue Q; judging whether a tracked vehicle target does not move within a certain time period in the video frame at the time t2 according to the comparison result; if yes, executing step 4; otherwise, the judgment result is corrected.
In one embodiment, the method of determining whether the tracked vehicle is moving is as follows:
performing the following operation on each detected vehicle target j in the video frame at the time t2, wherein j is the value of the target ID of the vehicle target in the video frame;
step 301A, for each vehicle target i in the tracking queue Q, where i is the value of the target ID of that vehicle target in the tracking queue:
the bounding rectangular frame of vehicle target i is denoted box_i: (left_i, top_i, right_i, bottom_i), and the bounding rectangular frame of vehicle target j is denoted box_j: (left_j, top_j, right_j, bottom_j), where left_i and top_i are the abscissa and ordinate of an arbitrary corner m_i of the bounding rectangular frame of vehicle target i, and right_i and bottom_i are the abscissa and ordinate of the corner diagonally opposite m_i; left_j and top_j are the abscissa and ordinate of an arbitrary corner m_j of the bounding rectangular frame of vehicle target j, and right_j and bottom_j are the abscissa and ordinate of the corner diagonally opposite m_j;
in this embodiment, left_i and top_i are the abscissa and ordinate of the upper-left corner of the bounding rectangular frame of vehicle target i, and right_i and bottom_i are the abscissa and ordinate of its lower-right corner; left_j and top_j are the abscissa and ordinate of the upper-left corner of the bounding rectangular frame of vehicle target j, and right_j and bottom_j are the abscissa and ordinate of its lower-right corner.
Referring to equation (1), the intersection-over-union IoU_ij of the bounding rectangular frame box_i of vehicle target i and the bounding rectangular frame box_j of vehicle target j is calculated:
IoU_ij = Inter_ij / Union_ij    (1)
wherein Inter_ij is the area of the intersection of box_i and box_j, and Union_ij is the area of the union of box_i and box_j; Inter_ij and Union_ij are calculated as follows:
Inter_ij = Inter_W_ij * Inter_H_ij    (2)
Union_ij = (right_i - left_i) * (top_i - bottom_i) + (right_j - left_j) * (top_j - bottom_j) - Inter_ij    (3)
wherein Inter_W_ij is the width of the intersection of box_i and box_j, and Inter_H_ij is the height of the intersection; Inter_W_ij and Inter_H_ij are calculated as follows:
Inter_W_ij = max(0, min(right_i, right_j) - max(left_i, left_j))    (4)
Inter_H_ij = max(0, min(top_i, top_j) - max(bottom_i, bottom_j))    (5)
the Intersection ratio (IoU) is the ratio of the Intersection and the Union of the surrounding rectangular frames of the vehicle target i and the vehicle target j, and the value is between [0 and 1 ]:
when IoU is 0, the bounding rectangular boxes representing the two vehicle targets do not intersect, indicating that vehicle target i and vehicle target j are not the same vehicle, or understood as: even with the same vehicle, the vehicle has moved because they are already located at different road locations on the video frame;
when IoU is equal to 1, the bounding rectangular boxes representing the vehicle object i and the vehicle object j coincide, indicating that the vehicle object i and the vehicle object j are located at the same spatial position on the road at the time t2 and the time t1, but the vehicle object i and the vehicle object j may be the same vehicle or different vehicles; if the bounding rectangular frames of the vehicle object i and the vehicle object j overlap, but the vehicle object i and the vehicle object j are different vehicles, the vehicle object i is separated from the original position at time t2, and the vehicle object j is moved to the original position at time t 2.
Step 302A, because of technical limitations, the target detection algorithm cannot guarantee that the generated bounding rectangular frame encloses the vehicle target or pedestrian target completely, accurately and stably; even for the same vehicle target that has not moved during the interval between two video frames, the positions of the bounding rectangular frames generated for it by the target detection algorithm may differ slightly. The invention therefore presets a threshold IoU_Threshold and compares the calculated intersection-over-union with IoU_Threshold to improve the accuracy of the judgment; 0 ≤ IoU_Threshold ≤ 1.
Compare IoU_ij with the preset rectangular-frame intersection-over-union threshold IoU_Threshold:
if IoU_ij < IoU_Threshold, it is judged that vehicle target i and vehicle target j are not at the same spatial position on the road, i.e. vehicle target j has moved in road space relative to vehicle target i;
if IoU_ij ≥ IoU_Threshold, it is judged that vehicle target j has not moved on the road relative to vehicle target i, and step 303A is executed to further determine whether vehicle target j and vehicle target i are the same vehicle.
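The intersection-over-union computation of equations (1) to (5) and the threshold test of step 302A can be written directly as a small Python sketch; the example boxes and the threshold value below are chosen only for illustration.

```python
def iou(box_i, box_j):
    """IoU of two bounding rectangles (left, top, right, bottom), following equations (1)-(5).

    The area term (right - left) * (top - bottom) in equation (3) assumes an ordinate that
    increases upwards (top > bottom); with image coordinates, swap top/bottom beforehand.
    """
    left_i, top_i, right_i, bottom_i = box_i
    left_j, top_j, right_j, bottom_j = box_j

    inter_w = max(0.0, min(right_i, right_j) - max(left_i, left_j))   # equation (4)
    inter_h = max(0.0, min(top_i, top_j) - max(bottom_i, bottom_j))   # equation (5)
    inter = inter_w * inter_h                                         # equation (2)
    union = ((right_i - left_i) * (top_i - bottom_i)
             + (right_j - left_j) * (top_j - bottom_j) - inter)       # equation (3)
    return inter / union if union > 0 else 0.0                        # equation (1)

# Step 302A: the two targets share a road position only if IoU_ij reaches the preset threshold.
IOU_THRESHOLD = 0.6                                # illustrative value
box_i = (100.0, 200.0, 260.0, 80.0)                # (left, top, right, bottom), top > bottom
box_j = (105.0, 198.0, 262.0, 82.0)
same_position = iou(box_i, box_j) >= IOU_THRESHOLD
```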
Step 303A, calculate the cosine distance Dist(ID_j, ID_i) between the feature vector of vehicle target i and the feature vector of vehicle target j according to equation (6):
Dist(ID_j, ID_i) = 1 - r_IDi · r_IDj^T    (6)
wherein r_IDi is the normalized 128-dimensional feature vector obtained by applying the convolutional neural network to vehicle target i, and r_IDj^T is the transpose of the normalized 128-dimensional feature vector obtained by applying the convolutional neural network to vehicle target j; Dist(ID_j, ID_i) ranges between 0 and 2: the closer the value of Dist(ID_j, ID_i) is to 0, the greater the similarity between the two feature vectors, whereas a larger value of Dist(ID_j, ID_i) indicates a lower similarity between the two feature vectors.
Step 304A, compare Dist(ID_j, ID_i) with the preset feature vector threshold Dist_Threshold, where 0 ≤ Dist_Threshold ≤ 2:
if Dist(ID_j, ID_i) ≤ Dist_Threshold, it is judged that vehicle target i and vehicle target j are the same vehicle and vehicle target j has not moved;
if Dist(ID_j, ID_i) > Dist_Threshold, vehicle target i and vehicle target j are not considered to be the same vehicle.
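Under the reading of equation (6) used above, the cosine distance between two L2-normalized feature vectors is one minus their dot product, which lies between 0 and 2. The following sketch, with randomly generated 128-dimensional vectors and an illustrative threshold, shows the comparison of step 304A; it assumes the vectors produced by the feature network are (or are made) unit-length.

```python
import numpy as np

def cosine_distance(r_i, r_j):
    """Dist(ID_j, ID_i) = 1 - r_i . r_j^T for L2-normalized feature vectors (equation (6))."""
    r_i = r_i / np.linalg.norm(r_i)   # normalize defensively
    r_j = r_j / np.linalg.norm(r_j)
    return 1.0 - float(np.dot(r_i, r_j))

DIST_THRESHOLD = 0.3                  # illustrative value, 0 <= Dist_Threshold <= 2

rng = np.random.default_rng(0)
feat_i = rng.standard_normal(128)     # feature of tracked vehicle target i
feat_j = rng.standard_normal(128)     # feature of detected vehicle target j

# Step 304A: same vehicle (and therefore not moved) only if the distance is small enough.
same_vehicle = cosine_distance(feat_i, feat_j) <= DIST_THRESHOLD
```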
In another embodiment, the method of determining whether the tracked vehicle is moving is as follows:
for each vehicle target j in the current video frame, where j is the value of the target ID of that vehicle target in the current video frame, the following operations are executed:
step 301B, for each vehicle target i in the tracking queue, where i is the value of the target ID of that vehicle target in the tracking queue: referring to equation (1), calculate the intersection-over-union IoU_ij of the bounding rectangular frame of vehicle target i and the bounding rectangular frame of vehicle target j:
IoU_ij = Inter_ij / Union_ij    (1)
wherein Inter_ij is the area of the intersection of the bounding rectangular frame of vehicle target i and the bounding rectangular frame of vehicle target j, and Union_ij is the area of their union; Inter_ij and Union_ij are calculated as follows:
Inter_ij = Inter_W_ij * Inter_H_ij    (2)
Union_ij = (right_i - left_i) * (top_i - bottom_i) + (right_j - left_j) * (top_j - bottom_j) - Inter_ij    (3)
Inter_W_ij = max(0, min(right_i, right_j) - max(left_i, left_j))    (4)
Inter_H_ij = max(0, min(top_i, top_j) - max(bottom_i, bottom_j))    (5)
wherein Inter_W_ij is the width of the intersection of the bounding rectangular frame of vehicle target i and the bounding rectangular frame of vehicle target j, and Inter_H_ij is the height of that intersection; left_i and top_i are the abscissa and ordinate of the upper-left corner of the bounding rectangular frame of vehicle target i, and right_i and bottom_i are the abscissa and ordinate of its lower-right corner; left_j and top_j are the abscissa and ordinate of the upper-left corner of the bounding rectangular frame of vehicle target j, and right_j and bottom_j are the abscissa and ordinate of its lower-right corner;
step 302B, compare IoU_ij with the preset rectangular-frame intersection-over-union threshold IoU_Threshold; if IoU_ij < IoU_Threshold, it is judged that vehicle target i and vehicle target j are not at the same spatial position on the road; otherwise, it is judged that vehicle target i and vehicle target j are at the same spatial position on the road, and step 303B is executed;
step 303B, to prevent an error in a single target detection from distorting the calculation of the cosine distance between the feature vectors of vehicle target i and vehicle target j, the system keeps the feature vectors of the same vehicle target i over its past M frames (M ≥ 1), and calculates the cosine distance Dist'(ID_j, ID_i) between the feature vectors of vehicle target i and the feature vector of vehicle target j according to equation (7):
Dist'(ID_j, ID_i) = 1 - (1/M) * Σ_{n=1..M} r_IDi^(n) · r_IDj^T    (7)
wherein r_IDi^(n) is the normalized feature vector obtained by applying the convolutional neural network to vehicle target i in the n-th historical video frame in which vehicle target i appears, M is the number of historical video frames in which vehicle target i appears, and r_IDj^T is the transpose of the normalized feature vector obtained by applying the convolutional neural network to vehicle target j;
step 304B, compare Dist'(ID_j, ID_i) with the preset feature vector threshold Dist_Threshold, where 0 ≤ Dist_Threshold ≤ 2; if Dist'(ID_j, ID_i) ≤ Dist_Threshold, it is judged that vehicle target i and vehicle target j are the same vehicle and vehicle target j has not moved; otherwise, vehicle target i and vehicle target j are not the same vehicle.
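In this second embodiment the distance is computed against the feature vectors stored for vehicle target i over its past M frames. The sketch below reproduces equation (7) as an average of the per-frame dot products, which is one natural reading of keeping M past vectors to damp single-detection errors; treating the aggregation as an average (rather than, say, a minimum) is an assumption of this example.

```python
import numpy as np

def gallery_cosine_distance(history_i, r_j):
    """Dist'(ID_j, ID_i): cosine distance of r_j against the M stored vectors of target i.

    history_i: list of M feature vectors of vehicle target i (one per past frame).
    Averaging the M dot products is an assumption of this sketch, not fixed by the patent.
    """
    r_j = r_j / np.linalg.norm(r_j)
    sims = [float(np.dot(r_n / np.linalg.norm(r_n), r_j)) for r_n in history_i]
    return 1.0 - sum(sims) / len(sims)

DIST_THRESHOLD = 0.3                                        # illustrative value

rng = np.random.default_rng(1)
history_i = [rng.standard_normal(128) for _ in range(5)]    # M = 5 stored frames of target i
feat_j = rng.standard_normal(128)                           # feature of detected target j

same_vehicle = gallery_cosine_distance(history_i, feat_j) <= DIST_THRESHOLD
```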
According to the method described above, it is judged whether at least one vehicle target in the current video frame has not moved; if no unmoved vehicle exists in the current video frame, the judgment result is corrected, which specifically comprises the following steps:
step A, for each vehicle target i in the tracking queue, where i is the value of the target ID of that vehicle target in the tracking queue, the survival time Life_Time_IDi of vehicle target i is calculated using equation (8):
Life_Time_IDi = Now_Time - Update_time_IDi    (8)
wherein Now_Time is the detection time corresponding to the current video frame and Update_time_IDi is the detection time at which vehicle target i was last detected.
Compare Life_Time_IDi with the preset tracked-vehicle-target expiration threshold Track_Expiration_Threshold:
if Life_Time_IDi > Track_Expiration_Threshold, delete vehicle target i from the tracking queue Q; otherwise, keep tracking vehicle target i without changing any of its attribute information;
then add each newly detected vehicle target into the tracking queue, namely: add the vehicle targets that appear in the current video frame but do not appear in the tracking queue, including their attribute information, into the tracking queue;
step B, compare the updated data length Track_Q_Len of the tracking queue with the preset tracking queue length threshold Track_Q_Threshold:
if Track_Q_Len < Track_Q_Threshold, the road is considered not to be congested, and step 1 is executed;
if Track_Q_Len ≥ Track_Q_Threshold, the road is considered to be congested, and step C is executed;
the tracking queue length threshold Track_Q_Threshold is set empirically.
Step C, judge whether a pedestrian target exists in the current video frame; if so, it is judged that a vehicle target that is abnormally parked exists in the current video frame, and an abnormal parking event is reported; otherwise, step 1 is executed.
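The correction steps A to C above can be summarised in a short sketch. The tracking queue is represented here as a dictionary keyed by target ID, and the threshold values and the pedestrian-presence flag are illustrative assumptions rather than quantities fixed by the patent.

```python
TRACK_EXPIRATION_THRESHOLD = 10.0   # seconds, illustrative
TRACK_Q_THRESHOLD = 20              # queue length threshold, set empirically

def correct_judgment(tracking_queue, current_vehicles, pedestrian_present, now):
    """Steps A-C: run when no unmoved vehicle was found in the current frame.

    tracking_queue:   dict {target_id: attributes}, each attributes dict holding 'update_time'
    current_vehicles: dict {target_id: attributes} of vehicle targets detected in this frame
    Returns True if an abnormal parking event should be reported (step C), else False.
    """
    # Step A: drop expired tracked targets, then add newly seen vehicle targets.
    for tid in list(tracking_queue):
        life_time = now - tracking_queue[tid]["update_time"]        # equation (8)
        if life_time > TRACK_EXPIRATION_THRESHOLD:
            del tracking_queue[tid]
    for tid, attrs in current_vehicles.items():
        if tid not in tracking_queue:
            tracking_queue[tid] = attrs

    # Step B: a long queue is read as a congested road.
    if len(tracking_queue) < TRACK_Q_THRESHOLD:
        return False                 # not congested: go back to step 1

    # Step C: congestion plus a pedestrian in the scene -> report abnormal parking.
    return pedestrian_present
```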
Step 4, respectively executing steps 401 to 403 for each vehicle which does not move in the current video frame:
step 401, in combination with the tracking queue Q, judging whether the staying time of the vehicle target which does not move in the video frame at the time t2 is greater than a preset abnormal parking time threshold, if so, executing step 402, otherwise, executing step 5;
According to the method described above, the bounding rectangular frame of vehicle target j in the video frame at time t2 is compared with the bounding rectangular frames of the vehicle targets in the tracking queue by intersection-over-union analysis. When the intersection-over-union analysis shows that vehicle target j in the video frame at time t2 has not changed its spatial position on the road relative to vehicle target i in the tracking queue, the cosine distance between the feature vectors of the two vehicle targets is calculated and analysed. When this analysis indicates that the two vehicle targets are the same vehicle, it is judged that vehicle target j in the video frame at time t2 has not moved; vehicle target j and vehicle target i are considered matched, and vehicle target i is the vehicle in the tracking queue that corresponds to vehicle target j.
The dwell time Parking_time_s of a vehicle target s that has not moved in the video frame at time t2 is calculated using equation (9):
Parking_time_s = Update_time_IDs - Init_time_IDd    (9)
wherein Update_time_IDs is the time at which the vehicle target s that has not moved in the current video frame was last detected, i.e. the detection time corresponding to the current video frame; Init_time_IDd is the time at which the vehicle target d in the tracking queue that corresponds to vehicle target s was first detected; s is the value of the target ID of the vehicle target that has not moved in the current video frame, and d is the value of the target ID of the vehicle target in the tracking queue that corresponds to the vehicle target s that has not moved in the current video frame.
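Equation (9) is a single subtraction between two of the stored attributes; a minimal sketch follows, with the abnormal parking time threshold chosen only for illustration.

```python
ABNORMAL_PARKING_TIME_THRESHOLD = 120.0   # seconds, illustrative value

def parking_time(update_time_s, init_time_d):
    """Parking_time_s = Update_time_IDs - Init_time_IDd (equation (9))."""
    return update_time_s - init_time_d

# Step 401: the unmoved vehicle is examined further only if it has dwelt long enough.
dwell = parking_time(update_time_s=310.0, init_time_d=100.0)   # 210 s in this example
exceeds_threshold = dwell > ABNORMAL_PARKING_TIME_THRESHOLD
```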
Step 402, detect whether any vehicle target other than the unmoved vehicle target is moving in the current video frame; if so, it is judged that the vehicle found in step 401 that has not moved and whose dwell time exceeds the preset abnormal parking time threshold is abnormally parked, an abnormal parking event is reported, and step 5 is executed; otherwise, step 403 is executed.
The method for detecting whether any other vehicle target is moving in the current video frame is to monitor the tracking queue and check whether the following two conditions are met simultaneously:
condition 1: a tracked vehicle target has been deleted from the tracking queue because its survival time Life_Time_IDi > Track_Expiration_Threshold;
condition 2: a newly detected vehicle target has been added to the tracking queue.
If both conditions are met simultaneously, another vehicle target is moving in the video; otherwise, no other vehicle target is moving in the video.
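The two-condition check of step 402 can be expressed over the identifiers in the tracking queue before and after the update; the function below is a sketch under the assumption that the caller records which targets were deleted for expiration and which were newly added.

```python
def other_vehicles_moving(expired_ids, added_ids):
    """Step 402 check: another vehicle is moving only if both conditions hold.

    expired_ids: IDs removed from the queue because Life_Time > Track_Expiration_Threshold
    added_ids:   IDs of newly detected vehicle targets added to the tracking queue
    """
    condition_1 = len(expired_ids) > 0     # a tracked target expired and was removed
    condition_2 = len(added_ids) > 0       # a new detected target was added
    return condition_1 and condition_2

# Example: one target expired and one new target appeared, so some other vehicle moved.
print(other_vehicles_moving(expired_ids={3}, added_ids={9}))   # True
```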
Step 403, judge whether a pedestrian target exists in the current video frame; if so, it is judged that the vehicle found in step 401 that has not moved and whose dwell time exceeds the preset abnormal parking time threshold is abnormally parked, an abnormal parking event is reported, and step 5 is executed; otherwise, it is judged that the current road is congested, i.e. the vehicle found in step 401 that has not moved and whose dwell time exceeds the preset abnormal parking time threshold is not abnormally parked, and step 5 is executed;
in fig. 3 and 4, a pedestrian target is detected, fig. 3 and 4 are schematic diagrams of abnormal parking monitoring at two different moments, respectively, and it is found by comparison that a vehicle target with a target ID of C1 or C2 in fig. 3 and 4 does not move in a corresponding time period.
Step 5, updating the tracking queue information, and the specific method comprises the following steps:
step 501, for each vehicle target in the tracking queue that corresponds to a vehicle target that has not moved in the current video frame, update its target update time, its feature vector, and the position coordinates of its bounding rectangular frame in the corresponding video frame to the target update time, feature vector, and bounding-frame position coordinates of the corresponding vehicle target at time t2; namely: replace the vehicle position, the vehicle feature vector, and the time at which the vehicle was last detected of the corresponding vehicle target in the tracking queue with the vehicle position, the vehicle feature vector, and the detection time of the vehicle target that has not moved in the current video frame;
step 502, for each vehicle target k in the tracking queue other than the vehicle targets involved in the operation of step 501, the survival time Life_Time_IDk of vehicle target k is calculated using equation (10):
Life_Time_IDk = Now_Time - Update_time_IDk    (10)
wherein Now_Time is the detection time corresponding to the current video frame and Update_time_IDk is the detection time at which vehicle target k was last detected;
compare Life_Time_IDk with the preset tracked-vehicle-target expiration threshold Track_Expiration_Threshold:
if Life_Time_IDk > Track_Expiration_Threshold, remove vehicle target k from the tracking queue; otherwise, retain vehicle target k without changing any of its attribute information; then add each newly detected vehicle target into the tracking queue, namely: add the attribute information of the vehicle targets that appear in the current video frame but do not appear in the tracking queue into the tracking queue.
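Steps 501 and 502 can be sketched as a single update function over the same dictionary-based tracking queue used in the earlier sketches; the attribute names and the expiration threshold are illustrative assumptions.

```python
TRACK_EXPIRATION_THRESHOLD = 10.0   # seconds, illustrative

def update_tracking_queue(tracking_queue, unmoved_vehicles, current_vehicles, now):
    """Step 5: refresh matched targets, expire stale ones, and absorb new detections.

    tracking_queue:   dict {target_id: attrs} with 'box', 'feature', 'update_time' entries
    unmoved_vehicles: dict {queue_target_id: frame_attrs} for targets judged not to have moved
    current_vehicles: dict {target_id: attrs} of all vehicle targets detected in this frame
    """
    # Step 501: overwrite position, feature vector and last-detected time of matched targets.
    for tid, frame_attrs in unmoved_vehicles.items():
        tracking_queue[tid]["box"] = frame_attrs["box"]
        tracking_queue[tid]["feature"] = frame_attrs["feature"]
        tracking_queue[tid]["update_time"] = frame_attrs["update_time"]

    # Step 502: expire every other target whose survival time exceeds the threshold.
    for tid in list(tracking_queue):
        if tid in unmoved_vehicles:
            continue
        life_time = now - tracking_queue[tid]["update_time"]        # equation (10)
        if life_time > TRACK_EXPIRATION_THRESHOLD:
            del tracking_queue[tid]

    # Finally, add vehicle targets seen in this frame that are not yet being tracked.
    for tid, attrs in current_vehicles.items():
        if tid not in tracking_queue:
            tracking_queue[tid] = attrs
```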
The invention provides an abnormal parking detection method based on intelligent video analysis. For the situation in which the video image is not clear enough, or occlusion is so severe that a vehicle cannot be tracked correctly, adding pedestrian detection effectively reduces the missed alarm rate of abnormal parking; by adding pedestrian detection and checking whether any vehicle target other than the unmoved vehicle target is still moving, the false alarm rate and the missed alarm rate of abnormal parking are greatly reduced.
The above description is only a preferred embodiment of the present invention, and it will be apparent to those skilled in the art that various modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be considered as the protection scope of the present invention.

Claims (9)

1. An abnormal parking detection method based on intelligent video analysis, characterized in that the method comprises the following steps:
step 1, acquiring a current video frame and detection time corresponding to the video frame;
step 2, performing target detection on the current video frame by using a convolutional neural network model, acquiring each vehicle target in the current video frame, and further acquiring attribute information of the vehicle target in the current video frame; the attribute information of the vehicle target comprises the position of the vehicle in the corresponding video frame, the feature vector of the vehicle, the first time when the vehicle is detected, the last time when the vehicle is detected, and the target ID of the unique identification of the vehicle;
step 3, judging whether a vehicle target which does not move exists in the current video frame or not according to the attribute information of the vehicle target in the current video frame and the tracking queue, and if so, executing step 4; otherwise, correcting the judgment result according to the steps A to C; the tracking queue is a set of attribute information of different vehicle targets in a preset time period from the moment corresponding to the last video frame of the current video frame to the historical moment direction;
step A, for each vehicle target i in the tracking queue, where i is the value of the target ID of that vehicle target in the tracking queue: the survival time Life_Time_IDi of vehicle target i is calculated using equation (8):
Life_Time_IDi = Now_Time - Update_time_IDi    (8)
wherein Now_Time is the detection time corresponding to the current video frame and Update_time_IDi is the detection time at which vehicle target i was last detected;
compare Life_Time_IDi with a preset tracked-vehicle-target expiration threshold Track_Expiration_Threshold, and if
Life_Time_IDi > Track_Expiration_Threshold, delete vehicle target i, including its attribute information, from the tracking queue; add the vehicle targets that appear in the current video frame but do not appear in the tracking queue, including their attribute information, into the tracking queue, and obtain the updated tracking queue;
step B, compare the updated data length Track_Q_Len of the tracking queue with a preset tracking queue length threshold Track_Q_Threshold, and if
Track_Q_Len < Track_Q_Threshold, execute step 1; otherwise, execute step C;
step C, judging whether a pedestrian target exists in the current video frame, if so, judging that a vehicle target with abnormal parking exists in the current video frame, otherwise, executing the step 1;
step 4, respectively executing the following steps aiming at each vehicle which does not move in the current video frame:
step 401, in combination with the tracking queue, judging whether the staying time of the vehicle target which does not move in the current video frame is larger than a preset abnormal parking time threshold, if so, executing step 402, otherwise, executing step 5;
step 402, judging whether other moving vehicles exist in the current video frame, if so, judging that the vehicle which does not move in the step 401 and has the stay time longer than a preset abnormal parking time threshold value is an abnormal parking, and executing the step 5; otherwise, go to step 403;
step 403, judging whether a pedestrian target exists in the current video frame, if so, judging that the vehicle which does not move and has the stay time longer than a preset abnormal parking time threshold value in the step 401 is abnormal parking, and executing the step 5; otherwise, judging that the vehicle which does not move in the step 401 and has the stay time longer than the preset abnormal parking time threshold value is in non-abnormal parking, and executing the step 5;
and 5, updating the tracking queue information.
2. The abnormal parking detection method based on intelligent video analysis of claim 1, wherein in step 2, the convolutional neural network model is used for performing target detection on the video frame: and marking the vehicle target by using the surrounding rectangular frame, acquiring the position coordinates of the surrounding rectangular frame in the corresponding video frame, and marking the acquired position coordinates of the surrounding rectangular frame as the position of the corresponding vehicle in the video frame.
3. The abnormal parking detection method based on intelligent video analysis as claimed in claim 2, wherein in step 3, the method for determining whether there is a vehicle that does not move in the current video frame specifically comprises the following steps:
for each vehicle target j in the current video frame, where j is the value of the target ID of that vehicle target in the current video frame, the following operations are executed:
step 301A, for each vehicle target i in the tracking queue, where i is the value of the target ID of that vehicle target in the tracking queue: referring to equation (1), calculate the intersection-over-union IoU_ij of the bounding rectangular frame of vehicle target i and the bounding rectangular frame of vehicle target j:
IoU_ij = Inter_ij / Union_ij    (1)
wherein Inter_ij is the area of the intersection of the bounding rectangular frame of vehicle target i and the bounding rectangular frame of vehicle target j, and Union_ij is the area of their union; Inter_ij and Union_ij are calculated as follows:
Inter_ij = Inter_W_ij * Inter_H_ij    (2)
Union_ij = (right_i - left_i) * (top_i - bottom_i) + (right_j - left_j) * (top_j - bottom_j) - Inter_ij    (3)
Inter_W_ij = max(0, min(right_i, right_j) - max(left_i, left_j))    (4)
Inter_H_ij = max(0, min(top_i, top_j) - max(bottom_i, bottom_j))    (5)
wherein Inter_W_ij is the width of the intersection of the bounding rectangular frame of vehicle target i and the bounding rectangular frame of vehicle target j, and Inter_H_ij is the height of that intersection; left_i and top_i are the abscissa and ordinate of an arbitrary corner m_i of the bounding rectangular frame of vehicle target i, and right_i and bottom_i are the abscissa and ordinate of the corner diagonally opposite m_i; left_j and top_j are the abscissa and ordinate of an arbitrary corner m_j of the bounding rectangular frame of vehicle target j, and right_j and bottom_j are the abscissa and ordinate of the corner diagonally opposite m_j;
step 302A, compare IoU_ij with a preset rectangular-frame intersection-over-union threshold IoU_Threshold; if IoU_ij < IoU_Threshold, it is judged that vehicle target i and vehicle target j are not at the same spatial position on the road; otherwise, it is judged that vehicle target i and vehicle target j are at the same spatial position on the road, and step 303A is executed;
step 303A, calculate the cosine distance Dist(ID_j, ID_i) between the feature vector of vehicle target i and the feature vector of vehicle target j according to equation (6):
Dist(ID_j, ID_i) = 1 - r_IDi · r_IDj^T    (6)
wherein r_IDi is the normalized feature vector obtained by applying the convolutional neural network to vehicle target i, and r_IDj^T is the transpose of the normalized feature vector obtained by applying the convolutional neural network to vehicle target j;
step 304A, compare Dist(ID_j, ID_i) with a preset feature vector threshold Dist_Threshold; if Dist(ID_j, ID_i) ≤ Dist_Threshold, it is judged that vehicle target i and vehicle target j are the same vehicle and vehicle target j has not moved; otherwise, it is judged that vehicle target j has moved.
4. The abnormal parking detection method based on intelligent video analysis as claimed in claim 2, wherein in step 3, the method for determining whether there is a vehicle that does not move in the current video frame specifically comprises the following steps:
for each vehicle target j, j in the current video frame as the value of the target ID of the vehicle target in the current video frame, the following operations are executed:
step 301B, for each vehicle target i, i in the tracking queue, the value of the target ID of the vehicle target in the tracking queue: referring to equation (1), the intersection ratio IoU of the bounding rectangular frame of the vehicle object i and the bounding rectangular frame of the vehicle object j is calculatedij
IoUij=Interij/Unionij (1)
Wherein, InterijIs the intersection of the bounding rectangular box of vehicle object i and the bounding rectangular box of vehicle object j, UnionijIs the union of the bounding rectangular frame of vehicle object i and the bounding rectangular frame of vehicle object j; interijAnd UnionijThe calculation formula of (a) is as follows:
Inter_ij = Inter_W_ij * Inter_H_ij  (2)
Union_ij = (right_i - left_i) * (top_i - bottom_i) + (right_j - left_j) * (top_j - bottom_j) - Inter_ij  (3)
Inter_W_ij = max(0, min(right_i, right_j) - max(left_i, left_j))  (4)
Inter_H_ij = max(0, min(top_i, top_j) - max(bottom_i, bottom_j))  (5)
wherein Inter_W_ij is the width of the intersection of the bounding rectangular frame of vehicle target i and the bounding rectangular frame of vehicle target j, and Inter_H_ij is the height of that intersection; left_i and top_i are respectively the abscissa and the ordinate of an arbitrary corner point m_i of the bounding rectangular frame of vehicle target i, and right_i and bottom_i are respectively the abscissa and the ordinate of the corner point diagonally opposite m_i; left_j and top_j are respectively the abscissa and the ordinate of an arbitrary corner point m_j of the bounding rectangular frame of vehicle target j, and right_j and bottom_j are respectively the abscissa and the ordinate of the corner point diagonally opposite m_j;
step 302B, comparing IoU_ij with a preset bounding rectangular frame intersection-over-union threshold IoU_Threshold: if IoU_ij is less than IoU_Threshold, it is determined that vehicle target i and vehicle target j are not at the same spatial position on the road; otherwise, it is determined that vehicle target i and vehicle target j are at the same spatial position on the road, and step 303B is executed;
step 303B, calculating the cosine distance Dist'(ID_j, ID_i) between the feature vectors of vehicle target i and the feature vector of vehicle target j according to formula (7):
Dist'(ID_j, ID_i) = 1 - (1/M) * Σ_{n=1..M} r_IDi^(n) · r_IDj^T  (7)
wherein r_IDi^(n) is the normalized feature vector obtained by applying the convolutional neural network to vehicle target i in the n-th historical video frame in which vehicle target i appears, M is the number of historical video frames in which vehicle target i appears, and r_IDj^T is the transpose of the normalized feature vector obtained by applying the convolutional neural network to vehicle target j;
step 304B, comparing Dist'(ID_j, ID_i) with a preset feature vector threshold Dist_Threshold: if Dist'(ID_j, ID_i) is not greater than Dist_Threshold, it is determined that vehicle target i and vehicle target j are the same vehicle and that vehicle target j has not moved; otherwise, it is determined that vehicle target j has moved.
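An illustrative Python sketch of the multi-frame appearance check of formula (7), stacking the per-frame normalized features of vehicle target i into an M x D array. The mean over the M frames mirrors the reconstruction given above and is an assumption; a per-frame minimum, as used in DeepSORT-style trackers, would be an equally plausible reading of the original formula.

import numpy as np

def multi_frame_cosine_distance(hist_feats_i, feat_j):
    """Cosine distance between vehicle target j and the M historical
    appearances of vehicle target i (formula (7), mean aggregation assumed).

    hist_feats_i: array-like of shape (M, D), the L2-normalized feature vector
                  of target i in each historical frame in which it appeared.
    feat_j:       L2-normalized feature vector of target j in the current frame.
    """
    hist_feats_i = np.asarray(hist_feats_i, dtype=float)
    feat_j = np.asarray(feat_j, dtype=float)
    sims = hist_feats_i @ feat_j          # one cosine similarity per historical frame
    return 1.0 - float(sims.mean())
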
5. The intelligent video analysis-based abnormal parking detection method according to claim 1, wherein in step C, after determining that there is a vehicle target with abnormal parking in the current video frame, the method further comprises reporting an abnormal parking event.
6. The abnormal parking detection method based on intelligent video analysis of claim 1, wherein in step 401, the method further comprises: calculating the dwell time Parking_time_s of the vehicle target s that does not move in the current video frame using formula (9):
Parking_time_s = Update_time_IDs - Init_time_IDd  (9)
wherein Update_time_IDs is the time at which the vehicle target s that does not move in the current video frame was last detected, and Init_time_IDd is the time at which the vehicle target d in the tracking queue corresponding to the non-moving vehicle target s was first detected; s is the value of the target ID of the vehicle target that does not move in the current video frame, and d is the value of the target ID of the vehicle target in the tracking queue corresponding to the non-moving vehicle target s.
7. The abnormal parking detection method based on intelligent video analysis of claim 1, wherein in step 402 and step 403, after the vehicle that does not move in step 401 and whose dwell time exceeds the preset abnormal parking time threshold is determined to be abnormally parked, the method further comprises reporting an abnormal parking event.
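A minimal Python sketch of the dwell-time bookkeeping of formula (9) and the threshold test described in claim 7. The TrackedVehicle container, the field names, and the 300-second threshold are illustrative assumptions rather than values disclosed in the patent.

from dataclasses import dataclass

PARKING_TIME_THRESHOLD = 300.0  # seconds; an assumed, illustrative value

@dataclass
class TrackedVehicle:
    target_id: int
    init_time: float    # time the target was first detected (Init_time)
    update_time: float  # time the target was last detected (Update_time)

def parking_time(track: TrackedVehicle) -> float:
    """Formula (9): dwell time of a vehicle target that has not moved."""
    return track.update_time - track.init_time

def is_abnormal_parking(track: TrackedVehicle) -> bool:
    """Claim 7: a non-moving vehicle whose dwell time exceeds the threshold."""
    return parking_time(track) > PARKING_TIME_THRESHOLD
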
8. The abnormal parking detection method based on intelligent video analysis of claim 1, wherein in step 5, the method for updating the tracking queue information comprises the following steps:
step 501, for each vehicle target in the tracking queue that corresponds to a vehicle target that does not move in the current video frame, replacing its stored vehicle position, vehicle feature vector and last-detection time with the vehicle position, vehicle feature vector and detection time of that non-moving vehicle target in the current video frame;
step 502, for each vehicle target k in the tracking queue other than the vehicle targets involved in the operation of step 501, calculating the survival time Life_Time_IDk of vehicle target k using formula (10):
Life_Time_IDk = Now_Time - Update_time_IDk  (10)
wherein Now_Time is the detection time corresponding to the current video frame, and Update_time_IDk is the detection time at which vehicle target k was last detected;
comparing Life_Time_IDk with a preset tracked vehicle target expiration threshold Track_Expiration_Threshold: if Life_Time_IDk > Track_Expiration_Threshold, removing vehicle target k, including its attribute information, from the tracking queue; otherwise, retaining vehicle target k without changing its attribute information; and adding to the tracking queue the attribute information of each vehicle target that appears in the current video frame but is not yet in the tracking queue.
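The queue maintenance of steps 501-502 and formula (10) might be sketched in Python as follows; the dictionary layout, the key names, and the expiration threshold are assumptions made for illustration, not part of the claimed method.

import time

TRACK_EXPIRATION_THRESHOLD = 10.0  # seconds; an assumed, illustrative value

def update_tracking_queue(tracking_queue, matched, new_targets, now_time=None):
    """Steps 501-502: refresh matched targets, expire stale ones, add new ones.

    tracking_queue: dict target_id -> {'box', 'feature', 'init_time', 'update_time'}
    matched:        dict target_id -> {'box', 'feature', 'time'} for queue targets
                    matched to non-moving vehicles in the current frame
    new_targets:    list of {'target_id', 'box', 'feature', 'time'} for vehicle
                    targets seen in the current frame but absent from the queue
    """
    now_time = time.time() if now_time is None else now_time

    # Step 501: overwrite position, feature vector and last-seen time of matched targets.
    for target_id, det in matched.items():
        entry = tracking_queue[target_id]
        entry['box'] = det['box']
        entry['feature'] = det['feature']
        entry['update_time'] = det['time']

    # Step 502: expire every other target whose survival time (formula (10))
    # exceeds the expiration threshold; otherwise leave it untouched.
    for target_id in list(tracking_queue):
        if target_id in matched:
            continue
        life_time = now_time - tracking_queue[target_id]['update_time']
        if life_time > TRACK_EXPIRATION_THRESHOLD:
            del tracking_queue[target_id]

    # Add targets that appear in the current frame but are not yet tracked.
    for det in new_targets:
        tracking_queue[det['target_id']] = {
            'box': det['box'], 'feature': det['feature'],
            'init_time': det['time'], 'update_time': det['time'],
        }
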
9. The abnormal parking detection method based on intelligent video analysis of any one of claims 1 to 8, wherein the convolutional neural network model is a YOLOv3 convolutional neural network model.
CN202010748484.3A 2020-07-30 2020-07-30 Abnormal parking detection method based on intelligent video analysis Active CN111626275B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010748484.3A CN111626275B (en) 2020-07-30 2020-07-30 Abnormal parking detection method based on intelligent video analysis

Publications (2)

Publication Number Publication Date
CN111626275A CN111626275A (en) 2020-09-04
CN111626275B true CN111626275B (en) 2020-11-10

Family

ID=72272206

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant