
CN112541096A - Video monitoring method for smart city - Google Patents

Video monitoring method for smart city Download PDF

Info

Publication number
CN112541096A
CN112541096A
Authority
CN
China
Prior art keywords
vehicle
monitoring
image
target
predicted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011460959.5A
Other languages
Chinese (zh)
Other versions
CN112541096B (en)
Inventor
刘应森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Highway Engineering Consultants Corp
CHECC Data Co Ltd
Original Assignee
Guangyuan Liangzhihui Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangyuan Liangzhihui Technology Co ltd filed Critical Guangyuan Liangzhihui Technology Co ltd
Priority to CN202011460959.5A priority Critical patent/CN112541096B/en
Publication of CN112541096A publication Critical patent/CN112541096A/en
Application granted granted Critical
Publication of CN112541096B publication Critical patent/CN112541096B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F16/7837 — Information retrieval of video data; retrieval using metadata automatically derived from the content, using objects detected or recognised in the video content
    • G06F16/732 — Querying; query formulation
    • G06F16/735 — Querying; filtering based on additional data, e.g. user or group profiles
    • G06F16/7847 — Retrieval using metadata automatically derived from the content, using low-level visual features of the video content
    • G06N3/045 — Neural networks; architecture; combinations of networks
    • G06N3/08 — Neural networks; learning methods
    • G08G1/0175 — Traffic control systems for road vehicles; identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of artificial intelligence and smart cities, and discloses a video monitoring method for a smart city, applied to a smart city safety monitoring system comprising an administrator client and a safety monitoring platform. The administrator client acquires a target vehicle image according to the license plate number and the vehicle registration information and generates a vehicle monitoring query request; the vehicle monitoring acquisition module acquires a target monitoring video according to the vehicle monitoring query request to obtain a vehicle monitoring image packet; the image preprocessing module removes redundant information from the target vehicle image to obtain a vehicle standard image; the predicted image generation module generates a plurality of vehicle predicted images under different monitoring angles according to the vehicle standard image and the monitoring angle information, and performs feature normalization judgment on the vehicle predicted images to obtain a vehicle predicted image packet; and the target vehicle identification module inputs the vehicle monitoring image packet and the vehicle predicted image packet into the vehicle identification convolutional neural network model to obtain a target vehicle query result.

Description

Video monitoring method for smart city
This application is a divisional application of original application No. 202010733584.9, filed on 27/07/2020 and entitled "Smart city safety monitoring method based on artificial intelligence".
Technical Field
The invention relates to the field of artificial intelligence and smart cities, in particular to a video monitoring method for a smart city.
Background
Artificial intelligence is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results.
The smart city is an advanced form of urban informatization that fully applies a new generation of information technology across the city's industries, built on next-generation innovation in the knowledge society. It realizes the deep integration of informatization, industrialization and urbanization, helps relieve "big city diseases", improves the quality of urbanization, achieves fine-grained and dynamic management, improves the effectiveness of urban administration, and raises citizens' quality of life.
Because vehicles make travel between places convenient, vehicle ownership grows every year and accidents caused by vehicles rise accordingly; when vehicles are used as tools of crime, they pose a serious threat to public safety. Querying vehicles through a monitoring system is therefore particularly important for improving urban management and safeguarding citizens' lives.
Disclosure of Invention
In the prior art, vehicle queries do not take into account the influence of the shooting angle and shooting direction of the monitoring equipment on the resulting image, so the accuracy of vehicle query results is low, leading to false detections and missed detections of vehicles.
In view of this, the present invention provides a smart city safety monitoring method based on artificial intelligence, which includes:
the method comprises the following steps that an administrator client side obtains a target vehicle image according to a license plate number and vehicle registration information and generates a vehicle monitoring query request, wherein the vehicle monitoring query request comprises the target vehicle image, target monitoring time and a target monitoring position;
a vehicle monitoring acquisition module of the safety monitoring platform acquires a target monitoring video according to the target monitoring time and the target monitoring position, and processes the target monitoring video to obtain a vehicle monitoring image packet;
the vehicle monitoring acquisition module generates monitoring angle information according to the installation position, the installation direction and the installation height of the monitoring equipment corresponding to the target monitoring video;
the image preprocessing module removes redundant information of the target vehicle image to obtain a vehicle standard image;
the predicted image generation module generates a plurality of vehicle predicted images under different monitoring angles according to the vehicle standard image and the monitoring angle information, and performs characteristic normalization judgment on all the vehicle predicted images to obtain a vehicle predicted image packet;
and the target vehicle identification module inputs the vehicle monitoring image packet and the vehicle prediction image packet into the vehicle identification convolutional neural network model to obtain a target vehicle query result.
According to a preferred embodiment, the vehicle standard image is a vehicle image of the target vehicle after removing spatial redundancy and visual redundancy, and the target vehicle image is an appearance image of the target vehicle.
According to a preferred embodiment, the monitoring angle information includes monitoring view angle information and monitoring orientation information, and the monitoring view angles include: frontal view, side view, overhead view, and top-down view.
According to a preferred embodiment, the feature normalization judgment of the predicted vehicle image by the predicted image generation module comprises the following steps:
the predicted image generation module extracts the vehicle standard features of the vehicle standard image;
the predicted image generation module extracts the vehicle prediction characteristics of each vehicle predicted image;
the predicted image generation module performs feature normalization processing on the vehicle prediction features of each vehicle predicted image to obtain unified monitoring angle features of each vehicle predicted image;
the predicted image generation module calculates the feature similarity of the unified monitoring angle feature and the vehicle standard feature of each vehicle predicted image and compares the feature similarity with a similarity threshold.
According to a preferred embodiment, the step of performing feature normalization processing on the vehicle prediction features of each vehicle prediction image by the prediction image generation module to obtain the unified monitoring angle features of each vehicle prediction image comprises the following steps:
the predicted image generation module obtains a vehicle prediction vector of each vehicle predicted image according to the vehicle prediction characteristics of each vehicle predicted image;
the predicted image generating module obtains the unified monitoring angle characteristics of each vehicle predicted image according to the vehicle prediction vector of each vehicle predicted image.
According to a preferred embodiment, the step of performing feature normalization processing on the vehicle prediction features of each vehicle prediction image by the prediction image generation module to obtain the unified monitoring angle features of each vehicle prediction image further comprises the following steps:
[Formula shown as an image in the original publication]

wherein R is the unified monitoring angle feature, j is the feature index, m is the number of features of the vehicle prediction vector, q_j is the weight of the j-th feature, f̂_j^u is the feature vector after normalization in the j-th dimension, u is the monitoring angle corresponding to the vehicle predicted image, and L(·) is a multidimensional loss function.
According to a preferred embodiment, the vehicle monitoring obtaining module processes the target monitoring video to obtain a vehicle monitoring image packet, including:
the vehicle monitoring acquisition module divides the target monitoring video into frame-by-frame monitoring images to obtain a monitoring image packet;
the vehicle monitoring acquisition module identifies vehicle images in the monitoring image packet, and marks the monitoring time and the monitoring position of the vehicle images to obtain vehicle monitoring images;
the vehicle monitoring acquisition module processes all vehicle monitoring images to obtain a vehicle monitoring image packet.
According to a preferred embodiment, the training process of the vehicle identification convolutional neural network model comprises the following steps:
selecting a plurality of vehicle monitoring images and corresponding vehicle predicted images, taking the plurality of vehicle monitoring images as a vehicle monitoring image data set, and taking the corresponding plurality of vehicle predicted images as a vehicle predicted image data set;
training a convolutional neural network model by taking the vehicle monitoring image data set and the corresponding vehicle prediction image data set as input, wherein the convolutional neural network model comprises a convolutional layer and a pooling layer;
performing convolution operation on the vehicle monitoring image and the vehicle predicted image by adopting a convolution layer, and performing pooling operation on the vehicle monitoring image and the vehicle predicted image by adopting a pooling layer;
optimizing the weight of the convolutional neural network and minimizing the characteristic loss by adopting a random gradient descent algorithm;
the convolutional neural network model is trained until convergence to obtain a trained vehicle identification convolutional neural network model.
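The stochastic gradient descent update named in the training procedure can be sketched as below. The learning rate, toy quadratic loss, and variable names are illustrative assumptions; training a real vehicle identification convolutional neural network would use a deep-learning framework rather than this hand-rolled step.

```python
import numpy as np

def sgd_step(weights, grad, lr=0.1):
    """Single stochastic-gradient-descent update: move the weights a small
    step against the gradient of the loss."""
    return weights - lr * grad

# Toy example: minimise L(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = np.array([0.0])
for _ in range(100):
    w = sgd_step(w, 2.0 * (w - 3.0), lr=0.1)
# w converges toward the minimiser w = 3
```

The same update rule, applied to the convolutional layer weights with gradients from backpropagation, is what "optimizing the weight of the convolutional neural network" amounts to.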
According to a preferred embodiment, minimizing the loss of features comprises:
[Formula shown as an image in the original publication]

where i is the feature index, n is the number of features, t_i is the feature value of the i-th feature of the vehicle monitoring image, Q(t_i, k) is the feature value of the i-th feature of the k-th vehicle predicted image, and k is the index of the vehicle predicted image.
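Under the assumption that the feature loss is a sum of squared differences between corresponding feature values of the monitoring image and the k-th predicted image (the patent renders the expression only as an image), a minimal sketch:

```python
import numpy as np

def feature_loss(monitor_feats, predicted_feats_k):
    """Assumed form of the minimised feature loss: sum over features i of
    (t_i - Q(t_i, k))^2, comparing the vehicle monitoring image's features
    against the k-th vehicle predicted image's features."""
    t = np.asarray(monitor_feats, dtype=float)
    q = np.asarray(predicted_feats_k, dtype=float)
    return float(np.sum((t - q) ** 2))

loss = feature_loss([1.0, 2.0, 3.0], [1.0, 2.5, 2.0])  # 0 + 0.25 + 1.0
```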
The invention at least comprises the following beneficial effects:
according to the method and the device, the monitoring angle information of the target monitoring video is obtained through the mounting position, the mounting angle and the mounting height of the monitoring device corresponding to the target monitoring video, and the vehicle predicted images at different monitoring angles and different positions are generated through the vehicle standard image and the monitoring angle information, so that the influence of different monitoring visual angles and different monitoring positions on vehicle query is eliminated, the vehicle query accuracy is improved, and the conditions of missed query and mistaken query of the target vehicle are avoided.
Drawings
Fig. 1 is a flowchart of a smart city security monitoring method according to an exemplary embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various devices, elements, components or elements, these devices, elements, components or elements should not be limited by these terms. These terms are only used to distinguish one device, element, component or element from another device, element, component or element.
Referring to fig. 1, in one embodiment, a smart city safety monitoring method based on artificial intelligence includes:
and S1, the administrator client acquires the target vehicle image according to the license plate number and the vehicle registration information and generates a vehicle monitoring query request, wherein the vehicle monitoring query request comprises the target vehicle image, the target monitoring time and the target monitoring position.
Optionally, the administrator client queries the vehicle registration information according to the license plate number, and acquires a vehicle image in the vehicle registration information.
Optionally, the target monitoring time is a time when the target vehicle may appear, and the target monitoring position is a position range where the target vehicle may appear.
In a specific embodiment, when a hit-and-run occurs in a certain area, the police, after receiving the alarm, query the vehicle registration information by license plate number to acquire the target vehicle image, and judge the likely escape route of the hit-and-run driver from the time and place of the incident to obtain the target monitoring time and target monitoring range.
And S2, the vehicle monitoring obtaining module of the safety monitoring platform obtains the target monitoring video according to the target monitoring time and the target monitoring position, and processes the target monitoring video to obtain the vehicle monitoring image packet.
Specifically, step S2 includes:
and S2.1, the vehicle monitoring acquisition module acquires a target monitoring video according to the target monitoring time and the target monitoring position.
Optionally, the vehicle monitoring obtaining module obtains the monitoring videos of all monitoring ranges in the target monitoring time period from the database according to the target monitoring time, and then obtains the monitoring videos of the target monitoring position range in the target monitoring time period from the monitoring videos of all monitoring ranges in the target monitoring time period according to the target monitoring position to obtain the target monitoring video, where the target monitoring video may be one or more monitoring videos shot by one or more monitoring devices.
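The two-stage retrieval described above (filter by time window, then by position range) can be sketched as follows. `SurveillanceClip`, its fields, and the circular position range are illustrative assumptions, not structures specified by the patent.

```python
from dataclasses import dataclass

@dataclass
class SurveillanceClip:
    camera_id: str
    start: float  # clip start timestamp (seconds)
    end: float    # clip end timestamp (seconds)
    x: float      # camera position in an arbitrary map frame (metres)
    y: float

def select_target_videos(clips, t_start, t_end, cx, cy, radius):
    """Return clips that overlap the target time window AND whose camera
    lies inside the target position range (here: a circle of `radius`
    around the point (cx, cy))."""
    hits = []
    for c in clips:
        overlaps = c.start <= t_end and c.end >= t_start
        dist2 = (c.x - cx) ** 2 + (c.y - cy) ** 2
        if overlaps and dist2 <= radius ** 2:
            hits.append(c)
    return hits

clips = [
    SurveillanceClip("cam-1", 0, 600, 10, 10),
    SurveillanceClip("cam-2", 0, 600, 900, 900),  # outside position range
    SurveillanceClip("cam-3", 700, 900, 12, 8),   # outside time window
]
targets = select_target_videos(clips, t_start=100, t_end=500, cx=0, cy=0, radius=50)
```

As the source notes, the result may contain one or several clips from one or several monitoring devices.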
And S2.2, the vehicle monitoring acquisition module divides the target monitoring video into frame-by-frame monitoring images to obtain a monitoring image packet. The monitoring image packet comprises a plurality of monitoring images.
And S2.3, identifying the vehicle image in the monitoring image packet by the vehicle monitoring acquisition module, and marking the monitoring time and the monitoring position of the vehicle image to obtain the vehicle monitoring image.
Specifically, the vehicle monitoring acquisition module identifies all images of vehicles appearing in the monitoring image packet to obtain all vehicle images in the monitoring image packet, and labels monitoring time and monitoring positions for each vehicle image according to the time of the vehicle image appearing in the target monitoring video and the monitoring position of the target monitoring video to which the vehicle image belongs to obtain the vehicle monitoring image.
And S2.4, the vehicle monitoring acquisition module processes all vehicle monitoring images to obtain a vehicle monitoring image packet.
The vehicle monitoring acquisition module integrates all vehicle monitoring images to obtain a vehicle monitoring image package, and the vehicle monitoring image package comprises a plurality of vehicle monitoring images.
In a specific embodiment, the police acquire the target monitoring videos that may have captured the offending vehicle according to the target monitoring time and target monitoring range, divide the target monitoring videos into frame-by-frame monitoring images, identify the vehicles in all monitoring images to obtain vehicle images, and mark the monitoring time and monitoring position of each vehicle image to obtain a vehicle monitoring image packet.
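The frame-splitting and labelling of steps S2.2–S2.3 can be sketched as below. The function name and the representation of frames are illustrative; decoding a real video file would use a library such as OpenCV (`cv2.VideoCapture`) before this step.

```python
def split_and_label(video_frames, fps, camera_position):
    """Split a decoded video (any sequence of frame objects) into per-frame
    monitoring images, labelling each with its monitoring time (derived from
    the frame index and frame rate) and the source camera's position."""
    package = []
    for idx, frame in enumerate(video_frames):
        package.append({
            "frame": frame,
            "time": idx / fps,            # seconds since clip start
            "position": camera_position,  # monitoring position label
        })
    return package

pkg = split_and_label(["f0", "f1", "f2"], fps=25.0, camera_position=(10, 10))
```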
And S3, the vehicle monitoring acquisition module generates monitoring angle information according to the installation position, the installation direction and the installation height of the monitoring equipment corresponding to the target monitoring video.
The monitoring equipment is intelligent equipment with a monitoring camera function and comprises wired monitoring and wireless monitoring.
Optionally, in wired monitoring, network cables and power lines (or network cables only) are laid when the monitoring system is installed, so that the system operates stably over time and no frames are lost due to uneven WiFi coverage; its drawback is a relatively high cost.
In wireless monitoring, only a power line is laid at the installation site to supply the monitoring system, and signals are transmitted over WiFi coverage; since WiFi coverage is now widespread and wireless-device technology is mature, it can meet the high-frequency transmission demands of monitoring.
Optionally, the monitoring device periodically uploads the monitoring video to the security monitoring platform and stores the monitoring video in a database of the security monitoring platform, and the uploading period can be set manually.
The security monitoring platform automatically deletes monitoring videos that have been stored in the database longer than the storage period, so as to reduce the platform's storage burden; the storage period is set manually according to the platform's storage space and actual requirements.
Optionally, the installation height, the installation position and the installation orientation of each monitoring device are measured, and monitoring angle information is obtained according to the installation height, the installation position and the installation orientation of the monitoring device.
Optionally, the monitoring angle information includes monitoring view angle information and monitoring azimuth information; the monitoring view angle information includes a monitoring view angle type and a monitoring view angle size, and the monitoring view angle types include: frontal view, side view, overhead view, and top-down view. The monitoring view angle size is the angular size of the shooting view angle with the target monitoring equipment as the reference center.
The monitoring azimuth information includes a monitoring position type and the monitoring azimuth of the monitoring equipment; the monitoring position types include left, right and front positions, and the monitoring azimuth angle is the angle by which the target monitoring equipment deviates from the front.
In a specific embodiment, the monitoring angle information is obtained according to the installation heights, installation positions and installation orientations of all the monitoring devices corresponding to all the target monitoring videos.
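One way to derive monitoring angle information from the installation parameters is sketched below. The depression-angle thresholds, category names, and the use of horizontal distance to the monitored point are illustrative assumptions; the patent specifies only that the information is computed from installation height, position and orientation.

```python
import math

def monitoring_angle_info(height_m, distance_m, azimuth_deg):
    """Derive a monitoring view-angle type and monitoring orientation from a
    camera's installation height, horizontal distance to the monitored road
    point, and azimuth relative to the camera's facing axis."""
    # Depression angle of the line of sight toward the monitored point.
    depression = math.degrees(math.atan2(height_m, distance_m))
    if depression < 15:
        view_type = "frontal/side view"
    elif depression < 60:
        view_type = "overhead (oblique) view"
    else:
        view_type = "top-down view"
    # Monitoring orientation: left / front / right of the facing axis.
    if azimuth_deg < -10:
        orientation = "left"
    elif azimuth_deg > 10:
        orientation = "right"
    else:
        orientation = "front"
    return {"depression_deg": depression, "view_type": view_type,
            "orientation": orientation}

info = monitoring_angle_info(height_m=6.0, distance_m=20.0, azimuth_deg=0.0)
```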
And S4, removing redundant information of the target vehicle image by the image preprocessing module to obtain a vehicle standard image.
Optionally, the redundant information includes spatial redundancy, visual redundancy, information entropy redundancy, and structural redundancy. Spatial redundancy refers to redundancy caused by strong correlation between adjacent pixels in an image; visual redundancy refers to image information that the human eye cannot perceive or is insensitive to; information entropy redundancy means that the number of bits used per pixel in the image is larger than the image's information entropy; structural redundancy means that strong texture or self-similarity exists in the image.
Optionally, the vehicle standard image is a vehicle image obtained by removing spatial redundancy, visual redundancy, information entropy redundancy and structural redundancy from a target vehicle image, and the target vehicle image is a vehicle appearance image of the target vehicle.
In the invention, the image preprocessing module removes the redundant information of the target vehicle image to obtain the vehicle standard image, so as to reduce the influence of redundant image information; the generated vehicle predicted images are thereby closer to the vehicle's actual appearance, improving the accuracy of vehicle queries.
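A rough numerical sketch of redundancy removal, under the assumption that spatial redundancy is reduced by averaging strongly correlated neighbouring pixels and visual redundancy by quantising intensities more coarsely than the eye distinguishes; the patent does not specify a concrete algorithm, and the block size and level count are illustrative.

```python
import numpy as np

def remove_redundancy(img, block=2, levels=32):
    """Reduce spatial redundancy by block-averaging neighbouring pixels and
    visual redundancy by coarse intensity quantisation (sketch only)."""
    h, w = img.shape
    h, w = h - h % block, w - w % block
    img = img[:h, :w].astype(float)
    # Spatial redundancy: average each (block x block) neighbourhood.
    pooled = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    # Visual redundancy: quantise the 0..255 range to `levels` steps.
    step = 256 // levels
    quantised = (pooled // step) * step
    return quantised.astype(np.uint8)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)  # toy 4x4 grayscale image
std_img = remove_redundancy(img)
```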
And S5, the predictive image generating module generates a plurality of predictive images of the vehicle under different monitoring angles according to the standard images of the vehicle and the information of the monitoring angles, and performs characteristic normalization judgment on all the predictive images of the vehicle to obtain a predictive image packet of the vehicle.
Specifically, step S5 includes:
s5.1, the predicted image generation module extracts the vehicle standard features of the vehicle standard images, and the predicted image generation module extracts the vehicle prediction features of each vehicle predicted image.
And S5.2, the predicted image generating module performs feature normalization processing on the vehicle predicted features of each vehicle predicted image to obtain unified monitoring angle features of each vehicle predicted image.
Optionally, the predicted image generation module obtains a vehicle prediction vector of each vehicle predicted image according to the vehicle prediction feature of each vehicle predicted image;
the predicted image generating module obtains the unified monitoring angle characteristics of each vehicle predicted image according to the vehicle prediction vector of each vehicle predicted image.
The predicted image generation module performs feature normalization processing on the vehicle prediction features of each vehicle predicted image to obtain unified monitoring angle features of each vehicle predicted image, and the unified monitoring angle features further comprise:
[Formula shown as an image in the original publication]

wherein R is the unified monitoring angle feature, j is the feature index, m is the number of features of the vehicle prediction vector, q_j is the weight of the j-th feature, f̂_j^u is the feature vector after normalization in the j-th dimension, u is the monitoring angle corresponding to the vehicle predicted image, and L(·) is a multidimensional loss function.
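One plausible reading of the normalisation step, assuming L2 normalisation of the prediction features followed by a weighted combination R = Σ_j q_j · f̂_j; the formula itself appears only as an image, so this is a sketch, not the patent's exact expression.

```python
import numpy as np

def unified_angle_feature(pred_features, weights):
    """Assumed normalisation: L2-normalise the vehicle prediction vector,
    then combine the per-dimension normalised features f_hat_j with the
    feature weights q_j into a single unified monitoring angle feature R."""
    f = np.asarray(pred_features, dtype=float)
    f_hat = f / np.linalg.norm(f)  # normalised feature vector
    q = np.asarray(weights, dtype=float)
    return float(np.dot(q, f_hat))

R = unified_angle_feature([3.0, 4.0], [0.5, 0.5])  # f_hat = [0.6, 0.8]
```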
And S5.3, the predicted image generating module calculates the feature similarity of the unified monitoring angle feature and the vehicle standard feature of each vehicle predicted image, and compares the feature similarity with a similarity threshold.
Specifically, the prediction image generation module generates a uniform monitoring angle vector according to the uniform monitoring angle feature,
T = [t_1, t_2, …, t_p]
wherein T is the unified monitoring angle vector, t_p is the feature value of the p-th feature in the unified monitoring angle vector, and p is the number of features in the unified monitoring angle vector.
The prediction image generation module generates a vehicle standard feature vector according to the vehicle standard feature,
W = [w_1, w_2, …, w_p]
wherein W is the vehicle standard feature vector, w_p is the feature value of the p-th feature in the vehicle standard feature vector, and p is the number of features in the vehicle standard feature vector.
The predictive image generating module calculates the feature similarity of the unified monitoring angle feature and the vehicle standard feature according to the unified monitoring angle vector and the vehicle standard feature vector,
[Formula shown as an image in the original publication]

wherein S is the feature similarity between the unified monitoring angle feature and the vehicle standard feature, p is the number of features in the unified monitoring angle vector, a is the feature index, w_a is the feature value of the a-th feature in the vehicle standard feature vector, and t_a is the feature value of the a-th feature in the unified monitoring angle vector.
And S5.4, integrating all the vehicle predicted images with the feature similarity larger than the similarity threshold value by the predicted image generating module to obtain a vehicle predicted image packet.
When the feature similarity is not greater than the similarity threshold, generation of the vehicle predicted image for the corresponding monitoring orientation and monitoring angle is judged to have failed, and the predicted image generation module deletes that vehicle predicted image and regenerates it.
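Assuming the feature similarity S between W and T is cosine similarity (a common choice for comparing feature vectors; the exact expression appears only as an image), the comparison and threshold filtering of S5.3–S5.4 can be sketched as:

```python
import numpy as np

def cosine_similarity(w, t):
    """Feature similarity between the vehicle standard feature vector W and
    a unified monitoring angle vector T, read as cosine similarity."""
    w, t = np.asarray(w, float), np.asarray(t, float)
    return float(np.dot(w, t) / (np.linalg.norm(w) * np.linalg.norm(t)))

def filter_predicted_images(standard_vec, predicted_vecs, threshold):
    """Keep the indices of predicted images whose similarity to the standard
    image exceeds the threshold; the rest would be deleted and regenerated."""
    return [i for i, v in enumerate(predicted_vecs)
            if cosine_similarity(standard_vec, v) > threshold]

kept = filter_predicted_images([1.0, 0.0], [[1.0, 0.1], [0.0, 1.0]], threshold=0.9)
```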
Optionally, the predicted image generation module performs feature normalization discrimination on all the vehicle predicted images to improve the degree of reality of the generated vehicle predicted images at different monitoring angles and monitoring orientations, so that the predicted images can be used as standard reference images when the vehicle predicted images at different monitoring angles and monitoring orientations are subjected to vehicle query.
In practical application, when a vehicle is queried through monitoring video, the query is affected by differences in monitoring view angle and monitoring orientation: the appearance features extracted from monitoring images of the target vehicle differ across view angles and orientations, and different view angles and orientations occlude the vehicle's appearance to different degrees, losing certain important information, so the accuracy of the vehicle query result is low.
Optionally, the similarity threshold is a numerical value preset by a manager according to an actual situation, and is used for verifying whether the generated corresponding vehicle predicted image meets a standard.
Optionally, the vehicle predicted image is used for inquiring and identifying the vehicle monitoring image of the target vehicle under the same monitoring visual angle and monitoring direction, so that the accuracy of vehicle inquiry can be improved.
Preferably, the vehicle prediction image packet includes vehicle images of the target vehicle in different monitoring orientations and different monitoring perspectives.
In a particular embodiment, vehicle prediction images of the offending vehicle at different shooting angles and different shooting orientations are generated based on the monitoring angle information and the vehicle standard images, and the vehicle prediction images of all the offending vehicles are subjected to integration processing to obtain a vehicle prediction image packet of the offending vehicle.
According to the vehicle query method and vehicle query device of the present application, vehicle predicted images under different monitoring angles and monitoring orientations can be generated from the vehicle standard image and the monitoring angle information. When the target vehicle is queried, this eliminates the influence of differing monitoring viewing angles and orientations on the query, improves vehicle query accuracy, and avoids missed or mistaken queries of the target vehicle.
And S6, the target vehicle identification module inputs the vehicle monitoring image packet and the vehicle prediction image packet into the vehicle identification convolutional neural network model to obtain a target vehicle query result.
Preferably, the training process of the vehicle identification convolutional neural network model comprises the following steps:
selecting a plurality of vehicle monitoring images and corresponding vehicle predicted images, taking the plurality of vehicle monitoring images as a vehicle monitoring image data set, and taking the corresponding plurality of vehicle predicted images as a vehicle predicted image data set;
training a convolutional neural network model by taking the vehicle monitoring image data set and the corresponding vehicle prediction image data set as input, wherein the convolutional neural network model comprises a convolutional layer and a pooling layer;
performing convolution operation on the vehicle monitoring image and the vehicle predicted image by adopting a convolution layer, and performing pooling operation on the vehicle monitoring image and the vehicle predicted image by adopting a pooling layer;
optimizing the weight of the convolutional neural network and minimizing the characteristic loss by adopting a random gradient descent algorithm;
the convolutional neural network model is trained until convergence to obtain a trained vehicle identification convolutional neural network model.
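The weight-update step named in the training procedure above (stochastic gradient descent) can be sketched in isolation; the learning rate and the toy one-dimensional loss below are illustrative assumptions, not values from the patent.

```python
def sgd_step(weights, grads, lr=0.1):
    """One SGD update: w <- w - lr * dL/dw, applied element-wise."""
    return [w - lr * g for w, g in zip(weights, grads)]

# Toy illustration: minimize L(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w = [0.0]
for _ in range(200):
    grad = [2.0 * (w[0] - 3.0)]
    w = sgd_step(w, grad, lr=0.1)
# w[0] converges toward the minimizer 3.0
```

In the patent's setting the same update would be applied to the convolutional-layer and pooling-layer weights, with the gradient taken from the feature loss defined below rather than this toy loss.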
Specifically, minimizing feature loss includes:
\( \text{Loss} = \sum_{k} \sum_{i=1}^{n} \left( t_i - Q(t_i, k) \right)^2 \)
where i is the feature index, n is the number of features, t_i is the eigenvalue of the i-th feature of the vehicle monitoring image, Q(t_i, k) is the eigenvalue of the i-th feature of the k-th vehicle predicted image, and k is the index of the vehicle predicted image.
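The feature loss follows directly from the symbol definitions above; summing squared differences over all features i and all predicted images k is an assumed reading, since the formula itself appears only as an image reference in the published text.

```python
def feature_loss(t, Q):
    """Feature loss between a vehicle monitoring image's feature values
    t = [t_1, ..., t_n] and the feature values Q[k][i] of each vehicle
    predicted image, summed over predicted images k and features i."""
    return sum((t_i - q_ki) ** 2
               for Q_k in Q           # iterate over predicted images
               for t_i, q_ki in zip(t, Q_k))  # and over features
```

Minimizing this loss during training pushes the network to extract matching features from a monitoring image and its corresponding predicted images.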
In one embodiment, the target vehicle identification module inputs the vehicle surveillance image packet and the vehicle predictive image packet for the offending vehicle into the vehicle identification convolutional neural network model to query whether the offending vehicle is present in the target surveillance video, and obtains the location and time of the occurrence of the offending vehicle when the offending vehicle is present in the target surveillance video.
According to the method and the device, the monitoring angle information of the target monitoring video is obtained through the mounting position, the mounting angle and the mounting height of the monitoring device corresponding to the target monitoring video, and the vehicle predicted images at different monitoring angles and different positions are generated through the vehicle standard image and the monitoring angle information, so that the influence of different monitoring visual angles and different monitoring positions on vehicle query is eliminated, the vehicle query accuracy is improved, and the conditions of missed query and mistaken query of the target vehicle are avoided.
In one embodiment, an artificial intelligence smart city safety monitoring system comprises a safety monitoring platform and an administrator terminal, the administrator terminal being communicatively connected with the safety monitoring platform. The administrator terminal is a smart device held by the safety manager that has communication, data-transmission and storage functions, and includes: smart phones, smart watches, smart wearable devices, laptops, tablets, and desktop computers.
The administrator client is used for acquiring a target vehicle image according to the license plate number and the vehicle registration information and generating a vehicle monitoring query request, wherein the vehicle monitoring query request comprises the target vehicle image, target monitoring time and a target monitoring position.
The safety monitoring platform comprises: the system comprises a vehicle monitoring acquisition module, an image preprocessing module, a predicted image generation module and a target vehicle identification module, wherein communication connection is formed among the modules.
The vehicle monitoring acquisition module is used for acquiring a target monitoring video according to the target monitoring time and the target monitoring position, processing the target monitoring video to obtain a vehicle monitoring image packet, and then generating monitoring angle information according to the installation position, the installation direction and the installation height of the monitoring equipment corresponding to the target monitoring video.
The image preprocessing module removes redundant information of the target vehicle image to obtain a vehicle standard image.
The predicted image generation module is used for generating a plurality of vehicle predicted images under different monitoring angles according to the vehicle standard images and the monitoring angle information, and performing characteristic normalization judgment on all the vehicle predicted images to obtain a vehicle predicted image packet.
And the target vehicle identification module is used for inputting the vehicle monitoring image packet and the vehicle prediction image packet into the vehicle identification convolutional neural network model to obtain a target vehicle query result.
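The four-module flow described above can be sketched as a simple orchestration. All function and field names here are illustrative, not from the patent; each module is passed in as a callable so the wiring order (acquisition, preprocessing, prediction, identification) is the only thing the sketch asserts.

```python
from dataclasses import dataclass

@dataclass
class VehicleQueryRequest:
    """Mirrors the vehicle monitoring query request: target vehicle image,
    target monitoring time, target monitoring position."""
    target_vehicle_image: str
    target_monitoring_time: str
    target_monitoring_position: str

def run_query(request, acquire, preprocess, predict, identify):
    """Wire the modules in the order the description gives:
    acquisition -> preprocessing -> prediction -> identification."""
    monitoring_packet, angle_info = acquire(request.target_monitoring_time,
                                            request.target_monitoring_position)
    standard_image = preprocess(request.target_vehicle_image)
    predicted_packet = predict(standard_image, angle_info)
    return identify(monitoring_packet, predicted_packet)
```

In the platform, `acquire` would return the vehicle monitoring image packet plus monitoring angle information, and `identify` would run the vehicle identification convolutional neural network model over the two packets.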
Although the present invention has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the invention is limited only by the appended claims. The order of features in the claims does not imply any specific order in which the features must be performed. Furthermore, in the claims, the word "comprising" does not exclude other elements, and the indefinite article "a" or "an" does not exclude a plurality.

Claims (10)

1. A video monitoring method for a smart city, characterized in that it is applied to a smart city safety monitoring system, the smart city safety monitoring system comprising an administrator client and a safety monitoring platform, the safety monitoring platform comprising a vehicle monitoring acquisition module, an image preprocessing module, a predicted image generation module and a target vehicle identification module, the method comprising:
the method comprises the following steps that an administrator client side obtains a target vehicle image according to a license plate number and vehicle registration information and generates a vehicle monitoring query request, wherein the vehicle monitoring query request comprises the target vehicle image, target monitoring time and a target monitoring position;
the vehicle monitoring acquisition module acquires a target monitoring video according to the target monitoring time and the target monitoring position and processes the target monitoring video to obtain a vehicle monitoring image packet;
the vehicle monitoring acquisition module generates monitoring angle information according to the installation position, installation direction and installation height of the monitoring equipment corresponding to the target monitoring video, wherein the monitoring angle information comprises monitoring visual angle information and monitoring position information, and the monitoring visual angle comprises: a front view, a side view, an upward (low-angle) view and a downward (overhead) view;
the image preprocessing module removes the spatial redundancy and the visual redundancy of the target vehicle image to obtain a vehicle standard image;
the predicted image generation module generates a plurality of vehicle predicted images under different monitoring angles according to the vehicle standard image and the monitoring angle information, and performs characteristic normalization judgment on all the vehicle predicted images to obtain a vehicle predicted image packet;
and the target vehicle identification module inputs the vehicle monitoring image packet and the vehicle prediction image packet into the vehicle identification convolutional neural network model to obtain a target vehicle query result.
2. The method of claim 1, wherein the vehicle monitoring obtaining module processes the target monitoring video to obtain a vehicle monitoring image packet, comprising:
the vehicle monitoring acquisition module divides the target monitoring video into frame-by-frame monitoring images to obtain a monitoring image packet;
the vehicle monitoring acquisition module identifies vehicle images in the monitoring image packet, and marks the monitoring time and the monitoring position of the vehicle images to obtain vehicle monitoring images;
the vehicle monitoring acquisition module processes all vehicle monitoring images to obtain a vehicle monitoring image packet.
3. The method of claim 1 or 2, wherein the monitoring position information comprises a monitoring position type and a monitoring azimuth angle of the monitoring device, the monitoring position type comprising: a left position, a right position and a front position, wherein the monitoring azimuth angle is the angle by which the monitoring device deviates from the front.
4. The method according to claim 3, wherein the predictive image generation module performing feature normalization discrimination on the vehicle predictive image comprises:
the predicted image generation module extracts the vehicle standard features of the vehicle standard image;
the predicted image generation module extracts the vehicle prediction characteristics of each vehicle predicted image;
the predicted image generation module performs feature normalization processing on the vehicle prediction features of each vehicle predicted image to obtain unified monitoring angle features of each vehicle predicted image;
the predictive image generation module calculates the feature similarity of the unified monitoring angle feature and the vehicle standard feature of each vehicle predictive image and compares the feature similarity with a similarity threshold;
and the predicted image generation module integrates all the vehicle predicted images with the characteristic similarity larger than the similarity threshold value to obtain a vehicle predicted image packet.
5. The method according to claim 4, wherein the predictive image generation module deletes the vehicle predictive image and regenerates it when the feature similarity is not greater than the similarity threshold.
6. The method according to claim 4, wherein the predictive image generation module performs feature normalization processing on the vehicle prediction features of each vehicle predictive image to obtain the unified monitoring angle features of each vehicle predictive image comprises:
the predicted image generation module obtains a vehicle prediction vector of each vehicle predicted image according to the vehicle prediction characteristics of each vehicle predicted image;
the predicted image generating module obtains the unified monitoring angle characteristics of each vehicle predicted image according to the vehicle prediction vector of each vehicle predicted image.
7. The method of claim 6, wherein the training process of the vehicle identification convolutional neural network model comprises:
selecting a plurality of vehicle monitoring images and corresponding vehicle predicted images, taking the plurality of vehicle monitoring images as a vehicle monitoring image data set, and taking the corresponding plurality of vehicle predicted images as a vehicle predicted image data set;
training a convolutional neural network model by taking the vehicle monitoring image data set and the corresponding vehicle prediction image data set as input, wherein the convolutional neural network model comprises a convolutional layer and a pooling layer;
performing convolution operation on the vehicle monitoring image and the vehicle predicted image by adopting a convolution layer, and performing pooling operation on the vehicle monitoring image and the vehicle predicted image by adopting a pooling layer;
optimizing the weight of the convolutional neural network and minimizing the characteristic loss by adopting a random gradient descent algorithm;
the convolutional neural network model is trained until convergence to obtain a trained vehicle identification convolutional neural network model.
8. The method of claim 7, wherein minimizing feature loss comprises:
\( \text{Loss} = \sum_{k} \sum_{i=1}^{n} \left( t_i - Q(t_i, k) \right)^2 \)
where i is the feature index, n is the number of features, t_i is the eigenvalue of the i-th feature of the vehicle monitoring image, Q(t_i, k) is the eigenvalue of the i-th feature of the k-th vehicle predicted image, and k is the index of the vehicle predicted image.
9. The method according to claim 8, wherein the target vehicle identification module inputs the vehicle surveillance image packet and the vehicle predictive image packet of the offending vehicle into the vehicle identification convolutional neural network model to query whether the offending vehicle is present in the target surveillance video, and acquires the location and time of occurrence of the offending vehicle when the offending vehicle is present in the target surveillance video.
10. The method according to one of claims 1 to 9, wherein the monitoring device is a smart device with a monitoring camera function, which comprises a wired monitoring device and a wireless monitoring device;
the monitoring equipment periodically uploads the monitoring video to the safety monitoring platform and stores the monitoring video in a database of the safety monitoring platform.
CN202011460959.5A 2020-07-27 2020-07-27 Video monitoring method for smart city Active CN112541096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011460959.5A CN112541096B (en) 2020-07-27 2020-07-27 Video monitoring method for smart city

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010733584.9A CN111881321B (en) 2020-07-27 2020-07-27 Smart city safety monitoring method based on artificial intelligence
CN202011460959.5A CN112541096B (en) 2020-07-27 2020-07-27 Video monitoring method for smart city

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010733584.9A Division CN111881321B (en) 2020-07-27 2020-07-27 Smart city safety monitoring method based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN112541096A true CN112541096A (en) 2021-03-23
CN112541096B CN112541096B (en) 2023-01-24

Family

ID=73200809

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010733584.9A Active CN111881321B (en) 2020-07-27 2020-07-27 Smart city safety monitoring method based on artificial intelligence
CN202011460959.5A Active CN112541096B (en) 2020-07-27 2020-07-27 Video monitoring method for smart city

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202010733584.9A Active CN111881321B (en) 2020-07-27 2020-07-27 Smart city safety monitoring method based on artificial intelligence

Country Status (1)

Country Link
CN (2) CN111881321B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294603A (en) * 2022-08-02 2022-11-04 南京莱科智能工程研究院有限公司 Method for constructing human-vehicle weight recognition algorithm aiming at multi-dimensional image

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN111881321B (en) * 2020-07-27 2021-04-20 东来智慧交通科技(深圳)有限公司 Smart city safety monitoring method based on artificial intelligence

Citations (10)

Publication number Priority date Publication date Assignee Title
CN105320710A (en) * 2014-08-05 2016-02-10 北京大学 Illumination variation resistant vehicle retrieval method and device
CN107194323A (en) * 2017-04-28 2017-09-22 阿里巴巴集团控股有限公司 Car damage identification image acquiring method, device, server and terminal device
CN107368776A (en) * 2017-04-28 2017-11-21 阿里巴巴集团控股有限公司 Car damage identification image acquiring method, device, server and terminal device
CN107679078A (en) * 2017-08-29 2018-02-09 银江股份有限公司 A kind of bayonet socket image vehicle method for quickly retrieving and system based on deep learning
CN108197538A (en) * 2017-12-21 2018-06-22 浙江银江研究院有限公司 A kind of bayonet vehicle searching system and method based on local feature and deep learning
US20180297525A1 (en) * 2017-04-17 2018-10-18 GM Global Technology Operations LLC Display control systems and methods for a vehicle
CN110704652A (en) * 2019-08-22 2020-01-17 长沙千视通智能科技有限公司 Vehicle image fine-grained retrieval method and device based on multiple attention mechanism
CN111259777A (en) * 2020-01-13 2020-06-09 天地伟业技术有限公司 End-to-end multitask vehicle brand identification method
US20200180692A1 (en) * 2018-12-07 2020-06-11 GM Global Technology Operations LLC System and method to model steering characteristics
CN111881321A (en) * 2020-07-27 2020-11-03 广元量知汇科技有限公司 Smart city safety monitoring method based on artificial intelligence

Family Cites Families (16)

Publication number Priority date Publication date Assignee Title
US9904852B2 (en) * 2013-05-23 2018-02-27 Sri International Real-time object detection, tracking and occlusion reasoning
US20150378014A1 (en) * 2013-08-07 2015-12-31 Sandia Corporation Ascertaining class of a vehicle captured in an image
US10043035B2 (en) * 2013-11-01 2018-08-07 Anonos Inc. Systems and methods for enhancing data protection by anonosizing structured and unstructured data and incorporating machine learning and artificial intelligence in classical and quantum computing environments
CN108091142A (en) * 2017-12-12 2018-05-29 公安部交通管理科学研究所 For vehicle illegal activities Tracking Recognition under highway large scene and the method captured automatically
CN108764018A (en) * 2018-04-03 2018-11-06 北京交通大学 A kind of multitask vehicle based on convolutional neural networks recognition methods and device again
US10176405B1 (en) * 2018-06-18 2019-01-08 Inception Institute Of Artificial Intelligence Vehicle re-identification techniques using neural networks for image analysis, viewpoint-aware pattern recognition, and generation of multi-view vehicle representations
KR102111363B1 (en) * 2018-09-03 2020-05-15 주식회사 월드씨엔에스 Accident monitoring system in tunnel using camera grouping of IoT based
EP3853764A1 (en) * 2018-09-20 2021-07-28 NVIDIA Corporation Training neural networks for vehicle re-identification
CN110097068B (en) * 2019-01-17 2021-07-30 北京航空航天大学 Similar vehicle identification method and device
CN110210378B (en) * 2019-05-30 2023-04-07 中国电子科技集团公司第三十八研究所 Embedded video image analysis method and device based on edge calculation
CN110414432B (en) * 2019-07-29 2023-05-16 腾讯科技(深圳)有限公司 Training method of object recognition model, object recognition method and corresponding device
CN110516583A (en) * 2019-08-21 2019-11-29 中科视语(北京)科技有限公司 A kind of vehicle recognition methods, system, equipment and medium again
CN110704666B (en) * 2019-08-30 2022-06-03 北京大学 Method and system for improving accurate retrieval of cross-view vehicles
CN110929589B (en) * 2019-10-31 2023-07-07 浙江大华技术股份有限公司 Method, apparatus, computer apparatus and storage medium for identifying vehicle characteristics
CN110826484A (en) * 2019-11-05 2020-02-21 上海眼控科技股份有限公司 Vehicle weight recognition method and device, computer equipment and model training method
CN111291722A (en) * 2020-03-10 2020-06-16 无锡物联网创新中心有限公司 Vehicle weight recognition system based on V2I technology


Non-Patent Citations (3)

Title
JITIAN WANG ET AL.: "Vehicle Type Recognition in Surveillance Images From Labeled Web-Nature Data Using Deep Transfer Learning", IEEE Transactions on Intelligent Transportation Systems *
LIU Kai et al.: "A Survey of Vehicle Re-identification Technology", Chinese Journal of Intelligent Science and Technology *
LIANG Guangsheng et al.: "Deep-Learning-Based Monitoring Method for Motor Vehicle Violations", Journal of Computer Applications *


Also Published As

Publication number Publication date
CN112541096B (en) 2023-01-24
CN111881321A (en) 2020-11-03
CN111881321B (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN111179345B (en) Front vehicle line-crossing violation behavior automatic detection method and system based on vehicle-mounted machine vision
CN109271554B (en) Intelligent video identification system and application thereof
CN111881321B (en) Smart city safety monitoring method based on artificial intelligence
CN108564052A (en) Multi-cam dynamic human face recognition system based on MTCNN and method
CN111160149B (en) Vehicle-mounted face recognition system and method based on motion scene and deep learning
CN110738150B (en) Camera linkage snapshot method and device and computer storage medium
CN108234927A (en) Video frequency tracking method and system
CN108710827B (en) A kind of micro- police service inspection in community and information automatic analysis system and method
CN108416256A (en) The family's cloud intelligent monitor system and monitoring method of feature based identification
CN110867046A (en) Intelligent car washer video monitoring and early warning system based on cloud computing
CN112102367B (en) Video analysis computing power real-time distribution scheduling method based on motion model
KR102043922B1 (en) A cctv divided management system and method it
CN111753651A (en) Subway group abnormal behavior detection method based on station two-dimensional crowd density analysis
CN114067438B (en) Method and system for identifying actions of person on tarmac based on thermal infrared vision
CN111901564B (en) Smart city safety monitoring system based on artificial intelligence
CN112132861B (en) Vehicle accident monitoring method, system and computer equipment
CN115761618A (en) Key site security monitoring image identification method
CN111695512A (en) Unattended cultural relic monitoring method and device
CN111091093A (en) Method, system and related device for estimating number of high-density crowds
CN115170059A (en) Intelligent safety monitoring system for outdoor construction site and working method
CN112016492A (en) Teaching attention monitoring system and method based on vision
CN114154017A (en) Unsupervised visible light and infrared bidirectional cross-mode pedestrian searching method
CN109509368A (en) A kind of parking behavior algorithm based on roof model
WO2021248564A1 (en) Panoramic big data application monitoring and control system
CN111159190A (en) Campus risk analysis system and method based on cloud platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230105

Address after: 908, block a, floor 8, No. 116, Zizhuyuan Road, Haidian District, Beijing 100089

Applicant after: ZHONGZI DATA CO.,LTD.

Applicant after: CHINA HIGHWAY ENGINEERING CONSULTING Corp.

Address before: 628000 Xuefeng Qiao Road 338, Lizhou District, Guangyuan, Sichuan

Applicant before: GUANGYUAN LIANGZHIHUI TECHNOLOGY CO.,LTD.

GR01 Patent grant