CN112115800A - Vehicle combination recognition system and method based on deep learning target detection - Google Patents
Vehicle combination recognition system and method based on deep learning target detection
- Publication number
- CN112115800A (application number CN202010861109.XA)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- license plate
- module
- image
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/63—Scene text, e.g. street names
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/09—Recognition of logos
Abstract
The invention discloses a vehicle combination recognition system and method based on deep learning target detection. The method jointly recognizes and processes a vehicle's logo, license plate and front face, and compares the results with the vehicle information stored in a database to judge whether the vehicle complies with regulations. The invention provides a fast and efficient vehicle identification method that can effectively help traffic management and law enforcement personnel reduce the difficulty and workload of cracking down on illegal vehicles.
Description
Technical Field
The invention relates to the field of intelligent traffic supervision, in particular to a vehicle combination identification system and method based on deep learning target detection.
Background
As the number of automobiles increases year by year, urban roads face growing traffic pressure, and efficient traffic management has become a pressing real-world problem. At the same time, offenses such as using fake or forged license plates, illegal modification, and even hit-and-run continue to occur. These illegal behaviors seriously threaten people's lives and property, harm social security, and disturb the management and control work of public security organs and traffic management departments. However, conventional traffic road monitoring and management systems mainly target behaviors such as speeding, running red lights and lane violations, together with license plate recognition in fixed scenes such as parking lots. Offenses such as plate fraud, illegal modification and hit-and-run are still mostly identified manually by public security and traffic police, and with ever more vehicles on the road this greatly increases the difficulty of law enforcement. A fast and efficient vehicle identification method is therefore needed to help traffic law enforcement personnel reduce the difficulty and workload of cracking down on illegal vehicles.
Target detection is an important application of deep learning in computer vision and has been increasingly applied to intelligent driving and intelligent transportation in recent years. Deep-learning-based target detection algorithms are generally implemented with convolutional neural networks, which can fully extract image features and quickly and accurately identify and localize the category and position of targets in an image. Applying such methods to the identification of illegal vehicles in traffic management can effectively improve recognition speed and accuracy.
Disclosure of Invention
The invention provides a vehicle combination recognition system and method based on deep learning target detection, which can effectively help traffic law enforcement personnel crack down on illegal vehicles more efficiently.
To solve the above technical problems, the invention provides the following technical solution:
A vehicle combination recognition system based on deep learning target detection comprises an image acquisition module, an image processing module, a target detection module, a target classification module, a license plate recognition module, an information comparison module and a communication module;
the image acquisition module is used for acquiring an image containing a vehicle target in real time and transmitting the image to the image processing module;
the image processing module is used for carrying out image denoising, image enhancement and other processing on the acquired image so as to facilitate subsequent identification, and transmitting the processed image to the target detection module;
the target detection module is used for marking a vehicle logo area, a vehicle face area and a license plate area of a vehicle in the acquired traffic image, transmitting the vehicle logo area and the vehicle face area to the target classification module and transmitting the license plate area to the license plate recognition module;
the target classification module determines a target vehicle brand and a vehicle model according to the received vehicle logo area and the vehicle face area and transmits the target vehicle brand and the vehicle model to the information comparison module;
the license plate recognition module recognizes a license plate number according to the received license plate area and transmits the license plate number to the information comparison module;
the information comparison module comprises an information storage unit and an information comparison unit, wherein the information storage unit is used for storing vehicle information and the violation records of offending vehicles, the vehicle information comprising the license plate number, vehicle brand and vehicle model registered on the traffic department's private network, together with whether any unprocessed violations exist; the information comparison unit is used for matching the received license plate number, vehicle brand and vehicle model against the vehicle information stored in the information storage unit: if no matching license plate number is found, the vehicle is judged to be suspected of carrying a forged license plate, and the received license plate number, vehicle brand, vehicle model and the judgment result are transmitted to the communication module; if the model registered under the matched license plate number is inconsistent with the received vehicle model, the vehicle is judged to be suspected of using a fake (cloned) plate or of illegal modification, and the received license plate number, vehicle brand, vehicle model and the judgment result are transmitted to the communication module; if the models are consistent but unprocessed violations exist, the vehicle is judged to be suspected of other violations, and the received license plate number, vehicle brand, vehicle model and the judgment result are transmitted to the communication module;
the communication module transmits the received license plate number, the vehicle brand, the vehicle model and the judgment result to an information center of a traffic management department to remind traffic management personnel to intercept and control related vehicles.
As a further optimization of the vehicle combination recognition system based on deep learning target detection, the target classification module comprises a primary classification unit and a secondary classification unit, wherein the primary classification unit is used for feeding the received vehicle logo region into a pre-trained primary classification network model to obtain the vehicle brand, and the secondary classification unit is used for feeding the received vehicle face region into a pre-trained secondary classification network model corresponding to the vehicle brand obtained by the primary classification unit, thereby obtaining the vehicle model.
The invention also discloses a recognition method for the vehicle combination recognition system based on deep learning target detection, wherein the target detection module performs detection based on a target detection network;
the target detection network comprises a DarkNet-19 convolutional neural network model and a Passthrough layer;
the DarkNet-19 convolutional neural network model comprises 19 convolutional layers and 5 max-pooling layers, and the number of convolution kernels in the last convolutional layer is 8;
the Passthrough layer comprises two 3 × 3 convolutional layers and one 1 × 1 convolutional layer; it receives the feature map output by the 13th convolutional layer of the DarkNet-19 model, performs feature rearrangement on it, and passes the rearranged feature map to a concat node placed after the 20th convolutional layer of the DarkNet-19 model, where it is channel-spliced with the feature map output by that 20th convolutional layer, thereby realizing feature fusion and improving detection accuracy;
the specific steps by which the Passthrough layer rearranges the feature map output by the 13th convolutional layer of the DarkNet-19 model are as follows: the received feature map is first passed through a 1 × 1 convolution to reduce the number of channels, then sampled at alternating rows and columns to obtain 4 feature maps of half the original size, and these 4 feature maps are finally concatenated along the channel dimension to obtain the rearranged feature map.
Compared with the prior art, the technical solution of the invention has the following beneficial effects:
The invention discloses a vehicle combination recognition system and method based on deep learning target detection. A deep-learning target detection algorithm detects the logo, license plate and face regions of a vehicle in a traffic image; the logo and face regions are then fed into pre-trained classification models to determine the specific brand and model of the vehicle, while the license plate region undergoes further processing such as fine positioning and character segmentation to recognize the license plate number. The recognized license plate number and the detected brand and model information are compared with the original model, brand and other related information registered for the vehicle on the traffic department's private network, so as to judge whether the state of the vehicle complies with regulations; the judgment then serves as a basis for traffic law enforcement personnel to crack down on illegal vehicles. The combined recognition method improves the accuracy of vehicle identification and greatly improves the efficiency with which traffic management departments identify illegal vehicles.
Drawings
FIG. 1 is a block diagram of a vehicle combination identification method according to the present invention;
FIG. 2 is a flowchart of the operation of the information comparison module;
FIG. 3 is a flowchart of the operation of the license plate recognition module;
FIG. 4 is a block diagram of a target detection model;
FIG. 5 illustrates the vehicle logo, vehicle face and license plate regions detected by the target detection model;
FIG. 6-1 is a car logo region extracted from the target detection model output;
FIG. 6-2 is a diagram showing the result of the binarization process performed on FIG. 6-1;
FIG. 7-1 is a vehicle face region extracted from the target detection model output;
FIG. 7-2 is a graph showing the results of edge extraction performed on FIG. 7-1;
FIG. 8-1 is a license plate region extracted from the target detection model output;
FIG. 8-2 is a graph showing the result of fine positioning of FIG. 8-1.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, a vehicle combination recognition system based on deep learning target detection comprises an image acquisition module, an image processing module, a target detection module, a target classification module, a license plate recognition module, an information comparison module and a communication module;
the image acquisition module is used for acquiring an image containing a vehicle target in real time and transmitting the image to the image processing module;
the image processing module is used for carrying out image denoising, image enhancement and other processing on the acquired image so as to facilitate subsequent identification, and transmitting the processed image to the target detection module;
the target detection module is used for marking a vehicle logo area, a vehicle face area and a license plate area of a vehicle in the acquired traffic image, transmitting the vehicle logo area and the vehicle face area to the target classification module and transmitting the license plate area to the license plate recognition module;
the target classification module determines a target vehicle brand and a vehicle model according to the received vehicle logo area and the vehicle face area and transmits the target vehicle brand and the vehicle model to the information comparison module;
the license plate recognition module recognizes a license plate number according to the received license plate area and transmits the license plate number to the information comparison module;
the information comparison module comprises an information storage unit and an information comparison unit, wherein the information storage unit is used for storing vehicle information and the violation records of offending vehicles, the vehicle information comprising the license plate number, vehicle brand and vehicle model registered on the traffic department's private network, together with whether any unprocessed violations exist; the information comparison unit is used for matching the received license plate number, vehicle brand and vehicle model against the vehicle information stored in the information storage unit: if no matching license plate number is found, the vehicle is judged to be suspected of carrying a forged license plate, and the received license plate number, vehicle brand, vehicle model and the judgment result are transmitted to the communication module; if the model registered under the matched license plate number is inconsistent with the received vehicle model, the vehicle is judged to be suspected of using a fake (cloned) plate or of illegal modification, and the received license plate number, vehicle brand, vehicle model and the judgment result are transmitted to the communication module; if the models are consistent but unprocessed violations exist, the vehicle is judged to be suspected of other violations, and the received license plate number, vehicle brand, vehicle model and the judgment result are transmitted to the communication module;
the communication module transmits the received license plate number, the vehicle brand, the vehicle model and the judgment result to an information center of a traffic management department to remind traffic management personnel to intercept and control related vehicles.
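The decision logic of the information comparison unit described above can be summarized in a short sketch. The following Python snippet is a minimal illustration only; the record fields, function name and returned labels are assumptions for clarity and are not prescribed by this disclosure.

```python
# Illustrative sketch of the information comparison unit's decision logic.
# Field and label names are assumptions, not the literal data schema.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class RegisteredVehicle:
    plate: str
    brand: str
    model: str
    has_unprocessed_violation: bool

def compare(plate: str, brand: str, model: str,
            registry: Dict[str, RegisteredVehicle]) -> str:
    record: Optional[RegisteredVehicle] = registry.get(plate)
    if record is None:
        # No registered vehicle carries this plate number.
        return "suspected forged license plate"
    if record.brand != brand or record.model != model:
        # Plate exists, but the observed vehicle does not match its registration.
        return "suspected fake plate or illegal modification"
    if record.has_unprocessed_violation:
        return "suspected of other unprocessed violations"
    return "no anomaly"
```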
The target classification module comprises a primary classification unit and a secondary classification unit. The primary classification unit feeds the received vehicle logo region into a pre-trained primary classification network model to obtain the vehicle brand; the secondary classification unit feeds the received vehicle face region into the pre-trained secondary classification network model corresponding to the vehicle brand obtained by the primary classification unit, thereby obtaining the vehicle model.
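This two-stage classification can be sketched as a cascade of off-the-shelf networks. The snippet below is a simplified illustration assuming PyTorch/torchvision and example brand and model lists; the actual class sets and trained weights are not specified here and would come from the training procedures described later.

```python
# Sketch of the primary (logo -> brand) and secondary (face -> model) cascade.
# Brand/model lists are illustrative assumptions.
import torch
import torchvision.models as models

BRANDS = ["audi", "bmw", "volkswagen"]                       # example labels
MODELS_PER_BRAND = {"audi": ["A4", "A6", "Q5"],
                    "bmw": ["3 Series", "5 Series", "X3"],
                    "volkswagen": ["Golf", "Passat", "Tiguan"]}

primary = models.vgg16(num_classes=len(BRANDS))              # logo -> brand
secondary = {b: models.resnet50(num_classes=len(m))          # one network per brand
             for b, m in MODELS_PER_BRAND.items()}

def classify(logo: torch.Tensor, face: torch.Tensor):
    """logo, face: preprocessed 1x3xHxW tensors (e.g. 224x224 crops)."""
    with torch.no_grad():
        brand = BRANDS[primary(logo).argmax(dim=1).item()]
        model = MODELS_PER_BRAND[brand][secondary[brand](face).argmax(dim=1).item()]
    return brand, model
```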
According to the vehicle combination identification method of one embodiment of the present disclosure, the image acquisition module mainly includes a CMOS or CCD camera.
In addition, the image collected by the camera may be affected by the environment, lighting and other factors, and may therefore suffer from noise or from brightness that is too high or too low. Before the collected image is sent to the target detection module, it is therefore denoised and enhanced in the image processing module. Preferably, image denoising adopts the ROF model, whose objective is to find the denoised image U that minimizes ‖I − U‖² + 2λ·J(U), where the norm ‖I − U‖ measures the difference between the denoised image U and the original image I, and J(U) is the total variation of U. In addition, preferably, image enhancement uses histogram equalization, which flattens the gray-level histogram of the image so that each gray level in the transformed image occurs with approximately equal probability, thereby enhancing image contrast.
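A compact sketch of such a preprocessing stage is shown below. OpenCV and scikit-image (version 0.19 or later for the channel_axis argument) are assumed, denoise_tv_chambolle stands in as a total-variation/ROF-style denoiser, and the regularization weight is an illustrative value.

```python
# Sketch of the preprocessing stage: ROF-style total-variation denoising
# followed by histogram equalization on the luminance channel.
import cv2
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def preprocess(bgr: np.ndarray) -> np.ndarray:
    # Total-variation denoising (operates on floats in [0, 1]).
    denoised = denoise_tv_chambolle(bgr / 255.0, weight=0.1, channel_axis=-1)
    denoised = (denoised * 255).astype(np.uint8)
    # Equalize only the Y (luminance) channel to enhance contrast
    # without distorting color.
    ycrcb = cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```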
The invention also discloses a recognition method for the vehicle combination recognition system based on deep learning target detection, wherein the target detection module performs detection based on a target detection network;
the target detection network comprises a DarkNet-19 convolutional neural network model and a Passthrough layer;
the DarkNet-19 convolutional neural network model comprises 19 convolutional layers and 5 max-pooling layers, and the number of convolution kernels in the last convolutional layer is 8;
the Passthrough layer comprises two 3 × 3 convolutional layers and one 1 × 1 convolutional layer; it receives the feature map output by the 13th convolutional layer of the DarkNet-19 model, performs feature rearrangement on it, and passes the rearranged feature map to a concat node placed after the 20th convolutional layer of the DarkNet-19 model, where it is channel-spliced with the feature map output by that 20th convolutional layer, thereby realizing feature fusion and improving detection accuracy;
the specific steps by which the Passthrough layer rearranges the feature map output by the 13th convolutional layer of the DarkNet-19 model are as follows: the received feature map is first passed through a 1 × 1 convolution to reduce the number of channels, then sampled at alternating rows and columns to obtain 4 feature maps of half the original size, and these 4 feature maps are finally concatenated along the channel dimension to obtain the rearranged feature map.
As shown in fig. 4, the network contains 22 convolutional layers and 5 pooling layers in total. A Passthrough layer is added to the original DarkNet-19 to fuse deep and shallow features; it comprises a 1 × 1 convolutional layer and a feature rearrangement operation. The specific fusion procedure is as follows: the feature map of the last 3 × 3 convolutional layer of convolution block 3 is sent into the Passthrough layer for feature rearrangement, and the rearranged feature map is then channel-concatenated with the feature map output by convolution block 4. For example, if the input picture size is 448 × 448 × 3 (width × height × number of channels), the 26 × 26 × 512 feature map produced by the third convolution block is sampled at alternating rows and columns to obtain 4 feature maps of size 13 × 13 × 512; these 4 feature maps are concatenated along the channel dimension into a 13 × 13 × 2048 feature map, which is then channel-concatenated with the 13 × 13 × 1024 feature map produced by the fourth convolution block. This feature fusion facilitates the detection of small objects such as car logos. Mapping the information of the output layer back to the original image yields the rectangular regions where the car logo, car face and license plate are located. The number of feature-map channels in the last convolutional layer of the network is 8, consisting of 3 category predictions (car logo, car face and license plate), 4 position predictions (the x, y coordinates of the center point and the width w and height h of the rectangular box for each region) and 1 confidence prediction.
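The feature rearrangement (a space-to-depth operation) and the channel concatenation can be illustrated with the short PyTorch sketch below, using the example sizes above. Only the 1 × 1 reduction and the rearrangement are shown; the two 3 × 3 convolutions of the Passthrough branch and the detection head are omitted, and the reduced channel width is an assumption.

```python
# Sketch of the Passthrough feature rearrangement and channel-wise fusion.
import torch
import torch.nn as nn

class Passthrough(nn.Module):
    def __init__(self, in_ch: int = 512, reduced_ch: int = 512):
        super().__init__()
        self.reduce = nn.Conv2d(in_ch, reduced_ch, kernel_size=1)  # 1x1 channel reduction

    def forward(self, shallow: torch.Tensor) -> torch.Tensor:
        x = self.reduce(shallow)
        # Sample alternating rows/columns: four half-resolution maps,
        # stacked along the channel dimension (space-to-depth).
        parts = [x[:, :, 0::2, 0::2], x[:, :, 0::2, 1::2],
                 x[:, :, 1::2, 0::2], x[:, :, 1::2, 1::2]]
        return torch.cat(parts, dim=1)

shallow = torch.randn(1, 512, 26, 26)          # from convolution block 3 (example size)
deep = torch.randn(1, 1024, 13, 13)            # from convolution block 4 (example size)
rearranged = Passthrough()(shallow)            # -> 1 x 2048 x 13 x 13
fused = torch.cat([rearranged, deep], dim=1)   # -> 1 x 3072 x 13 x 13
```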
The training samples of the detection model are original images acquired by the image acquisition device at different city street intersections and at different times, together with the labels of the corresponding images. The labeling information comprises the category (car logo, license plate or car face) and the position information xmin, ymin, xmax and ymax of each target. Fig. 5 shows the rectangular target regions detected by the target detection network.
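For illustration, a corner-style annotation (xmin, ymin, xmax, ymax) can be converted into the normalized center/size form that the detection head predicts with a small helper such as the one below; the function name and normalization scheme are assumptions, not part of this disclosure.

```python
# Hypothetical helper: corner-box annotation -> normalized (cx, cy, w, h) target.
def box_to_target(xmin, ymin, xmax, ymax, img_w, img_h):
    cx = (xmin + xmax) / 2.0 / img_w     # normalized center x
    cy = (ymin + ymax) / 2.0 / img_h     # normalized center y
    w = (xmax - xmin) / img_w            # normalized width
    h = (ymax - ymin) / img_h            # normalized height
    return cx, cy, w, h
```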
As shown in fig. 6-1, when the car logo region detected by the target detection module is sent to the primary classification unit of the target classification module, the region is first binarized in order to separate the foreground of the logo from its background and thereby improve the classification accuracy of the primary classification unit, i.e., to obtain the vehicle brand accurately. As shown in fig. 6-2, a threshold is set for each of the three channels of the logo region; preferably, the RGB thresholds are red_threshold = 200, green_threshold = 200 and blue_threshold = 200. When all channel values of a pixel in the logo region exceed the corresponding thresholds, the pixel is set to [255, 255, 255], i.e., white; all other pixels that do not satisfy this condition are set to [0, 0, 0], i.e., black. The primary classification network model adopts a VGG16 or VGG19 network (a person skilled in the art can flexibly choose the number of network layers according to actual requirements; this application is not strictly limited in this respect). The network model is trained in advance as follows: prepare a sufficient number of color logo pictures or screenshots for each car brand (each brand also requires enough samples taken at different angles, distances and so on); apply the above binarization to the prepared picture set and assign the corresponding brand labels, combining them into a data set; and feed the data set into the network for training.
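The per-channel binarization described above is straightforward to sketch; the snippet below assumes an RGB crop and the preferred thresholds of 200 per channel.

```python
# Sketch of the logo-region binarization with per-channel thresholds.
import numpy as np

def binarize_logo(rgb: np.ndarray, thresholds=(200, 200, 200)) -> np.ndarray:
    """rgb: HxWx3 uint8 logo crop in RGB order."""
    mask = np.all(rgb > np.array(thresholds), axis=-1)  # all channels above their thresholds
    out = np.zeros_like(rgb)                            # default: black
    out[mask] = 255                                     # qualifying pixels: white
    return out
```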
In addition, the cropped car face region (as shown in fig. 7-1) is sent to the secondary classification unit for target classification. Before it is fed into the secondary classification network, edge detection is applied to the face region; the extracted face contour map is shown in fig. 7-2. Preferably, Canny edge detection is employed, with the following steps: (1) convert the face region image to grayscale; (2) apply Gaussian smoothing, i.e., convolve the gray image I with a Gaussian kernel: Iσ = I * Gσ, where * denotes convolution and Gσ is a two-dimensional Gaussian kernel with standard deviation σ, defined as Gσ(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)); (3) compute the gradient magnitude and direction of the image, where the magnitude is |∇I| = √(Ix² + Iy²) and the direction is given by the gradient angle α = arctan2(Iy, Ix); here Ix and Iy are the derivatives of the gray image in the x and y directions, computed with Sobel filters Sx = [[−1, 0, 1], [−2, 0, 2], [−1, 0, 1]] and Sy = Sxᵀ; (4) perform non-maximum suppression and detect and link edges using a double-threshold algorithm. The extracted face contour map is then fed into the secondary classification network model (specifically, the classification network corresponding to the brand output by the primary classification unit). Because the face region contains richer information, the network adopts ResNet50 or ResNet101 (a person skilled in the art can flexibly choose the number of network layers according to actual requirements; this application is not strictly limited in this respect). There are multiple secondary classification network models, one per car brand, each trained in advance. Taking the Audi brand as an example, the secondary classification model for this brand is trained as follows: prepare a sufficient number of face pictures of each car model under the Audi brand (each model also requires enough samples taken at different angles, distances and so on); apply the above Canny edge detection to the face pictures to extract their contours and assign the corresponding model labels; and combine the contour maps and labels of all models into a data set and feed it into the network for training.
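The Canny contour extraction used in both inference and training above can be sketched with OpenCV as follows; cv2.Canny performs the Sobel gradients, non-maximum suppression and double-threshold linking internally, and the kernel size, sigma and threshold values shown are illustrative assumptions.

```python
# Sketch of the car-face contour extraction: grayscale -> Gaussian smoothing -> Canny.
import cv2

def face_contour(bgr_face):
    gray = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.4)   # Gaussian kernel, sigma = 1.4
    # Double thresholds (50, 150) are illustrative, not prescribed by the text.
    return cv2.Canny(blurred, threshold1=50, threshold2=150)
```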
As shown in fig. 8-1, the license plate number is recognized by sending the license plate region detected by the target detection module to the license plate recognition module. As shown in fig. 3, the license plate recognition module includes a license plate fine positioning unit, a license plate character segmentation unit, and a license plate character recognition unit.
Since the license plate region detected by the target detection module is only a coarse detection region, the fine positioning unit performs further fine positioning. The specific process is: convert the plate region to grayscale, detect edges, and binarize. The resulting finely located plate region is shown in fig. 8-2. The license plate character segmentation unit then segments the characters within the located plate region to obtain the individual characters on the plate. Preferably, character segmentation uses an improved horizontal-projection algorithm: (1) Remove the regions outside the upper and lower boundaries of the plate characters. Scan the grayscale plate image row by row from bottom to top and count the number of pixels with value 255 in each row; when this count exceeds 7 (a plate has 7 characters), the lower boundary of the plate characters is considered found. Similarly, scanning row by row from top to bottom finds the upper boundary. The regions outside these boundaries are removed; afterwards, let the height of the plate be height and its width be width. (2) Scan the plate image column by column from left to right, count the number of pixels with value 255 in each column, and store the results in a one-dimensional array count[width + 1], where count[i] holds the number of pixels with value 255 in column i. (3) The first character of a Chinese license plate is a Chinese character; based on its characteristics, two thresholds, threshold1 and threshold2, are set to segment it. Scan the grayscale plate image from left to right; the first column whose count exceeds threshold1 is the starting position of the Chinese character, denoted S. Continue scanning until a column whose count falls below threshold1 is found, denoted H, and compare the width H − S with threshold2. If H − S < threshold2, continue scanning until a column is found whose distance from column S exceeds threshold2 and whose number of pixels with value 255 is below threshold1. This column is the final column of the plate's Chinese character.
When Chinese characters whose strokes are not connected must be segmented, this improvement is particularly effective. (4) The remaining characters are English letters and Arabic numerals, which do not suffer from disconnected strokes, so the rest of the plate characters can be segmented using the first threshold, threshold1, alone. (5) After the first Chinese character has been segmented, continue scanning the plate region image: a column in which the number of pixels with value 255 exceeds threshold1 marks the start of a character, and a column in which this number falls below threshold1 marks the end of a character. Repeat until all remaining characters on the plate have been segmented. The license plate character recognition unit then recognizes the segmented characters using a matching algorithm based on feature statistics. Its main principle is to extract statistical features of the plate characters in the input pattern and then perform classification according to certain rules and a determined decision function. The statistical features of a character include the number of pixel blocks, the number of character contours, the contour shapes, and so on. A pixel block is a connected region formed by white pixels that are adjacent in the up, down, left and right directions in the binary image; a Chinese character may therefore consist of more than one pixel block, whereas English letters and digits each consist of a single block. For Chinese character recognition, the character dot matrix is treated as a whole and decomposed into combinations of strokes such as horizontal, vertical, left-falling and right-falling according to the stroke feature points of each character; the corresponding features are obtained statistically and then matched against the feature sets in a character library to obtain the recognition result for the input character.
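Returning to the segmentation stage, the single-threshold column-projection scan of steps (4) and (5) can be sketched as below, operating on a binarized plate image with white characters on a black background; the threshold value is illustrative, and the special two-threshold handling of the leading Chinese character is omitted.

```python
# Sketch of projection-based character segmentation using a single threshold.
import numpy as np

def segment_characters(binary_plate: np.ndarray, threshold1: int = 2):
    """binary_plate: HxW array of 0/255. Returns (start, end) column spans of characters."""
    col_counts = (binary_plate == 255).sum(axis=0)   # white pixels per column
    spans, start = [], None
    for x, count in enumerate(col_counts):
        if count > threshold1 and start is None:
            start = x                                # character starts here
        elif count <= threshold1 and start is not None:
            spans.append((start, x))                 # character ends here
            start = None
    if start is not None:                            # character runs to the right edge
        spans.append((start, len(col_counts)))
    return spans
```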
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented on a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices. Alternatively, they may be implemented in program code executable by a computing device, or fabricated separately as individual integrated circuit modules, or multiple of the modules or steps may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
It is to be understood that the invention is not limited to the specific arrangements and instrumentalities described above and shown in the drawings. A detailed description of known methods is omitted here for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. Those skilled in the art may make equivalent changes or substitutions to the related technical features without departing from the principle of the invention, and the technical solutions after such changes or substitutions still fall within the protection scope of the invention.
Claims (3)
1. A vehicle combination recognition system based on deep learning target detection, comprising an image acquisition module, an image processing module, a target detection module, a target classification module, a license plate recognition module, an information comparison module and a communication module;
the image acquisition module is used for acquiring an image containing a vehicle target in real time and transmitting the image to the image processing module;
the image processing module is used for carrying out image denoising, image enhancement and other processing on the acquired image so as to facilitate subsequent identification, and transmitting the processed image to the target detection module;
the target detection module is used for marking a vehicle logo area, a vehicle face area and a license plate area of a vehicle in the acquired traffic image, transmitting the vehicle logo area and the vehicle face area to the target classification module and transmitting the license plate area to the license plate recognition module;
the target classification module determines a target vehicle brand and a vehicle model according to the received vehicle logo area and the vehicle face area and transmits the target vehicle brand and the vehicle model to the information comparison module;
the license plate recognition module recognizes a license plate number according to the received license plate area and transmits the license plate number to the information comparison module;
the information comparison module comprises an information storage unit and an information comparison unit, wherein the information storage unit is used for storing vehicle information and the violation records of offending vehicles, the vehicle information comprising the license plate number, vehicle brand and vehicle model registered on the traffic department's private network, together with whether any unprocessed violations exist; the information comparison unit is used for matching the received license plate number, vehicle brand and vehicle model against the vehicle information stored in the information storage unit: if no matching license plate number is found, the vehicle is judged to be suspected of carrying a forged license plate, and the received license plate number, vehicle brand, vehicle model and the judgment result are transmitted to the communication module; if the model registered under the matched license plate number is inconsistent with the received vehicle model, the vehicle is judged to be suspected of using a fake (cloned) plate or of illegal modification, and the received license plate number, vehicle brand, vehicle model and the judgment result are transmitted to the communication module; if the models are consistent but unprocessed violations exist, the vehicle is judged to be suspected of other violations, and the received license plate number, vehicle brand, vehicle model and the judgment result are transmitted to the communication module;
the communication module transmits the received license plate number, the vehicle brand, the vehicle model and the judgment result to an information center of a traffic management department to remind traffic management personnel to intercept and control related vehicles.
2. The vehicle combination recognition system based on deep learning target detection as claimed in claim 1, wherein the target classification module comprises a primary classification unit and a secondary classification unit, the primary classification unit being configured to feed the received vehicle logo region into a pre-trained primary classification network model to obtain a vehicle brand, and the secondary classification unit being configured to feed the received vehicle face region into a pre-trained secondary classification network model corresponding to the vehicle brand obtained by the primary classification unit, thereby obtaining the vehicle model.
3. The recognition method of the vehicle combination recognition system based on the deep learning target detection is characterized in that the target detection module performs detection based on a target detection network;
the target detection network comprises a DarkNet-19 convolutional neural network model and a Passthrough layer;
the DarkNet-19 convolutional neural network model comprises 19 convolutional layers and 5 max-pooling layers, and the number of convolution kernels in the last convolutional layer is 8;
the Passthrough layer comprises two 3 × 3 convolutional layers and one 1 × 1 convolutional layer; it receives the feature map output by the 13th convolutional layer of the DarkNet-19 model, performs feature rearrangement on it, and passes the rearranged feature map to a concat node placed after the 20th convolutional layer of the DarkNet-19 model, where it is channel-spliced with the feature map output by that 20th convolutional layer, thereby realizing feature fusion and improving detection accuracy;
the specific steps by which the Passthrough layer rearranges the feature map output by the 13th convolutional layer of the DarkNet-19 model are as follows: the received feature map is first passed through a 1 × 1 convolution to reduce the number of channels, then sampled at alternating rows and columns to obtain 4 feature maps of half the original size, and these 4 feature maps are finally concatenated along the channel dimension to obtain the rearranged feature map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010861109.XA CN112115800A (en) | 2020-08-25 | 2020-08-25 | Vehicle combination recognition system and method based on deep learning target detection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010861109.XA CN112115800A (en) | 2020-08-25 | 2020-08-25 | Vehicle combination recognition system and method based on deep learning target detection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112115800A true CN112115800A (en) | 2020-12-22 |
Family
ID=73805584
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010861109.XA Pending CN112115800A (en) | 2020-08-25 | 2020-08-25 | Vehicle combination recognition system and method based on deep learning target detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112115800A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112950566A (en) * | 2021-02-25 | 2021-06-11 | 哈尔滨市科佳通用机电股份有限公司 | Windshield damage fault detection method |
CN113685770A (en) * | 2021-09-06 | 2021-11-23 | 盐城香农智能科技有限公司 | Street lamp for environment monitoring and monitoring method |
CN113780244A (en) * | 2021-10-12 | 2021-12-10 | 山东矩阵软件工程股份有限公司 | Vehicle anti-license plate replacement detection method and system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107730905A (en) * | 2017-06-13 | 2018-02-23 | 银江股份有限公司 | Multitask fake license plate vehicle vision detection system and method based on depth convolutional neural networks |
CN110378236A (en) * | 2019-06-20 | 2019-10-25 | 西安电子科技大学 | Testing vehicle register identification model construction, recognition methods and system based on deep learning |
CN110837807A (en) * | 2019-11-11 | 2020-02-25 | 内蒙古大学 | Identification method and system for fake-licensed vehicle |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109993056B (en) | Method, server and storage medium for identifying vehicle illegal behaviors | |
CN105373794B (en) | A kind of licence plate recognition method | |
CN109740478B (en) | Vehicle detection and identification method, device, computer equipment and readable storage medium | |
CN101334836B (en) | License plate positioning method incorporating color, size and texture characteristic | |
CN108268867B (en) | License plate positioning method and device | |
CN103824066B (en) | A kind of licence plate recognition method based on video flowing | |
CN106600977B (en) | Multi-feature recognition-based illegal parking detection method and system | |
CN107301405A (en) | Method for traffic sign detection under natural scene | |
Abdullah et al. | YOLO-based three-stage network for Bangla license plate recognition in Dhaka metropolitan city | |
Wang et al. | An effective method for plate number recognition | |
CN109800752B (en) | Automobile license plate character segmentation and recognition algorithm based on machine vision | |
Saha et al. | License Plate localization from vehicle images: An edge based multi-stage approach | |
CN112115800A (en) | Vehicle combination recognition system and method based on deep learning target detection | |
CN105930791A (en) | Road traffic sign identification method with multiple-camera integration based on DS evidence theory | |
CN103116751A (en) | Automatic license plate character recognition method | |
CN107180230B (en) | Universal license plate recognition method | |
CN112651293B (en) | Video detection method for road illegal spreading event | |
CN105184301B (en) | A kind of method that vehicle heading is differentiated using four-axle aircraft | |
Islam et al. | Bangla license plate detection, recognition and authentication with morphological process and template matching | |
CN107392115B (en) | Traffic sign identification method based on hierarchical feature extraction | |
CN111178359A (en) | License plate number recognition method, device and equipment and computer storage medium | |
Nguwi et al. | Number plate recognition in noisy image | |
Sridevi et al. | Vehicle identification based on the model | |
Pratama et al. | Vehicle license plate detection for parking offenders using automatic license-plate recognition | |
Mohammad et al. | An Efficient Method for Vehicle theft and Parking rule Violators Detection using Automatic Number Plate Recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||