
CN103886760A - Real-time vehicle type detection system based on traffic video - Google Patents

Real-time vehicle type detection system based on traffic video

Info

Publication number
CN103886760A
Authority
CN
China
Prior art keywords
gradient
vehicle
image
template
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410142327.2A
Other languages
Chinese (zh)
Other versions
CN103886760B (en)
Inventor
李涛 (Li Tao)
叶茂 (Ye Mao)
向涛 (Xiang Tao)
李冬梅 (Li Dongmei)
朱晓珺 (Zhu Xiaojun)
张栋梁 (Zhang Dongliang)
包志均 (Bao Zhijun)
唐红强 (Tang Hongqiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Chantu Intelligent Technology Co ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201410142327.2A priority Critical patent/CN103886760B/en
Publication of CN103886760A publication Critical patent/CN103886760A/en
Application granted granted Critical
Publication of CN103886760B publication Critical patent/CN103886760B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a real-time vehicle type detection system based on a traffic video. Key regions of the vehicle's shape are used as salient regions for vehicle type representation, on which vehicle type judgment is based. Firstly, points are collected densely in the salient regions and sparsely in the non-salient regions to construct a gradient map and establish a template; secondly, a parallel, fast table-lookup matching algorithm is implemented through a hierarchical template established with extended binary coding of the gradients, cosine similarity, pre-stored gradient response tables, linearized memory and k-means clustering. The system is simple and fast and has excellent robustness.

Description

Real-time vehicle type detection system based on traffic video
Technical field
The present invention relates to the field of traffic vehicle type detection technology, and in particular to a real-time vehicle type detection system based on traffic video.
Background technology
Automobiles are widely used as a fast and convenient means of transport, but the rapid growth in their number in recent years has put enormous pressure on urban traffic and has made the work of the corresponding management personnel increasingly heavy. With the rapid development of computer vision technology and hardware products, Intelligent Transportation Systems (ITS) have emerged to address these increasingly serious traffic problems. Vehicle type recognition is an important component of intelligent transportation systems, and a number of existing techniques for vehicle type recognition are already widely applied. Under stable conditions, these techniques have shown good results for specific vehicle types, such as buses and three-box sedans.
Current vehicle type recognition mainly concentrates on two approaches: describing vehicle features and matching the vehicle contour against templates. Feature-based description obtains vehicle features from the video images collected by the surveillance equipment, thereby characterizing the vehicle type and achieving vehicle type detection in the video. The features currently used to describe vehicle types are concentrated on single features such as Harris corner features, HOG features, Gabor features and SIFT features, or on joint features formed by combining single features.
The template-based approach mainly concerns how to establish a standard vehicle type template library and the template search mechanism: a target is obtained from the video image collected by the surveillance equipment and then matched against the corresponding templates in the template library to determine the vehicle type. Since vehicles mostly operate in unconstrained, open environments that are complex and changeable, with illumination variation, viewpoint changes and so on, an accurate, real-time vehicle type detection method with high adaptability to complex scenes is needed.
Prior art 1: the invention "Method for automatically identifying vehicle type based on vehicle frontal image and template matching" by Huang, Lin Zhenze, Zhu et al. of South China University of Technology, publication number: CN103324920A.
This invention discloses a vehicle type automatic identification method based on the vehicle frontal image and template matching. The vehicle region is determined on the gray-scale image from the license plate, vehicle templates of a unified size are established, the relevant gradients are calculated within the template, the gradient values are normalized and fed into a neural network for training, and eight classes of vehicle type are output. The flow of this vehicle type algorithm is shown in Figure 1.
In this prior art, the collected vehicle frontal image is first converted to gray scale and its horizontal gradient map is computed; because the shape and placement of the license plate are taken into account, the plate position and plate width can easily be obtained from the horizontal gradient map.
In this prior art, because the plate width, position and similar information are mostly fixed, the method takes the position determined from the horizontal gradient map, enlarges it by the relevant proportion around that position to roughly obtain the vehicle region, and uniformly scales the obtained region into the feature extraction template.
In this prior art, the obtained template gradient values are normalized, the resulting data are used as features for vehicle type judgment and input to a neural network; a neural network model is obtained by training, and this model is then used to output the vehicle type information of the detected data.
This prior art has the following shortcomings: the patent uses the gradient information of the vehicle to obtain the relative width and position of the license plate and to determine the vehicle type template accordingly, but it uses the gradient information alone and does not consider the influence of the gradients around each reference point, so the vehicle type characterization lacks completeness and can, to a certain extent, produce false alarms. In addition, neural network training converges slowly, suffers from local extrema and similar drawbacks, and its result depends too heavily on the selected vehicle type samples.
Prior art 2: the invention "Vehicle model dynamic identification method used in intelligent transportation system" by Li Zongmin, Liu Yujie et al. of China University of Petroleum (East China), publication number CN103258213A.
This invention discloses a dynamic vehicle type recognition method for intelligent transportation systems. In the training stage, HOG features and GIST features describing the global texture are extracted from the normalized images and used as inputs to SVMs to obtain two classifiers. At detection time, the outputs of the two classifiers are fused with D-S evidence theory to obtain the maximum probability, thereby completing vehicle type recognition. The detailed flow of this algorithm is shown in Figure 2.
In this prior art, HOG features and globally descriptive GIST features are introduced in the training and testing stages to overcome the limited information a single feature carries when describing a vehicle type, fusing global and local features.
In this prior art, after the two kinds of features are obtained, SVMs are used in the training stage to obtain two judgment models, one per feature; when a vehicle type is judged at test time, the HOG and GIST features of the detected vehicle are fed into the trained judgment models to obtain the corresponding outputs, which form the basis of the cascaded judgment.
In this prior art, the two SVM models give the probabilities of the judged vehicle type for the detected vehicle; the information output by the two SVMs is fused by D-S theory to obtain the most probable value, and the vehicle type class corresponding to that value is the class of the current vehicle to be recognized. The cascaded vehicle type judgment is thus realized and the final detection result obtained.
This prior art has the following shortcomings: the patent uses edge HOG features together with a global GIST description to strengthen the robustness of the vehicle type characterization, and fuses the detection results of the separately trained judgment models to realize cascaded judgment. However, the method requires training, and its detection accuracy largely depends on the sample set; since the sample set cannot cover vehicle types under all environmental conditions, the accuracy of its vehicle type detection cannot be guaranteed in practical engineering applications.
Summary of the invention
The object of this invention is to provide a vehicle type detection technique based on traffic video that can be applied in real time in intelligent transportation systems.
To achieve the above object, the present invention adopts the following technical scheme: a real-time vehicle type detection system based on traffic video, comprising an offline training part and an online matching part;
The offline training part comprises the following steps: (1) compute Harris corner points and obtain the salient regions; then sample points densely in the salient regions and sparsely in the non-salient regions; (2) on the sampled image, compute the corresponding spread gradients to form the vehicle type template map, binary-code the template map, and pre-store the gradient response tables according to cosine similarity, completing the parallel-computation design; (3) finally, according to the differences in the vehicle type feature descriptions, build different subspaces by k-means clustering, establish a hierarchical vehicle type template index and record the relevant template information;
The online matching part comprises the following steps: (1) obtain the vehicle image to be identified from the traffic scene; (2) compute the salient and non-salient regions of the vehicle image, then sample points non-uniformly to obtain the gradient map; (3) spread and binary-code the gradient points; (4) obtain the corresponding gradient response maps through cosine similarity; (5) perform fast table-lookup matching in a parallel manner; (6) obtain the vehicle type matching result, judge the vehicle type, and complete vehicle type detection.
In step (1) of the offline training part, the detailed steps for computing the gradient map from the non-uniformly sampled points of the salient and non-salient regions are:
(1.1) first obtain the Harris corner points on the vehicle contour;
(1.2) taking each Harris corner point as the center, draw a circle of radius R=6 pixels on a blank image of the same size as the vehicle type template image, then find the connected components on this image, thereby locating the salient regions of the vehicle image;
(1.3) sample the obtained salient regions densely and the non-salient regions sparsely; compute the image gradients of the R, G and B channels of the non-uniformly sampled image, and for each gradient point take the largest gradient magnitude among the three channels as its gradient value; then keep only the gradient points with larger gradient values by thresholding; quantize the obtained gradients into N gradient directions (for example N=5), and take the gradient direction that occurs most often in each gradient point's neighborhood as the direction of that point;
(1.4) binary-code the quantized gradient directions, representing each gradient direction with a binary string of length N=5 and forming the binary-coded gradient map; a small illustration of this coding follows.
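As an illustration only (not part of the claimed scheme), the following minimal Python snippet shows one way the N=5 quantized directions can be mapped to 5-bit binary codes, with each direction occupying one bit so that spreading can later combine directions by bitwise OR:

```python
# Minimal illustration, assuming N=5: direction k is encoded as a 5-bit string with only bit k set.
N = 5
codes = {k: 1 << k for k in range(N)}                      # direction index -> one-hot bit code
as_strings = {k: format(v, "05b") for k, v in codes.items()}
print(as_strings)                                          # e.g. 0 -> '00001', 4 -> '10000'
```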
Step (2) of the offline training part also comprises obtaining the gradient feature information of the template and the pre-stored response tables, comprising the following steps:
(2.1) gradient point spreading is performed on the binarized image gradient map: each gradient point is spread within a T × T neighborhood (for example T=3) by a bitwise OR operation, so that each point contains all gradient directions occurring in the neighborhood of radius T/2, yielding the spread binary-coded map;
(2.2) after the spread gradient image is obtained, the similarity of template matching is realized by computing cosine similarity; during matching, among all gradient directions within the T × T neighborhood of a gradient point, the direction that yields the maximum cosine response with the currently matched direction is taken as the best-matching gradient direction; because the gradients are quantized into N=5 levels, N=5 gradient response maps are obtained, each gradient direction having its own gradient response table; the maximum cosine response between each gradient direction and the neighborhood direction set represented by a binary code can be precomputed and kept in memory so that the maximum cosine response corresponding to a code can be looked up.
In step (3) of the offline training part, k-means clustering determines the vehicle type subspaces and the hierarchical index is established as follows:
(3.1) to improve the search speed and reduce the number of vehicle type templates compared in each matching pass, the method applies the k-means clustering method to coarsely cluster the template library images by appearance, forming different vehicle type space distributions;
(3.2) on the basis of the vehicle type space distributions, the vehicle type template library is divided into a two-layer hierarchical index: the first-layer templates are the coarse vehicle type class templates, and the second-layer templates are the concrete vehicle type templates.
Step (1) of the online matching part obtains the vehicle detection image to be identified, first extracting it by means of a mixture-of-Gaussians model with adaptive background updating; in this step, as much unnecessary foreground as possible is removed to narrow the computation range of the subsequent matching algorithm and improve detection efficiency (a code sketch follows).
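A minimal sketch of this extraction step is given below, using OpenCV's mixture-of-Gaussians background subtractor; the parameter values and the area threshold are illustrative assumptions, not the patent's exact implementation:

```python
import cv2

def extract_vehicle_rois(video_path, min_area=2000):
    """Extract rough vehicle regions from a traffic video with a GMM background model."""
    cap = cv2.VideoCapture(video_path)
    # MOG2 models each pixel as a mixture of Gaussians and updates the background adaptively.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25, detectShadows=True)
    rois = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                     # foreground mask
        mask = cv2.medianBlur(mask, 5)                     # suppress small, unnecessary foreground
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h > min_area:                           # keep only vehicle-sized blobs (assumed threshold)
                rois.append(frame[y:y + h, x:x + w])
    cap.release()
    return rois
```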
In step (2) of the online matching part, computing the gradient map of the non-uniformly sampled points of the vehicle to be identified comprises the following steps:
(2.1) first obtain the Harris corner points on the vehicle contour;
(2.2) taking each Harris corner point as the center, draw a circle of radius R=6 pixels on a blank image of the same size as the vehicle type template image, then find the connected components on this image, thereby locating the salient regions of the vehicle image;
(2.3) sample the obtained salient regions densely and the non-salient regions sparsely; compute the image gradients of the R, G and B channels of the non-uniformly sampled image, and for each gradient point take the largest gradient magnitude among the three channels as its gradient value; then keep only the gradient points with larger gradient values by thresholding; quantize the obtained gradients into N gradient directions (N=5 in this scheme), and take the gradient direction that occurs most often in each gradient point's neighborhood as the direction of that point;
(2.4) binary-code the quantized gradient directions, representing each gradient direction with a binary string of length N=5 and forming the binary-coded gradient map.
In step (3) of the online matching part, the gradient points of the vehicle to be detected are spread and binary-coded: gradient point spreading is performed on the binarized image gradient map, each gradient point being spread within a T × T neighborhood (T=3 in this scheme) by a bitwise OR operation so that each point contains all gradient directions occurring in the neighborhood of radius T/2, yielding the spread binary-coded map.
In step (4) of the online matching part, the gradient response maps are computed: among all gradient directions within the T × T (T=3) neighborhood of a gradient point's position, the direction that yields the maximum cosine response with the currently matched direction is taken as the best-matching gradient direction.
In step (5) of the online matching part, the matching is computed as follows:
(5.1) to further increase the speed of the algorithm, parallel computation over the gradient response maps is adopted; the gradient response maps are first linearized to form the linearized memory of cell*cell gradient response maps (here cell=2), i.e. the 5 gradient response maps are linearized into 4 (cell*cell=4) row vectors;
(5.2) parallel computation through the linearized memory allows the template matching similarities of multiple windows to be computed at the same time; during matching, the templates are matched level by level through the hierarchical template library, the linearized memory of the corresponding gradient response map is found from the gradient direction of each gradient point in the template image, and the offset of that gradient point within the linearized memory (a row vector) of the corresponding gradient response map is computed from its position within the cell*cell region;
(5.3) finally all row vectors are aligned by their offsets and the cosine response values at the corresponding positions are summed; in the summed row vector, each element is the similarity of the template in that detection window, so the coordinate position corresponding to the maximum value is the position of the target.
In this scheme, N, T and cell are natural numbers greater than 0, preferably N=5, T=3 and cell=2.
The beneficial effects of the invention are as follows: in the scheme of the present invention, vehicle type detection is a coherent detection applicable to nearly all vehicle types. To address the high complexity of template-based vehicle type detection and the slow search through the template library, the scheme samples the size-normalized vehicle image non-uniformly, densely in the salient regions and sparsely in the non-salient regions, to obtain the sampled points; the sampled points are then gradient-spread and binary-coded to build vehicle type templates suited to parallel computation; the vehicle type appearance is coarsely clustered by k-means to build different vehicle type spaces and a multi-level vehicle type search; and in the concrete matching process, the gradient response tables are precomputed offline while, at matching time, parallel in-memory computation with fast table lookup of the cosine-similarity response is used. The scheme therefore performs better than previous schemes in the real-time performance and accuracy of vehicle type detection.
Brief description of the drawings
Fig. 1 is the flow chart of prior art 1;
Fig. 2 is the flow chart of prior art 2;
Fig. 3 is the vehicle type detection flow chart of the present invention;
Fig. 4 shows the flow of obtaining non-uniform sampling points based on salient and non-salient regions;
Fig. 5 shows gradient quantization and the corresponding binary coding;
Fig. 6 shows the gradient spreading of an image and the process of forming the binary-coded gradient map;
Fig. 7 shows the precomputation of the gradient response tables;
Fig. 8 shows the computation of the gradient response maps;
Fig. 9 shows the linearization of the gradient response maps;
Fig. 10 shows the computation of the vehicle type template matching similarity map;
Fig. 11 is an example of building part of the vehicle type template library index;
Fig. 12 shows the vehicle type matching process;
Fig. 13 is the breakdown diagram of image size normalization.
Embodiment
The invention is further described below in conjunction with the accompanying drawings.
Embodiment 1: a real-time vehicle type detection system based on traffic video. The present invention adopts an improved template matching method. Sampling points are obtained by a non-uniform sampling scheme; on the basis of the sampled points the gradients are quantized, the quantized gradient directions are spread and binary-coded to obtain the corresponding spread gradient map, and a template is thereby built; a hierarchical template library is then established by coarse k-means clustering. The spread gradient map of the vehicle to be detected is obtained in the same way, and template matching is carried out by hierarchical retrieval. In the first stage of building the template, the uniform sampling of the traditional approach is abandoned; instead the template is built by non-uniform sampling based on the salient regions of the vehicle, which greatly reduces the number of gradient points involved in template matching and the amount of response-map computation during matching. In the second stage of forming the template library, response tables are built for the quantized and spread gradient map according to the binary code values, and the results are stored for fast lookup during matching. In the template matching stage, parallel fast matching is adopted to improve the matching speed. In addition, template matching is very sensitive to image size, so the image sizes are normalized: the sizes of the template images in the template library are normalized, and the size of the extracted vehicle image is normalized as well, which both reduces the number of templates in the library and improves the matching speed and matching accuracy. The specific embodiment of the invention is further described below.
1) Non-uniform sampling stage
Unlike the conventional approach of sampling the vehicle type template uniformly along the contour, this method samples densely in the salient regions of the vehicle and sparsely in the non-salient regions, as shown in Figure 4. The specific flow is as follows:
(1.1) compute the Harris corner points and obtain the corresponding points. Harris corner points are points with large variation in both the horizontal and vertical directions.
(1.2) after the corner points are obtained, draw a circle of radius R=6 pixels around each corner point on a blank image of the same size as the vehicle type template image, then find the connected components on this image.
(1.3) transfer the located regions of the same-sized blank image onto the corresponding positions of the vehicle image to obtain the salient regions; everything outside the salient regions is defined as the non-salient regions.
(1.4) sample the salient regions of the vehicle densely and the non-salient regions sparsely (a code sketch of this stage follows).
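A minimal Python sketch of this stage is given below; the Harris threshold, the sampling steps and the use of filled circles as the salient mask are illustrative assumptions, not the patent's exact code:

```python
import cv2
import numpy as np

def nonuniform_sample_points(image, radius=6, dense_step=2, sparse_step=8):
    """Dense sampling inside Harris-corner salient regions, sparse sampling elsewhere."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corners = np.argwhere(harris > 0.01 * harris.max())    # (row, col) corner positions

    # Draw a filled circle of radius R around every corner on a blank image of the same size;
    # the union of these circles plays the role of the connected salient regions.
    mask = np.zeros(gray.shape, dtype=np.uint8)
    for r, c in corners:
        cv2.circle(mask, (int(c), int(r)), radius, 255, -1)

    points = []
    h, w = gray.shape
    for y in range(0, h, dense_step):
        for x in range(0, w, dense_step):
            if mask[y, x] > 0 or (y % sparse_step == 0 and x % sparse_step == 0):
                points.append((x, y))                      # dense inside salient regions, sparse outside
    return points, mask
```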
2) Building the gradient template and realizing parallel matching
After the vehicle is sampled non-uniformly, the template is further built: the gradients are quantized, the corresponding binary coding is performed, and the corresponding fast parallel matching is realized. The specific implementation is:
(2.1) On the obtained sampling-point map, compute the image gradients of the three RGB channels (the RGB color model is an industry color standard that obtains a wide range of colors by varying and superimposing the red (R), green (G) and blue (B) channels; it covers nearly all colors perceivable by human vision and is currently one of the most widely used color systems), and for each gradient point take the largest gradient magnitude among the three channels as its gradient value. To strengthen the robustness against noise interference and illumination variation, the gradient image is thresholded so that only the gradient points with larger gradient values remain.
(2.2) For the retained points, quantize the gradient direction into 5 gradient directions; the concrete quantization criterion is shown in Figure 5. The gradient direction that occurs most often in each gradient point's neighborhood is taken as the direction of that point, and each direction is converted into a binary code.
(2.3) To further increase the noise resistance of the image features, surrounding neighborhood points are introduced and the gradient points of the image are spread so that they incorporate the surrounding context. Specifically, the neighborhood is divided by 3*3 regions; the gradient change at each point in a region is the superposition of the gradient directions occurring in its neighborhood; all directions are then represented in binary coding and combined by a bitwise OR operation, giving the new binary gradient representation. The spreading process is shown in Figure 6, and a code sketch is given below.
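A minimal sketch of steps (2.1)–(2.3), assuming N=5 orientation bins, a fixed magnitude threshold and a T=3 spreading neighborhood, is given below; for brevity it omits the per-neighborhood dominant-direction vote and the non-uniform sampling mask:

```python
import cv2
import numpy as np

def quantized_spread_gradients(image, n_bins=5, grad_thresh=30.0, t=3):
    """Quantize per-pixel RGB gradients into n_bins directions and spread them by bitwise OR."""
    img = image.astype(np.float32)
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.sqrt(gx ** 2 + gy ** 2)

    # For each pixel keep the channel with the largest gradient magnitude.
    best = np.argmax(mag, axis=2)
    rows, cols = np.indices(best.shape)
    gxs, gys, mags = gx[rows, cols, best], gy[rows, cols, best], mag[rows, cols, best]

    # Quantize the orientation (modulo 180 degrees) into n_bins directions and keep only the
    # points whose magnitude passes the threshold; each direction occupies one bit of the code.
    ang = (np.degrees(np.arctan2(gys, gxs)) + 180.0) % 180.0
    bins = np.minimum((ang / (180.0 / n_bins)).astype(np.int32), n_bins - 1)
    codes = np.where(mags > grad_thresh, 1 << bins, 0).astype(np.uint8)

    # Spreading: OR each code over its T x T neighbourhood so every point carries all orientations
    # seen nearby (edges wrap around here, which a full implementation would avoid).
    spread = np.zeros_like(codes)
    for dy in range(-(t // 2), t // 2 + 1):
        for dx in range(-(t // 2), t // 2 + 1):
            spread |= np.roll(np.roll(codes, dy, axis=0), dx, axis=1)
    return spread
```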
(2.4) After the gradient image is obtained, the matching quality is measured by cosine similarity: the closer the gradient direction at a gradient point in the template is to the gradient direction at the corresponding gradient point in the detected image, the larger the computed cosine response and the higher the similarity of the two. The concrete formula is as follows:
S(Image, Template, c) = Σ_{t ∈ Z} max_{i ∈ M(c+t)} |cos(ori(Template, t) − ori(Image, i))|
where S(Image, Template, c) is the template matching similarity of the current region; c is the offset of the current region; Z is the template region; M is the region in the detected image corresponding to the template window; and i and t are the gradient index values of the detected image and the template image, respectively. A brute-force worked example of this formula follows.
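As a worked example of the formula above, the following sketch evaluates S(Image, Template, c) directly with cosines rather than with the precomputed tables introduced next; the orientation arrays (in radians) and the value −1 for empty positions are illustrative assumptions:

```python
import numpy as np

def similarity_at(ori_image, ori_template, template_points, c, t=3):
    """Brute-force S(Image, Template, c): for every template gradient point, take the best
    |cos(direction difference)| within a t x t neighbourhood of the shifted position and sum."""
    cx, cy = c                                             # offset of the current window
    total = 0.0
    for x, y in template_points:                           # gradient points of the template (set Z)
        best = 0.0
        for dy in range(-(t // 2), t // 2 + 1):
            for dx in range(-(t // 2), t // 2 + 1):
                ix, iy = cx + x + dx, cy + y + dy
                inside = 0 <= iy < ori_image.shape[0] and 0 <= ix < ori_image.shape[1]
                if inside and ori_image[iy, ix] >= 0:      # -1 marks positions with no gradient
                    diff = ori_template[y, x] - ori_image[iy, ix]
                    best = max(best, abs(np.cos(diff)))    # best-matching direction in the neighbourhood
        total += best
    return total
```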
During matching, among all gradient directions in the neighborhood of a gradient point's position, the direction that yields the maximum cosine response with the currently matched direction is taken as the best-matching gradient direction. Therefore, the above process computes N=5 gradient response maps for the current detected image, one per gradient direction. The formula for precomputing the gradient response tables is:
τ_i[ζ] = max_{l ∈ ζ} |cos(i − l)|
where ζ is the binary code formed by the set of gradient directions in the neighborhood, and i is the quantized gradient direction (ranging from 1 to N=5). The concrete process of computing the N=5 gradient response tables T1, T2, T3, T4, T5 is shown in Figure 7; a code sketch follows.
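A minimal sketch of this precomputation, assuming N=5 directions spaced 180/N degrees apart and a 5-bit neighborhood code (2^5 = 32 entries per table), is given below:

```python
import numpy as np

def precompute_response_tables(n_bins=5):
    """tables[i][zeta] = max over directions l present in code zeta of |cos(i - l)|."""
    step = np.pi / n_bins                                  # angular width of one quantization bin
    tables = np.zeros((n_bins, 1 << n_bins), dtype=np.float32)
    for i in range(n_bins):                                # table T_{i+1} for quantized direction i
        for zeta in range(1 << n_bins):                    # every possible neighbourhood code
            best = 0.0
            for l in range(n_bins):
                if zeta & (1 << l):                        # direction l occurs in the code
                    best = max(best, abs(np.cos((i - l) * step)))
            tables[i, zeta] = best
    return tables

# Usage: the response of template direction i against an encoded image point is a single
# lookup, tables[i, code], instead of an explicit cosine evaluation at matching time.
```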
(2.5) Construct the N=5 gradient response maps of the current detected image by looking up the gradient response tables; the concrete process of computing the N=5 gradient response maps M1, M2, M3, M4, M5 is shown in Figure 8.
(2.6) To enable parallel computation of the gradient response maps at matching time, the gradient response maps are first linearized to form the linearized memory of cell*cell gradient response maps (cell=2 in this scheme). The linearization process is shown in Figure 9: the N=5 gradient response maps are linearized into cell*cell=4 row vectors, i.e. the linearized memory of the 4 gradient response maps.
(2.7) Parallel computation through the linearized memory allows the template matching similarities of multiple windows to be computed at the same time. During matching, the linearized memory of the corresponding gradient response map is found from the gradient direction of each gradient point in the template image, and the offset of that gradient point within the linearized memory (a row vector) of the corresponding gradient response map is then computed from its position within the cell*cell region as:
offset=(Y/cell)*(Width/cell)+(X/cell)
where (X, Y) is the coordinate position of the gradient point in the template image and Width is the width of the current detected image. The linearized memories (row vectors) of the gradient response maps corresponding to all gradient points in the template image are located and their respective offsets computed; finally all row vectors are aligned by their offsets and the cosine response values at the corresponding positions are summed. In the summed row vector, each element is the similarity of the template in that detection window, so the coordinate position corresponding to the maximum value is the position of the target. This parallel-computation design increases the matching speed many times over.
The concrete process of computing the vehicle type template matching similarity map is shown in Figure 10; a code sketch of the linearization and the shift-and-add matching follows.
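A minimal sketch of the linearization and the shift-and-add matching is given below; it assumes cell=2, image sides divisible by cell, and that each of the N response maps is linearized into its own cell*cell row vectors, which is one way to organize the linearized memory described above:

```python
import numpy as np

def linearize(response_map, cell=2):
    """Split one H x W response map into cell*cell row vectors (the linearized memory)."""
    rows = []
    for oy in range(cell):
        for ox in range(cell):
            rows.append(response_map[oy::cell, ox::cell].reshape(-1))
    return rows                                            # cell*cell vectors of length (H/cell)*(W/cell)

def match_template(linear_maps, template_points, width, cell=2):
    """linear_maps[ori] is the list returned by linearize() for direction ori;
    template_points is a list of (x, y, ori) gradient points of the template."""
    length = len(linear_maps[0][0])
    score = np.zeros(length, dtype=np.float32)
    for x, y, ori in template_points:
        vec = linear_maps[ori][(y % cell) * cell + (x % cell)]   # row vector holding this parity
        offset = (y // cell) * (width // cell) + (x // cell)     # offset = (Y/cell)*(Width/cell)+(X/cell)
        score[:length - offset] += vec[offset:]                  # align by offset and accumulate
    return score                                                 # one similarity per candidate window
```

The index of the largest value in the summed vector then gives the window position where the template matches best.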
3) Building different vehicle type subspaces by k-means clustering and establishing the hierarchical index
A vehicle type template library index is established. To reduce the number of vehicle type templates compared in each matching pass and meet the requirement of real-time vehicle type recognition, this scheme builds an index for the vehicle type template library. The corresponding images are coarsely clustered with the k-means clustering method to obtain a two-layer index: the first-layer templates are the coarse vehicle type class templates, and the second-layer templates are the concrete vehicle type templates. At matching time, the image is first matched against the coarse class templates, the class with the highest matching rate is selected, and a second match is then performed against the concrete templates of the vehicle types in that class, thereby matching the concrete vehicle type. An example of part of the vehicle type template library index is shown in Figure 11; a code sketch of the clustering follows.
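A minimal sketch of the coarse clustering and the two-level index, under the assumption that each template is summarized by a fixed-length appearance vector (for example its flattened, resized gray image), is given below:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_hierarchical_index(template_features, template_ids, n_clusters=3):
    """First level: one coarse class centre per cluster; second level: concrete templates per class."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(np.asarray(template_features, dtype=np.float32))
    index = {c: [] for c in range(n_clusters)}
    for tid, label in zip(template_ids, labels):
        index[label].append(tid)                           # concrete vehicle type templates of this class
    return km.cluster_centers_, index

# Matching first compares the query against the cluster centres (the coarse class templates),
# then only against the concrete templates of the best class -- the second match.
```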
4) Image size normalization
The template matching algorithm is very sensitive to image size, so the image sizes must be normalized. Specifically, the template images in the template library and the vehicle image to be identified are all normalized to unified sizes; the concrete practice is to scale proportionally according to the actual aspect ratio of the image, with the following scaling formula:
H2 = (W2 / W1) × H1
where W2, H2 are the image width and height after scaling and W1, H1 are the image width and height before scaling. Matching experiments were carried out with vehicle type templates of various sizes and the extracted vehicle images to be identified, and analysis of the experimental results leads to the following findings.
(4.1) For the extracted vehicle image to be identified, the scaled vehicle image width W2 is taken as 160 pixels, because experiments show that at this size the number of obtained gradient feature points satisfies the dual requirements of efficiency and effectiveness.
(4.2) For the vehicle type template images, the scaled template image width W2 is taken as 155, 145, 135 and 125 pixels, respectively. The extracted vehicle image to be identified cannot be guaranteed to cover exactly the whole vehicle, and a small amount of non-vehicle region remains around it; experimental analysis shows, however, that this non-vehicle region is roughly confined to a certain range. The experiments therefore lead to the four widths W2 = 155, 145, 135, 125 pixels, as shown in Figure 12. A sketch of the normalization follows.
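A minimal sketch of this width-driven normalization, using the formula H2 = (W2 / W1) × H1 and the widths just mentioned, is given below; the use of OpenCV resizing is an assumption:

```python
import cv2

def normalize_width(image, target_width):
    """Scale an image to target_width while keeping its aspect ratio: H2 = (W2 / W1) * H1."""
    h1, w1 = image.shape[:2]
    target_height = int(round(target_width / w1 * h1))
    return cv2.resize(image, (target_width, target_height))

DETECTED_IMAGE_WIDTH = 160                 # extracted vehicle image to be identified
TEMPLATE_WIDTHS = [155, 145, 135, 125]     # vehicle type template images
```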
5) Vehicle type matching process
The concrete vehicle type matching process is as follows:
(5.1) normalize the size of the extracted vehicle image (image size normalization).
(5.2) perform parallel matching with the index templates and select, from the three coarse classes of templates, the class with the highest matching similarity.
(5.3) perform a second match against the concrete templates of the vehicle types in that class, thereby matching the concrete vehicle type.
The vehicle type matching process is illustrated in Figure 13, taking a "van" as an example.
In this technical scheme, without affecting the matching quality, the matching efficiency on the normalized image is improved by the non-uniform sampling scheme; the binary coding formed after gradient spreading not only increases the robustness of the vehicle type characterization but also lays the foundation for the fast table lookup of the subsequent parallel computation; and the second-stage matching over the hierarchical index built by coarse clustering further improves the matching speed. The scheme thus realizes efficient and fast vehicle type detection.

Claims (10)

1. A real-time vehicle type detection system based on traffic video, characterized in that it comprises an offline training part and an online matching part;
The offline training part comprises the following steps: (1) compute Harris corner points and obtain the salient regions; then sample points densely in the salient regions and sparsely in the non-salient regions; (2) on the sampled image, compute the corresponding spread gradients to form the vehicle type template map, binary-code the template map, and pre-store the gradient response tables according to cosine similarity, completing the parallel-computation design; (3) finally, according to the differences in the vehicle type feature descriptions, build different subspaces by k-means clustering, establish a hierarchical vehicle type template index and record the relevant template information;
The online matching part comprises the following steps: (1) obtain the vehicle image to be identified from the traffic scene; (2) compute the salient and non-salient regions of the vehicle image, then sample points non-uniformly to obtain the gradient map; (3) spread and binary-code the gradient points; (4) obtain the corresponding gradient response maps through cosine similarity; (5) perform fast table-lookup matching in a parallel manner; (6) obtain the vehicle type matching result, judge the vehicle type, and complete vehicle type detection.
2. The real-time vehicle type detection system based on traffic video according to claim 1, characterized in that in step (1) of the offline training part, the detailed steps for computing the gradient map from the non-uniformly sampled points of the salient and non-salient regions are:
(1.1) first obtain the Harris corner points on the vehicle contour;
(1.2) taking each Harris corner point as the center, draw a circle of radius R=6 pixels on a blank image of the same size as the vehicle type template image, then find the connected components on this image, thereby locating the salient regions of the vehicle image;
(1.3) sample the obtained salient regions densely and the non-salient regions sparsely; compute the image gradients of the R, G and B channels of the non-uniformly sampled image, and for each gradient point take the largest gradient magnitude among the three channels as its gradient value; then keep only the gradient points with larger gradient values by thresholding; quantize the obtained gradients into N gradient directions, and take the gradient direction that occurs most often in each gradient point's neighborhood as the direction of that point;
(1.4) binary-code the quantized gradient directions, representing each gradient direction with a binary string of length N and forming the binary-coded gradient map.
3. The real-time vehicle type detection system based on traffic video according to claim 1, characterized in that step (2) of the offline training part also comprises obtaining the gradient feature information of the template and the pre-stored response tables, comprising the following steps:
(2.1) gradient point spreading is performed on the binarized image gradient map: each gradient point is spread within a T × T neighborhood, yielding the spread binary-coded map;
(2.2) after the spread gradient image is obtained, the similarity of template matching is realized by computing cosine similarity; during matching, among all gradient directions within the T × T neighborhood of a gradient point, the direction that yields the maximum cosine response with the currently matched direction is taken as the best-matching gradient direction; because the gradients are quantized into N levels, N gradient response maps are obtained, each gradient direction having its own gradient response table; the maximum cosine response between each gradient direction and the neighborhood direction set represented by a binary code can be precomputed and kept in memory so that the maximum cosine response corresponding to a code can be looked up.
4. The real-time vehicle type detection system based on traffic video according to claim 1, characterized in that in step (3) of the offline training part, k-means clustering determines the vehicle type subspaces and the hierarchical index is established as follows:
(3.1) to improve the search speed and reduce the number of vehicle type templates compared in each matching pass, the method applies the k-means clustering method to coarsely cluster the template library images by appearance, forming different vehicle type space distributions;
(3.2) on the basis of the vehicle type space distributions, the vehicle type template library is divided into a two-layer hierarchical index: the first-layer templates are the coarse vehicle type class templates, and the second-layer templates are the concrete vehicle type templates.
5. The real-time vehicle type detection system based on traffic video according to claim 1, characterized in that step (1) of the online matching part obtains the vehicle detection image to be identified, first extracting it by means of a mixture-of-Gaussians model with adaptive background updating; in this step, as much unnecessary foreground as possible is removed to narrow the computation range of the subsequent matching algorithm and improve detection efficiency.
6. The real-time vehicle type detection system based on traffic video according to claim 1, characterized in that in step (2) of the online matching part, computing the gradient map of the non-uniformly sampled points of the vehicle to be identified comprises the following steps:
(2.1) first obtain the Harris corner points on the vehicle contour;
(2.2) taking each Harris corner point as the center, draw a circle of radius R=6 pixels on a blank image of the same size as the vehicle type template image, then find the connected components on this image, thereby locating the salient regions of the vehicle image;
(2.3) sample the obtained salient regions densely and the non-salient regions sparsely; compute the image gradients of the R, G and B channels of the non-uniformly sampled image, and for each gradient point take the largest gradient magnitude among the three channels as its gradient value; then keep only the gradient points with larger gradient values by thresholding; quantize the obtained gradients into N gradient directions, and take the gradient direction that occurs most often in each gradient point's neighborhood as the direction of that point;
(2.4) binary-code the quantized gradient directions, representing each gradient direction with a binary string of length N and forming the binary-coded gradient map.
7. The real-time vehicle type detection system based on traffic video according to claim 1, characterized in that in step (3) of the online matching part, the gradient points of the vehicle to be detected are spread and binary-coded: gradient point spreading is performed on the binarized image gradient map, each gradient point being spread within a T × T neighborhood, yielding the spread binary-coded map.
8. The real-time vehicle type detection system based on traffic video according to claim 1, characterized in that in step (4) of the online matching part, the gradient response maps are computed: among all gradient directions within the T × T neighborhood of a gradient point's position, the direction that yields the maximum cosine response with the currently matched direction is taken as the best-matching gradient direction.
9. The real-time vehicle type detection system based on traffic video according to claim 1, characterized in that in step (5) of the online matching part, the matching is computed as follows:
(5.1) to further increase the speed of the algorithm, parallel computation over the gradient response maps is adopted; the gradient response maps are first linearized to form the linearized memory of cell*cell gradient response maps, i.e. the 5 gradient response maps are linearized into 4 row vectors;
(5.2) parallel computation through the linearized memory allows the template matching similarities of multiple windows to be computed at the same time; during matching, the templates are matched level by level through the hierarchical template library, the linearized memory of the corresponding gradient response map is found from the gradient direction of each gradient point in the template image, and the offset of that gradient point within the linearized memory of the corresponding gradient response map is computed from its position within the cell*cell region;
(5.3) finally all row vectors are aligned by their offsets and the cosine response values at the corresponding positions are summed; in the summed row vector, each element is the similarity of the template in that detection window, so the coordinate position corresponding to the maximum value is the position of the target.
10. The real-time vehicle type detection system based on traffic video according to claim 2, 3, 7 or 10, characterized in that N=5, T=3 and cell=2.
CN201410142327.2A 2014-04-02 2014-04-02 Real-time vehicle detecting system based on traffic video Expired - Fee Related CN103886760B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410142327.2A CN103886760B (en) 2014-04-02 2014-04-02 Real-time vehicle detecting system based on traffic video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410142327.2A CN103886760B (en) 2014-04-02 2014-04-02 Real-time vehicle detecting system based on traffic video

Publications (2)

Publication Number Publication Date
CN103886760A true CN103886760A (en) 2014-06-25
CN103886760B CN103886760B (en) 2016-09-21

Family

ID=50955627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410142327.2A Expired - Fee Related CN103886760B (en) 2014-04-02 2014-04-02 Real-time vehicle detecting system based on traffic video

Country Status (1)

Country Link
CN (1) CN103886760B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3360163B2 (en) * 1997-05-09 2002-12-24 株式会社日立製作所 Traffic flow monitoring device
KR100918837B1 (en) * 2009-07-10 2009-09-28 완전정보통신(주) System for hybrid detection vehicles and method thereof
CN101976341A (en) * 2010-08-27 2011-02-16 中国科学院自动化研究所 Method for detecting position, posture, and three-dimensional profile of vehicle from traffic images
CN102044151A (en) * 2010-10-14 2011-05-04 吉林大学 Night vehicle video detection method based on illumination visibility identification
CN103258213A (en) * 2013-04-22 2013-08-21 中国石油大学(华东) Vehicle model dynamic identification method used in intelligent transportation system
CN103295003A (en) * 2013-06-07 2013-09-11 北京博思廷科技有限公司 Vehicle detection method based on multi-feature fusion
CN103324920A (en) * 2013-06-27 2013-09-25 华南理工大学 Method for automatically identifying vehicle type based on vehicle frontal image and template matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
康维新 (Kang Weixin) et al.: "Harris and SIFT features of vehicles and vehicle type recognition", Journal of Harbin University of Science and Technology, vol. 17, no. 3, 30 June 2012 (2012-06-30) *
胡方明 (Hu Fangming) et al.: "A vehicle type classifier based on BP neural network", Journal of Xidian University (Natural Science Edition), vol. 32, no. 3, 30 June 2005 (2005-06-30) *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104112122A (en) * 2014-07-07 2014-10-22 叶茂 Vehicle logo automatic identification method based on traffic video
CN104268573A (en) * 2014-09-24 2015-01-07 深圳市华尊科技有限公司 Vehicle detecting method and device
CN104268573B (en) * 2014-09-24 2017-12-26 深圳市华尊科技股份有限公司 Vehicle checking method and device
CN104765768A (en) * 2015-03-09 2015-07-08 深圳云天励飞技术有限公司 Mass face database rapid and accurate retrieval method
CN104765768B (en) * 2015-03-09 2018-11-02 深圳云天励飞技术有限公司 The quick and precisely search method of magnanimity face database
CN105574944A (en) * 2015-12-15 2016-05-11 重庆凯泽科技有限公司 Highway intelligent toll collection system based on vehicle identification and method thereof
WO2017129015A1 (en) * 2016-01-29 2017-08-03 中兴通讯股份有限公司 Vehicle type recognition method and apparatus
CN108256566A (en) * 2018-01-10 2018-07-06 广东工业大学 A kind of adaptive masterplate matching process and device based on cosine similarity
CN109388727A (en) * 2018-09-12 2019-02-26 中国人民解放军国防科技大学 BGP face rapid retrieval method based on clustering
CN109212605A (en) * 2018-09-28 2019-01-15 中国科学院地质与地球物理研究所 pseudo-differential operator storage method and device
CN109194952A (en) * 2018-10-31 2019-01-11 清华大学 Wear-type eye movement tracing equipment and its eye movement method for tracing
CN109194952B (en) * 2018-10-31 2020-09-22 清华大学 Head-mounted eye movement tracking device and eye movement tracking method thereof
CN112016393A (en) * 2020-07-21 2020-12-01 华人运通(上海)自动驾驶科技有限公司 Vehicle parameter acquisition method, device, equipment and storage medium
CN113705576A (en) * 2021-11-01 2021-11-26 江西中业智能科技有限公司 Text recognition method and device, readable storage medium and equipment
CN113705576B (en) * 2021-11-01 2022-03-25 江西中业智能科技有限公司 Text recognition method and device, readable storage medium and equipment
CN117316373A (en) * 2023-10-08 2023-12-29 医顺通信息科技(常州)有限公司 HIS-based medicine whole-flow supervision system and method thereof
CN117316373B (en) * 2023-10-08 2024-04-12 医顺通信息科技(常州)有限公司 HIS-based medicine whole-flow supervision system and method thereof
CN118397844A (en) * 2024-06-25 2024-07-26 大唐智创(山东)科技有限公司 Intelligent management and control server and terminal integrating machine learning algorithm

Also Published As

Publication number Publication date
CN103886760B (en) 2016-09-21

Similar Documents

Publication Publication Date Title
CN103886760A (en) Real-time vehicle type detection system based on traffic video
Dornaika et al. Building detection from orthophotos using a machine learning approach: An empirical study on image segmentation and descriptors
CN103761531B (en) The sparse coding license plate character recognition method of Shape-based interpolation contour feature
CN101551809B (en) Search method of SAR images classified based on Gauss hybrid model
CN103810505B (en) Vehicles identifications method and system based on multiple layer description
CN102622607B (en) Remote sensing image classification method based on multi-feature fusion
CN102194114B (en) Method for recognizing iris based on edge gradient direction pyramid histogram
CN104850850A (en) Binocular stereoscopic vision image feature extraction method combining shape and color
CN104915636A (en) Remote sensing image road identification method based on multistage frame significant characteristics
CN104239898A (en) Method for carrying out fast vehicle comparison and vehicle type recognition at tollgate
CN104240256A (en) Image salient detecting method based on layering sparse modeling
CN106909902A (en) A kind of remote sensing target detection method based on the notable model of improved stratification
CN104112122A (en) Vehicle logo automatic identification method based on traffic video
CN103309982B (en) A kind of Remote Sensing Image Retrieval method of view-based access control model significant point feature
CN102915544A (en) Video image motion target extracting method based on pattern detection and color segmentation
CN103927511A (en) Image identification method based on difference feature description
CN103544488B (en) A kind of face identification method and device
CN104680158A (en) Face recognition method based on multi-scale block partial multi-valued mode
CN110598564A (en) OpenStreetMap-based high-spatial-resolution remote sensing image transfer learning classification method
CN103678552A (en) Remote-sensing image retrieving method and system based on salient regional features
CN105404859A (en) Vehicle type recognition method based on pooling vehicle image original features
CN104680169A (en) Semi-supervised diagnostic characteristic selecting method aiming at thematic information extraction of high-spatial resolution remote sensing image
CN105405138A (en) Water surface target tracking method based on saliency detection
CN109543546A (en) The gait age estimation method returned based on the distribution of depth sequence
CN103324753B (en) Based on the image search method of symbiotic sparse histogram

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Li Tao, Gao Dawei, Tang Hongqiang, Li Dongmei, Qiao Pizhe, Zhu Xiaojun, Zhang Dongliang, Qu Hao, Zou Xiangling, Guo Hangyu, Liu Yong

Inventor before: Li Tao, Ye Mao, Xiang Tao, Li Dongmei, Zhu Xiaojun, Zhang Dongliang, Bao Zhijun, Tang Hongqiang

COR Change of bibliographic data
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20170728

Address after: 450048, Henan economic and Technological Development Zone, Zhengzhou Second Avenue West, South all the way Xinghua science and Technology Industrial Park, No. 2, building 9, room 908, -37

Patentee after: ZHENGZHOU CHANTU INTELLIGENT TECHNOLOGY CO.,LTD.

Address before: Yuelu District City, Hunan province 410000 Changsha Lushan Road No. 932

Patentee before: Li Tao

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160921