CN107301405A - Method for traffic sign detection under natural scene - Google Patents
- Publication number
- CN107301405A (application CN201710540228.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- scene
- pixel
- color
- segmentation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/582—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Abstract
The present invention proposes a traffic sign detection method for natural scenes, comprising: acquiring a detection image captured in a natural scene; counting the luminance information of the detection image, dividing it into different luminance regions according to graded brightness thresholds, calculating the pixel proportion of each luminance region, and classifying the image as a dark, bright, backlight or normal scene according to these proportions and scene classification thresholds; selecting a Gamma parameter value according to the scene classification result and enhancing the classified image with an adaptive Gamma enhancement algorithm; in the RGB color space, selecting a segmentation algorithm per scene and performing color segmentation to obtain suspected target regions; binarizing the color-segmented grayscale image to obtain binarized suspected target regions; and screening the suspected target regions with a feature filter to locate the traffic sign region. The method balances the robustness and real-time performance of sign detection.
Description
Technical Field
The invention relates to the technical field of sign identification and detection, in particular to a traffic sign detection method in a natural scene.
Background
Traffic signs play an important role in keeping road traffic orderly. During driving, accurately recognizing traffic signs improves road safety, while ignoring them can cause traffic accidents. An unmanned vehicle that cannot automatically detect and accurately recognize traffic signs cannot be truly driverless. At present, traffic sign detection and recognition technology is widely researched in computer vision, pattern recognition, intelligent robotics and intelligent transportation systems, and has important academic significance and practical value in applications such as ADAS, autonomous driving, and sign monitoring and maintenance. Nevertheless, applying traffic sign detection and recognition in real life still faces great challenges. Many factors make detection and recognition difficult: (1) long exposure to sunlight fades the signs; (2) air pollution and weather conditions reduce visibility; (3) complex outdoor lighting affects sign color; (4) signs are occluded by obstacles; (5) vehicle motion causes motion blur. These interferences complicate detection and recognition, and place high demands on the real-time performance and robustness of the related algorithms.
At present, the main research methods for traffic sign detection are methods based on color segmentation, detection based on specific shapes, and classifiers designed on texture and local features.
Color-based segmentation techniques extract the target region in the RGB, HSI or YUV color space. (1) Color segmentation in RGB space extracts the target color region using the difference relations among the R, G and B components. However, the RGB color space is very sensitive to illumination variation, which easily causes missed or false detections; overcoming this problem has always been the key difficulty of color segmentation in RGB space. (2) Color segmentation in HSI space converts the RGB color space to HSI and extracts the target color region by exploiting HSI's insensitivity to illumination. However, the color-space conversion is time-consuming, so the real-time performance of detection cannot be guaranteed. (3) Color segmentation in YUV space converts RGB to YUV and has been used to detect blue rectangular signboards; however, this approach is not robust enough for the wide variety of signs. Color-based segmentation often requires designing many thresholds in pursuit of robustness, and determining these thresholds is very difficult in practice. Color segmentation in RGB space has the advantage of meeting the real-time requirement; how to also meet the robustness requirement has long been the key point, and the place most worth a breakthrough, in this field.
Detection based on specific shapes extracts the target region from the signboard's characteristic shape, applying shape-feature algorithms directly to the image, mainly: the Hough transform, radial symmetry, and gradient histograms. (1) The Hough transform is mainly used to detect circular signs and cannot detect all sign shapes; moreover, it is slow to compute and cannot meet the real-time requirement. (2) The fast radial symmetry detection algorithm extracts the target region from the symmetry of triangles, squares, diamonds and circles, but is not robust to deformed targets. (3) The gradient histogram extracts the target using features computed from image gradient information; however, gradient features are very sensitive to image noise and cannot meet the robustness requirement. Shape-based detection usually spends a large time cost extracting shape features and cannot meet the real-time requirement.
Classifiers designed on texture and local features treat color and shape features as complementary: coarse detection by color segmentation first, then fine detection by shape features, mainly: HSI color segmentation combined with template matching; color-shape pairs; neural networks; Adaboost classifiers; and HOG combined with SVM classifiers. (1) HSI color segmentation plus template matching completes segmentation in the HSI color space and obtains the detection result by shape template matching; but both the HSI conversion and the template matching take much time and cannot meet the real-time requirement. (2) A color-shape pair associates a specific color segmentation result with a specific shape extraction method for each traffic sign type; but under the complex interference of urban traffic its robustness cannot be guaranteed. (3) The neural network approach designs neural feature extractors for color and shape features separately and fuses them with fuzzy logic, but its computation is heavy and cannot meet the real-time requirement. (4) The Adaboost classifier effectively combines color and local feature information to detect traffic signs, but needs long online computation for complex real scenes and cannot meet the real-time requirement. (5) HOG plus SVM builds HOG features on each color component, effectively fusing color and edge information as SVM input features, but the process is time-consuming and cannot meet the real-time requirement.
None of the above methods balances the robustness and real-time performance of sign detection: they either fail the real-time requirement while ensuring robustness and improving detection accuracy, or show insufficient robustness in real environments while ensuring real-time performance.
Disclosure of Invention
The invention aims to provide a traffic sign detection method for natural scenes that balances the robustness and real-time performance of sign detection.
In order to solve the above problems, the invention provides a traffic sign detection method in a natural scene, which comprises the following steps:
S1: acquiring a detection image captured in a natural scene;
S2: counting the brightness information of the detection image, dividing it into different brightness regions according to graded brightness thresholds, calculating the pixel proportion of each brightness region, and classifying the image as a dark, bright, backlight or normal scene according to these proportions and the scene classification thresholds;
S3: selecting a Gamma parameter value according to the scene classification result, and enhancing the classified image with an adaptive Gamma enhancement algorithm;
S4: in the RGB color space, selecting a segmentation algorithm per scene and performing color segmentation to obtain suspected target regions: for a bright scene, performing color segmentation with a normalized RGB segmentation algorithm; for dark, backlight and normal scenes, performing color segmentation with an improved three-component color difference method, which adaptively adjusts the extracted color region by setting an adaptive weighting factor;
S5: binarizing the color-segmented grayscale image to obtain binarized suspected target regions;
S6: screening the suspected target regions with a feature filter to locate the traffic sign region, the feature filter being built from the shape features of traffic signs.
According to an embodiment of the present invention, the step S2 includes:
S21: counting the luminance information, dividing luminance regions according to the luminance division thresholds, and determining the numbers of pixels in the high-, middle- and low-luminance regions, denoted by the luminance statistics num_high, num_middle and num_low respectively;
S22: calculating the proportions P_high, P_middle and P_low of the pixel counts of the high, middle and low regions among all pixels:
P_high = num_high / M, P_middle = num_middle / M, P_low = num_low / M,
where M is the total number of pixels;
S23: classifying the image as a dark, bright, backlight or normal scene according to the scene classification thresholds, wherein:
a bright scene satisfies P_high > 0.53 & P_low < 0.35;
a dark scene satisfies P_low > 0.51;
a backlight scene satisfies P_high + P_low > 0.8 & P_low > P_middle & P_high > P_middle;
otherwise the scene is normal.
According to one embodiment of the present invention, the luminance information is counted and normalized, and the luminance regions are divided according to the luminance division thresholds as follows: the low region is [0, 0.4); the middle region is [0.4, 0.7); the high region is [0.7, 1].
According to an embodiment of the present invention, in step S3, the formula for performing image enhancement processing on the classified image by using the adaptive Gamma enhancement algorithm is as follows:
f(I) = I^γ,
where I is the gray value of each pixel before processing, f(I) is the gray value of the pixel after processing, and γ is the Gamma parameter.
According to an embodiment of the present invention, in step S4, the performing color segmentation by using the improved three-component color difference method in the RGB color space includes:
extracting the feature operator of the red region, λR − G − B, and processing the R, G, B components of the image according to formula (1) to extract the red region and obtain the R component map,
extracting the feature operator of the blue region, λB − R − G, and processing the R, G, B components of the image according to formula (2) to extract the blue region and obtain the B component map,
where CA1 denotes the pixel gray value of the extracted red region, CA2 the pixel gray value of the extracted blue region, and λ the adaptive weighting factor;
in step S4, performing color segmentation by using a normalized RGB segmentation algorithm in an RGB color space includes:
calculating r = R/(R + G + B), g = G/(R + G + B), b = B/(R + G + B);
if r > 0.4 & g < 0.3 the pixel is red; if b > 0.4 the pixel is blue; if r + g > 0.85 the pixel is yellow.
According to an embodiment of the present invention, the adaptive weighting factor is obtained by:
A1: setting an initial value of the adaptive weighting factor;
A2: calculating the numbers of pixels with value 0 on the R component map and the B component map according to formulas (1) and (2), denoted num_red0 and num_blue0 respectively;
A3: calculating the target proportion k0 = (num_red0 + num_blue0)/(2 × M × N), where M × N is the total number of pixels;
A4: if k0 > 0.98 & λ ≤ 2, setting λ = λ + 0.2 and returning to step A2; otherwise taking the current λ as the final adaptive weighting factor.
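The loop in steps A1–A4 can be sketched as follows. Formulas (1) and (2) are not reproduced in the text, so the clipped differences max(λR − G − B, 0) and max(λB − R − G, 0) used for the component maps are an assumption, and the function names are illustrative:

```python
import numpy as np

def component_zeros(rgb, lam):
    """Count zero-valued pixels on the red and blue component maps.
    The clipped-difference form of formulas (1)/(2) is an assumption."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    red_map = np.clip(lam * r - g - b, 0, None)
    blue_map = np.clip(lam * b - r - g, 0, None)
    return np.count_nonzero(red_map == 0), np.count_nonzero(blue_map == 0)

def adaptive_lambda(rgb, lam=1.2):
    """Steps A1-A4: grow lambda by 0.2 while more than 98% of the two
    component maps is zero and lambda has not exceeded 2."""
    mn = rgb.shape[0] * rgb.shape[1]          # M x N pixels
    while True:
        num_red0, num_blue0 = component_zeros(rgb, lam)
        k0 = (num_red0 + num_blue0) / (2 * mn)
        if k0 > 0.98 and lam <= 2:
            lam += 0.2                        # A4: enlarge and recompute
        else:
            return lam                        # current lambda is final
```

Each increment of λ makes the clipped operators positive on more pixels, so k0 falls and the loop terminates; the λ ≤ 2 guard bounds it in any case.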
According to an embodiment of the present invention, the step S5 includes the steps of:
s51: dividing the image into L gray levels;
s52: calculating the proportion of foreground points in the image as w0 and the average gray scale as uO, the proportion of background points in the image as w1 and the average gray scale as u1 according to each gray scale; calculating the total average gray scale of the image as u-w 0 u0+ w1 u1, and calculating the variance of the foreground and background images as g-w 0 (u0-u) (u0-u) + w1 (u1-u) (u1-u) w0 w1 (u0-u1) (u0-u 1);
s53: the gray scale when the variance g is maximum is the optimal segmentation threshold;
s54: and performing foreground and background segmentation by using the optimal segmentation threshold.
According to an embodiment of the present invention, the step S5 includes: and after the color-segmented gray-scale image is subjected to binarization processing, rough processing and gap filling are carried out by using morphological corrosion and expansion methods, so as to obtain a suspected target area.
According to an embodiment of the present invention, in step S6 a feature filter is built from the shape features of the signboard, including the aspect ratio, the perimeter-to-area ratio, the region area ratio and shape-similarity invariant features, and the suspected target regions are screened by the feature filter, wherein:
the aspect ratio satisfies 0.45 < ratio1 < 1.2;
the perimeter-to-area ratio satisfies 0.045 < ratio2 < 0.07 or ratio2 < 0.038;
the region area ratio satisfies 0.58 < ratio3 < 0.63; and
different judgment criteria are set for different shapes using the similarity-invariant features and spatial distribution of the shape contour: the contour coordinates are converted to polar coordinates (θ, ρ), and circles, triangles and rectangles are distinguished by the behavior of ρ.
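A scalar version of the screening above might look like the sketch below. The exact feature definitions are not spelled out in the text, so aspect ratio = height/width, perimeter-to-area = perimeter/region area, and area ratio = region area/bounding-box area are assumptions, as are the function and parameter names:

```python
def passes_feature_filter(width, height, perimeter, region_area):
    """Screen a candidate region with the three scalar shape features
    of step S6 (assumed definitions; thresholds from the text)."""
    ratio1 = height / width                  # aspect ratio
    ratio2 = perimeter / region_area         # perimeter-to-area ratio
    ratio3 = region_area / (width * height)  # region-to-bounding-box ratio
    ok1 = 0.45 < ratio1 < 1.2
    ok2 = (0.045 < ratio2 < 0.07) or ratio2 < 0.038
    ok3 = 0.58 < ratio3 < 0.63
    return ok1 and ok2 and ok3
```

The polar-coordinate shape test (θ, ρ) would be applied to the contour of any region that passes these scalar checks.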
According to an embodiment of the present invention, the method further includes step S7: masking the located traffic sign region with the original image:
S71: taking the binarized image of the located traffic sign region as a mask template and multiplying it with the original image, so that image values inside the traffic sign region are kept unchanged and values outside it are all 0;
S72: counting the sums of pixel gray values of the masked image along the horizontal and vertical directions respectively to obtain the traffic sign region, which is then cropped and extracted.
After the technical scheme is adopted, compared with the prior art, the invention has the following beneficial effects:
aiming at the complex condition in the natural scene, the traffic sign board can be quickly detected, the high detection rate can be achieved, the requirements of sign detection on robustness and real-time performance are met, a scene classification method based on the brightness condition is provided, the influence of complex illumination on RGB color components is solved, and the defect of insufficient robustness of image segmentation based on an RGB space is overcome; the method combines RGB color segmentation and shape feature-based screening, realizes coarse-to-fine mark detection, greatly improves the detection rate, and ensures the real-time requirement of mark detection.
Drawings
Fig. 1 is a schematic flow chart of a traffic sign detection method in a natural scene according to an embodiment of the present invention;
FIG. 2a is a schematic diagram of an original detection image according to an embodiment of the present invention;
fig. 2b is a schematic diagram of an image of a binarized suspected target area obtained by image color segmentation according to an embodiment of the present invention;
FIG. 2c is a schematic diagram of an image after screening and locating a traffic sign area according to an embodiment of the present invention;
FIG. 2d is a schematic diagram of an image after mask processing according to an embodiment of the present invention;
fig. 2e and 2f are schematic diagrams of images of the extracted traffic sign region by clipping according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.
Referring to fig. 1, in one embodiment, a method for detecting a traffic sign in a natural scene includes the following steps:
S1: acquiring a detection image captured in a natural scene;
S2: counting the brightness information of the detection image, dividing it into different brightness regions according to graded brightness thresholds, calculating the pixel proportion of each brightness region, and classifying the image as a dark, bright, backlight or normal scene according to these proportions and the scene classification thresholds;
S3: selecting a Gamma parameter value according to the scene classification result, and enhancing the classified image with an adaptive Gamma enhancement algorithm;
S4: in the RGB color space, selecting a segmentation algorithm per scene and performing color segmentation to obtain suspected target regions: for a bright scene, performing color segmentation with a normalized RGB segmentation algorithm; for dark, backlight and normal scenes, performing color segmentation with an improved three-component color difference method, which adaptively adjusts the extracted color region by setting an adaptive weighting factor;
S5: binarizing the color-segmented grayscale image to obtain binarized suspected target regions;
S6: screening the suspected target regions with a feature filter to locate the traffic sign region, the feature filter being built from the shape features of traffic signs.
The following describes a traffic sign detection method in a natural scene in an embodiment of the present invention with reference to the accompanying drawings, but the present invention is not limited thereto.
In step S1, the detection image is captured in real time in a natural scene during driving. Of course, a traffic sign may not appear in every captured image; the subsequent steps detect whether a sign is present, and images without signs are ignored. The processing in this embodiment assumes a traffic sign is present in the image. Since later processing is performed in RGB space, the image is a color image; the colors of FIG. 2a are removed for convenience of reproduction.
Next, step S2 is executed: count the brightness information of the detection image obtained in step S1, divide it into different brightness regions according to the graded brightness thresholds, calculate the pixel proportion of each region, and classify the image as a dark, bright, backlight or normal scene according to these proportions and the scene classification thresholds.
The brightness histogram can be used to compare the clearly distinct luminance intervals under different illumination conditions and narrow the classification intervals; the scene classification thresholds are then tuned over these intervals to obtain the best scene classification result.
Further, step S2 may include:
S21: counting the luminance information, dividing luminance regions according to the luminance division thresholds, and determining the numbers of pixels in the high-, middle- and low-luminance regions, denoted by the luminance statistics num_high, num_middle and num_low respectively;
S22: calculating the proportions P_high, P_middle and P_low of the pixel counts of the high, middle and low regions among all pixels:
P_high = num_high / M, P_middle = num_middle / M, P_low = num_low / M,
where M is the total number of pixels;
S23: classifying the image as a dark, bright, backlight or normal scene according to the scene classification thresholds, wherein:
a bright scene satisfies P_high > 0.53 & P_low < 0.35;
a dark scene satisfies P_low > 0.51;
a backlight scene satisfies P_high + P_low > 0.8 & P_low > P_middle & P_high > P_middle;
otherwise the scene is normal.
Optionally, the luminance information is counted and normalized, and the luminance regions are divided according to the luminance division thresholds as follows: the low region is [0, 0.4); the middle region is [0.4, 0.7); the high region is [0.7, 1].
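Putting S21–S23 together, the scene classifier can be sketched as follows, assuming a luminance image normalized to [0, 1] and half-open boundaries between regions; the function name is illustrative:

```python
import numpy as np

def classify_scene(luminance):
    """Classify a normalized luminance image ([0,1]) into one of four
    scenes using the pixel-ratio thresholds given in the text."""
    m = luminance.size                                 # M: total pixels
    p_low = np.count_nonzero(luminance < 0.4) / m      # region [0, 0.4)
    p_high = np.count_nonzero(luminance >= 0.7) / m    # region [0.7, 1]
    p_middle = 1.0 - p_low - p_high                    # region [0.4, 0.7)
    if p_high > 0.53 and p_low < 0.35:
        return "bright"
    if p_low > 0.51:
        return "dark"
    if (p_high + p_low > 0.8) and p_low > p_middle and p_high > p_middle:
        return "backlight"
    return "normal"
```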
And step S3 is executed, the Gamma parameter value is selected according to the scene classification result, and the image enhancement processing is carried out on the classified image by adopting the self-adaptive Gamma enhancement algorithm.
Specifically, in step S3, the classified image is enhanced with the adaptive Gamma enhancement algorithm using the formula
f(I) = I^γ,
where I is the gray value of each pixel before processing, f(I) is the gray value of the pixel after processing, and γ is the Gamma parameter.
According to the scene classification result, the classified image receives the corresponding adaptive Gamma enhancement. Unlike a general Gamma transformation, the Gamma parameter is selected adaptively from the classification result: γ = 0.5 for a bright scene; 2.2 for a dark scene; 1.2 for a backlight scene; and 1 for a normal scene.
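The adaptive Gamma step then reduces to a table lookup plus a power law; a minimal sketch assuming images normalized to [0, 1], with illustrative names:

```python
import numpy as np

# Gamma value per scene, as given in the text.
SCENE_GAMMA = {"bright": 0.5, "dark": 2.2, "backlight": 1.2, "normal": 1.0}

def gamma_enhance(image, scene):
    """Apply f(I) = I^gamma to a normalized image ([0,1]) using the
    gamma selected for the classified scene."""
    return np.power(image, SCENE_GAMMA[scene])
```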
The RGB color space is very sensitive to illumination changes, which easily causes missed or false detections. Classifying the brightness scene and applying adaptive enhancement removes the influence of illumination on the RGB color space, avoids missed and false detections, and overcomes the insufficient robustness of RGB-based image segmentation.
Next, step S4 is executed to perform coarse marker localization in RGB space. Under the RGB color space, selecting a segmentation algorithm for different scenes to perform image color segmentation to obtain a suspected target area; aiming at a bright scene, performing color segmentation by adopting a normalized RGB segmentation algorithm; aiming at a dark scene, a backlight scene and a normal scene, an improved three-component color difference method is adopted for color segmentation, and the improved three-component color difference method is used for self-adaptively adjusting an extracted color area by setting a self-adaptive weighting factor.
Since the luminance scene classification eliminates the effect of illumination on the RGB color space, the enhanced image can be color-segmented directly in the RGB space. And adaptively selecting an improved three-component color difference method and a normalized RGB segmentation method according to the classification result of the brightness scene.
Specifically, in step S4, color segmentation with the improved three-component color difference method in the RGB color space comprises:
extracting the feature operator of the red region, λR − G − B, and processing the R, G, B components of the image according to formula (1) to extract the red region and obtain the R component map,
extracting the feature operator of the blue region, λB − R − G, and processing the R, G, B components of the image according to formula (2) to extract the blue region and obtain the B component map,
where CA1 denotes the pixel gray value of the extracted red region, CA2 the pixel gray value of the extracted blue region, and λ the adaptive weighting factor.
Because the threshold interval of red is close to that of yellow, yellow is obtained together with red when the red region is extracted, so only the red and blue regions need to be extracted.
Previous weighting factors were constants and lacked robustness; the improved weighting factor adjusts itself to each image. With a fixed value of 1.6, dark images may suffer missed detections and backlit images over-detection; adaptive adjustment alleviates both.
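As a sketch (not a reproduction of the patent's formulas), the improved three-component colour-difference extraction can be written as follows. The operators λR − G − B (red, formula (1)) and λB − R − G (blue, formula (2)) follow claim 5; formulas (1) and (2) themselves are not reproduced in the text, so clamping the result to the [0, 255] gray range is an assumption.

```python
# Sketch of the improved three-component colour-difference extraction.
# Operators follow claim 5; clamping the result to [0, 255] is an assumption
# since formulas (1)-(2) are not reproduced in the text.

def colour_difference(pixels, lam=1.2):
    """pixels: list of (R, G, B) tuples -> (R-component map, B-component map)."""
    red_map, blue_map = [], []
    for r, g, b in pixels:
        ca1 = lam * r - g - b                        # red-region operator
        ca2 = lam * b - r - g                        # blue-region operator
        red_map.append(min(max(int(ca1), 0), 255))   # assumed clamp to [0, 255]
        blue_map.append(min(max(int(ca2), 0), 255))
    return red_map, blue_map

red_map, blue_map = colour_difference([(200, 30, 30), (30, 30, 200)], lam=1.6)
# red_map -> [255, 0], blue_map -> [0, 255]
```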
In step S4, the color segmentation using the normalized RGB segmentation algorithm in the RGB color space includes:
calculating r = R/(R + G + B), g = G/(R + G + B), b = B/(R + G + B);
if r ≥ 0.4 and g < 0.3, the pixel is red; if b > 0.4, the pixel is blue; if r + g > 0.85, the pixel is yellow. r, g and b are temporary proportion parameters.
Although luminance scene classification reduces the influence of illumination on the RGB space, directly applying the improved three-component color difference method to a bright scene yields incomplete edges in the extracted image, whereas the normalized RGB segmentation method avoids this; bright scenes are therefore segmented with a different method from the other scenes.
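A minimal sketch of the normalized RGB test for bright scenes, using the thresholds given above; the function name is illustrative only.

```python
# Sketch of the normalized-RGB colour test used for bright scenes
# (thresholds from the text; the function name is not from the patent).

def classify_pixel(R, G, B):
    s = R + G + B
    if s == 0:
        return "none"
    r, g, b = R / s, G / s, B / s   # temporary proportion parameters
    if r >= 0.4 and g < 0.3:
        return "red"
    if b > 0.4:
        return "blue"
    if r + g > 0.85:
        return "yellow"
    return "none"

print(classify_pixel(200, 40, 40))   # -> red
```

Because the components are normalized by their sum, the test is largely insensitive to overall brightness, which is why it suits the bright scene.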
Preferably, the adaptive weighting factor can be obtained by:
a1, setting an initial value of an adaptive weighting factor; the initial value may be set to 1.2;
a2, according to formulas (1) and (2), counting the pixels whose value is 0 on the R component map and the B component map, denoted num_red0 and num_blue0 respectively; that is, counting the number of pixels in each map whose component value is 0;
a3, calculating the target proportion k0 = (num_red0 + num_blue0)/(2 × M × N), where M × N is the total number of pixels;
a4, if k0 > 0.98 and λ ≤ 2, setting λ = λ + 0.2 and returning to step A2; otherwise, taking the current λ as the final adaptive weighting factor.
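The iteration A1-A4 above can be sketched as follows. Reading "pixel value 0" as the colour-difference operator being non-positive (i.e. clamped to zero) is an assumption about formulas (1)-(2), as is the flat pixel-list input.

```python
# Sketch of the adaptive weighting-factor search (steps A1-A4).
# "Pixel value 0" is read as a non-positive operator result (assumption).

def adapt_lambda(pixels):
    lam = 1.2                                   # A1: initial value
    total = len(pixels)                         # M x N, total pixel count
    while True:
        # A2: count zero-valued pixels on the R- and B-component maps
        num_red0 = sum(1 for r, g, b in pixels if lam * r - g - b <= 0)
        num_blue0 = sum(1 for r, g, b in pixels if lam * b - r - g <= 0)
        # A3: target proportion k0 = (num_red0 + num_blue0) / (2 * M * N)
        k0 = (num_red0 + num_blue0) / (2 * total)
        # A4: while almost everything is suppressed, enlarge lambda and retry
        if k0 > 0.98 and lam <= 2:
            lam += 0.2
        else:
            return lam
```

On an image with clear red or blue content the initial λ = 1.2 already passes the test; on a washed-out image λ grows in steps of 0.2 until enough pixels survive the operators.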
In the embodiment of the invention, luminance scenes are classified by brightness value: images are divided into different luminance scenes, and a matching extraction algorithm is then selected per scene, which improves robustness. Without scene classification, a single method would only suit normal illumination and would perform poorly on weakly or strongly lit images.
Next, step S5 is executed to binarize the color-segmented grayscale image, yielding a binarized suspected target region. Segmentation produces suspected regions, but since the segmented image is a grayscale image, binarization is required to obtain a binary image.
Specifically, step S5 includes the following steps:
s51: dividing the image into L gray levels;
s52: for each gray level, calculating the proportion of foreground points w0 and their average gray u0, and the proportion of background points w1 and their average gray u1; calculating the total average gray of the image as u = w0·u0 + w1·u1, and the between-class variance of foreground and background as g = w0·(u0 − u)² + w1·(u1 − u)² = w0·w1·(u0 − u1)²;
s53: the gray scale when the variance g is maximum is the optimal segmentation threshold;
s54: and performing foreground and background segmentation by using the optimal segmentation threshold.
When the variance g is at its maximum, the difference between foreground and background is considered greatest; the gray level t at this point is the optimal threshold, and gray values below t on the image become 0 while those above t become 1.
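The threshold search S51-S54 is the classic Otsu criterion; a minimal sketch, with a simple histogram-based search over all gray levels:

```python
# Sketch of the optimal-threshold search (steps S51-S54): choose the grey
# level t that maximises the between-class variance g = w0*w1*(u0-u1)^2.

def otsu_threshold(gray, levels=256):
    n = len(gray)
    hist = [0] * levels
    for v in gray:
        hist[v] += 1
    best_t, best_g = 0, -1.0
    for t in range(levels):
        w0 = sum(hist[:t + 1]) / n                      # foreground proportion
        w1 = 1.0 - w0                                   # background proportion
        if w0 == 0.0 or w1 == 0.0:
            continue
        u0 = sum(i * hist[i] for i in range(t + 1)) / (w0 * n)
        u1 = sum(i * hist[i] for i in range(t + 1, levels)) / (w1 * n)
        g = w0 * w1 * (u0 - u1) ** 2                    # between-class variance
        if g > best_g:
            best_g, best_t = g, t
    return best_t

gray = [10] * 50 + [200] * 50
t = otsu_threshold(gray)                                # -> 10
binary = [1 if v > t else 0 for v in gray]              # S54: 0 below t, 1 above
```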
Preferably, step S5 includes: after the color-segmented grayscale image is binarized, rough edges are removed and gaps filled using morphological erosion and dilation, so as to obtain the suspected target region. The image after the suspected target region is obtained is shown in fig. 2b.
Step S6 is then executed: the suspected target regions are screened by the feature filter to locate the traffic sign region, the feature filter being established on the basis of the shape features of traffic signs.
The suspected target regions extracted in step S5 still contain non-sign regions, which must be removed. The feature filter removes these irrelevant regions, accurately locating the traffic sign and yielding the traffic sign region.
Specifically, in step S6, a feature filter is established using shape features of the signboard, including the aspect ratio, perimeter-to-area ratio, area occupancy ratio and shape similarity invariant features, and the suspected target regions are filtered by the feature filter, wherein:
the aspect ratio satisfies 0.45 < ratio1 < 1.2;
the perimeter-to-area ratio satisfies 0.045 < ratio2 < 0.07 or ratio2 < 0.038;
the area occupancy ratio satisfies 0.58 < ratio3 < 0.63; and
setting different judgment criteria for different shapes by using the similarity-invariant feature and spatial distribution information of the shape contour, converting the contour coordinates into polar coordinates (θ, ρ), and distinguishing circles, triangles and rectangles according to the characteristics of the ρ values.
Exploiting the similarity invariance of contours overcomes sensitivity to image translation, rotation, scaling and similar disturbances. After accurate localization, the traffic sign region is obtained as shown in fig. 2c.
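An illustrative sketch of the feature filter of step S6. The threshold values come from the text, but the region descriptor and the interpretation of ratio3 as region area over bounding-box area are assumptions; the polar conversion shows the (θ, ρ) idea used to distinguish shapes.

```python
# Sketch of the step-S6 feature filter. Thresholds follow the text; the
# meaning of ratio3 (area over bounding-box area) is an assumption.
import math

def is_sign_candidate(width, height, perimeter, area, bbox_area):
    ratio1 = width / height                         # aspect ratio
    ratio2 = perimeter / area                       # perimeter-to-area ratio
    ratio3 = area / bbox_area                       # assumed area occupancy
    return (0.45 < ratio1 < 1.2
            and (0.045 < ratio2 < 0.07 or ratio2 < 0.038)
            and 0.58 < ratio3 < 0.63)

def contour_to_polar(points, cx, cy):
    """Convert contour coordinates to (theta, rho) about a centre point;
    near-constant rho suggests a circle, periodic rho peaks a polygon."""
    return sorted((math.atan2(y - cy, x - cx), math.hypot(x - cx, y - cy))
                  for x, y in points)
```

For example, a region of width 100, height 100, perimeter 320 and area 6000 inside a 100 × 100 bounding box passes all three threshold tests.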
In one embodiment, the method for detecting a traffic sign in a natural scene may further include step S7: masking the obtained traffic sign area and the original image:
s71: taking the binary image of the positioned traffic sign area as a mask template, and multiplying the mask template by the original image, so that the image value in the traffic sign area is kept unchanged, and the image values outside the traffic sign area are all 0; the image after the mask process is shown in FIG. 2 d;
s72: summing the pixel gray values of the masked image along the horizontal and vertical directions respectively to locate the traffic sign region, which is then cropped and extracted. The cropped and extracted images are shown in figs. 2e and 2f.
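The masking and projection steps S71-S72 can be sketched as follows. Images are plain nested lists and grayscale for brevity (the text multiplies the mask with the original colour image); the projection-based bounding box follows step S72.

```python
# Sketch of the masking (S71) and projection-based cropping (S72) step.
# Nested-list grayscale images are an illustrative simplification.

def mask_and_crop(image, mask):
    h, w = len(image), len(image[0])
    masked = [[image[y][x] * mask[y][x] for x in range(w)] for y in range(h)]
    row_sum = [sum(row) for row in masked]                      # horizontal sums
    col_sum = [sum(masked[y][x] for y in range(h)) for x in range(w)]  # vertical
    ys = [y for y, s in enumerate(row_sum) if s > 0]
    xs = [x for x, s in enumerate(col_sum) if s > 0]
    y0, y1, x0, x1 = min(ys), max(ys), min(xs), max(xs)
    return [row[x0:x1 + 1] for row in masked[y0:y1 + 1]]        # cropped sign

sign = mask_and_crop(
    [[5] * 4 for _ in range(4)],
    [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]],
)
# sign -> [[5, 5], [5, 5]]
```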
The method was verified on the German Traffic Sign Detection Benchmark (GTSDB), and the experimental results show that it meets the requirements of robustness and real-time performance. Compared with existing work in the field of traffic sign detection and recognition, the invention combines luminance scene classification with a sign detection method based on RGB and shape features, overcomes the influence of illumination on the RGB components, achieves a higher detection rate, and shows better robustness and real-time performance.
Although the present invention has been described with reference to the preferred embodiments, it is not intended to limit the scope of the claims, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention.
Claims (10)
1. A traffic sign detection method under a natural scene is characterized by comprising the following steps:
s1: acquiring a detection image shot in a natural scene;
s2: counting the brightness information of the detected image, dividing it into different brightness regions according to graded brightness thresholds, calculating the pixel proportion of each brightness region, and classifying the image as a dark scene, bright scene, backlight scene or normal scene according to the pixel proportions and the scene classification thresholds;
s3: selecting a Gamma parameter value according to a scene classification result, and performing image enhancement processing on the classified image by adopting a self-adaptive Gamma enhancement algorithm;
s4: under the RGB color space, selecting a segmentation algorithm for different scenes to perform image color segmentation to obtain a suspected target area; aiming at a bright scene, performing color segmentation by adopting a normalized RGB segmentation algorithm; aiming at a dark scene, a backlight scene and a normal scene, an improved three-component color difference method is adopted for color segmentation, and the improved three-component color difference method is used for self-adaptively adjusting an extracted color area by setting a self-adaptive weighting factor;
s5: carrying out binarization processing on the gray level image subjected to color segmentation to obtain a binarized suspected target area;
s6: screening the suspected target area through a characteristic screener, and positioning a traffic sign area; wherein the feature filter is established based on shape features of the traffic sign.
2. The method for detecting traffic signs in natural scenes according to claim 1, wherein the step S2 includes:
s21: counting the brightness information, dividing brightness regions according to the brightness division thresholds, determining the numbers of pixels in the high-, middle- and low-brightness regions, and denoting them by the statistical variables num_high, num_middle and num_low respectively;
s22: respectively calculating the proportions P_high, P_middle and P_low of the pixels of the high-, middle- and low-brightness regions among all pixels,
wherein P_high = num_high/M, P_middle = num_middle/M, P_low = num_low/M,
and M is the number of all pixel points;
s23: the image is divided into a dark scene, a bright scene, a backlight scene, and a normal scene according to a scene classification threshold, wherein,
the bright scene satisfies P_high > 0.53 and P_low < 0.35;
the dark scene satisfies P_low > 0.51;
the backlight scene satisfies P_high + P_low > 0.8 and P_low > P_middle and P_high > P_middle;
otherwise the scene is normal.
3. The method for detecting traffic signs in natural scenes according to claim 2, wherein the brightness information is counted and normalized, and the brightness regions are divided according to the brightness division thresholds as follows: the low-brightness region is [0, 0.4]; the middle-brightness region is [0.4, 0.7]; the high-brightness region is [0.7, 1].
4. The method for detecting traffic signs in natural scenes according to claim 1, wherein in step S3, the formula for performing image enhancement processing on the classified images by using the adaptive Gamma enhancement algorithm is as follows:
f(I) = I^γ,
wherein I is the gray value of each pixel of the image before processing, f(I) is the gray value of each pixel after processing, and γ is the Gamma parameter.
5. The method for detecting traffic signs in natural scenes according to claim 1, wherein in step S4, the color segmentation using the improved three-component color difference method in the RGB color space comprises:
the characteristic operator of the red region is λR − G − B, and the R, G and B components of the image are processed with this operator according to formula (1) to extract the red region and obtain the R component image;
the characteristic operator of the blue region is λB − R − G, and the R, G and B components are processed with this operator according to formula (2) to extract the blue region and obtain the B component image;
wherein CA1 represents the pixel gray value of the extracted red region, CA2 represents the pixel gray value of the extracted blue region, and λ is the adaptive weighting factor;
in step S4, performing color segmentation by using a normalized RGB segmentation algorithm in an RGB color space includes:
calculating r = R/(R + G + B), g = G/(R + G + B), b = B/(R + G + B);
and if r ≥ 0.4 and g < 0.3, the pixel is red; if b > 0.4, the pixel is blue; and if r + g > 0.85, the pixel is yellow.
6. The method for detecting traffic signs in natural scenes according to claim 5, wherein the adaptive weighting factor is obtained by the following steps:
a1, setting an initial value of an adaptive weighting factor lambda;
a2, calculating the number of pixels with pixel values of 0 on the R component diagram and the B component diagram according to the formulas (1) and (2), and respectively representing the pixels with the pixel values of 0 by num _ red0 and num _ blue 0;
a3, calculating the target proportion k0 = (num_red0 + num_blue0)/(2 × M × N), where M × N is the number of all pixels;
a4, if k0 > 0.98 and λ ≤ 2, setting λ = λ + 0.2 and returning to step A2; otherwise, determining the current λ as the final adaptive weighting factor.
7. The method for detecting traffic signs in natural scenes according to claim 1, wherein the step S5 includes the steps of:
s51: dividing the image into L gray levels;
s52: for each gray level, calculating the proportion of foreground points w0 and their average gray u0, and the proportion of background points w1 and their average gray u1; calculating the total average gray of the image as u = w0·u0 + w1·u1, and the between-class variance of foreground and background as g = w0·(u0 − u)² + w1·(u1 − u)² = w0·w1·(u0 − u1)²;
s53: the gray scale when the variance g is maximum is the optimal segmentation threshold;
s54: and performing foreground and background segmentation by using the optimal segmentation threshold.
8. The method for detecting traffic signs in natural scenes according to claim 1 or 7, wherein the step S5 includes: and after the color-segmented gray-scale image is subjected to binarization processing, rough processing and gap filling are carried out by using morphological corrosion and expansion methods, so as to obtain a suspected target area.
9. The method for detecting traffic signs in natural scenes according to claim 1, wherein in step S6, a feature filter is established by using shape features of the sign board, including aspect ratio, perimeter-to-area ratio, area ratio and shape similarity invariant features, and the suspected target area is filtered by the feature filter, wherein:
an aspect ratio of 0.45 < ratio1 < 1.2;
a perimeter-to-area ratio of 0.045 < ratio2 < 0.07 or ratio2 < 0.038;
an area occupancy ratio of 0.58 < ratio3 < 0.63; and
setting different judgment criteria for different shapes by using the similarity-invariant feature and spatial distribution information of the shape contour, converting the contour coordinates into polar coordinates (θ, ρ), and distinguishing circles, triangles and rectangles according to the characteristics of the ρ values.
10. The method for detecting traffic signs in natural scenes according to claim 1, further comprising the step S7 of: masking the obtained traffic sign area and the original image:
s71: taking the binary image of the positioned traffic sign area as a mask template, and multiplying the mask template by the original image, so that the image value in the traffic sign area is kept unchanged, and the image values outside the traffic sign area are all 0;
s72: summing the pixel gray values of the masked image along the horizontal and vertical directions respectively to locate the traffic sign region, and then cropping and extracting it.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710540228.3A CN107301405A (en) | 2017-07-04 | 2017-07-04 | Method for traffic sign detection under natural scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107301405A true CN107301405A (en) | 2017-10-27 |
Family
ID=60136146
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108024105A (en) * | 2017-12-14 | 2018-05-11 | 珠海市君天电子科技有限公司 | Image color adjusting method, device, electronic equipment and storage medium |
CN108399610A (en) * | 2018-03-20 | 2018-08-14 | 上海应用技术大学 | A kind of depth image enhancement method of fusion RGB image information |
CN108711160A (en) * | 2018-05-18 | 2018-10-26 | 西南石油大学 | A kind of Target Segmentation method based on HSI enhancement models |
CN108734131A (en) * | 2018-05-22 | 2018-11-02 | 杭州电子科技大学 | A kind of traffic sign symmetry detection methods in image |
CN108765443A (en) * | 2018-05-22 | 2018-11-06 | 杭州电子科技大学 | A kind of mark enhancing processing method of adaptive color Threshold segmentation |
CN109214434A (en) * | 2018-08-20 | 2019-01-15 | 上海萃舟智能科技有限公司 | A kind of method for traffic sign detection and device |
CN109815906A (en) * | 2019-01-25 | 2019-05-28 | 华中科技大学 | Method for traffic sign detection and system based on substep deep learning |
CN109916415A (en) * | 2019-04-12 | 2019-06-21 | 北京百度网讯科技有限公司 | Road type determines method, apparatus, equipment and storage medium |
CN110312106A (en) * | 2019-07-30 | 2019-10-08 | 广汽蔚来新能源汽车科技有限公司 | Display methods, device, computer equipment and the storage medium of image |
CN110598705A (en) * | 2019-09-27 | 2019-12-20 | 腾讯科技(深圳)有限公司 | Semantic annotation method and device for image |
CN110619648A (en) * | 2019-09-19 | 2019-12-27 | 四川长虹电器股份有限公司 | Method for dividing image area based on RGB change trend |
CN110838131A (en) * | 2019-11-04 | 2020-02-25 | 网易(杭州)网络有限公司 | Method and device for realizing automatic cutout, electronic equipment and medium |
CN110930358A (en) * | 2019-10-17 | 2020-03-27 | 广州丰石科技有限公司 | Solar panel image processing method based on self-adaptive algorithm |
CN111062309A (en) * | 2019-12-13 | 2020-04-24 | 吉林大学 | Method, storage medium and system for detecting traffic signs in rainy days |
CN111275648A (en) * | 2020-01-21 | 2020-06-12 | 腾讯科技(深圳)有限公司 | Face image processing method, device and equipment and computer readable storage medium |
CN111369634A (en) * | 2020-03-26 | 2020-07-03 | 苏州瑞立思科技有限公司 | Image compression method and device based on weather conditions |
CN111666811A (en) * | 2020-04-22 | 2020-09-15 | 北京联合大学 | Method and system for extracting traffic sign area in traffic scene image |
CN111723805A (en) * | 2019-03-18 | 2020-09-29 | 浙江宇视科技有限公司 | Signal lamp foreground area identification method and related device |
CN111860533A (en) * | 2019-04-30 | 2020-10-30 | 深圳数字生命研究院 | Image recognition method and device, storage medium and electronic device |
CN112052700A (en) * | 2019-06-06 | 2020-12-08 | 北京京东尚科信息技术有限公司 | Image binarization threshold matrix determination and graphic code information identification method and device |
CN112226812A (en) * | 2020-10-20 | 2021-01-15 | 北京图知天下科技有限责任公司 | Czochralski monocrystalline silicon production method, device and system |
CN112507911A (en) * | 2020-12-15 | 2021-03-16 | 浙江科技学院 | Real-time recognition method of pecan fruits in image based on machine vision |
CN112699841A (en) * | 2021-01-13 | 2021-04-23 | 华南理工大学 | Traffic sign detection and identification method based on driving video |
CN112906712A (en) * | 2021-03-02 | 2021-06-04 | 湖南金烽信息科技有限公司 | Neural network image preprocessing method based on light intensity analysis |
US11044410B2 (en) | 2018-08-13 | 2021-06-22 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Imaging control method and apparatus, electronic device, and computer readable storage medium |
CN113610185A (en) * | 2021-08-19 | 2021-11-05 | 江西应用技术职业学院 | Wood color sorting method based on dominant hue identification |
CN114299280A (en) * | 2021-12-29 | 2022-04-08 | 深圳供电局有限公司 | Air switch identification method and system |
CN114511770A (en) * | 2021-12-21 | 2022-05-17 | 武汉光谷卓越科技股份有限公司 | Road sign plate identification method |
CN114863109A (en) * | 2022-05-25 | 2022-08-05 | 广东飞达交通工程有限公司 | Segmentation technology-based fine recognition method for various targets and elements of traffic scene |
CN117893496A (en) * | 2024-01-12 | 2024-04-16 | 江苏乐聚医药科技有限公司 | Processing method and processing device for evaluating image of injection effect of needleless injector |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105787475A (en) * | 2016-03-29 | 2016-07-20 | 西南交通大学 | Traffic sign detection and identification method under complex environment |
CN106503704A (en) * | 2016-10-21 | 2017-03-15 | 河南大学 | Circular traffic sign localization method in a kind of natural scene |
Non-Patent Citations (1)
Title |
---|
Ren Jingyi: "Detection and Recognition of Traffic Signs in Natural Scenes", China Master's Theses Full-text Database, Engineering Science and Technology II *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20171027 |