CN112989900B - Method for accurately detecting traffic sign or marking - Google Patents
- Publication number: CN112989900B (application CN201911376005.3A)
- Authority
- CN
- China
- Prior art keywords
- key points
- traffic
- keypoints
- key point
- marked lines
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/582—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method for accurately detecting traffic signs or markings, comprising the following steps: superimposing and combining images of traffic signs or markings to form a new composite sign or marking; setting key points on the composite sign or marking; and detecting the key points and identifying the traffic information indicated or prompted by the traffic signs or markings corresponding to the key points.
Description
Technical Field
The invention relates to the field of image detection, and in particular to a method for accurately detecting traffic signs or markings.
Background
The guide arrow is an important traffic marking applied to the road surface, used to indicate how to drive on complex roads. In many fields, including autonomous driving, guide arrows are important information for guiding intelligent vehicles or autonomous vehicles to use the road correctly. They are therefore widely used across modules of the autonomous-driving field, including high-precision map construction, high-precision localization, and driving decision-making.
In the existing template-matching method, a template is constructed for each arrow and the image to be detected is matched against each template; if the matching value exceeds a threshold, the match is considered successful. The technical disadvantage is that this approach cannot robustly handle occlusion, illumination changes, weather changes, variation in the actual size of guide arrows, and differing marking standards across countries and regions.
Disclosure of Invention
To overcome the inability of the prior art to robustly handle occlusion, illumination changes, weather changes, variation in actual guide-arrow size, and differing marking standards across countries and regions, the invention provides a method for accurately detecting traffic signs or markings.
According to one aspect of the present invention, there is provided a method of accurately detecting traffic signs or markings, the method comprising the steps of:
Superimposing and combining images of traffic signs or markings to form a new composite sign or marking; setting key points on the composite sign or marking; and detecting the key points and identifying the traffic information indicated or prompted by the traffic signs or markings corresponding to the key points.
Preferably, the traffic sign or marking is a guide arrow.
Preferably, the key points are identified by reference numerals.
Preferably, identifying the key points by reference numerals includes: identifying the key points by Arabic numerals, English letters, or Chinese characters.
Preferably, identifying the key points by reference numerals includes: assigning the reference numerals in counterclockwise, clockwise, or top-down order.
Preferably, the step of detecting the keypoints is implemented by a deep convolutional network.
Preferably, the deep convolutional network generates a key-point score map and a key-point feature map through two convolution branches; the score map measures the likelihood that a key point exists at each detected position, and the feature map describes whether key points correspond to the same traffic sign or marking.
Preferably, the score map screens the key-point detection results by non-maximum suppression.
Preferably, the step of detecting the key points includes: optimizing the effect of deep learning by annotating images of real traffic scenes containing the traffic signs or markings.
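As a rough illustrative sketch of the two-branch idea (not the patent's actual network), a shared backbone feature tensor can be mapped by two 1×1-convolution branches, written here as per-pixel matrix products with random weights. All sizes are assumptions: 8 backbone channels, a 6×6 grid, 16 embedding dimensions, and 26 score channels matching the composite arrow's key-point count.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared backbone features: C channels over an H x W grid (a real system
# would compute these with a trained deep convolutional backbone).
C, H, W, K, D = 8, 6, 6, 26, 16   # K key-point classes, D-dim embeddings
features = rng.standard_normal((C, H, W))

# Two 1x1-convolution branches, expressed as channel-wise matrix products.
w_score = rng.standard_normal((K, C))  # branch 1 -> key-point score map
w_embed = rng.standard_normal((D, C))  # branch 2 -> key-point feature map

flat = features.reshape(C, H * W)
score_map = (w_score @ flat).reshape(K, H, W)    # likelihood per position
feature_map = (w_embed @ flat).reshape(D, H, W)  # descriptor per position

print(score_map.shape, feature_map.shape)
```

In a trained network the score branch would be followed by thresholding and non-maximum suppression, and the embedding branch by the feature-space grouping described later.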
Compared with the prior art, the invention has the beneficial effects that:
By merging the guide arrows, storage is reduced and computation is accelerated; by representing guide arrows with key points, occlusion, variation in actual arrow size, and differing marking standards across countries and regions can be handled. Through supervised learning on a large amount of annotated data, the deep model can cope with illumination changes, weather changes, and the like.
Drawings
The foregoing summary, as well as the following detailed description, will be better understood when read in conjunction with the appended drawings. For ease of illustration, certain embodiments of the disclosure are shown in the drawings. It should be understood that the invention is not limited to the precise arrangements and instrumentalities shown. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an implementation of the system and apparatus according to the invention and, together with the description, serve to explain the advantages and principles according to the invention.
Wherein,
FIG. 1 is a flow chart of a method of precisely detecting traffic signs or markings according to the present invention.
Fig. 2 is a schematic view of the structure of the guide arrow of the present invention.
Fig. 3 is a schematic diagram of the structure of the merging arrow of the present invention.
Fig. 4 is a schematic diagram of a detection flow of the deep convolutional network of the present invention.
Fig. 5 is a schematic diagram of another detection flow of the deep convolutional network of the present invention.
Detailed Description
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and to the arrangements of the components set forth in the following description or illustrated in the drawings. The figures and written description are provided to guide those skilled in the art in making and using the invention, for which patent protection is sought. The invention is capable of other embodiments and of being practiced and carried out in various ways. Those skilled in the art will appreciate that not all features of a commercial embodiment are shown for the sake of clarity and understanding. Those skilled in the art will also appreciate that the development of an actual commercial embodiment incorporating aspects of the present inventions will require numerous implementation-specific decisions to achieve the developer's ultimate goal of the commercial embodiment. While such a job may be complex and time-consuming, such a job would be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.
In addition, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting. For example, the use of singular terms, such as "a," "an," and "the" is not intended to limit the number of items. Further, the use of relational terms, such as, without limitation, "top," "bottom," "left," "right," "upper," "lower," "downward," "upward," "lateral," and the like are used in the description with particular reference to the figures for clarity and are not intended to limit the scope of the invention or the appended claims. Furthermore, it should be understood that any of the features of the present invention may be used alone or in combination with other features. Other systems, methods, features and advantages of the invention will be or will become apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the invention, and be protected by the accompanying claims.
Referring to fig. 1 to 4, which schematically illustrate an embodiment of a method for precisely detecting traffic signs or markings according to the present invention, the specific steps are as follows:
Step S101: merging guide arrows
As shown in fig. 2, the guide arrows are divided into twelve types:
- indicating going straight;
- indicating that one may go straight or turn left ahead;
- indicating that one may go straight or turn right ahead;
- indicating that one may go straight, turn left, or turn right ahead;
- indicating that the road ahead allows only a left or right turn;
- indicating that one may turn left or make a U-turn ahead;
- indicating a U-turn ahead;
- indicating that one may go straight or make a U-turn ahead;
- indicating a left turn ahead;
- indicating a right turn ahead;
- indicating that the road ahead curves left or requires merging left;
- indicating that the road ahead curves right or requires merging right.
The twelve guide arrows are combined by superposition into a single composite arrow that merges their repeated parts, as shown in fig. 3.
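The superposition step can be sketched as a pixel-wise union of arrow masks; the 5×5 masks below are made-up stand-ins for rasterized arrow images, not the patent's actual shapes:

```python
import numpy as np

# Hypothetical binary masks standing in for two guide-arrow images.
straight = np.zeros((5, 5), dtype=np.uint8)
straight[:, 2] = 1        # vertical shaft of a "straight" arrow

left_turn = np.zeros((5, 5), dtype=np.uint8)
left_turn[2, :3] = 1      # horizontal stroke of a "left" arrow

# Superposition: a pixel belongs to the composite arrow if it belongs to
# any constituent arrow, so strokes shared by several arrows are stored
# only once - this is what reduces storage and computation.
composite = np.maximum(straight, left_turn)

print(int(composite.sum()))   # union area, never more than the sum of parts
```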
Step S102: setting key points of synthesized arrows
As shown in fig. 3, the composite arrow contains twenty-six key points, numbered 1 through 26. Correspondingly, each of the twelve guide arrows contains several of these key points, with common parts sharing the same key-point labels and identical signs using identical labels. In particular, the arrows indicating a left curve or merge-left ahead and a right curve or merge-right ahead use the same key-point labels as the straight-ahead arrow; these guide arrows can be distinguished by the projection position of point 1 onto the line connecting points 11 and 12.
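The projection test above can be sketched as follows. The helper and all coordinates are hypothetical (the patent fixes neither a coordinate frame nor thresholds); the assumption is only that points 11 and 12 span the arrow base, so the fractional projection of point 1 reveals whether the tip is centred or shifted:

```python
def project_onto_line(p, a, b):
    """Scalar position of the projection of point p onto line a-b, as a
    fraction of |ab| (0 at a, 1 at b)."""
    ax, ay = a
    bx, by = b
    px, py = p
    abx, aby = bx - ax, by - ay
    return ((px - ax) * abx + (py - ay) * aby) / (abx**2 + aby**2)

base_11 = (-1.0, 0.0)   # made-up coordinates for key points 11 and 12
base_12 = (1.0, 0.0)

# Straight arrow: tip (point 1) centred over the base -> t = 0.5.
print(project_onto_line((0.0, 5.0), base_11, base_12))
# Merge-left arrow: tip shifted left of centre -> t < 0.5.
print(project_onto_line((-0.6, 5.0), base_11, base_12))
```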
Step S103: detecting keypoints and identifying guide arrows
Detection, characterization, and classification of the key points are realized by a deep convolutional network, finally yielding the guide-arrow detection; the specific flow is shown in fig. 4.
A monocular image is acquired by a monocular camera, deep features are extracted by a deep convolutional network, and a key-point score map and a key-point feature map are generated by two convolution branches. The key-point score map measures the likelihood that a corresponding key point exists at each position, and key-point detections are generated by screening these scores. The score map is further screened by non-maximum suppression to avoid detecting the same key point repeatedly.
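A minimal sketch of the non-maximum suppression step on a score map, under assumed values (3×3 window, 0.5 threshold — the patent fixes neither): a score survives only if it is the local maximum of its window, so near-duplicate responses to the same key point are suppressed.

```python
import numpy as np

def keypoint_nms(score_map, threshold=0.5, window=3):
    """Keep (y, x, score) only where score is both above threshold and the
    maximum of its local window, suppressing duplicate detections."""
    h, w = score_map.shape
    pad = window // 2
    padded = np.pad(score_map, pad, mode="constant")
    peaks = []
    for y in range(h):
        for x in range(w):
            s = score_map[y, x]
            patch = padded[y:y + window, x:x + window]  # window centred at (y, x)
            if s >= threshold and s == patch.max():
                peaks.append((y, x, float(s)))
    return peaks

scores = np.zeros((6, 6))
scores[2, 2] = 0.9   # a key point...
scores[2, 3] = 0.8   # ...and its near-duplicate response (suppressed)
scores[5, 0] = 0.7   # a second, isolated key point
print(keypoint_nms(scores))
```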
The key-point feature map describes each key point: features of key points belonging to the same arrow are similar, while features of key points from different arrows are dissimilar, which establishes the association from key points to guide arrows. Specifically, by clustering the key points in feature space according to the feature map, the key points of the same guide arrow are grouped together, realizing detection and classification of the guide arrow, i.e., outputting its position and class.
Finally, the key points retained by the score-map screening are clustered by their features into distinct guide arrows.
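The feature-space grouping can be sketched with a greedy clustering over unit-normalized embeddings; the patent names no specific clustering algorithm, and both the similarity threshold and the toy 2-D embeddings below are assumptions:

```python
import numpy as np

def group_keypoints(feats, sim_threshold=0.9):
    """Greedily assign each key point to the first existing group whose
    representative feature is similar enough; otherwise start a new group.
    Each resulting group stands for one guide arrow."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    groups = []    # each group: list of key-point indices
    centers = []   # representative (first) feature of each group
    for i, f in enumerate(feats):
        for g, c in zip(groups, centers):
            if float(f @ c) >= sim_threshold:  # cosine similarity
                g.append(i)
                break
        else:
            groups.append([i])
            centers.append(f)
    return groups

# Two arrows: key points 0,1 share one embedding direction, 2,3 another.
feats = np.array([[1.0, 0.0], [0.99, 0.05], [0.0, 1.0], [0.05, 0.99]])
print(group_keypoints(feats))
```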
In some embodiments, deep learning may be performed by annotating tens or hundreds of thousands of real scene images containing actual guide arrows.
In some embodiments, learning of network parameters may be achieved by a GPU cluster.
In some embodiments, a top-down detection method may be employed. Compared with the bottom-up method of grouping key points into guide arrows, the top-down method first detects each guide arrow as a rectangular box, and then detects the key points of that guide arrow inside its box.
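The top-down variant can be sketched as a two-stage pipeline; `box_detector` and `keypoint_head` are hypothetical callables standing in for trained models (the patent specifies neither):

```python
def detect_top_down(image, box_detector, keypoint_head):
    """First localize each guide arrow with a rectangular box, then run
    key-point detection only inside the corresponding crop."""
    detections = []
    for (x0, y0, x1, y1) in box_detector(image):
        crop = [row[x0:x1] for row in image[y0:y1]]  # sub-image inside the box
        detections.append(((x0, y0, x1, y1), keypoint_head(crop)))
    return detections

# Stub models for illustration: one fixed box, and a "key-point head"
# that just reports the size of the crop it was given.
image = [[0] * 8 for _ in range(8)]
boxes = lambda img: [(1, 1, 4, 4)]
kp_head = lambda crop: [(len(crop), len(crop[0]))]
print(detect_top_down(image, boxes, kp_head))
```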
On the basis of the above embodiments, another embodiment of a method for precisely detecting traffic signs or markings is shown according to the invention. The method comprises the following specific steps:
Step S201: setting key points of the guide arrows
The guide arrows are divided into twelve types:
- indicating going straight;
- indicating that one may go straight or turn left ahead;
- indicating that one may go straight or turn right ahead;
- indicating that one may go straight, turn left, or turn right ahead;
- indicating that the road ahead allows only a left or right turn;
- indicating that one may turn left or make a U-turn ahead;
- indicating a U-turn ahead;
- indicating that one may go straight or make a U-turn ahead;
- indicating a left turn ahead;
- indicating a right turn ahead;
- indicating that the road ahead curves left or requires merging left;
- indicating that the road ahead curves right or requires merging right.
The key-point labels are set on the twelve guide arrows as shown in fig. 2; each guide arrow contains key points, with common parts sharing the same key-point labels. In particular, the arrows indicating a left curve or merge-left ahead and a right curve or merge-right ahead use the same key-point labels as the straight-ahead arrow.
In some embodiments, the guide arrows may be of other sizes or shapes.
In some embodiments, the key-point labels may be represented with other symbols, e.g., a, b, c, d instead of 1, 2, 3, 4, or one, two, three, four instead of 1, 2, 3, 4.
In some embodiments, the order of the keypoint labels may be in other order or follow other logic, e.g., the order of the keypoint labels is counter-clockwise.
In some embodiments, the guide arrows may be divided into another number of types, e.g., anywhere from two to twelve types or more.
Step S202: merging guide arrows
The twelve guide arrows are combined into a composite arrow by superposition, as shown in fig. 3; the composite arrow merges the repeated parts of the twelve guide arrows according to their key points, and contains twenty-six key points, numbered 1 through 26.
In some embodiments, the composite arrow may be a single guide arrow, or a superposition of several guide arrows.
In some embodiments, the composite arrow may be formed by superimposing anywhere from two to twelve guide arrows, or more.
Step S203: detecting keypoints and identifying guide arrows
And detecting, characterizing and classifying the key points based on the deep convolution network, and finally obtaining the detection of the guide arrows.
On the basis of the above embodiment, and in connection with fig. 5, another embodiment of a method of precisely detecting traffic signs or markings is shown according to the invention. The method comprises the following specific steps:
Step S301: merge the guide arrows; see step S101 for details.
Step S302: set the key points of the composite arrow; see step S102 for details.
Step S303: detecting keypoints and identifying guide arrows
On the basis of step S103, two further convolution branches are introduced to generate a center-point score map and a center-point feature map. The center point is the geometric center of the 2D box of the actual road marking. The center-point score map measures the likelihood that a corresponding center point exists at each position, and center-point detections are generated by screening its scores; these detections are further screened by non-maximum suppression.
The similarity between the center-point features and the key-point features is then compared; when the similarity exceeds a preset threshold, the key point is associated with the corresponding center point, yielding a guide-arrow detection. Because each guide arrow corresponds to one center point, the center-point feature describes the guide arrow as a whole and is more discriminative, which avoids the errors caused by the weak discriminability of similarities computed between key points alone.
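A minimal sketch of this association step, assuming unit-normalized embeddings and a hypothetical similarity threshold of 0.8 (the patent fixes no value); key points whose best center-point similarity falls below the threshold are left unassigned:

```python
import numpy as np

def associate(center_feats, keypoint_feats, threshold=0.8):
    """Attach each key point to its most similar center point when the
    cosine similarity exceeds the threshold. Returns a mapping from
    center index (one per guide arrow) to its key-point indices."""
    def unit(v):
        v = np.asarray(v, dtype=float)
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    c, k = unit(center_feats), unit(keypoint_feats)
    sim = k @ c.T   # (num_keypoints, num_centers) cosine similarities
    assignment = {}
    for i, row in enumerate(sim):
        j = int(row.argmax())
        if row[j] > threshold:
            assignment.setdefault(j, []).append(i)
    return assignment

centers = [[1.0, 0.0], [0.0, 1.0]]              # one embedding per arrow
keypoints = [[0.95, 0.1], [0.1, 0.95], [0.7, 0.7]]  # last one is ambiguous
print(associate(centers, keypoints))
```

The ambiguous third key point is dropped rather than guessed, mirroring the motivation given above: the center-point feature is the more discriminative anchor.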
The foregoing description of preferred embodiments is not intended to limit the invention in any way; any modifications of, or equivalent variations on, the above embodiments according to the present invention fall within its scope.
Claims (7)
1. A method of accurately detecting traffic signs or markings, comprising:
overlapping and combining the images of the traffic signs or the marked lines to form new synthesized signs or marked lines;
the traffic sign or marking is a guide arrow; combining the multiple guiding arrows into a combined arrow through superposition and combination;
Setting key points on the synthesized marks or marked lines; wherein the key points are identified by reference numerals, and common parts use the same key point reference numerals; the same indicators use the same keypoint labels;
And detecting the key points, and identifying traffic information indicated or prompted by traffic signs or marked lines corresponding to the key points.
2. The method of claim 1, wherein the identifying the keypoints by labels comprises:
the key points are identified by Arabic numerals or English letters or Chinese characters.
3. The method of claim 2, wherein the identifying the keypoints by reference numerals comprises:
the reference numerals are given by a counterclockwise or clockwise or top-down order.
4. The method of claim 1, wherein the step of detecting the keypoints is implemented by a deep convolutional network.
5. The method of claim 4, wherein the deep convolutional network generates a key-point score map and a key-point feature map through two convolution branches, the score map measuring the likelihood that a key point exists at a detected position, and the feature map describing whether key points correspond to the same traffic sign or marking.
6. The method of claim 5, wherein the score map is screened for detection of the keypoints by non-maxima suppression.
7. The method of claim 1, wherein the step of detecting the keypoints comprises:
The effect of deep learning is optimized by annotating images of real traffic scenes containing traffic signs or markings.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN2019112852556 | 2019-12-13 | | |
| CN201911285255 | 2019-12-13 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112989900A | 2021-06-18 |
| CN112989900B | 2024-09-24 |
Family
ID=76344189
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911376005.3A (granted as CN112989900B, active) | Method for accurately detecting traffic sign or marking | 2019-12-13 | 2019-12-27 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN112989900B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102799859A (en) * | 2012-06-20 | 2012-11-28 | 北京交通大学 | Method for identifying traffic sign |
CN104819724A (en) * | 2015-03-02 | 2015-08-05 | 北京理工大学 | Unmanned ground vehicle self-driving assisting system based on GIS |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102012221159A1 (en) * | 2012-11-20 | 2014-05-22 | Robert Bosch Gmbh | Method and device for detecting variable message signs |
KR101409340B1 (en) * | 2013-03-13 | 2014-06-20 | 숭실대학교산학협력단 | Method for traffic sign recognition and system thereof |
US10262466B2 (en) * | 2015-10-14 | 2019-04-16 | Qualcomm Incorporated | Systems and methods for adjusting a combined image visualization based on depth information |
CN117824676A (en) * | 2016-12-09 | 2024-04-05 | 通腾全球信息公司 | Method and system for video-based positioning and mapping |
CN108710826A (en) * | 2018-04-13 | 2018-10-26 | 燕山大学 | A kind of traffic sign deep learning mode identification method |
CN108711298B (en) * | 2018-05-20 | 2022-06-17 | 北京鑫洋浩海科技有限公司 | Mixed reality road display method |
CN109815836A (en) * | 2018-12-29 | 2019-05-28 | 江苏集萃智能制造技术研究所有限公司 | A kind of urban road surfaces guiding arrow detection recognition method |
CN110135307B (en) * | 2019-04-30 | 2022-07-01 | 北京邮电大学 | Traffic sign detection method and device based on attention mechanism |
CN110298262B (en) * | 2019-06-06 | 2024-01-02 | 华为技术有限公司 | Object identification method and device |
- 2019-12-27: application CN201911376005.3A filed in China; granted as CN112989900B (active)
Also Published As
Publication number | Publication date |
---|---|
CN112989900A (en) | 2021-06-18 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| 2022-07-27 | TA01 | Transfer of patent application right | Applicant after: Xiaomi Automobile Technology Co., Ltd. (Room 618, 6/F, Building 5, Courtyard 15, Kechuang 10th Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing 100176). Applicant before: SHENDONG TECHNOLOGY (BEIJING) Co., Ltd. (1219, Floor 11, SOHO Zhongguancun, No. 8 Haidian North Second Street, Haidian District, Beijing 100089). |
| | GR01 | Patent grant | |