
WO2013151266A1 - Method and system for lane departure warning based on image recognition - Google Patents


Info

Publication number
WO2013151266A1
Authority
WO
WIPO (PCT)
Prior art keywords
lane
roi
straight
cell
line
Prior art date
Application number
PCT/KR2013/002519
Other languages
French (fr)
Inventor
Yong-Jeong Park
Original Assignee
Movon Corporation
Priority date
Filing date
Publication date
Application filed by Movon Corporation filed Critical Movon Corporation
Publication of WO2013151266A1 publication Critical patent/WO2013151266A1/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/10 Path keeping
    • B60W30/12 Lane keeping
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B60W40/06 Road conditions
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001 Details of the control system
    • B60W2050/0043 Signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • B60W2050/0057 Frequency analysis, spectral techniques or transforms
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00 Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40 Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403 Image sensing, e.g. optical camera
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60Y INDEXING SCHEME RELATING TO ASPECTS CROSS-CUTTING VEHICLE TECHNOLOGY
    • B60Y2300/00 Purposes or special features of road vehicle drive control systems
    • B60Y2300/10 Path keeping
    • B60Y2300/12 Lane keeping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition

Definitions

  • This invention relates to a method and system for lane departure warning based on image recognition through a single camera installed in a car. More specifically, this invention can determine whether the host car departs its lane by dividing the input image into Cell Regions of Interest (hereafter, ROIs), extracting dot elements, and judging the departure status.
  • ROI: Cell Region of Interest
  • A Lane Departure Warning System measures whether the host vehicle crosses the lane markings and is configured to perform a predetermined alarm processing if it measures that the vehicle departs the lane.
  • A traditional Lane Departure Warning System basically recognizes lane markings, which are formed of lines, and determines whether the vehicle departs the lane.
  • Traditional Lane Departure Warning Systems were unable to recognize lane departure precisely because of many variables and sources of noise, such as the lane width, the radius of curvature of the lane, the lane crossing time, and the offset between the center of the lane and the position of the camera's optical axis.
  • Korean Patent Application Publication No. 2000-0037604 discloses a method to recognize the lane. This technique uses two cameras to recognize lane markings, which have high brightness compared to the road. In general, however, foreign objects other than lane markings, such as objects fallen from a cargo box, can lie on the road. If such a foreign object is brighter than the road itself, a traditional Lane Departure Warning System recognizes it as a lane marking. Also, using two cameras increases product cost and requires considerably more installation space, which makes a compact design difficult.
  • The conventional lane recognition method uses only the Sobel algorithm to extract the edges of the lane markings and recognizes the lane markings from those edges.
  • This method is very vulnerable to noise: it detects not only the lane markings but also non-lane-marking image content.
  • This invention has been devised to solve the above problems.
  • The purpose of this invention is to prevent misrecognition of lane markings when there are broken lane markings, marks other than lane markings, or noise brighter than the road.
  • This invention provides a method and system for lane departure warning based on image recognition that can recognize lane markings precisely, even on curved roads, by suppressing noise with a single camera.
  • The lane departure detection method based on image recognition has the following seven steps.
  • Step 1 sets the area of the image captured by the vehicle-mounted camera that will be used for image processing to detect lane markings (hereinafter, the entire ROI).
  • Step 2 divides the entire ROI into several regions (hereinafter, the Cell ROIs).
  • Step 3 extracts edges in each Cell ROI that includes lane markings, using an edge detection technique.
  • Step 4 extracts straight lines through the Hough Transform, using the edges extracted in each Cell ROI in Step 3.
  • Step 5 extracts multiple dot elements that correspond to the above straight lines and are arranged at intervals along them.
  • Step 6 defines a lane model by applying the least-squares method to the dot elements from Step 5.
  • Step 7 judges whether or not the car departs the lane, based on the lane model defined in Step 6.
  • The system can remove noise that could be mistaken for lane markings, such as road information or traffic signs painted on the road, by dividing the input image and processing only within the Cell ROIs.
  • The system can generate virtual lane markings to maximize the accuracy of lane-marking recognition.
  • The entire ROI is split vertically and horizontally into Cell ROIs. Processing only the Cell ROIs minimizes the region used for actual image processing, which reduces the computational workload.
  • Fig. 1 shows the overall configuration of the system for lane departure warning based on image recognition according to the present invention.
  • Fig. 2 is a block diagram showing the detailed configuration of the image recognition processor illustrated in Figure 1.
  • Fig. 3 is the coordinates of the method and system for lane departure warning based on image recognition.
  • Fig. 4 is the block flowchart to draw processing flow of the method for lane departure warning based on image recognition according to the present invention.
  • Fig. 5 is an example to show that the entire ROI is set through the location of the hood and vanishing line of the present invention.
  • Fig. 6(a) is an example of the general black and white image.
  • Fig. 6(b) is an example of the black and white image which highlights the yellow component that is converted by image conversion part of the present invention.
  • Fig. 7(a) is an example of the configured Cell ROI unless lane markings are recognizable in the previous frame.
  • Fig. 7(b) is an example of the configured Cell ROI if lane markings are recognizable in the previous frame.
  • Fig. 8 is an example of the gradient magnitude image computed by the Sobel algorithm in the invention's edge extraction part.
  • Fig. 9 is an example of the horizontal gradient image computed by the Sobel algorithm in the invention's edge extraction part.
  • Fig. 10 is an example of the edge display computed by the invention's edge extraction part.
  • Fig. 11 is an example of straight lines computed by the invention's straight line extraction part.
  • Fig. 12 is an example of the dot elements extracted by the invention's dot extraction part for a straight lane.
  • Fig. 13 is an example of the dot elements extracted by the invention's dot extraction part for a curved lane.
  • Warning Output (95); Image Recognition Processing (20)
  • This invention removes noise as much as possible and successfully warns of lane departure, without lane-marking recognition errors, under a variety of environmental conditions, such as damaged lane markings, and a variety of road conditions, such as straight or curved roads.
  • Fig. 1 shows the overall configuration of the system for lane departure warning based on image recognition according to the present invention.
  • Fig. 2 is a block diagram showing the detailed configuration of the image recognition processor illustrated in Figure 1.
  • The system for lane departure warning based on image recognition of this invention contains a Camera (10), an Image Recognition Processing part (20), and a Warning Output part (95).
  • Camera (10), installed inside the car, is the component that acquires images by recording the road ahead of the car while driving; it may reasonably be a CMOS or CCD camera.
  • Image Recognition Processing part (20) of this invention recognizes lane markings in the images taken by Camera (10) and processes the images for lane departure judgment, in order to prevent car accidents caused by drowsy driving.
  • Image Recognition Processing part (20) can be divided into seven sub-parts by function: Image Transformation (30), Image Division (40), Edge Extraction (50), Straight Line Information Extraction (60), Dot Extraction (70), Lane Model Formation (80), and Lane Departure Distinction (90).
  • The system for lane departure warning based on image recognition of this invention first initializes itself so that it can detect lane departure by recognizing and processing the images acquired through the camera while driving.
  • System initialization sets up a region of the input image for lane detection processing (the entire ROI) and allocates memory for image processing. It then saves information on the lane markings, road width, and front-wheel location for the Hough Transform as a lookup table.
  • Image Transformation part (30) of this invention transforms the input images taken by the camera to black-and-white images and then, as appropriate, emphasizes the yellow component in the transformed black-and-white images.
  • Image Division part (40) of this invention divides the entire ROI of the black-and-white image into a series of regions (from now on, the Cell ROIs).
  • The Cell ROIs are cell areas formed by dividing the entire ROI lengthwise and widthwise.
  • The Cell ROIs are arranged along the left and right lane markings within the above whole area of interest.
  • Edge Extraction part (50) of this invention detects edge information in the Cell ROIs that include lane markings, using edge detection techniques.
  • The Sobel edge method detects the edges of objects in an image using gradient differences calculated with a differential operator.
  • Straight Line Information Extraction part (60) of this invention extracts straight-line information through the Hough Transform, using the edges extracted by Edge Extraction part (50).
  • The Hough Transform used in Straight Line Information Extraction part (60) is the most broadly used method to find straight lines in image processing. It finds straight lines by converting a linear equation in two-dimensional image coordinates to a parametric space. A detailed explanation of the Hough Transform is omitted, as it is a commonly used algorithm in engineering.
  • Straight Line Information Extraction part (60) generates virtual straight lines when straight lines cannot be extracted in some Cell ROIs (non-extraction Cell ROIs). Virtual straight lines are generated across the non-extraction Cell ROIs by interpolating the straight-line information from the previous and next Cell ROIs. Thus, intact straight lines can be extracted continuously.
  • Dot Extraction part (70) of this invention extracts the multiple dot elements that correspond to the straight lines from Straight Line Information Extraction part (60) and are arranged at intervals along them.
  • One group of the dot elements extracted by Dot Extraction part (70) is located along the left lane marking at regular intervals.
  • The other extracted group is located along the right lane marking at regular intervals.
  • Lane Model Formation part (80) of this invention defines the lane model by applying the Least Squares Technique to the multiple dot elements extracted by Dot Extraction part (70).
  • Lane Model Formation part (80) judges whether the lane markings in an image are straight or curved using the slopes of the straight lines extracted per Cell ROI. If the result of the judgment is a straight line, Lane Model Formation part (80) determines the lane markings by applying the Least Squares Technique with a linear-equation approximation. If the result is a curved line, it defines the lane model, which will be used to detect lane departure, by applying the Least Squares Technique with a quadratic-equation approximation.
  • Lane Departure Distinction part (90) of this invention judges whether the host vehicle departs the lane using the lane model determined by Lane Model Formation part (80) (in other words, a baseline for detecting lane departure).
  • Lane Departure Distinction part (90) of this invention calculates the distance between the front wheels of the vehicle and the lane marking and sends a warning signal to Warning Output part (95) when the calculated distance crosses the baseline.
  • Lane Departure Distinction part (90) also sends an early-warning signal to Warning Output part (95), not only on lane departure but also when the vehicle approaches a lane marking within a certain distance.
  • Warning Output part (95) of this invention releases a visual or audible warning according to the warning or early-warning signal sent by the Lane Departure Distinction part, informing the driver of the relevant fact.
  • The system provides two different warning levels, warning and early warning, so that the driver can tell whether the host car is about to depart or has already departed the lane.
  • Fig. 3 is the coordinates of the method and system for lane departure warning based on image recognition.
  • Camera (10) of Figure 3 is installed inside a car and can be a CMOS or CCD camera.
  • Xw, Yw and Zw denote the world coordinate system covering the whole space of the driving road, and Xc, Yc and Zc denote the coordinate system of the camera image frame, centered on the camera.
  • The camera is inclined by angle θ relative to the road surface carrying the lane markings, and its height above that surface is denoted H.
  • The relationship between the real-world coordinates, the camera coordinates, and the image coordinates is explained first, because the invention recognizes lane markings and judges lane departure from the thickness of the lane markings and the lane width in real-world coordinates, transformed into the image imported through the camera.
  • Xc, Yc, Zc: X, Y, Z axis coordinates of the camera coordinate system
  • Xw, Yw, Zw: X, Y, Z axis coordinates of the world coordinate system
  • The image coordinate system can be expressed as Mathematical Equation 2 below, using the pinhole camera model and Mathematical Equation 1.
  • The Zw value can be assumed to be 0, because the recognized lane markings lie on the road surface.
  • The coordinates of the pixel coordinate system can be expressed as Mathematical Equation 3 below, using the horizontal and vertical resolution of the image sensor and the pixel width and height.
  • xp, yp: pixel coordinates of the horizontal and vertical axes
  • rw, rh: horizontal and vertical resolution of the image sensor
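Mathematical Equations 1 to 4 are not reproduced in this text, but the vanishing-line computation they support can be illustrated with a simplified pinhole model: for a camera pitched down by angle θ, the vanishing line sits roughly f·tan(θ) pixels above the principal point. This is a minimal sketch under that assumption; the function name and the focal-length, principal-point, and tilt values are illustrative, not the patent's own parameters.

```python
import math

def vanishing_row(f_px, cy, tilt_rad):
    """Approximate image row of the vanishing line for a pinhole camera
    pitched down by tilt_rad. f_px is the focal length in pixels and cy
    the principal-point row; rows grow downward, so the horizon appears
    above cy for a downward-tilted camera."""
    return cy - f_px * math.tan(tilt_rad)

# Illustrative values: 700 px focal length, 480-row image, 5-degree tilt.
row = vanishing_row(f_px=700.0, cy=240.0, tilt_rad=math.radians(5.0))
```

The returned row would be used as the top boundary of the entire ROI, with the hood line as the bottom boundary, as described above.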
  • Fig. 4 is the block flowchart to draw processing flow of the method for lane departure warning based on image recognition according to the present invention.
  • The lane departure detection method based on image recognition has the following seven steps.
  • Step 1 sets the area of the image captured by the vehicle-mounted camera that will be used for image processing to detect lane markings (hereinafter, the entire ROI).
  • Step 2 divides the entire ROI into several regions (hereinafter, the Cell ROIs).
  • Step 3 extracts edges in each Cell ROI that includes lane markings, using an edge detection technique.
  • Step 4 extracts straight lines through the Hough Transform, using the edges extracted in each Cell ROI in Step 3.
  • Step 5 extracts multiple dot elements that correspond to the above straight lines and are arranged at intervals along them.
  • Step 6 defines a lane model by applying the least-squares method to the dot elements from Step 5.
  • Step 7 judges whether or not the car departs the lane, based on the lane model defined in Step 6.
  • Step 1 can include a sub-step that transforms the image input from the vehicle camera into a black-and-white image and a sub-step (1-1) that emphasizes the yellow component in the transformed black-and-white image.
  • The Cell ROIs in Step 2 are cell areas formed by dividing the entire ROI lengthwise or widthwise.
  • The Cell ROIs are arranged along the left and right lane markings within the entire ROI.
  • The edge detection technique in Step 3 is preferably the Sobel edge detection technique.
  • Step 4 can include a sub-step that calculates the average gradient of the straight lines extracted per Cell ROI and a sub-step that removes any extracted straight line whose angle differs from that average gradient by more than a certain amount.
  • The lane-marking recognition method of this invention can include a step that produces a virtual straight line for any Cell ROI in which no straight line could be extracted in Step 4 (from now on, a non-extraction Cell ROI), by inference from the straight-line information of the Cell ROIs before and after it.
  • The inference using the straight-line information of the Cell ROIs arranged before and after produces a straight line connecting the bottom of the line in one neighboring Cell ROI and the top of the line in the other. A line can also be produced by combining this with the average gradient of those straight lines.
  • The lane-marking recognition method of this invention can include a step that judges whether the lane markings in the image shot by the camera are straight or curved, using the gradients of the straight lines extracted per Cell ROI in Step 4.
  • If the result of that judgment is a straight line, Step 6 determines the lane model by applying the Least Squares Technique with a linear-equation approximation.
  • If the result is a curved line, Step 6 determines the lane model by applying the Least Squares Technique with a quadratic-equation approximation.
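The endpoint-connecting inference described above can be sketched as follows. The representation of a line as a (bottom endpoint, top endpoint) pair of (x, y) pixel coordinates is an assumption for illustration; the patent does not fix a data format.

```python
def virtual_line(prev_line, next_line):
    """Infer a line for a non-extraction Cell ROI by connecting the top
    endpoint of the line in the cell below (prev_line) to the bottom
    endpoint of the line in the cell above (next_line). Each line is
    ((x_bottom, y_bottom), (x_top, y_top)) in image coordinates, where
    y decreases toward the top of the image."""
    top_of_prev = prev_line[1]      # top endpoint of the lower cell's line
    bottom_of_next = next_line[0]   # bottom endpoint of the upper cell's line
    return (top_of_prev, bottom_of_next)
```

For example, with a line ending at (110, 280) in the cell below and one starting at (120, 260) in the cell above, the virtual line spans those two endpoints, keeping the lane contour continuous.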
  • Step 1 of this invention is the step (S10) that performs initialization in order to realize the lane-marking recognition function by processing and recognizing the images taken by the camera while driving.
  • Step 1 finds the vertical coordinate of the vanishing line in the images using Mathematical Equation 1 or Mathematical Equation 3.
  • The vanishing line is the line corresponding to the horizon among the many horizontal lines that divide the entire ROI widthwise along the vertical axis.
  • The vertical coordinate of the vanishing line is the yp value obtained when Yw of the world coordinate system converges to infinity.
  • The vertical coordinate of the vanishing line is calculated by Mathematical Equation 4.
  • This invention sets up the entire ROI by placing the vertical coordinate of the vanishing line at the top and the vertical coordinate of the vehicle hood at the bottom.
  • The space between the 'vanishing' line and the 'hood' line in Figure 5 is the entire ROI that will be used in image processing.
  • Step 1 of this invention not only sets up the entire ROI but also performs the following initialization.
  • This step allocates the memory needed to process the images, such as the black-and-white image, the horizontal gradient image, and the Hough Transform.
  • This step calculates the sine and cosine values needed to perform the Hough Transform and saves them as a lookup table.
  • This step converts the minimum and maximum thickness of the lane markings and the minimum and maximum lane width in the world coordinate system into values for every ordinate between the top and bottom of the entire ROI in the image, using the coordinate-system equations.
  • This step calculates and saves the ordinate corresponding to the location of the car's front wheels in the image, using the coordinate-system equations, in order to judge lane departure.
  • Step 1-1 of the invention is performed by the Image Transformation Part (30). Describing it further, the color image taken by the vehicle-mounted camera is converted to a black-and-white image. Then, to increase the recognition rate of yellow lines, the Cb component of the YCbCr color space is used. The yellow component appears vividly in the black-and-white image when the following formula is used.
  • Gray Pixel = Y + (128 - Cb) * 2
  • Fig. 6(a) shows the general black-and-white image and Fig. 6(b) shows the yellow-highlighted image.
  • The Cb color component underscores the yellow factor, making the lane markings in the black-and-white image considerably clearer.
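The Gray Pixel formula above can be applied per pixel as in the sketch below. The clipping to the 8-bit range is an assumption (the patent does not state how out-of-range results are handled); the function name is illustrative.

```python
import numpy as np

def yellow_emphasized_gray(y, cb):
    """Gray = Y + (128 - Cb) * 2, per the formula in the text.
    y and cb are uint8 planes of a YCbCr image; since yellow pixels have
    low Cb, the (128 - Cb) term brightens them in the gray image.
    The result is clipped to [0, 255] (an assumption)."""
    g = y.astype(np.int32) + (128 - cb.astype(np.int32)) * 2
    return np.clip(g, 0, 255).astype(np.uint8)
```

A low-Cb (yellowish) pixel such as Y=100, Cb=100 maps to a brighter gray than its plain luma, while a high-Cb (bluish) pixel is darkened.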
  • Step 2 of the invention is performed by the Image Division Part (40).
  • This invention sets Cell ROIs corresponding to the left and right lane markings within the entire ROI of the image.
  • The nearer the lane markings are to the vehicle, the more linear they appear.
  • The farther they are from the vehicle, the less clear they appear.
  • The farther the lane markings are from the vehicle, the less linear they appear.
  • Horizontal lines arrayed at constant spacing along the vertical axis divide the entire ROI into an appropriate number of regions, so the lane markings in each segmented region can be image-processed independently.
  • This ROI set-up makes the presence of lane markings more predictable.
  • The regions segmented along the vertical axis are again divided by several vertical lines (42) along the horizontal axis, for the left and right lane markings respectively. Finally, as Fig. 7 shows, the Cell ROIs are obtained.
  • This invention recommends that the Cell ROIs be positioned continuously along the series of left and right lane markings within the entire ROI and be configured in an arranged format.
  • The width of each Cell ROI can be adjusted according to whether the lane markings were recognized in the previous frame.
  • Fig. 7(a) shows the ROIs in the case where the lane markings were not found in the previous frame.
  • Fig. 7(b) shows the ROIs in the case where the lane markings were found in the previous frame.
  • In the former case, this invention sets the width of the corresponding Cell ROI to span the interval between the lane's minimum and maximum width (refer to Fig. 7(a)).
  • In the latter case, this invention sets the width of the corresponding Cell ROI by extending a constant value to the left and right of the center of the lane markings recognized in the previous frame (refer to Fig. 7(b)).
  • When the width of the Cell ROI is controlled as described above, various noises that could cause recognition errors, such as letters, numbers, figures, and lines written or drawn on the lane or road, can be removed.
  • Because the entire ROI is split vertically and horizontally, only certain areas are used in the image recognition process. The area actually image-processed is therefore minimized, and the computational workload for the image processing is reduced.
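The Cell ROI layout for the tracked case of Fig. 7(b) can be sketched as follows: the entire ROI is cut into horizontal bands, and each band gets one cell centered on the lane-marking x-position predicted from the previous frame. The function name, the per-band center list, and the fixed half-width are illustrative assumptions.

```python
def cell_rois(top, bottom, n_cells, centers, half_width):
    """Split the entire-ROI rows [top, bottom) into n_cells horizontal
    bands and place one Cell ROI per band around a predicted lane-marking
    x-center (centers[i] from the previous frame), extended by half_width
    to each side. Returns (y0, y1, x0, x1) tuples in pixel coordinates."""
    band_h = (bottom - top) // n_cells
    cells = []
    for i in range(n_cells):
        y0 = top + i * band_h
        cells.append((y0, y0 + band_h,
                      centers[i] - half_width, centers[i] + half_width))
    return cells
```

Running the same layout with a wider half-width (spanning the lane's minimum-to-maximum width) gives the untracked case of Fig. 7(a).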
  • Step 3 of this invention extracts the edges in each Cell ROI that includes lane markings, using the edge extraction method. This phase is performed by the Edge Extraction Part (50).
  • This invention acquires a horizontal gradient image and a gradient magnitude image.
  • The horizontal gradient image is the horizontal component of the Sobel algorithm.
  • Fig. 8 exemplifies this horizontal gradient image.
  • The gradient magnitude image uses the horizontal and vertical components. It is acquired through Mathematical Equation 5 below.
  • Fig. 9 exemplifies the gradient magnitude image calculated through the above equation.
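Mathematical Equation 5 itself is not reproduced here; the sketch below assumes the usual magnitude form G = sqrt(Gx² + Gy²) over the standard 3x3 Sobel kernels. It is a deliberately naive loop for clarity, not a performant implementation.

```python
import numpy as np

def sobel_gradients(img):
    """Horizontal (gx) and vertical (gy) Sobel responses of a 2-D float
    image, plus the gradient magnitude assumed for Equation 5.
    Border pixels are left at zero."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)   # horizontal gradient image
            gy[i, j] = np.sum(ky * patch)
    return gx, gy, np.hypot(gx, gy)         # magnitude = sqrt(gx^2 + gy^2)
```

On a dark-to-bright vertical step (road to marking), gx is positive just left of the step, matching the sign convention the text uses for marking boundaries.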
  • This invention considers the following features of lane markings in order to recognize them, using three images: the black-and-white image from Step 1-1, the horizontal gradient image, and the gradient magnitude image from Step 3.
  • The left and right borderlines of a lane marking are the points that have local maximum values in the gradient magnitude image.
  • The dot points in Fig. 9 (red dots; 52) are exactly the left and right borderlines of the lane markings, which have local maximum values.
  • In the horizontal gradient image, the inner boundary of the left lane marking has a negative value and the inner boundary of the right lane marking has a positive value.
  • This invention acquires the lane edge image by the following method, using the above characteristics of lane markings.
  • While moving from left to right along the image row corresponding to the y coordinate of the ROI, a pixel X1 is selected that has a positive value in the horizontal gradient image and a local maximum value in the gradient magnitude image.
  • Fig. 10 exemplifies the edge image of lane markings calculated by the above method.
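The left-to-right row scan described above can be sketched for a single ROI row. The magnitude threshold is an assumption (the patent only requires a local maximum), and labeling each local maximum as rising or falling by the sign of the horizontal gradient follows the boundary-sign feature stated in the text.

```python
def border_pixels(row_gx, row_mag, threshold=50.0):
    """Scan one ROI row left to right and keep pixels that are local
    maxima of the gradient magnitude. The sign of the horizontal
    gradient distinguishes a rising (dark-to-bright) border from a
    falling one, per the text's boundary-sign feature."""
    out = []
    for x in range(1, len(row_mag) - 1):
        local_max = row_mag[x] >= row_mag[x - 1] and row_mag[x] > row_mag[x + 1]
        if local_max and row_mag[x] >= threshold:
            out.append((x, 'rising' if row_gx[x] > 0 else 'falling'))
    return out
```

A marking crossed left to right yields a rising/falling pair, which is the two-dot pattern visible in Figs. 8 and 9.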
  • The dots (red dots) in Fig. 10 correspond to the right dot of the two dots shown in Fig. 8 and Fig. 9, respectively.
  • Step 4 calculates the parameters (ρ, θ) of a straight line from the edge information detected in each Cell ROI in Step 3, using the Hough Transform.
  • A straight line can be described by Mathematical Equation 6 below.
  • Noise reduction uses the property that a genuine straight line connects seamlessly with the lines in the previous and next Cell ROIs, so noise and disconnected lines are removed. Moreover, straight lines whose slopes deviate from the average gradient value of the Cell ROIs after the Hough Transform in S4 are removed.
  • Fig. 11 shows the straight lines obtained by the Hough Transform using the edge information.
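Mathematical Equation 6 is not reproduced here; the sketch below assumes the standard normal form ρ = x·cos(θ) + y·sin(θ) and votes the edge points of one Cell ROI into a (ρ, θ) accumulator, returning the best-voted bin. The bin resolution (1 pixel in ρ, 1 degree in θ) is an assumption; a real implementation would use the precomputed sine/cosine lookup table mentioned earlier.

```python
import math
from collections import Counter

def hough_best_line(points, theta_steps=180):
    """Vote (x, y) edge points into (rho, theta) bins using
    rho = x*cos(theta) + y*sin(theta), with rho rounded to the nearest
    pixel, and return (rho, theta) of the most-voted bin."""
    acc = Counter()
    for x, y in points:
        for t in range(theta_steps):
            th = math.pi * t / theta_steps
            rho = round(x * math.cos(th) + y * math.sin(th))
            acc[(rho, t)] += 1
    (rho, t), _ = acc.most_common(1)[0]
    return rho, math.pi * t / theta_steps
```

For collinear edge points the winning bin collects one vote per point, which is why a vote-count threshold also serves as the noise filter described above.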
  • The 3rd Cell ROI of the right lane marking is regarded as a non-calculated ROI, and its virtual line is generated as described above.
  • Step 5 extracts multiple dot elements separated at regular intervals in Dot Extraction part (70), using the straight lines calculated in Step 4.
  • Part of the dot elements are placed along the left lane marking, and the other part along the right lane marking.
  • Fig. 12 shows the extracted dot elements (71) along straight lane markings, and Fig. 13 shows the extracted dot elements (72) along curved lane markings.
  • The extracted multiple dot elements are turned into a lane model using the Least Squares Technique in Step 6.
  • Step 6 determines the lane model using the Least Squares Technique in Lane Model Formation part (80), with the multiple dot elements extracted in Step 5.
  • A lane marking is determined to be curved if the calculated average variance is above a specific value and the average gradient value continuously decreases or increases from the bottom to the top of the series of Cell ROIs in the left (or right) lane marking. In all other cases, it is determined to be a straight lane marking.
  • The benefit of this method is a lower computing workload than when the Least Squares Technique is applied directly to the points on the lane markings.
  • The method applies the Least Squares Technique only to the dot elements extracted from the lines calculated by the Hough Transform, so it can omit unnecessary edges.
  • the step 7 is to judge lane departure status in Lane Departure Distinction part (90) by calculating distances from the front wheels to the lane markings using the lane model in the step 6.
  • a horizontal coordinate will be calculated by putting a vertical coordinate of the host car's front wheels, in which processed images from the step 1, into the lane model equation.
  • the distance from the front wheels to the lane markings can be calculated using the above horizontal, vertical coordinates, and the width of the host car.
  • the green numbers are the distance of from the front wheels to the lane markings in the fig. 12 and 13.
  • Warning Output part (95) As the distance from the left lane marking (or the right lane marking) to the front wheel of the left side (or the right side) on the host car is calculated above, it can judge if the car departs its lane based on a width information of the car, and then the driver will be warned with sound or external signals in Warning Output part (95).
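The distance check in step 7 can be sketched as follows. This is a minimal Python illustration: the function name, the pixel units, and the warn_margin threshold are assumptions for the example, not values from the patent, and the real Lane Departure Distinction part (90) also folds in the car's width information.

```python
def departure_status(lane_x, wheel_x, warn_margin):
    """Judge one side of the car. lane_x is the lane-model x at the
    front-wheel row, wheel_x is the wheel's horizontal position (both
    in pixels, for the left lane marking, with the wheel expected to
    lie to the right of the marking)."""
    distance = wheel_x - lane_x
    if distance <= 0:
        return "departed"          # wheel is on or past the marking
    if distance <= warn_margin:
        return "early-warning"     # approaching within the margin
    return "ok"
```

The right-hand side is symmetric, with the sign of the distance reversed.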


Abstract

This invention relates to a method and system for lane departure detection, and offers a method and system for lane departure warning based on image recognition that maximizes the accuracy of lane marking recognition by being robust to various road conditions, such as various forms of noise, road shapes, and vanishing lane markings. The lane departure detection method based on image recognition has the following 7 steps. Step 1 sets the area to be used for image processing to detect lane markings in the image captured by the vehicle-equipped camera (hereinafter referred to as 'the entire ROI'). Step 2 divides the entire ROI into several regions (hereinafter, the Cell ROIs). Step 3 extracts the edges in each Cell ROI that includes lane markings, using an edge detection technique. Step 4 extracts straight lines through the Hough Transform, using the edges extracted per Cell ROI in Step 3. Step 5 extracts multiple dot elements that correspond to the above straight lines and are spaced apart from one another. Step 6 defines a lane model by applying the least-squares method to the multiple dot elements from Step 5. Step 7 judges whether or not the car departs the lane, based on the lane model defined in Step 6.

Description

METHOD AND SYSTEM FOR LANE DEPARTURE WARNING BASED ON IMAGE RECOGNITION
This invention relates to a method and system for lane departure detection based on image recognition through a single camera installed in a car. More specifically, this invention can distinguish whether the host car departs its lane or not by dividing the input image into Cell Regions of Interest (hereafter, ROI), extracting dot elements, and judging the departure status.
Generally, a Lane Departure Warning System measures whether the host vehicle departs the lane markings or not, and is configured to perform predetermined alarm processing if it determines that the vehicle departs the lane.
A traditional Lane Departure Warning System basically recognizes lane markings formed as lines and determines whether the vehicle departs or not. Traditional Lane Departure Warning Systems were unable to recognize lane departure precisely due to many variables and noise, such as the lane width, the radius of curvature of the lane, the lane crossing time, and the difference between the center of the lane and the position of the camera's optical axis.
Further, traditional Lane Departure Warning Systems were not precise on roads where a lane marking is disconnected in the middle, or on roads that have markings other than lane markings. Because of the issues mentioned above, a traditional system can sound a false alarm even when the vehicle has not deviated from the center of the driving lane.
Korean Patent Application Publication No. 2000-0037604 discloses a method to recognize the lane. This technique uses two cameras to recognize lane markings, which have high brightness compared to the road. In general, however, foreign objects other than lane markings, such as objects fallen from a cargo box, can lie on the road. If such a foreign object is brighter than the road itself, a traditional Lane Departure Warning System recognizes it as lane markings. Also, using two cameras increases product cost and requires considerable installation space, which makes a compact design difficult.
In addition, the conventional lane recognition method uses only the Sobel algorithm to extract the edges of the lane markings and recognizes the lane markings through them. However, this method is very vulnerable to noise: it detects not only the lane markings but also non-lane-marking image content.
This invention has been devised to solve the above problems. The purpose of this invention is to prevent misrecognition of lane markings when there are broken lane markings, marks other than the lane markings, and noise that is brighter than the road.
Further, this invention provides a method and system for lane departure warning based on image recognition that can recognize lane markings precisely on curved roads by suppressing noise, using a single camera.
In order to achieve the above object, a lane departure detection method based on image recognition has the following 7 steps. Step 1 sets the area to be used for image processing to detect lane markings in the image captured by the vehicle-equipped camera (hereinafter referred to as 'the entire ROI'). Step 2 divides the entire ROI into several regions (hereinafter, the Cell ROIs). Step 3 extracts the edges in each Cell ROI that includes lane markings, using an edge detection technique. Step 4 extracts straight lines through the Hough Transform, using the edges extracted per Cell ROI in Step 3. Step 5 extracts multiple dot elements that correspond to the above straight lines and are spaced apart from one another. Step 6 defines a lane model by applying the least-squares method to the multiple dot elements from Step 5. Step 7 judges whether or not the car departs the lane, based on the lane model defined in Step 6.
According to the method and system for lane departure warning based on image recognition of this invention, the system can remove various noises that can be mistaken for lane markings, such as road information or traffic signs on the road, by dividing the input image and processing within the Cell ROIs. In addition, even when lane markings are cut in the middle, the system can generate virtual lane markings to maximize the accuracy of lane marking recognition.
Moreover, the entire ROI is split vertically and horizontally into the Cell ROIs, and by processing only the Cell ROIs, the region actually image-processed is minimized, which reduces the operation workload of the image processing.
By using representative dots of the straight lines extracted through the Hough Transform instead of applying the edge dots of the lane markings directly to the Least Square Technique, accuracy is enhanced and the operation workload is reduced: edges that do not correspond to a straight line are removed, and the number of dots applied to the Least Square Technique is reduced.
Fig. 1 shows the overall configuration of the system for lane departure warning based on image recognition according to the present invention.
Fig. 2 is the block diagram that draws the detail configuration of the image recognition processor illustrated in Figure 1.
Fig. 3 is the coordinates of the method and system for lane departure warning based on image recognition.
Fig. 4 is the block flowchart to draw processing flow of the method for lane departure warning based on image recognition according to the present invention.
Fig. 5 is an example to show that the entire ROI is set through the location of the hood and vanishing line of the present invention.
Fig. 6(a) is an example of the general black and white image.
Fig. 6(b) is an example of the black and white image which highlights the yellow component that is converted by image conversion part of the present invention.
Fig. 7(a) is an example of the Cell ROI configured when lane markings were not recognized in the previous frame.
Fig. 7(b) is an example of the Cell ROI configured when lane markings were recognized in the previous frame.
Fig. 8 is an example of the gradient magnitude image computed by the Sobel algorithm of the invention's edge extraction part.
Fig. 9 is an example of the horizontal gradient image computed by the Sobel algorithm of the invention's edge extraction part.
Fig. 10 is an example of the edge display computed by the invention's edge extraction part.
Fig. 11 is an example of straight lines computed by the invention's straight line extraction part.
Fig. 12 is an example of dot elements extracted by the invention's dot extraction part for a straight lane.
Fig. 13 is an example of dot elements extracted by the invention's dot extraction part for a curved lane.
[PART NUMBER INDEX]
10 : Camera
20 : Image Recognition Processing
30 : Image Transformation
40 : Image Division
41 : Widthwise direction line of Cell ROI
42 : Lengthwise direction line of Cell ROI
50 : Edge Extraction
60 : Straight Line Information Extraction
70 : Dot Extraction
71, 72 : Extracted Dot Elements
80 : Lane Model Formation
90 : Lane Departure Distinction
95 : Warning Output
This invention can remove noise as much as possible and successfully warn of lane departure, without lane marking recognition errors, under a variety of environmental conditions such as damaged lane markings and a variety of road conditions such as straight or curved roads.
The following description presents examples, advantages, and features of this invention with reference to the accompanying drawings.
Fig. 1 shows the overall configuration of the system for lane departure warning based on image recognition according to the present invention. And, Fig. 2 is the block diagram that draws the detail configuration of the image recognition processor illustrated in Figure 1.
With reference of Fig. 1 and 2, the system for lane departure warning based on image recognition of this invention contains Camera (10), Image Recognition Processing part (20) and Warning Output part (95).
Camera (10), installed inside the car, is the component that acquires images by recording the road ahead of the driving car; suitably, it can be a CMOS or CCD camera.
Image Recognition Processing part (20) of this invention recognizes lane markings from images taken by Camera (10) and processes images for lane departure judgment in order to prevent car accidents by drowsy or sleepy driving.
Image Recognition Processing part (20) can be divided into 7 sub-parts for functional purposes: Image Transformation (30), Image Division (40), Edge Extraction (50), Straight Line Information Extraction (60), Dot Extraction (70), Lane Model Formation (80), and Lane Departure Distinction (90).
The system for lane departure warning based on image recognition of this invention first initializes itself in order to detect lane departure by recognizing and processing the images acquired through the camera while driving.
System initialization sets up a region of the input image for lane detection processing (the entire ROI) and allocates memory for image processing. It then saves information on lane markings, road width, and front wheel location, together with values for the Hough Transform, as lookup tables.
Once system initialization is completed, recognition processing of input images and lane departure detection will be done through some stages of each composition part.
Image Transformation part (30) of this invention transforms the input images taken by the camera into black-and-white images, and then suitably emphasizes the yellow component in the transformed black-and-white images.
Image Division part (40) of this invention divides the entire ROI of the black-and-white image into a series of regions (from now on, the Cell ROIs). The Cell ROIs are cell areas formed by dividing the entire ROI lengthwise and widthwise, and they are arranged along the left and right lane markings within the entire ROI.
Edge Extraction part (50) of this invention detects edge information using an edge detection technique in each Cell ROI that includes lane markings. In this case, the Sobel edge method is used, which detects the edges of objects in an image from the gradient differences calculated by a differential operator.
Straight Line Information Extraction part (60) of this invention extracts the straight line information through Hough Transform by using the edge extracted by the edge extraction part (50).
For reference, the Hough Transform used by Straight Line Information Extraction part (60) is the most widely used method for finding straight lines in image processing. It finds straight lines by converting a linear equation on the two-dimensional image coordinates into a parametric space. A detailed explanation of the Hough Transform is omitted, as it is a commonly used algorithm in engineering. Moreover, Straight Line Information Extraction part (60) generates virtual straight lines for Cell ROIs in which no straight line could be extracted (non-extraction Cell ROIs). Virtual straight lines are generated within the non-extraction Cell ROIs by calculating straight line information from the previous and next ROIs. Thus, intact straight lines can be extracted continuously.
Dot Extraction part (70) of this invention extracts multiple dot elements that correspond to the straight lines from Straight Line Information Extraction part (60) and are spaced apart from one another.
Therefore, a group of multiple dot elements extracted by Dot Extraction part (70) will be located along the left lane markings at regular intervals. The other extracted group will be located along the right lane markings at regular intervals.
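The regular-interval placement of dot elements can be sketched as below, assuming each Cell ROI's line is available in the (ρ, θ) normal form produced by the Hough Transform; the function name and the sampling step are illustrative assumptions of this sketch.

```python
import math

def extract_dots(rho, theta, y_top, y_bottom, step):
    """Sample dot elements at regular vertical intervals along the line
    rho = x*cos(theta) + y*sin(theta) within one Cell ROI.  Assumes a
    near-vertical lane line, so cos(theta) is not zero."""
    dots = []
    for y in range(y_top, y_bottom + 1, step):
        x = (rho - y * math.sin(theta)) / math.cos(theta)
        dots.append((round(x), y))
    return dots
```

Running this per Cell ROI, top to bottom, yields the evenly spaced dots shown in Fig. 12 and Fig. 13.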
Lane Model Formation part (80) of this invention defines lane model by applying multiple dot elements extracted by Dot Extraction part (70) to Least Square Technique.
To be more concrete, Lane Model Formation part (80) judges whether the lane markings in an image are straight or curved, using the slopes of the straight lines extracted per Cell ROI. If the judgment is a straight line, Lane Model Formation part (80) determines the lane markings by applying the Least Square Technique with a linear-equation approximation. If the judgment is a curved line, Lane Model Formation part (80) defines the lane model by applying the Least Square Technique with a quadratic-equation approximation; this lane model is then used to detect lane departure.
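The two least-squares fits can be sketched with NumPy's polyfit, choosing a linear model for a straight judgment and a quadratic model for a curved one. This is a sketch of the idea, not the patent's exact implementation; fitting x as a function of y keeps near-vertical lane markings well-conditioned.

```python
import numpy as np

def fit_lane_model(xs, ys, curved):
    """Least-squares fit x = f(y) to the extracted dot elements:
    linear for straight lane markings, quadratic for curved ones.
    Returns polynomial coefficients, highest power first."""
    degree = 2 if curved else 1
    return np.polyfit(ys, xs, degree)

def lane_x_at(coeffs, y):
    """Horizontal position of the lane model at vertical coordinate y."""
    return float(np.polyval(coeffs, y))
```

Evaluating the model at the front-wheel row gives the horizontal lane position used in the departure judgment.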
Lane Departure Distinction part (90) of this invention judges whether the host vehicle departs the lane or not by using lane model determined through Lane Model Formation part (80) (in other words, a base line to detect lane departure).
Concretely, Lane Departure Distinction part (90) of this invention calculates the distance between the front wheels of the vehicle and the lane marking, and sends a warning occurrence signal to Warning Output part (95) when the calculated distance goes beyond the baseline.
In addition, Lane Departure Distinction part (90) sends an early warning occurrence signal to Warning Output part (95) not only upon lane departure but also when the vehicle approaches the lane markings within a certain distance.
Warning Output part (95) of this invention emits a visual or sound warning according to the warning occurrence signal or early warning occurrence signal sent by Lane Departure Distinction part (90), informing the driver of the relevant fact.
Moreover, it is recommended that the system provide two different warning levels, warning and early warning, so that the driver can tell whether the host car is about to depart or has already departed its lane.
From now on, this description will explain the function and effect performed by each configuration part of Image Recognition Processing part (20) in recognizing and processing the image.
Fig. 3 is the coordinates of the method and system for lane departure warning based on image recognition. For reference, Camera (10) of Fig. 3 is installed inside the car and can be a CMOS or CCD camera.
In Fig. 3, Xw, Yw, and Zw denote the world coordinate system covering the whole space of the driving road, and Xc, Yc, and Zc denote the camera coordinate system centered on the camera. The camera is inclined by angle θ with respect to the road surface bearing the lane markings, and the height from that surface is denoted H. In this invention, the relationship between the real-world coordinates, the camera, and the image coordinates is explained first, because the invention recognizes lane markings and judges lane departure from the thickness of the lane markings and the lane width in real-world coordinates, transformed into the image imported through the camera.
At this time, world coordinate system (Xw, Yw, Zw) and the camera coordinate system (Xc, Yc, Zc) have the relationship of Mathematical equation 1.
[MATHEMATICAL EQUATION 1]
(Equation 1 image not reproduced in this text: the transformation between the world and camera coordinate systems.)
(In Mathematical Equation 1,
Xc, Yc, Zc : X, Y, Z axis coordinates of the camera coordinate system,
Xw, Yw, Zw : X, Y, Z axis coordinates of the world coordinate system,
θ : tilt angle of the camera optical axis (Zc) relative to the horizon (Yw),
H : the height from the road surface to the location where the camera is installed)
The image coordinate system can be expressed as the following Mathematical equation 2 by using Pinhole camera model and the above Mathematical equation 1.
[MATHEMATICAL EQUATION 2]
(Equation 2 image not reproduced in this text: the pinhole projection of world coordinates onto the image coordinate system.)
( In the mathematical equation 2,
u : horizontal axis of the image coordinates system,
v : vertical axis of the image coordinates system,
f : The focal length of the camera )
Meanwhile, the Zw value can be assumed to be 0 because the recognized lane markings lie on the road surface. Once Zw = 0 is applied to the above Mathematical Equation 2, the image coordinate system formula is simplified.
The coordinates of the pixel coordinate system can be expressed as the following Mathematical Equation 3, by using the horizontal and vertical resolution of the image sensor and the horizontal and vertical pixel size.
[MATHEMATICAL EQUATION 3]
(Equation 3 image not reproduced in this text: the conversion from image coordinates to pixel coordinates.)
(In the mathematical equation 3,
xp, yp : pixel coordinates on the horizontal, vertical axes,
rw, rh : horizontal, vertical resolution of the image sensor,
Su, Sv : horizontal, vertical pixel size of the image sensor. )
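Because the equation images are not rendered in this text, the following sketch assumes a standard pinhole projection (u = f·Xc/Zc, v = f·Yc/Zc) followed by the pixel conversion suggested by the variable list of Mathematical Equation 3; the exact signs and centering in the patent's equations may differ.

```python
def camera_to_pixel(xc, yc, zc, f, rw, rh, su, sv):
    """Project a camera-frame point (meters) to pixel coordinates with
    a standard pinhole model: image-plane coordinates u = f*Xc/Zc and
    v = f*Yc/Zc, then a shift to the sensor center and a scale by the
    pixel pitch (Su, Sv).  Conventions here are assumptions."""
    u = f * xc / zc
    v = f * yc / zc
    xp = rw / 2 + u / su
    yp = rh / 2 - v / sv   # pixel rows grow downward
    return xp, yp
```

For example, a point on the optical axis maps to the sensor center (rw/2, rh/2).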
From now on, this description will explain the lane departure detection method, which processes the lane markings in the input image based on the relationships among the world coordinate system, the camera, and the image coordinate system.
Fig. 4 is the block flowchart to draw processing flow of the method for lane departure warning based on image recognition according to the present invention.
Referring to Fig. 4, the lane departure detection method based on image recognition has the following 7 steps. Step 1 sets the area to be used for image processing to detect lane markings in the image captured by the vehicle-equipped camera (hereinafter referred to as 'the entire ROI'). Step 2 divides the entire ROI into several regions (hereinafter, the Cell ROIs). Step 3 extracts the edges in each Cell ROI that includes lane markings, using an edge detection technique. Step 4 extracts straight lines through the Hough Transform, using the edges extracted per Cell ROI in Step 3. Step 5 extracts multiple dot elements that correspond to the above straight lines and are spaced apart from one another. Step 6 defines a lane model by applying the least-squares method to the multiple dot elements from Step 5. Step 7 judges whether or not the car departs the lane, based on the lane model defined in Step 6.
Desirably, step 1 can include a step that transforms the image input from the vehicle camera into a black-and-white image, and a step (1-1) that emphasizes the yellow component in the transformed black-and-white image.
The Cell ROIs in step 2 are cell areas formed by dividing the entire ROI lengthwise and widthwise, arranged along the left and right lane markings within the entire ROI.
The edge detection technique in step 3 can desirably be the Sobel edge detection technique.
Step 4 can include a step that calculates the average slope of the straight lines extracted per Cell ROI, and a step that removes any extracted straight line whose slope differs from that average by more than a certain angle.
The lane marking recognition method of this invention can include a step that produces a virtual straight line for any Cell ROI in which no straight line could be extracted in step 4 (from now on, a non-extraction Cell ROI), through inference using the straight line information of the previous and next Cell ROIs.
Here, inference using the straight line information of the Cell ROIs arranged before and after means producing a straight line that connects the bottom of the straight line in the preceding Cell ROI with the top of the straight line in the following Cell ROI. Alternatively, a line combining these can be produced using the average slope of those straight lines.
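The endpoint-joining variant of this inference can be sketched as below, assuming each cell's line is represented by its top and bottom endpoints in pixel coordinates; that representation is an assumption of this example.

```python
def virtual_line(line_above, line_below):
    """Generate a virtual line for a non-extraction Cell ROI by joining
    the bottom endpoint of the line in the cell above with the top
    endpoint of the line in the cell below.  Lines are given as
    ((x_top, y_top), (x_bottom, y_bottom)) in pixel coordinates."""
    top = line_above[1]     # bottom endpoint of the line in the cell above
    bottom = line_below[0]  # top endpoint of the line in the cell below
    return (top, bottom)
```

The slope-averaging variant would instead fit a line through the cell with the mean slope of the neighbors.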
In addition, the lane marking recognition method of this invention can include a step that judges whether the lane markings in the image taken by the above camera are straight or curved, using the slopes of the straight lines extracted per Cell ROI in step 4. In that case, step 6 determines the lane model by applying the Least Square Technique with a linear-equation approximation if the judgment is a straight line, and by applying the Least Square Technique with a quadratic-equation approximation if the judgment is a curved lane.
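One way to realize this judgment, following the variance and monotonic-slope conditions stated elsewhere in this description, is sketched below; the variance threshold and the exact monotonicity test are assumptions of this example.

```python
def is_curved(slopes, var_threshold):
    """Classify lane markings as curved when the slopes of the lines
    extracted per Cell ROI (ordered bottom to top) vary more than
    var_threshold and change monotonically; otherwise straight."""
    n = len(slopes)
    mean = sum(slopes) / n
    variance = sum((s - mean) ** 2 for s in slopes) / n
    diffs = [b - a for a, b in zip(slopes, slopes[1:])]
    monotonic = all(d >= 0 for d in diffs) or all(d <= 0 for d in diffs)
    return variance > var_threshold and monotonic
```

The result then selects the linear or quadratic least-squares model.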
From now on, this description will explain about each step through the examples.
< Step 1 (S10)>
Step 1 of this invention is the step (S10) that performs initialization in order to realize the lane marking recognition function by processing and recognizing the image taken by the camera while driving. Step 1 finds the vertical coordinate of the vanishing line on the image by using Mathematical Equation 1 or Mathematical Equation 3. The vanishing line is the line, among the many widthwise lines that divide the entire ROI along the vertical axis, that corresponds to the horizon.
The vertical coordinate of the vanishing line is the yp value when Yw of the world coordinate system converges to ∞. It is calculated by Mathematical Equation 4.
[MATHEMATICAL EQUATION 4]
(Equation 4 image not reproduced in this text: the vertical pixel coordinate of the vanishing line.)
On the image, this invention sets up the entire ROI by placing the vertical coordinate of the vanishing line at the top and the vertical coordinate of the point where the vehicle hood appears at the bottom.
The space between the 'vanishing' line and the 'hood' line in Fig. 5 is the entire ROI that will be used in image processing.
Step 1 of this invention not only sets up the entire ROI but also performs initialization as follows.
Firstly, this step allocates memory for image processing, such as the black-and-white image, the horizontal gradient image, the Hough Transform, etc.
Then, this step calculates the sin and cos values needed for the Hough Transform and saves them as a lookup table.
Then, this step converts the minimum and maximum thickness of the lane markings and the minimum and maximum lane width in the world coordinate system into values for every ordinate between the top and bottom of the entire ROI on the image, using the coordinate system expressions.
It then saves the above values as a lookup table.
Finally, this step calculates and saves the ordinate value corresponding to the location of the car's front wheels on the image, using the coordinate system expressions, in order to judge lane departure.
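The sin/cos lookup table built during this initialization can be sketched as follows; the table resolution of 180 angles is an assumption of this example.

```python
import math

def build_trig_tables(n_theta=180):
    """Precompute sin/cos lookup tables for the Hough Transform so that
    they are not recomputed for every edge pixel (part of the step 1
    initialization)."""
    sin_tab = [math.sin(math.pi * t / n_theta) for t in range(n_theta)]
    cos_tab = [math.cos(math.pi * t / n_theta) for t in range(n_theta)]
    return sin_tab, cos_tab
```

The tables are indexed by the discretized angle t during the voting loop of step 4.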
< Step 1-1 (S15) >
Step 1-1 of the invention is processed by Image Transformation part (30). Describing it further, the color image taken by the vehicle-equipped camera is converted to a black-and-white image. To increase the recognition rate of yellow lines, the Cb color component of the YCbCr color space is used. The yellow component appears vividly in the black-and-white image when the following formula is used.
IF Cb - 128 < 0 THEN
    Gray Pixel = Y + (128 - Cb) * 2
ELSE
    Gray Pixel = Y
Fig. 6(a) shows the general black-and-white image and Fig. 6(b) shows the image with the yellow component highlighted. As Fig. 6 indicates, the Cb color component underscores the yellow factor and makes the lane markings in the black-and-white image considerably clearer.
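The formula above can be sketched per pixel as follows, assuming 8-bit Y and Cb values; the clipping to the valid gray range is an assumption of this sketch, not stated in the text.

```python
def emphasize_yellow(y, cb):
    """Gray value for one pixel: boost pixels whose Cb is below the
    neutral value 128 (yellowish in YCbCr), following the formula
    above.  y and cb are 8-bit components; the result is clipped
    to 0..255."""
    if cb - 128 < 0:
        gray = y + (128 - cb) * 2
    else:
        gray = y
    return min(max(gray, 0), 255)
```

Applied to every pixel, this yields the yellow-emphasized black-and-white image of Fig. 6(b).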
< Step 2 (S20) >
Step 2 of the invention is processed by Image Division part (40). This invention sets the Cell ROIs corresponding to the left and right lane markings within the entire ROI of the image.
The nearer lane markings are to the vehicle, the more linear they look; the farther away, the less clear. On a curved road, the farther the lane markings are from the vehicle, the less linear they look. So, as in Fig. 5, horizontal lines arrayed at constant intervals along the vertical axis divide the entire ROI into an appropriate number of regions, and the lane markings in each segmented region can be image-processed independently.
Splitting the entire ROI along the vertical axis helps improve the lane marking recognition rate and the straight-line extraction performance, even on a curved road.
In addition, because information on the straight lines recognized in the previous and next frames is retained for each segmented region, this ROI set-up makes the presence of lane markings more predictable, even under difficult conditions such as dotted, impaired, or broken lane markings.
The regions segmented along the vertical axis are divided again by several vertical lines (42) along the horizontal axis at each of the left and right lane markings. Finally, as Fig. 7 shows, the Cell ROIs are obtained.
This invention recommends that the Cell ROIs be positioned continuously along each of the left and right lane markings within the entire ROI and be configured in an arranged format.
Meanwhile, the width of each Cell ROI can be adjusted according to whether the lane markings were recognized in the previous frame.
Fig. 7(a) describes the ROI when the lane markings were not found in the previous frame, and Fig. 7(b) describes the ROI when the lane markings were found in the previous frame.
If the lane markings were not recognized in the previous frame, taking the horizontal center of the image on the horizontal line of the corresponding Cell ROI as the base point, this invention sets the interval between the lane's minimum and maximum width as the width of the corresponding Cell ROI (refer to Fig. 7(a)).
On the other hand, if the lane markings were recognized in the previous frame, this invention sets the width of the corresponding Cell ROI to a range extended left and right by a constant value from the center point of the lane markings recognized in the previous frame (refer to Fig. 7(b)).
If the width of the Cell ROI is controlled as described above, various noises that may lead to recognition errors, such as letters, numbers, figures, and lines written or drawn on the lane or the road, can be removed.
And, if the entire ROI is split vertically and horizontally, only certain areas are used in the image recognition process, so the area actually image-processed is minimized and the computational workload for the image processing is reduced.
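The width adjustment of one Cell ROI can be sketched as below; the function, its arguments, and the right-lane orientation are illustrative assumptions of this sketch.

```python
def cell_roi_bounds(prev_center, img_center, min_half, max_half, margin):
    """Horizontal bounds of one Cell ROI on its row, for the right lane
    marking.  If the marking was found in the previous frame, use a
    narrow band of +/- margin around its recognized center (Fig. 7(b));
    otherwise span the lane's minimum-to-maximum expected offset from
    the horizontal image center (Fig. 7(a))."""
    if prev_center is not None:
        return prev_center - margin, prev_center + margin
    return img_center + min_half, img_center + max_half
```

The left-lane case mirrors the offsets to the other side of the image center.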
< Step 3 (S30) >
Step 3 of this invention extracts the edges in each Cell ROI that includes lane markings, using the edge extraction method. This phase is processed by Edge Extraction part (50).
Describing it in more detail, this invention applies the Sobel algorithm in order to recognize the lane markings, and acquires the horizontal gradient image and the gradient magnitude image.
The following masks are used for the Sobel algorithm. Gx stands for the horizontal factor and Gy for the vertical factor.
Gx = [ -1  0  +1 ]    Gy = [ -1  -2  -1 ]
     [ -2  0  +2 ]         [  0   0   0 ]
     [ -1  0  +1 ]         [ +1  +2  +1 ]
The horizontal gradient image is the horizontal component of the Sobel algorithm. Fig. 8 exemplifies this horizontal gradient image.
The gradient magnitude image uses both the horizontal and vertical components. It is acquired through the following Mathematical Equation 5.
[Mathematical Equation 5]
|G| = √(Gx² + Gy²)
Fig. 9 exemplifies the gradient magnitude image to be calculated through the above equation.
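The two gradient images can be sketched with the Sobel masks and Mathematical Equation 5 as follows; a straightforward NumPy version, with borders left at zero for simplicity.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

def sobel_gradients(img):
    """Return (horizontal gradient image, gradient magnitude image)
    for a 2-D grayscale array."""
    h, w = img.shape
    gx = np.zeros((h, w), dtype=float)
    gy = np.zeros((h, w), dtype=float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * SOBEL_X)
            gy[i, j] = np.sum(patch * SOBEL_Y)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)   # Mathematical Equation 5
    return gx, magnitude
```

A rising (dark-to-bright) vertical edge produces a positive horizontal gradient, which is what the X1 search below relies on.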
This invention considers the following features of lane markings in order to recognize them, using three images: the black-and-white image from step 1-1, and the horizontal gradient image and gradient magnitude image from step 3.
- Taking the left and right boundaries of a lane marking as base points, the center area between the two boundaries is brighter than the areas outside the boundaries.
- There are characteristic brightness changes across a lane marking from left to right, from darker to brighter and from brighter to darker. In Fig. 8, the white pixels at point (51a) mark where the image gets brighter, and the black pixels at point (51b) mark where it gets darker; the same applies at each dot (51a, 51b).
- The left and right borderlines of a lane marking are points with local maximum values in the gradient magnitude image. The dots in Fig. 9 (red dots; 52) are exactly the left and right borderlines of the lane markings, which have local maximum values.
- In the horizontal gradient image, the inner boundary of the left lane marking has a negative value and the inner boundary of the right lane marking has a positive value.
This invention acquires the edge image of the lane markings by the following method, using the above four characteristics of lane markers.
a) Let y be the vertical coordinate of the upper boundary of the area, and ymax that of the lower boundary.
b) Scanning from left to right along the row of the ROI corresponding to the y coordinate, find a pixel X1 that has a positive value in the horizontal gradient image and a local maximum value in the gradient magnitude image.
c) Continuing from X1 to the right, find a pixel X2 that has a negative value in the horizontal gradient image and a local maximum value in the gradient magnitude image.
d) If the interval between X1 and X2 falls within the lookup table of lane-marking thicknesses obtained in the initial phase, and if the average brightness of the black-and-white image between X1 and X2 is greater than the brightness of the pixels adjacent to X1 and X2 (X1 - 1, X2 + 1), this invention considers it an actual lane marking and saves it as a lane edge. At this time, for a Cell ROI corresponding to the left lane marking, X2 is saved as the edge pixel; for a Cell ROI corresponding to the right lane marking, X1 is saved as the edge pixel.
e) Steps c) and d) are repeated for every pixel of the Cell ROI row corresponding to the y coordinate; afterwards, steps b) through d) are repeated with y incremented by 1. This process repeats until y reaches ymax.
Fig. 10 exemplifies the edge image of the lane markings calculated by the above method. The dots in Fig. 10 (red dots) correspond to the right dot of the two dots shown in Fig. 8 and Fig. 9, respectively.
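As a non-limiting sketch of steps b) through d) above for a single row y (the function name, the min_w/max_w stand-in for the thickness lookup table, and the inequality choices are our assumptions):

```python
import numpy as np

def scan_row_for_edge(gray, gx, mag, y, min_w, max_w, side="left"):
    """Sketch of steps b) to d) for one row y of a Cell ROI.

    gray, gx and mag are the black-and-white, horizontal-gradient and
    gradient-magnitude images as 2-D NumPy arrays.  min_w and max_w stand
    in for the lookup table of lane-marking thicknesses.  Returns the x
    coordinate saved as the edge pixel, or None if nothing qualifies."""
    width = gray.shape[1]

    def local_max(x):
        return mag[y, x] >= mag[y, x - 1] and mag[y, x] >= mag[y, x + 1]

    for x1 in range(1, width - 1):
        # b) left boundary: positive horizontal gradient, local max in mag
        if gx[y, x1] <= 0 or not local_max(x1):
            continue
        for x2 in range(x1 + 1, width - 1):
            # c) right boundary: negative horizontal gradient, local max
            if gx[y, x2] >= 0 or not local_max(x2):
                continue
            # d) thickness check against the lookup-table range ...
            if not (min_w <= x2 - x1 <= max_w):
                continue
            # ... and "center brighter than the pixels just outside" check
            center = gray[y, x1:x2 + 1].mean()
            if center > gray[y, x1 - 1] and center > gray[y, x2 + 1]:
                # The left lane marking keeps its inner edge X2; the
                # right lane marking keeps its inner edge X1.
                return x2 if side == "left" else x1
    return None
```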
<Step 4 (S40)>
The step 4 calculates the parameters (ρ, θ) of a straight line from the edge information detected in the Cell ROI in the step 3, using the Hough Transform. A straight line can be described by Mathematical Equation 6 below.
[Mathematical Equation 6]
ρ = x·cosθ + y·sinθ
After the Hough Transform, straight lines created by noise, which do not correspond to actual lane markings, need to be removed.
Noise removal exploits the property that a genuine lane line connects seamlessly to the lines of the previous and next Cell ROIs, so noise and disconnected lines are removed. Moreover, straight lines whose slopes deviate from the average gradient value over the Cell ROIs after the Hough Transform in the step 4 are removed.
However, when a straight line cannot be calculated in a Cell ROI because of noise or dotted or damaged lane markings, a virtual straight line is generated by calculating the straight-line information of the previous and next Cell ROIs.
Fig. 11 shows the straight lines after the Hough Transform using the edge information. The third Cell ROI of the right lane marking is regarded as a non-calculated ROI, and its virtual line is generated as described above.
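A minimal Hough Transform over the edge pixels of one Cell ROI, together with the virtual-line interpolation for a non-calculated Cell ROI, might look as follows (the accumulator resolution and the simple neighbour averaging are our assumptions, not the patented procedure):

```python
import numpy as np

def hough_line(edge_points, n_theta=180, rho_res=1.0):
    """Return the (rho, theta) with the most votes for a list of (x, y)
    edge pixels, using rho = x*cos(theta) + y*sin(theta)
    (Mathematical Equation 6)."""
    thetas = np.deg2rad(np.arange(n_theta))
    xs = np.array([p[0] for p in edge_points], float)
    ys = np.array([p[1] for p in edge_points], float)
    max_rho = np.hypot(xs.max() + 1, ys.max() + 1)
    acc = np.zeros((int(2 * max_rho / rho_res) + 1, n_theta), dtype=int)
    for x, y in zip(xs, ys):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + max_rho) / rho_res).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    r_i, t_i = np.unravel_index(acc.argmax(), acc.shape)
    # Note: (rho, theta) and (-rho, theta + pi) describe the same line,
    # so the returned pair is one of two equivalent representations.
    return r_i * rho_res - max_rho, thetas[t_i]

def fill_missing(params):
    """Virtual line for a non-calculated Cell ROI: average the (rho,
    theta) parameters of the previous and the next Cell ROI."""
    out = list(params)
    for i in range(1, len(out) - 1):
        if out[i] is None and out[i - 1] is not None and out[i + 1] is not None:
            out[i] = tuple((a + b) / 2
                           for a, b in zip(out[i - 1], out[i + 1]))
    return out
```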
<Step 5 (S50)>
The step 5 extracts multiple dot elements separated at regular intervals in the Dot Extraction part (70) along the straight lines calculated in the step 4.
Among the multiple dot elements extracted by the Dot Extraction part (70), some dots lie along the left lane marking and the others lie along the right lane marking.
Fig. 12 shows the extracted dot elements (71) along straight lane markings, and Fig. 13 shows the extracted dot elements (72) along curved lane markings.
Finally, the extracted multiple dot elements are determined as a lane model using the Least Square Technique in the step 6.
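Sampling dot elements at regular intervals along one calculated straight line could be sketched as follows (the interval and the coordinate convention are illustrative choices, not taken from the disclosure):

```python
import math

def extract_dots(rho, theta, y_top, y_bottom, step=5):
    """Dot Extraction sketch: sample points at regular vertical intervals
    along the line rho = x*cos(theta) + y*sin(theta)."""
    dots = []
    for y in range(y_top, y_bottom + 1, step):
        # Solve the line equation for x at this row; assumes the line is
        # not horizontal (cos(theta) != 0).
        x = (rho - y * math.sin(theta)) / math.cos(theta)
        dots.append((x, y))
    return dots
```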
<Step 6 (S60)>
The step 6 determines a lane model using the Least Square Technique in the Lane Model Formation part (80) with the multiple dot elements extracted in the step 5.
The Lane Model Formation part (80) determines a reference line (a base lane marking) for lane departure detection. If the extracted dot elements are on straight lane markings, they are described by a linear equation (x = a + by) using the Least Square Technique in order to determine the lane model. If the extracted dot elements are on curved lane markings, they are described by a quadratic equation (x = a + by + cy²) using the Least Square Technique in order to determine the lane model.
Whether the dot elements are on straight or curved lane markings is identified as follows. First, the average gradient and the average variance of the straight lines over the Cell ROIs are calculated. A curved lane marking is determined if the calculated average variance is above a specific value and if the average gradient value continuously decreases or increases from the bottom to the top through the series of Cell ROIs of the left lane marking (or the right lane marking). In all other cases that do not satisfy these conditions, a straight lane marking is determined.
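The straight-versus-curved decision described above can be paraphrased in code (the variance threshold and the use of per-Cell-ROI line slopes are our assumptions about quantities the text leaves unspecified):

```python
import statistics

def is_curved(cell_slopes, var_threshold):
    """Curved iff the variance of the per-Cell-ROI line slopes exceeds a
    threshold AND the slopes change monotonically from the bottom Cell
    ROI to the top one; otherwise the marking is treated as straight."""
    if statistics.pvariance(cell_slopes) <= var_threshold:
        return False
    diffs = [b - a for a, b in zip(cell_slopes, cell_slopes[1:])]
    return all(d > 0 for d in diffs) or all(d < 0 for d in diffs)
```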
The benefit of this method is that it lowers the computing workload compared to directly fitting all edge pixels on the lane markings with the Least Square Technique. The method applies the Least Square Technique only to the dot elements extracted from the lines calculated by the Hough Transform, so unnecessary edges are omitted.
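The least-squares fit itself reduces to a small polynomial fit over the extracted dots, for example (the function names are ours, and NumPy's polyfit stands in for whatever solver the embodiment uses):

```python
import numpy as np

def fit_lane_model(dots, curved):
    """Fit x = a + b*y (straight) or x = a + b*y + c*y**2 (curved) to the
    extracted dot elements; returns coefficients in ascending powers."""
    xs = np.array([d[0] for d in dots], float)
    ys = np.array([d[1] for d in dots], float)
    # np.polyfit returns the highest power first, so reverse the result.
    return np.polyfit(ys, xs, 2 if curved else 1)[::-1]

def lane_x(coeffs, y):
    """Evaluate the lane model at vertical coordinate y."""
    return sum(c * y ** i for i, c in enumerate(coeffs))
```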
<Step 7 (S70)>
The step 7 judges the lane departure status in the Lane Departure Distinction part (90) by calculating the distances from the front wheels to the lane markings using the lane model from the step 6.
Specifically, a horizontal coordinate is calculated by substituting the vertical coordinate of the host car's front wheels in the image processed in the step 1 into the lane model equation.
Then, the distance from the front wheels to the lane markings can be calculated using the above horizontal and vertical coordinates and the width of the host car. The green numbers in Figs. 12 and 13 are the distances from the front wheels to the lane markings.
As the distance from the left lane marking (or the right lane marking) to the left-side (or right-side) front wheel of the host car is calculated as above, whether the car departs its lane can be judged based on the width information of the car, and the driver is then warned with sound or external signals by the Warning Output part (95).
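Putting the pieces together, the departure test of this step can be sketched as follows (coordinates are in pixels; the zero threshold and the dictionary return are illustrative choices, and a real system would convert pixel distances to metric distances using the car width as described above):

```python
def eval_lane(coeffs, y):
    """x = a + b*y (+ c*y**2), coefficients in ascending powers of y."""
    return sum(c * y ** i for i, c in enumerate(coeffs))

def check_departure(left_lane, right_lane, wheel_y,
                    left_wheel_x, right_wheel_x, threshold=0.0):
    """Substitute the front wheels' vertical coordinate into each lane
    model and measure the horizontal distance from wheel to marking; a
    distance at or below the threshold means the wheel has reached the
    marking and a warning should be output."""
    d_left = left_wheel_x - eval_lane(left_lane, wheel_y)
    d_right = eval_lane(right_lane, wheel_y) - right_wheel_x
    return {"left": d_left, "right": d_right,
            "warn": d_left <= threshold or d_right <= threshold}
```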
Particular terms and figures are used above for a better explanation of this invention, but their purpose is to aid understanding of the technical subject matter. Accordingly, terms and examples may be modified where a better substitute exists, as long as the subject of the invention is not changed. Such modified examples cannot be considered separately from the actual invention and remain within the scope of the patent.

Claims (9)

  1. Vision based Lane Departure Warning algorithm including features below:
    Step 1 that establishes a region (as Entire ROI) in order to detect lane markings in videos through the camera;
    Step 2 that divides Entire ROI into a series of Region of Interest (as Cell ROI);
    Step 3 that extracts edges in Cell ROI from images including lane markings using Edge Extraction method;
    Step 4 that extracts straight-line information with extracted edge information from step 3 using Hough Transform;
    Step 5 that extracts multiple dot elements at regular intervals along the straight lines from Step 4;
    Step 6 that determines a lane model by Least Square Technique with extracted multiple dot elements in step 5; and
    Step 7 that judges lane departure by the lane model determined in Step 6.
  2. Vision based Lane Departure Warning algorithm including features below:
    In addition to Steps 1 and 2 as claimed in Claim 1, a process that transforms color images imported through the camera into black-and-white images; and a process that emphasizes yellow components in the transformed black-and-white images.
  3. Vision based Lane Departure Warning algorithm including features below:
    In addition to Step 2 as claimed in Claim 1, each Cell ROI is a square obtained by dividing Entire ROI horizontally and vertically, and the Cell ROIs are placed continuously along the left and right lane markings in Entire ROI.
  4. Vision based Lane Departure Warning method including features below:
    In addition to Claim 2, Sobel Edge extraction is used for Edge Extraction in Step 3.
  5. Vision based Lane Departure Warning method including features below:
    In addition to Claim 3, Step 4-1 calculates the average slope of the straight lines extracted in the Cell ROIs in Step 4;
    In Step 4, Step 4-2 removes irregular straight lines whose gradient values deviate excessively from the average gradient value;
    If a straight line cannot be extracted in a Cell ROI (regarded as a non-calculated Cell ROI) in Step 4, Step 4-3 generates a virtual straight line for the non-calculated Cell ROI by analyzing the straight-line information in the previous and next Cell ROIs.
  6. Vision based Lane Departure Warning method including features below:
    In addition to Claim 3, in Step 4 a process judges whether a line in the images through the camera is a straight line or a curved line using the gradient values of the extracted straight lines;
    In Step 6, after the judgment process above, if an extracted line is judged to be a straight line, a linear equation with the Least Square Technique is used, and if an extracted line is judged to be a curved line, a quadratic equation with the Least Square Technique is used to determine a lane model.
  7. Vision based Lane Departure Warning method including features below:
    In addition to Claim 6, the step that judges whether an extracted line is straight or curved includes a process that calculates the average gradient and average variance of the straight lines in the Cell ROIs. A curved lane marking is determined if the calculated average variance is above a specific value and if the average gradient value continuously decreases or increases from the bottom to the top through the series of Cell ROIs of the left lane marking (or the right lane marking); in all other cases that do not satisfy these conditions, a straight lane marking is determined.
  8. Vision based Lane Departure Warning method including features below:
    In addition to Claim 1, Step 7 judges lane departure when the host car departs its lane, as determined from the calculated distance from the left lane marking (or the right lane marking) to the left-side (or right-side) front wheel of the host car and from the width information of the car.
  9. Vision based Lane Departure Warning system including features below:
    Camera part that records front driving scenes;
    Image Transformation part that converts color into black-and-white in input images from Camera part;
    Image Division part that divides Entire Region of Interest into a series of Region of Interest (Cell ROI);
    Edge Extraction part that extracts edges on lane markings in Cell ROI;
    Straight-line Information Extraction part that extracts straight-lines by Hough Transform with extracted edge information in Cell ROI as a result of Edge Extraction part;
    Dot Extraction part that extracts multiple dot elements at regular intervals along the extracted straight-lines as a result of Straight-line Information Extraction part;
    Lane Model Formation part that determines a lane model by Least Square Technique using extracted multiple dot elements as a result of Dot Extraction part;
    Lane Departure Distinction parts that judges whether the host car departs its lane or not as a result of Lane Model Formation; and
    Warning Output parts that warns the driver when the host car departs as a result of Lane Departure Distinction part.
PCT/KR2013/002519 2012-04-04 2013-03-27 Method and system for lane departure warning based on image recognition WO2013151266A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2012-0034996 2012-04-04
KR1020120034996A KR101392850B1 (en) 2012-04-04 2012-04-04 Method and system for lane departure warning based on image recognition

Publications (1)

Publication Number Publication Date
WO2013151266A1 true WO2013151266A1 (en) 2013-10-10

Family

ID=49300713

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2013/002519 WO2013151266A1 (en) 2012-04-04 2013-03-27 Method and system for lane departure warning based on image recognition

Country Status (2)

Country Link
KR (1) KR101392850B1 (en)
WO (1) WO2013151266A1 (en)


Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101962700B1 (en) * 2013-12-24 2019-03-28 주식회사 만도 System and method for lane recognition using defog sensor
KR101508357B1 (en) * 2013-12-26 2015-04-14 주식회사 유라코퍼레이션 Lane departure warning system and method for warning thereof
KR101584907B1 (en) * 2014-07-29 2016-01-22 울산대학교 산학협력단 Method and Apparatus for recognizing lane using region of interest
KR101689805B1 (en) * 2015-03-20 2016-12-27 동아대학교 산학협력단 Apparatus and method for reconstructing scene of traffic accident using OBD, GPS and image information of vehicle blackbox
KR101700813B1 (en) * 2016-02-02 2017-01-31 도로교통공단 Monitoring and analyzing system whether the bus mounted the kerb stone for first class heavy vehicle driving license test
KR102499398B1 (en) 2017-08-09 2023-02-13 삼성전자 주식회사 Lane detection method and apparatus
KR102132899B1 (en) 2018-10-08 2020-07-21 주식회사 만도 Route Generation Apparatus at Crossroad, Method and Apparatus for Controlling Vehicle at Crossroad
KR102224234B1 (en) 2019-07-04 2021-03-08 (주)에이아이매틱스 Lane departure warning determination method using driver state monitoring
KR102527522B1 (en) * 2020-11-13 2023-05-04 (주)에이트원 System for supporting work of aircraft mechanic, and method therefor

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6925194B2 (en) * 2000-12-27 2005-08-02 Hyundai Motor Company Curved lane recognizing method in road modeling system
EP1667086A1 (en) * 2003-09-24 2006-06-07 Aisin Seiki Kabushiki Kaisha Device for detecting road traveling lane
KR20110001425A (en) * 2009-06-30 2011-01-06 태성전장주식회사 Lane classification method using statistical model of hsi color information
US7937196B2 (en) * 2004-03-12 2011-05-03 Toyota Jidosha Kabushiki Kaisha Lane boundary detector
KR20110046607A (en) * 2009-10-29 2011-05-06 조재수 Lane detection method and Detecting system using the same


Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104112118A (en) * 2014-06-26 2014-10-22 大连民族学院 Lane departure early-warning system-based lane line detection method
CN104112118B (en) * 2014-06-26 2017-09-05 大连民族学院 Method for detecting lane lines for Lane Departure Warning System
DE102014109063A1 (en) * 2014-06-27 2015-12-31 Connaught Electronics Ltd. Method for detecting an object having a predetermined geometric shape in a surrounding area of a motor vehicle, camera system and motor vehicle
CN105620477A (en) * 2014-10-28 2016-06-01 奇瑞汽车股份有限公司 Lane departure early warning control method for vehicle
CN106184232A (en) * 2016-07-27 2016-12-07 北京航空航天大学 A kind of lane departure warning control method based on driver visual angle
CN106184232B (en) * 2016-07-27 2018-11-16 北京航空航天大学 A kind of lane departure warning control method based on driver visual angle
CN106950950A (en) * 2017-03-02 2017-07-14 广东工业大学 A kind of automobile doubling accessory system and control method based on camera
CN107284455A (en) * 2017-05-16 2017-10-24 浙江理工大学 A kind of ADAS systems based on image procossing
CN109271844A (en) * 2018-07-29 2019-01-25 国网上海市电力公司 Electrical cabinet electrical symbol recognition methods based on OpenCV
CN109271844B (en) * 2018-07-29 2023-03-28 国网上海市电力公司 Electrical cabinet electrical symbol recognition method based on OpenCV
CN109741314A (en) * 2018-12-29 2019-05-10 广州博通信息技术有限公司 A kind of visible detection method and system of part
CN110136222A (en) * 2019-04-17 2019-08-16 百度在线网络技术(北京)有限公司 Virtual lane line generation method, apparatus and system
CN111833291A (en) * 2019-04-22 2020-10-27 上海汽车集团股份有限公司 Semantic segmentation training set manual annotation evaluation method and device
CN111833291B (en) * 2019-04-22 2023-11-03 上海汽车集团股份有限公司 Semantic segmentation training set manual annotation evaluation method and device
CN110400348A (en) * 2019-06-25 2019-11-01 天津大学 The unmanned vibration equipment of the twin-rotor housing of view-based access control model is rotated to detection, scaling method
CN110400348B (en) * 2019-06-25 2022-12-06 天津大学 Method for detecting and calibrating steering of vibrating wheel of double-cylinder unmanned equipment based on vision
CN110789534A (en) * 2019-11-07 2020-02-14 淮阴工学院 Lane departure early warning method and system based on road condition detection
CN110962847A (en) * 2019-11-26 2020-04-07 清华大学苏州汽车研究院(吴江) Lane centering auxiliary self-adaptive cruise trajectory planning method and system
CN114801991A (en) * 2022-04-28 2022-07-29 东风汽车集团股份有限公司 Roadside vehicle falling prevention method and system

Also Published As

Publication number Publication date
KR20130112536A (en) 2013-10-14
KR101392850B1 (en) 2014-05-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13772871

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13772871

Country of ref document: EP

Kind code of ref document: A1