
CN103940434B - Real-time lane detection system based on monocular vision and inertial navigation unit - Google Patents

Real-time lane detection system based on monocular vision and inertial navigation unit Download PDF

Info

Publication number
CN103940434B
CN103940434B · Application CN201410129551.8A
Authority
CN
China
Prior art keywords
lane
frame
line
lane line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410129551.8A
Other languages
Chinese (zh)
Other versions
CN103940434A (en)
Inventor
薛建儒
崔迪潇
沈雅清
宋晔
张耿
杜少毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201410129551.8A priority Critical patent/CN103940434B/en
Publication of CN103940434A publication Critical patent/CN103940434A/en
Application granted granted Critical
Publication of CN103940434B publication Critical patent/CN103940434B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments specially adapted for navigation in a road network
    • G01C21/28 - Navigation in a road network with correlation of data from several navigational instruments
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/36 - Input/output arrangements for on-board computers
    • G01C21/3626 - Details of the output of route guidance instructions
    • G01C21/3658 - Lane guidance

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a real-time lane line detection system based on monocular vision and an inertial navigation unit. The system mainly comprises an off-line calibration and on-line projection transformation module, an inertial navigation unit module, a single-frame lane line detection module and a multi-frame lane line correlation check module based on the inertial navigation unit, and realizes real-time, accurate detection of multiple lanes on urban road surfaces. The lane line detection system is applicable to different road conditions and road types such as illumination changes, obstacle occlusion, large curvature, dashed and solid lines, and multiple lanes, and achieves stable, long-range lane line detection and perception in a variety of environments. The low-cost, low-power navigation equipment, image capture equipment and computing platform used by the system can be widely applied to fields such as visual navigation of driverless cars and vision-assisted driving of intelligent vehicles.

Description

Real-time lane line detection system based on monocular vision and inertial navigation unit
Technical Field
The invention belongs to the field of computer vision and intelligent traffic, relates to a lane line detection system in an urban environment, and particularly relates to a real-time urban multilane lane line detection system developed based on monocular vision and an inertial navigation unit.
Background
An intelligent vehicle (IV) is a comprehensive system integrating environmental perception, dynamic decision and planning, and intelligent control and execution, and is an important benchmark of a nation's research strength and industrial level. Environment sensing is one of the three key technologies of intelligent vehicles. At the present stage, active sensing systems built around laser, radar and structured light have been applied with partial success, but these sensors suffer from high power consumption, large volume and high manufacturing cost, which restricts their adoption in intelligent-vehicle research, development and application. Passive visible-light sensing, i.e. cameras, has significant advantages in power consumption, volume and cost. In recent years, many research teams and organizations have carried out substantial and effective research on perceiving traffic scenes with visible-light sensing, and vision-based perception of lanes and obstacles has become a research hotspot in the field.
A common camera-based lane line detection system relies solely on image information, and therefore cannot avoid missed and false detections whenever the camera fails to capture images containing enough lane information, for example during sudden illumination changes, close occlusion by obstacles, or lane changes. Such a system is difficult to run stably over long periods and long distances, which limits its applicability to specific environments. How to design and realize a lane line detection system that makes full use of the image information, adapts to changes in the external environment and in the vehicle-body posture, runs stably over long times and distances, and remains low-cost, low-power and highly portable, so as to meet the application requirements of unmanned driving and driver assistance, has therefore become one of the research focuses of real-time lane detection.
Disclosure of Invention
The invention aims to provide a real-time lane line detection system based on a monocular vision and inertial navigation unit.
In order to achieve the purpose, the invention adopts the following technical scheme:
A real-time urban multi-lane lane line detection system based on monocular vision and an inertial navigation unit, characterized in that: the real-time lane line detection system comprises an off-line calibration and on-line projection transformation module M1, an inertial navigation unit module M2, a single-frame lane line detection module M3 and a multi-frame lane line correlation check module M4. The module M1 acquires image signals, performs the homography projection transformation onto the plane in front of the vehicle, and obtains a top view of the image ahead of the vehicle. The module M2 measures the pose change of the vehicle body relative to its initial position. The module M3 performs lane line detection on a single-frame image. The module M4 associates the lane line data of the current frame with that of historical frames, checks the current detection result, provides prediction, and outputs an accurate multi-lane detection visualization result and parametric description equations at the current moment.
The off-line calibration and on-line projection transformation module M1 comprises an off-line calibration module M11 for the camera intrinsic and extrinsic parameters and a planar projection transformation module M12. The off-line calibration module M11 uses characteristic marker points to obtain the intrinsic parameters of the camera and the extrinsic position parameters of the camera relative to the vehicle-body coordinates. The planar projection transformation module M12 acquires RGB images on line, converts each image into a top plan view by inverse perspective mapping (IPM) using the calibrated intrinsic and extrinsic parameters, and performs chromaticity-space conversion to obtain a grayscale image.
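For illustration, the following Python/OpenCV sketch shows how such an IPM top view can be produced for ground-plane points; the intrinsic matrix, camera pitch, camera height, metric scale and file name below are placeholder assumptions, not the patent's calibration values.

```python
import cv2
import numpy as np

# Placeholder calibration; in the described system these come from module M11.
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])                                  # intrinsics
R = cv2.Rodrigues(np.array([[np.deg2rad(-75.)], [0.], [0.]]))[0]  # camera pitch
t = np.array([[0.], [0.], [1.5]])                             # camera height (m)

# For ground-plane points (z = 0) the image-to-plane mapping is a homography:
# x_img ~ K [r1 r2 t] [X Y 1]^T, so H = K [r1 r2 t].
H = K @ np.hstack([R[:, :1], R[:, 1:2], t])

img = cv2.imread("frame.png")            # one RGB frame from the camera
scale = 50.0                             # assumed pixels per metre in the IPM
M = np.array([[scale, 0., 200.],         # metric ground plane -> IPM pixels
              [0., -scale, 480.],
              [0., 0., 1.]])
# Warp the camera image into the top plan view (IPM).
ipm = cv2.warpPerspective(img, M @ np.linalg.inv(H), (400, 480))
gray = cv2.cvtColor(ipm, cv2.COLOR_BGR2GRAY)   # chromaticity-space conversion
```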
The inertial navigation unit module M2 measures the speed pulses and the heading angle of the vehicle body at the current moment with the speed sensor and the fiber-optic gyroscope, and estimates the vehicle-body pose in the local coordinate frame defined by the inertial navigation unit (INU) using a dead-reckoning (track position calculation) algorithm driven by the fusion-filtered speed and heading outputs, thereby providing high-precision, smooth and continuous local vehicle-body pose estimates.
The single-frame lane line detection module M3 extracts possible lane line information based on the basic assumption that lane line pixels are brighter than the surrounding road surface, and eliminates non-lane-line information using the parallelism among multiple lane lines, the association among multiple frames, and the region of interest provided by historical-frame detection results, completing lane line detection within the single-frame image.
The multi-frame lane line correlation check module M4 uses the vehicle-body pose measurements at two consecutive sensing moments (the previous moment and the current moment) together with the corresponding lane line detection results, and judges whether the spatial relationship between the two detection results conforms to the change described by the two pose measurements. Specifically, the historical-frame detection result is transformed to the current moment according to the pose change between the two frames, yielding its parametric form under the current vehicle-body pose, and is then matched against the current-frame detection result. If they match, the current-frame result is accepted; otherwise the current-frame result is discarded and the transformed historical result is used as the detection result of the current moment. The final result is used to generate the predicted region of interest for the next moment. Meanwhile, the semantic information of the detection result is refined with the multi-frame detection results and the three-lane semantic model, providing information such as the dashed/solid attribute of each lane line and the relative position of the vehicle body.
Through the above four modules, the real-time lane line detection system based on monocular vision and an inertial navigation unit detects road-surface lane lines in urban environments, uses the vehicle-body pose information to associate and check consecutive multi-frame lane line detection results, refines the semantic description of the lanes, provides the region of interest for the next moment, and achieves continuous and stable multi-lane detection.
The system mainly comprises an off-line calibration and on-line projection transformation module, an inertial navigation unit module, a single-frame lane line detection module and a multi-frame lane line correlation check module based on the inertial navigation unit, and realizes real-time, accurate detection of multiple lanes on urban road surfaces. The lane line detection system is applicable to different road conditions and road types such as illumination changes, obstacle occlusion, large curvature, dashed and solid lines, and multiple lanes, and achieves stable, long-range lane line detection and perception in a variety of environments. The system comprises the following parts: the visible-light sensor is a Basler Scout scA640-74gc Ethernet-interface digital camera fitted with a Computar 8 mm fixed-focal-length manual-iris megapixel lens; the inertial navigation unit comprises a fiber-optic gyroscope and a speed sensor; the system's computation runs on a vehicle-mounted blade server cPCI-6910; all modules communicate via UDP packets over gigabit Ethernet. The system stably provides lane line detection results for three lanes (the left lane, the current lane and the right lane) within a range of at least 50 m in front of the vehicle-mounted camera at a rate of 10 Hz, gives the dashed/solid attribute of each lane line, and meets the system's multi-lane and real-time detection performance requirements. The system is low-cost and low-power with good overall portability, and is suitable for large-scale deployment and application.
The working principle of the lane line detection system is as follows. The off-line calibration and on-line projection transformation module calibrates the extrinsic and intrinsic parameters of the camera using a three-line calibration method and the Caltech camera calibration toolbox, acquires the current road image in real time from the Basler Scout scA640-74gc digital camera, and performs inverse perspective mapping (IPM) with the intrinsic and extrinsic parameters to obtain a top view of the road surface. The inertial navigation unit module acquires the angle measurements of the fiber-optic gyroscope and the speed pulses of the speed sensor in real time, and estimates the current attitude and position of the vehicle (relative to the system's initial moment) using a filtering fusion algorithm and a dead-reckoning algorithm. The single-frame lane line detection module performs inter-frame association in image space based on the local-highlight feature and the parallelism constraints of lane lines, and obtains the lane line detection result of the current frame. The multi-frame lane line correlation check module receives the result of the single-frame lane line detection module and the consecutive multi-frame vehicle-body pose measurements of the inertial navigation unit module, checks the historical-frame result against the current-frame result using the two frames' lane line detections and the corresponding pose measurements to obtain the final, verified lane line detection result, and uses the verified result to generate the prediction region for the next moment.
The invention is characterized by adopting a low-cost, high-precision lane line detection setup composed of a Basler industrial camera, a fiber-optic gyroscope and a speed sensor to realize multi-lane detection in urban environments.
Compared with the prior art, the invention has the technical effects that:
1. Compared with traditional lane line detection systems, the invention makes full use of both the image data and the vehicle-body pose data: the temporal alignment of pose data with image data is used to achieve spatial alignment across multi-frame lane lines. This effectively solves the problem of stable detection when lane lines are briefly lost or unobserved in extreme situations such as high-speed turning or sudden illumination changes, and greatly improves the adaptability of the system.
2. The invention develops a low-cost, high-precision local vehicle-body pose measurement system that eliminates noisy observations using fusion filtering and the dead-reckoning principle, reduces accumulated reckoning error, and provides continuous, smooth local vehicle-body positioning.
3. The invention is composed of a Basler industrial camera, a speed sensor and a fiber-optic gyroscope, and has the characteristic of low cost.
Drawings
FIG. 1 is a diagram of the system hardware relationship of the present invention.
FIG. 2 is a software system framework diagram of the present invention.
FIG. 3 is a diagram of a three-lane model defined by the present invention.
FIG. 4 is a frame diagram of the local pose measurement system based on the inertial navigation unit.
FIG. 5 is a schematic diagram of a vehicle body model and dead reckoning defined by the present invention.
FIG. 6 is a comparison between the measurement accuracy of the local pose measurement system based on the inertial navigation unit and the accuracy of the differential GPS.
FIG. 7 is a relationship between a local coordinate system and a vehicle body coordinate system defined by the present invention.
FIG. 8 is a block diagram of a linear interpolation pose alignment algorithm employed by the present invention.
Fig. 9 is a system framework diagram of a single-frame lane line detection module and a multi-frame lane line association check module according to the present invention.
Fig. 10 is a display diagram of the lane marking detection result and the corresponding interest area according to the present invention.
Fig. 11 is a diagram of an intermediate result of single-frame lane line detection and multi-frame lane line association check detection according to the present invention.
Fig. 12 is an explanatory view of the lane line parallelism screening rule of the single-frame lane line detecting section of the present invention.
Fig. 13 is an explanatory diagram of the region of interest expansion rule of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, the sensing configuration for real-time urban multi-lane detection based on monocular vision and an inertial navigation unit, and the overall hardware connection diagram as installed on the "Kuafu No. 1" intelligent vehicle of Xi'an Jiaotong University, are shown. The system is composed of the following subsystems. 1. Power supply: a Yamaha generator provides 220 V AC power, relayed and output through a UPS; apart from the sensors, which use an additional power module for voltage conversion, all other equipment has its own matching conversion equipment. 2. Sensing: the monocular camera is a Basler digital camera, model scA640-74gc; the lens is a Computar megapixel fixed-focal-length lens, model M0814-MP; the inertial navigation unit consists of the vehicle-mounted speed sensor and a DSP-3000 fiber-optic gyroscope. 3. Communication: the system's communication data comprise (1) inputs: image data, synchronized timestamp data (sent by the planning blade) and synchronized vehicle-body pose data (sent by the pose blade); (2) outputs: image processing results. All data other than image data are carried as connectionless UDP packets. To prevent the large volume of image data from blocking the whole communication link and increasing the communication load, the image data use an independent link: a D-Link switch exchanges the image data, with the connection between the Basler camera and the corresponding blade server determined by network segment; the speed sensor and fiber-optic gyroscope data are sent as UDP packets through a serial port to the pose-module blade server, and the remaining UDP packets are exchanged through a Huawei 24-port switch S1024R-CN. 4. Computing platform: all computation runs on a dual Xeon processor blade server, model cPCI-6910.
The system is installed as a whole on a Toyota Highlander SUV prototype. The monocular camera scA640-74gc is mounted on the vehicle roof above the cab and is fitted with a UV filter and a rain cover; the speed sensor is the Highlander's factory original sensor, and the DSP-3000 fiber-optic gyroscope is installed in the middle of the vehicle's rear seat; the cPCI-6910 blade is mounted in an integrated cabinet behind the passenger compartment.
Referring to fig. 2, the overall software framework of the system comprises the 4 main functional modules described above. The off-line calibration and on-line projection transformation module M1 comprises an off-line calibration part M11 for the camera intrinsic and extrinsic parameters and a planar projection transformation part M12; the off-line calibration part uses characteristic marker points to compute the intrinsic parameters of the camera and a three-line calibration method to compute the extrinsic position parameters of the camera relative to the vehicle-body coordinates; the on-line projection transformation part acquires RGB images on line, converts each image into an IPM plan view using the calibrated intrinsic and extrinsic parameters, and obtains a grayscale image after chromaticity-space conversion. The inertial navigation unit module M2 estimates the vehicle-body pose in the local coordinate frame defined by the inertial navigation unit (INU) by fusion filtering of the speed pulses and heading measured by the speed sensor and fiber-optic gyroscope, and, in cooperation with the ICP (Iterative Closest Point) algorithm, fuses the local vehicle-body pose with the global pose in GPS coordinates, thereby providing high-precision, smooth and continuous vehicle-body pose estimation. The single-frame lane line detection module M3 extracts possible lane line information based on the basic assumption that lane line pixels are brighter than the surrounding road surface, and eliminates non-lane-line information using the parallelism among multiple lane lines, the association among multiple frames, and the region of interest provided by historical-frame detection results, completing lane line detection within the single-frame image. The multi-frame lane line correlation check module M4 judges, using the pose measurements at the previous and current moments and the corresponding lane line detections, whether the spatial relationship between the two frames' results conforms to the change described by the two pose measurements: the historical-frame result is transformed to the current moment according to the inter-frame pose change to obtain its parametric form under the current pose, and is then matched against the current-frame detection; if they match, the current-frame result is accepted, otherwise it is discarded and the transformed historical result serves as the detection result of the current moment, with the final result used to generate the predicted region of interest for the next moment. Meanwhile, the semantic information of the detection result is refined with the multi-frame detection results and the three-lane semantic model, providing information such as the dashed/solid attribute of each lane line and the relative position of the vehicle body.
Referring to fig. 3, to simplify lane line detection, the invention relies on the following basic assumptions: 1) flat road surface: this assumption guarantees the feasibility of IPM while reducing the original 6-D INU measurement (x, y, z, θ, α, β) to the 3-D measurement (x, y, θ), where (x, y, z) is the vehicle-body position in the global coordinates defined by GPS, θ is the heading angle of the camera, α the pitch angle and β the roll angle; 2) any two lane lines remain essentially parallel, and the lane width changes slowly; 3) the precision of the INU module meets the requirements of lane line detection; 4) the lane layout conforms to the assumed 3-lane model: L1 denotes the left lane line of the lane currently occupied by the vehicle body, R1 the right lane line of the current lane, and L2 and R2 the next lines outwards, with each lane line carrying a dashed/solid attribute.
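As an illustrative data structure only (the names are assumptions, not taken from the patent), the 3-lane model with per-line dashed/solid attributes might be represented as:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class LaneLine:
    points: List[Tuple[float, float]]  # sampled (x, y) in vehicle coordinates
    is_solid: bool                     # dashed/solid attribute of the line

@dataclass
class ThreeLaneModel:
    # L1/R1 bound the current lane; L2/R2 bound the neighbouring lanes.
    L2: Optional[LaneLine] = None
    L1: Optional[LaneLine] = None
    R1: Optional[LaneLine] = None
    R2: Optional[LaneLine] = None
```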
Referring to fig. 4, the pose measurement system is shown as a block diagram. The local pose measurement system of the inertial navigation unit is designed around a dead-reckoning algorithm and computes the carrier's local pose from real-time speed and heading. The raw speed output of the odometer and the raw heading output of the gyroscope are preprocessed before position reckoning, so as to remove the speed and heading measurement errors and improve the accuracy of dead reckoning.
Referring to FIG. 5, dead reckoning (DR) is illustrated. Dead reckoning estimates the position at the current moment from the carrier's position at the previous moment, using the speed and attitude information provided by the on-board sensors. It relies entirely on information output by sensors inside the carrier and is therefore a fully self-contained local positioning system. Assuming the vehicle body moves in a two-dimensional plane, the speed sensor reads the rotation speed of the vehicle's front wheels; the origin of the vehicle-body coordinate system is placed at the centre of the rear axle, and during a turn the rear-wheel speed is the component of the front-wheel speed along the vehicle-body direction. The front-wheel steering angle α can be read from the sensor information. Taking the steering angle into account, the speed output is decomposed as v = v_front·cos α.
The real-time position of the vehicle body is obtained by averaging the speed-sensor and gyro heading readings of two consecutive frames and accumulating (integrating) the result in real time. The relative displacement over [k-1, k] is

ΔD_k = ΔT_[k-1,k] · (v_{k-1}·cos α_{k-1} + v_k·cos α_k) / 2

and the relative change of heading angle is Δθ_k = θ_k - θ_{k-1}, where ΔT_[k-1,k] is the time difference between the two data frames, the heading angle θ is given in real time by the gyroscope, v_k and v_{k-1} are given by the speed sensor, and the front-wheel steering angles α_{k-1} and α_k are given by the steering-angle sensor. The dead-reckoning update then takes the standard form

x_k = x_{k-1} + ΔD_k·cos(θ_{k-1} + Δθ_k/2)
y_k = y_{k-1} + ΔD_k·sin(θ_{k-1} + Δθ_k/2)
θ_k = θ_{k-1} + Δθ_k
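A minimal Python sketch of this dead-reckoning update, under the same two-dimensional vehicle model and the mid-interval heading form reconstructed above, is:

```python
import math

def dead_reckon(x, y, theta_prev, theta, v_prev, v, alpha_prev, alpha, dt):
    """One dead-reckoning step: integrate displacement along the mean heading."""
    # Trapezoidal displacement: average of the front-wheel speed components
    # projected along the vehicle body at the two frame instants.
    dD = dt * (v_prev * math.cos(alpha_prev) + v * math.cos(alpha)) / 2.0
    dTheta = theta - theta_prev                 # gyro heading increment
    mid = theta_prev + dTheta / 2.0             # mean heading of the interval
    return x + dD * math.cos(mid), y + dD * math.sin(mid), theta
```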
In dead reckoning, the speed precision given by the odometer directly affects the precision of position reckoning. The vehicle speed is provided by an on-board odometer calibrated with differential GPS. Under the influence of wheel slip, uneven ground and calibration-factor precision, the speed information obtained from the odometer alone often cannot meet the precision that dead reckoning requires of the speed output. The MTI silicon micro-inertial measurement system installed on the vehicle provides real-time horizontal acceleration of the vehicle body; a filtering model is therefore established to estimate the real-time speed from the MTI acceleration and the odometer output, which effectively reduces the odometer's speed measurement error at low speed and on uneven road surfaces and improves the accuracy of dead reckoning. The state vector and state equation of the model are:
X_k = [v_k  a_k]^T
X_{k+1} = A·X_k + W_k
where:
v - vehicle-body speed;
a - vehicle-body acceleration;
A - state transition matrix;
W_k - process noise.
Let the system sampling period be t. From the constant-acceleration model, the state transition matrix is

A = [1  t]
    [0  1]

According to the constant-acceleration assumption, the acceleration is constant within one sampling period, and the change of acceleration during the period is treated as process noise. Assuming the maximum acceleration change within one period is σ, the process noise W_k can be expressed as

W_k = [σt/2  σ]^T

and the process covariance matrix is obtained from it as

Q_k = E[W_k·W_k^T] = σ²·[t²/4  t/2]
                        [t/2    1 ]
The odometer speed and the horizontal acceleration output by the MTI accelerometer are taken as the measurements, namely:

Z_k = H·X_k + V_k

where Z_k = [z_v  z_a]^T is the vector of measured speed and acceleration, H = I_2 is the second-order identity measurement matrix, and V_k = [v_v  v_a]^T is the measurement noise of speed and acceleration.
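A sketch of this velocity filter is given below; the measurement-noise variances are placeholder assumptions, and the process-noise form follows the reconstruction above.

```python
import numpy as np

def make_filter(t, sigma):
    """Constant-acceleration velocity filter with the matrices given above."""
    A = np.array([[1.0, t], [0.0, 1.0]])        # state transition
    W = np.array([[sigma * t / 2.0], [sigma]])  # worst-case process noise
    Q = W @ W.T                                 # process covariance
    H = np.eye(2)                               # measure v (odometer), a (MTI)
    Rm = np.diag([0.05, 0.2])                   # assumed measurement variances
    return A, Q, H, Rm

def kf_step(x, P, z, A, Q, H, Rm):
    """One Kalman predict/update cycle; x = [v, a], z = [v_odo, a_mti]."""
    x = A @ x                                   # predict state
    P = A @ P @ A.T + Q                         # predict covariance
    S = H @ P @ H.T + Rm                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ (z - H @ x)                     # update with measurement
    P = (np.eye(2) - K @ H) @ P
    return x, P
```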
Referring to fig. 6, the pose measurement accuracy of the inertial navigation unit module is compared with that of differential GPS (DGPS): the vehicle-body positioning result is smooth and stable, with a maximum measurement error of 3 m over a driving distance of 100 m, an accuracy that meets the requirement of the multi-frame lane line detection module for correcting occasional lane line false detections.
Referring to fig. 7, the change of observation of the same scene while the vehicle is driving is shown. Given a series of points in a traffic scene, the camera acquires some or all of this information during autonomous driving. The view of these points from a camera rigidly fixed to the vehicle body changes as the vehicle itself moves. The local coordinate system is defined by the INU; as the vehicle moves from V to V′, two local coordinate systems are defined at the two different spatial points. The motion between the two points can be represented by a translation matrix T and a rotation matrix R, and the two lane line observation points are denoted P (P′) and Q (Q′) in the two coordinate frames V and V′, respectively. The corresponding transformation can be expressed as
P′ = R·(P - T)
Q′ = R·(Q - T)
where:
R - 2×2 rotation matrix of the two-dimensional space;
T - two-dimensional translation vector;
R and T are obtained directly from the INU information.
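The following sketch applies this transformation to an array of lane points; the rotation sign convention is an assumption consistent with P′ = R(P − T) for a coordinate-frame change by heading increment dtheta.

```python
import numpy as np

def to_current_frame(points, dtheta, T):
    """Map lane points observed at pose V into the frame at V'.

    points: (N, 2) array in the old local frame; dtheta: heading change (rad);
    T: (2,) translation of the new origin expressed in the old frame.
    Implements P' = R (P - T) row by row.
    """
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, s], [-s, c]])   # frame change: rotate coordinates by -dtheta
    return (points - T) @ R.T
```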
Referring to fig. 8, in the actual system the INU and the visual lane line perception run at different operating frequencies. The system uses linear interpolation to time-synchronize the asynchronous sensors. Let the inertial navigation unit produce pose measurements p_0 and p_1 at times t_0 and t_1 respectively, and let t′_1 be the acquisition time of the visual image; the interpolated pose at that moment is

p′_1 = p_1 + (p_1 - p_0) × (t′_1 - t_1)/(t_1 - t_0)

where p′_1 = (x′_1, y′_1, θ′_1). Similarly, the vehicle-body pose corresponding to the image at time t′_2 is interpolated from the pose measurements p_1 and p_2 at times t_1 and t_2 according to the same formula.
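A minimal sketch of this alignment step (assuming the headings have been unwrapped so that linear interpolation is valid):

```python
def interpolate_pose(p0, t0, p1, t1, t_img):
    """Linearly interpolate an INU pose to an image timestamp.

    p0, p1: (x, y, theta) tuples at times t0 < t1; t_img: image capture time.
    Algebraically identical to p1 + (p1 - p0) * (t_img - t1) / (t1 - t0).
    """
    w = (t_img - t0) / (t1 - t0)
    # Note: theta must be unwrapped beforehand to avoid the +/-pi jump.
    return tuple(a + (b - a) * w for a, b in zip(p0, p1))
```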
Referring to fig. 9, the algorithm flow of the single-frame lane line detection module M3 and the multi-frame lane line correlation check module M4, which together form the complete lane line detection function, is shown. The overall algorithm comprises three stages. In the first stage, a horizontal filter detects all possible lane line candidates in the IPM image. In the second stage, false feature detections are rejected according to parallelism and equal-width constraints, with all thresholds chosen adaptively. The third stage is the check stage: the historical detection results are coordinate-transformed and the current detection result is tested for a match against them; this stage also completes the semantic information of the lane lines. The first and second stages belong to module M3 and the third stage to module M4.
Referring to fig. 10, the system's multi-lane detection result in an urban environment is shown. On the left is the original image captured by the monocular camera, annotated with the actual detection result, where L1, R1, L2 and R2 have the meanings given in FIG. 3, the label 'Solid' indicates a solid lane line and 'Dash' a dashed one. On the right is the region of interest of the current frame, obtained by transforming the historical lane line detections with the corresponding poses.
Referring to fig. 11, the intermediate results of the main algorithms of the lane line detection function are shown: a) feature extraction; b) feature enhancement and binarization; c) polyline fitting of features; d) polyline connection; e) multi-frame lane line correlation check; f) completion of the final detection result and its semantics.
Referring to fig. 11 (a), lane line feature extraction is based on the following assumption: if a pixel is brighter than the pixels one lane-line width to its left and to its right, it is considered a possible lane line pixel, and the differences between its gray value and those of the two side pixels are accumulated as the gray value of the current point; if a pixel and its neighbouring pixels do not satisfy this condition, its value is set to 0. In formula form:

d_{±m}(x, y) = b(x, y) - b(x, y ± m)
r(x, y) = d_{+m}(x, y) + d_{-m}(x, y), if d_{+m}(x, y) > 0 and d_{-m}(x, y) > 0; otherwise r(x, y) = 0

where r(x, y) is the computed lane line feature map, d_{+m}(x, y) is the feature obtained when the lane-line feature template is shifted to the right, d_{-m}(x, y) is the feature obtained when it is shifted to the left, b(x, y) is the original input gray image, and b(x, y ± m) is the original gray image shifted left or right by m pixels.
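A vectorized sketch of this filter, following the reconstructed formula (m is the lane-line width in IPM pixels):

```python
import numpy as np

def lane_feature(gray, m):
    """Bright-bar lane feature r(x, y) on a 2-D uint8 IPM grey image."""
    b = gray.astype(np.int32)
    d_plus = b - np.roll(b, -m, axis=1)    # b(x, y) - b(x, y + m)
    d_minus = b - np.roll(b, m, axis=1)    # b(x, y) - b(x, y - m)
    # Keep only pixels brighter than both side pixels; accumulate differences.
    r = np.where((d_plus > 0) & (d_minus > 0), d_plus + d_minus, 0)
    r[:, :m] = 0                           # np.roll wraps: borders are invalid
    r[:, -m:] = 0
    return np.clip(r, 0, 255).astype(np.uint8)
```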
Referring to fig. 11 (b), each extracted lane line feature pixel is replaced by the largest gray value in its 8-neighbourhood, which mitigates the blurring of lane lines caused by shadows and the like. The enhanced map is then split into a number of small blocks, global Otsu thresholding is applied to each block, and the blocks are merged to obtain the final binary image. The numbers of horizontal and vertical splits are controlled by the lane width and the average length of the dashed-line segments.
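The block-wise Otsu step can be sketched as follows; the tile counts are passed in as parameters, whereas the described system derives them from the lane width and the average dash length.

```python
import cv2
import numpy as np

def blockwise_otsu(feat, n_rows, n_cols):
    """Split the enhanced feature map into tiles and Otsu-threshold each one."""
    h, w = feat.shape
    out = np.zeros_like(feat)
    for i in range(n_rows):
        for j in range(n_cols):
            r0, r1 = i * h // n_rows, (i + 1) * h // n_rows
            c0, c1 = j * w // n_cols, (j + 1) * w // n_cols
            tile = np.ascontiguousarray(feat[r0:r1, c0:c1])
            if tile.max() > 0:             # skip empty tiles (no contrast)
                _, bw = cv2.threshold(tile, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
                out[r0:r1, c0:c1] = bw
    return out
```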
When historical detection information exists, the following operations are performed: 1. the splitting is completed as in the no-history case; 2. the historical data are mapped into the image coordinates of the current frame according to the vehicle-body pose change to generate regions of interest, and the pixels of the enhanced image covered by each region of interest are additionally binarized separately.
where o_i(x, y) is the global Otsu segmentation result and ir(x, y) is the region-of-interest Otsu segmentation result.
Referring to fig. 11 (c), connected-component detection is performed on the binarized image and each connected component is fitted as a polyline segment. After polyline fitting is completed for all connected components, polyline segments are connected under length and angle constraints. If historical data exist, associated polyline connection is performed: for each candidate polyline segment, the proportion of the segment falling inside each region of interest is computed, and all segments meeting the proportion threshold for the same region of interest are regarded as belonging to that region. Each polyline segment is assigned a vector S = [s_L2, s_L1, s_R1, s_R2], whose components give the probability of overlap between the polyline and the corresponding prediction region; 0 means the segment coincides with no prediction region. For each segment, if the largest non-zero component of the vector exceeds a threshold, the segment is judged to coincide with that region, and all segments sharing the same overlap region are connected into one polyline, forming a lane candidate line, as sketched below.
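As a sketch of the overlap scoring (raster masks are an illustrative choice, not necessarily the patent's representation):

```python
import numpy as np

def roi_scores(segment_mask, roi_masks):
    """Overlap vector S = [s_L2, s_L1, s_R1, s_R2] for one polyline segment.

    segment_mask: boolean raster of the segment's pixels; roi_masks: dict of
    boolean rasters for the four predicted regions of interest.
    """
    n = max(int(segment_mask.sum()), 1)
    # Fraction of the segment's pixels covered by each prediction region.
    return {k: float((segment_mask & m).sum()) / n for k, m in roi_masks.items()}
```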
Referring to fig. 11 (d), the candidate lane lines are filtered by width and parallelism. Filtering is first performed with a length threshold t_l. According to the relevant literature, 98% of the longest candidate lines belong to actual lane lines. All polylines longer than t_l form a reference group, called the reference lines, among which the longest lane line is selected as the first group of lane candidates for the parallelism computation, and the parallelism of the other candidate lines with respect to the reference line is evaluated. The computation proceeds as shown in fig. 12.
For the i-th reference line m_i, its parallelism with the j-th candidate line c_j is computed as follows:

x_start = max(m_{i-start}(x), c_{j-start}(x))
x_end = min(m_{i-end}(x), c_{j-end}(x))
w_k = |P_{m_i}(y)_k - P_{c_j}(y)_k|, at sample points x_k = x_start + k·STEP

where m_{i-start}(x) is the starting point of the reference line in the X direction (for the XY definition see FIG. 12), m_{i-end}(x) is its end point in the X direction, and analogously for the candidate line; P_{m_i}(y)_k is the value of the k-th sample point of the reference line in the Y direction, P_{c_j}(y)_k is the value of the k-th sample point of the candidate line in the Y direction, w_k is the Y-direction distance deviation of the k-th sample pair, and STEP is the spatial discretization unit of the sampling. If the mean of the w_k satisfies the following relation and their mean square deviation is below a threshold, the two lines are judged parallel:

n·W - d_w < w < n·W + d_w

where w is the detected width of the current candidate lane line relative to the reference lane line; W is the known standard lane width (according to the CJJ 37-2012 urban road design specification, the standard urban lane width is 3.25 m to 3.75 m); n is the number of lanes normalized by the standard lane width W; and d_w is the error introduced by detection and measurement during actual system operation, so the current reference lane width is W ± d_w. The above computation is repeated for all reference lines and candidate lines; the group of candidates with the highest evaluation value is the single-frame detection result under the current conditions, its length being required to exceed a threshold (8 m in this system, determined by the basic shortest-path planning requirement of the subsequent planning module).
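A sketch of the parallelism test under the reconstructed sampling rule; the lane width W, tolerance d_w and deviation threshold are placeholder values:

```python
import numpy as np

def parallelism(m_pts, c_pts, step=1.0):
    """Mean/std of the Y-offsets between a reference line and a candidate.

    m_pts, c_pts: (N, 2) arrays of (x, y) points sorted by x.
    Returns None when the two lines do not overlap in X.
    """
    x_start = max(m_pts[0, 0], c_pts[0, 0])
    x_end = min(m_pts[-1, 0], c_pts[-1, 0])
    if x_end <= x_start:
        return None                        # no X overlap, not comparable
    xs = np.arange(x_start, x_end, step)   # sample points x_k
    ym = np.interp(xs, m_pts[:, 0], m_pts[:, 1])
    yc = np.interp(xs, c_pts[:, 0], c_pts[:, 1])
    w = np.abs(ym - yc)                    # per-sample lateral deviation w_k
    return w.mean(), w.std()

def is_parallel(mean_w, std_w, W=3.5, dw=0.4, std_max=0.3):
    """Accept if the mean offset is near an integer number of lane widths."""
    n = round(mean_w / W)
    return n >= 1 and abs(mean_w - n * W) < dw and std_w < std_max
```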
When a historical detection result exists, the selection of the reference line differs slightly: two further evaluation criteria, the lane line type c and the matching accuracy a, are added to the length threshold l. Here l is normalized by the maximum look-ahead distance of 50 m; c is the lane line type flag (1 for solid, 0 for dashed), solid lines being assigned higher confidence and reliability in the invention; and a is the lane matching accuracy between the current frame and the historical frame for the same lane number. The line with the highest score is selected as the candidate line for the current frame.
Referring to fig. 11 (e), when no historical information exists, lane verification is performed from the image information alone: a lane candidate detected consistently over several consecutive frames is output, based on counting the detection results at different moments. When a historical detection result is available for association, the system checks it using the predictive information provided by the INU. The historical detection result is aligned with the current detection result in space and time through the pose change (R, T). After the lane-number attributes of the two frames are determined, the check is carried out as follows:

e_(a) = (1/N)·Σ_i |SC_(a)(i) - (SP_(a)(i)·R + T)|, if every e_(a)(i) = |SC_(a)(i) - (SP_(a)(i)·R + T)| ≤ t; otherwise e_(a) = ∞

where a ∈ {L1, R1}, SP_(a)(i) is the i-th sample point of the historical detection result a, SC_(a)(i) is the i-th sample point of the current detection result a, and e_(a) is the confidence evaluation of the current detection result (the line with the smallest value is selected as the reference line in the next frame). If the current detection cannot be matched with the historical result over several consecutive frames, the historical result is transformed into the output result of the current frame. A variable w_t indicates whether the currently transformed result is still valid:

w_t = e^[(SP_end(x) - l_min - V_xmin) / (V_xmax - V_xmin)]

where V_xmax and V_xmin define the field of view in the x direction and l_min is the shortest detection result allowed. When w_t < 1, the system judges that the currently transformed detection result is no longer sufficiently accurate, and the result is discarded.
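A sketch of the correlation check and validity weight; here the per-point error uses the Euclidean norm, which is one reading of the absolute-value notation above, and the threshold t is a placeholder:

```python
import numpy as np

def check_lane(SC, SP, R, T, t=0.3):
    """Matching error e_(a) between current detection SC and the transformed
    historical detection SP; SC, SP: (N, 2) point arrays."""
    pred = SP @ R.T + T                    # historical points in current frame
    err = np.linalg.norm(SC - pred, axis=1)
    # Mean error if every sample is within tolerance t, otherwise infinity.
    return err.mean() if np.all(err <= t) else np.inf

def ttl_weight(sp_end_x, l_min, vx_min, vx_max):
    """Validity weight w_t of a coasted (transformed) historical result;
    w_t < 1 means the transformed result should be discarded."""
    return np.exp((sp_end_x - l_min - vx_min) / (vx_max - vx_min))
```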
Referring to fig. 11 (f), after the current-frame lane verification is completed, semantic information refinement based on the 3-lane model is performed. Lane line attributes such as the dashed/solid type, lane number, and left/right position are important information for executing reasonable autonomous driving behaviour and complying with traffic regulations. The invention determines the dashed/solid type of a lane line by computing the proportion of dashed and solid segments. Initially, for safety, a lane line is forced to be treated as solid until it has been continuously detected as dashed, thereby completing the semantic refinement of the lane. From this dashed/solid attribute and the 3-lane model, the system derives the lane line detection region of interest for the next frame; see fig. 13. The region-of-interest expansion rule, taking L1 as an example, is: see fig. 13 a), when L1 is dashed, the region of interest is expanded to both sides; see fig. 13 b), when L1 is solid, the region of interest is expanded only to the right. R1 is handled similarly.
While the invention has been described in further detail with reference to specific preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (5)

1. A real-time lane line detection system based on monocular vision and an inertial navigation unit, characterized in that: the real-time lane line detection system comprises an off-line calibration and on-line projection transformation module M1, an inertial navigation unit module M2, a single-frame lane line detection module M3 and a multi-frame lane line correlation check module M4;
the off-line calibration and on-line projection transformation module M1 is used for acquiring image signals, completing the calibration of the intrinsic and extrinsic parameters of the image acquisition equipment based on feature-point detection and a three-line calibration method, completing the homography projection transformation onto the plane in front of the vehicle, and acquiring a top view of the image in front of the vehicle;
the inertial navigation unit module M2 is based on a vehicle-mounted single-axis fiber-optic gyroscope and a speed sensor, and is used for measuring the displacement change, orientation change and instantaneous speed of the vehicle body relative to the initial position in a two-dimensional plane;
the single-frame lane line detection module M3 is used for detecting the lane lines of the single-frame image based on the assumption that the lane lines are locally highlighted, parallel and equal in width;
the multi-frame lane line correlation check module M4 is used for associating the lane line data of the current frame and the historical frames, checking the current detection result, giving an accurate parametric description equation of the multi-lane result at the current moment, refining the result according to the dashed/solid line semantics of each lane line under a three-lane model, and providing detection-area prediction for the detection at the next moment;
the algorithm of the single-frame lane line detection module M3 and the multi-frame lane line correlation check module M4 forms an integral lane line detection function, and the integral algorithm comprises three stages: a first stage of detecting all possible lane line candidates in the top plan view image using a horizontal filter; in the second stage, rejecting characteristic false detection according to parallelism constraint and equal-width constraint, wherein all adopted thresholds are in a self-adaptive mode; the third stage is a checking stage, and whether the current detection result is matched with the coordinate conversion historical detection result is judged;
the multi-frame lane line correlation check module M4 determines, using the consecutive vehicle-body pose measurements at the previous and current sensing moments and the corresponding lane line detection results, whether the spatial position relationship of the two frames' results conforms to the change described by the two pose measurements: the historical-frame detection result is transformed to the current moment according to the inter-frame pose change to obtain its parametric form under the current vehicle-body pose, and is then matched against the current-frame lane line detection result; if they match, the current-frame detection result is accepted, otherwise the current-frame result is discarded and the transformed historical result is used as the detection result of the current moment, and the final result is used for generating the predicted region of interest at the next moment;
the historical detection result is aligned with the current detection result in space and time through the pose change (R, T), and after the lane-number attributes of the two frames are determined, the check is carried out as follows:
e_(a) = (1/N)·Σ_i |SC_(a)(i) - (SP_(a)(i)×R + T)|, if |SC_(a)(i) - (SP_(a)(i)×R + T)| = e_(a)(i) ≤ t for every i
e_(a) = ∞, otherwise
where a ∈ {L1, R1}; L1 denotes the left lane line of the lane currently occupied by the vehicle body and R1 the right lane line of that lane; SP_(a)(i) denotes the i-th sample point in the historical detection result a, SC_(a)(i) the i-th sample point in the current detection result a, and e_(a) the confidence evaluation of the current detection result; the line with the smallest confidence evaluation is selected as the reference line in the next frame; if the current detection result cannot be matched with the historical result over several consecutive frames, the historical result is transformed into the output result of the current frame, and a variable w_t indicates whether the currently transformed result is still valid;
w_t = e^[(SP_end(x) - l_min - V_xmin) / (V_xmax - V_xmin)]
where V_xmax and V_xmin define the field of view in the x direction and l_min is the shortest detection result allowed; when w_t < 1, the system judges that the currently transformed detection result is no longer sufficiently accurate, and the result is discarded.
2. The system of claim 1, characterized in that: the off-line calibration and on-line projection transformation module M1 comprises an off-line calibration module M11 for the camera intrinsic and extrinsic parameters and a planar projection transformation module M12; the off-line calibration module M11 uses characteristic marker points to compute the intrinsic parameters of the camera and a three-line calibration method to compute the extrinsic position parameters of the camera relative to the vehicle-body coordinates; the planar projection transformation module M12 acquires RGB images on line, uses the calibrated intrinsic and extrinsic parameters to complete the homography projection transformation onto the plane in front of the vehicle, converts the image into a top plan view, and obtains a grayscale image after chromaticity-space conversion.
3. The system of claim 1, characterized in that: the inertial navigation unit module M2, from the speed pulses and heading angle of the vehicle body measured at the current moment by the speed sensor and the single-axis fiber-optic gyroscope, uses fusion filtering and a dead-reckoning (track position calculation) algorithm driven by the filtered speed and heading outputs to estimate smooth, continuous local vehicle-body displacement change, orientation change and real-time speed in the local coordinate frame defined by the inertial navigation unit.
4. The system of claim 1, characterized in that: the single-frame lane line detection module M3 extracts possible lane line information based on the basic assumption that lane line pixels are brighter than the surrounding road surface, and eliminates non-lane-line information using the parallelism and equal width among multiple lane lines and the region of interest provided by historical-frame detection results, completing lane line detection within the single-frame image.
5. The system of claim 1, characterized in that: the multi-frame lane line correlation check module M4 refines the semantic information of the detection result by simultaneously using the multi-frame detection results and the three-lane semantic model, and provides the dashed/solid attribute of each lane line and its position relative to the vehicle body.
CN201410129551.8A 2014-04-01 2014-04-01 Real-time lane detection system based on monocular vision and inertial navigation unit Active CN103940434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410129551.8A CN103940434B (en) 2014-04-01 2014-04-01 Real-time lane detection system based on monocular vision and inertial navigation unit

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410129551.8A CN103940434B (en) 2014-04-01 2014-04-01 Real-time lane detection system based on monocular vision and inertial navigation unit

Publications (2)

Publication Number Publication Date
CN103940434A CN103940434A (en) 2014-07-23
CN103940434B true CN103940434B (en) 2017-12-15

Family

ID=51188203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410129551.8A Active CN103940434B (en) 2014-04-01 2014-04-01 Real-time lane detection system based on monocular vision and inertial navigation unit

Country Status (1)

Country Link
CN (1) CN103940434B (en)

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant