CN115027482A - Fusion positioning method in intelligent driving
- Publication number: CN115027482A (Application CN202210761697.9A)
- Authority: CN (China)
- Prior art keywords: curve, curvature, point, vehicle, lane line
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems related to ambient conditions
- B60W40/06—Road conditions
- B60W40/072—Curvature of the road
- B60W2556/35—Data fusion
- B60W2556/40—High definition maps
- Y02T10/40—Engine management systems
Abstract
The invention relates to a fusion positioning method in intelligent driving, which comprises the following steps: outputting a boundary fitting curve; calculating a lane line fitting curve; obtaining a heading angle calculation curve in the global coordinate system; obtaining the corresponding lane centerline point information; obtaining a lane line curve; obtaining a first discrete point set; obtaining a second discrete point set; mapping each first discrete point to a first curvature value; obtaining a first curvature set; mapping each second discrete point to a second curvature value; obtaining a second curvature set; outputting the reference point; correcting the longitudinal position of the vehicle; correcting the lateral position of the vehicle; and correcting the vehicle heading angle. The invention obtains a fitting model more accurate than the prior art, so the resulting correction points correct the abscissa of the current vehicle position more accurately; it calculates longitudinal position correction information that the prior art does not provide; and it further corrects the current heading angle of the vehicle.
Description
Technical Field
The invention relates to the technical field of intelligent driving, and in particular to a fusion positioning method in intelligent driving.
Background
High-precision positioning is one of the indispensable underlying technologies for intelligent automatic driving. At present the field has not settled on how to overcome the current technical limitations while guaranteeing the continuity, integrity and high availability of high-precision positioning; mainstream attention focuses on two broad categories, visual positioning and radar sensors.
First, it should be made clear that the common understanding of intelligent automatic driving rests on four performance indicators. Precision: the degree of agreement between the measured value and the true value. Integrity: the ability to raise an alarm when the service is unavailable. Continuity: the ability to keep informing the client that the system is working normally. Availability: the percentage of time that on-demand location service is provided.
In the prior art, visual positioning is a positioning mode in which a vehicle-mounted camera captures images of the environment and the vehicle position is obtained either by comparison with known map elements or by recursive computation; it can be divided into absolute positioning and relative positioning, where:
Absolute positioning draws on three main sources of features. Ground markings comprise the lane lines, zebra crossings, guide strips, ground characters, ground icons and the like printed on the road surface by the road administration; their semantics are very stable as long as there is no construction rework or wear from use. Overhead semantic objects comprise the road signs, traffic signs, traffic lights and the like above the road; their positions are essentially fixed and their semantic information is clear, so they are likewise well suited to positioning. Street view, by comparison, has not become as mainstream as the first two.
Relative positioning means vSLAM (visual simultaneous localization and mapping) and VO (visual odometry), both popular today. The two terms are often used together; the former includes the latter, and the discussion is usually framed as vSLAM, whose main additional features are loop closure and back-end optimization. But a normally driving vehicle will not return to a previous place shortly after passing it, so loop closure brings little benefit, and VO is what visual positioning mainly uses.
The theoretical basis of vSLAM and VO is multi-view geometry: images of the same object taken by a camera from different positions are necessarily similar yet slightly different. Image processing can find the feature points that correspond between two such images, and when enough feature points are matched, the rotation and translation between the two camera poses can be recovered by solving for the homography matrix or the essential matrix. When the camera's continuously collected data form a video sequence, solving the frame-to-frame transformations and composing them yields a trajectory from the initial position to the current position. Because this trajectory is relative, SLAM cannot complete the positioning task by itself and must be fused with absolute positioning: data from other sensors can be fed into the SLAM framework as additional constraints, and the local relations from visual observation or odometry can in turn be output as constraints to other positioning frameworks.
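The two-view step just described can be sketched with standard OpenCV calls. This is an illustrative outline, not the patent's method; the image paths and the camera intrinsic matrix K are placeholder assumptions.

```python
import cv2
import numpy as np

# Two consecutive frames of the video sequence (placeholder paths).
img1 = cv2.imread("frame_k.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_k1.png", cv2.IMREAD_GRAYSCALE)

# Detect and match feature points between the two images.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Assumed pinhole intrinsics; a real system would use calibrated values.
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])

# Solve the essential matrix, then recover the relative rotation and
# translation between the two camera poses; composing these frame-to-frame
# transforms yields the VO trajectory described above.
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
```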
Either way, the general process of visual positioning divides into four steps: image capture by the camera, image preprocessing, extraction of image features or semantics, and pose solving by multi-view geometry and optimization methods. The camera for a visual positioning task must take various hardware factors into account. For example, so that the vision algorithm can run at night, an infrared camera, a starlight camera or even a thermal imaging camera may be chosen; to cover different fields of view, a wide-angle lens, a fisheye lens or a surround-view camera may be chosen; and vehicle-mounted cameras come in many mounting positions and quantity configurations, with forward-looking monocular or forward-looking binocular setups mainly used for positioning tasks.
The prior-art visual positioning has the following defects:
1. with a small field angle the imaging range is small, which is unfavourable to positioning;
2. with a large field angle the capture range is large and more objects can be seen, but on a CCD target surface of the same size each object in the image becomes much smaller once a larger-field-angle lens is mounted, so a large field angle is unfavourable to completing certain visual tasks;
3. if a monocular camera is used, there is the further drawback that the scale of objects cannot be resolved.
Radar sensor technologies in the prior art are diverse; at present lidar is mainly used for vehicle positioning. Two-dimensional lidar is common in AGV or robot positioning and navigation. Its principle can be understood simply: a laser beam shines downward from above onto a continuously rotating mirror below, which turns it into a transverse scan; objects at different distances return the beam after different times, so the outline of the surrounding environment is obtained in the scanning plane. In the field of autonomous driving, however, three-dimensional lidar is used most.
The principle of three-dimensional lidar is as follows: the emitters and receivers of multiple laser beams are arranged at different angles with isolation plates between them, firing in a staggered sequence to avoid mutual interference; once the light-source assembly and the receiver assembly rotate, a multi-line scan of the surroundings is obtained, forming a set of points in a three-dimensional coordinate system called a point cloud.
In the prior art, lidar positioning divides into map-based positioning and map-free positioning, where:
Map-based positioning comprises two steps, map building and map matching. During map building, the point clouds are superimposed frame by frame along the vehicle's trajectory to obtain the point cloud map; the trajectory may be output by a high-precision integrated inertial navigation system or by point cloud SLAM.
Map-free positioning resembles visual odometry: after two frames of point cloud are matched and composed, a point cloud odometer can be constructed to achieve relative positioning, as in the point cloud positioning module of the open-source software Autoware; alternatively, plane features and corner features can be extracted from the point cloud and matched, building a point cloud feature odometer to achieve positioning, as in the open-source algorithm LOAM. A minimal sketch of one such odometry step follows.
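This outline is illustrative, not Autoware's or LOAM's actual pipeline: it assumes the Open3D library, and the file names and the 1.0 m correspondence threshold are placeholders.

```python
import numpy as np
import open3d as o3d

# Two consecutive lidar frames (placeholder file names).
source = o3d.io.read_point_cloud("scan_k.pcd")
target = o3d.io.read_point_cloud("scan_k_minus_1.pcd")

# Point-to-point ICP registration of the current frame against the previous
# one; 1.0 m is an assumed maximum correspondence distance.
result = o3d.pipelines.registration.registration_icp(
    source, target, 1.0, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

# result.transformation is the 4x4 relative pose between the two frames;
# chaining these transforms frame by frame yields a point cloud odometry track.
print(result.transformation)
```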
The radar positioning technology has the following defects:
1. direct point cloud mapping is likely to produce point cloud files of enormous size, so raw point cloud maps are unsuitable for large-scale use;
2. the price is high and the service life is short; mechanical lidar cannot pass automotive-grade qualification and so cannot genuinely be used in industry.
Therefore, the prior art has also considered positioning by fusion.
Fused positioning integrates all the positioning modes currently on the market, including GPS, base-station positioning, Wi-Fi positioning, Bluetooth positioning and sensor positioning.
At present, in mainland China, fusion positioning is implemented mainly through cooperation between third-party location service vendors and chip manufacturers, so that the fusion positioning technology is integrated at the hardware and system level of the autonomous vehicle.
The fusion positioning technology has the following advantages:
1. the positioning result is not affected by the surroundings: GPS positioning cannot be used indoors, in tunnels and in similar areas, and base-station and Wi-Fi positioning are constrained by network signal, but fusion positioning falls back automatically on sensors in such environments and is therefore immune to these environmental factors;
2. because fusion positioning builds the positioning logic and the key positioning techniques into the chip, the positioning capability is implanted with the chip into the vehicle's operation control system, and the system can set a positioning strategy automatically according to the hardware at hand;
3. because GPS positioning consumes considerable power, fusion positioning can, when conditions permit, position automatically in a low-power mode, for example by network positioning and sensors, greatly saving power;
4. the geo-fencing function is genuinely realized;
5. fusion positioning includes vehicle motion state information obtained through the acceleration sensor, from which the current motion state of the autonomous vehicle can be judged;
6. fusion positioning can record the vehicle's track in a low-power mode and track the vehicle position more continuously without depending on network or GPS signal, so revisiting the vehicle user's track is smoother and more power-saving than in the prior art.
However, the fusion positioning technology in the prior art is still imperfect. A typical prior-art fusion positioning scheme, entitled "A vehicle fusion positioning system and method", is disclosed in Chinese invention application No. CN202111055356.1 as follows:
First, the motion information of the vehicle is acquired through an inertial measurement module, a satellite navigation information receiving module and a wheel-speed information acquisition module installed on the intelligent driving vehicle, while a forward-mounted lane line acquisition module dynamically collects the lane line information ahead of the vehicle. The collected lane line information is processed to identify the lane line situation ahead of the vehicle and to calculate the relative distance between the vehicle and the lane line; the absolute position information of the vehicle, or part of it, is then calculated from the specific situation of the collected lane line and the lane line's specific position in the lane line map; finally, a secondary fusion positioning calculation yields the corrected fusion positioning result and the correction amount of the systematic errors, completing one cycle of the fusion positioning process.
The technical idea of that invention is to fuse and filter the vehicle-to-lane-line distances on both sides output by the vision sensor with the lane line information of the current lane in the world coordinate system and the vehicle position in the world coordinate system output by the high-precision map, thereby obtaining more accurate position information.
The fusion positioning technology in the prior art has the following defects:
1. since lidar, as described above, is not yet practical for industrial application, most existing fusion positioning technologies rely on a vision sensor and inevitably inherit the above disadvantages of the visual positioning technology;
2. the invention of Chinese application No. CN202111055356.1 can only correct the lateral distance of the vehicle within the lane and cannot constrain the longitudinal position; in a curve-driving scenario it cannot provide accurate position information, which compromises the control precision of the intelligent driving vehicle.
Disclosure of Invention
In view of these problems, the invention provides a fusion positioning method in intelligent driving, aiming to obtain a fitting model more accurate than the prior art so that the resulting correction points correct the abscissa of the current vehicle position more accurately; to calculate longitudinal position correction information that the prior art does not provide; and further to correct the current heading angle of the vehicle, taking the accuracy of fusion positioning one step further.
To solve these problems, the technical solution provided by the invention is as follows:
A fusion positioning method in intelligent driving comprises the following steps:
S100, outputting, through a vision sensor, boundary fitting curves for the two sides of the current lane of the vehicle to be positioned; the boundary fitting curves lie in the vehicle coordinate system;
S200, calculating a lane line fitting curve for the lane centerline from the boundary fitting curves;
S300, performing curve fitting on the lane centerline shape points recorded in the high-precision map to obtain a heading angle calculation curve in the global coordinate system; then converting the lane centerline recorded in the high-precision map into the vehicle coordinate system to obtain the corresponding lane centerline shape-point information, which comprises the lane centerline shape points;
S400, performing curve fitting on the converted lane centerline shape points output by the high-precision map to obtain a lane line curve;
S500, sampling the lane line fitting curve from the vision sensor at a manually preset sampling interval to obtain a first discrete point set, which comprises the first discrete points obtained by the sampling;
sampling the lane line curve from the high-precision map at the same sampling interval to obtain a second discrete point set, which comprises the second discrete points obtained by the sampling;
S600, intercepting all first discrete points whose distance from the vehicle front lies within a first credible distance range, the first credible distance range being manually preset; then extracting one by one the curvature information at each first discrete point's position on the lane line fitting curve; then taking the curvature information corresponding to each first discrete point as a first curvature value; then packaging all first curvature values to obtain a first curvature set;
S700, intercepting all second discrete points whose distance from the vehicle front lies within a second credible distance range, the second credible distance range being manually preset; then extracting one by one the curvature information at each second discrete point's position on the lane line curve; then taking the curvature information corresponding to each second discrete point as a second curvature value; then packaging all second curvature values to obtain a second curvature set;
S800, finding, from the first curvature set and the second curvature set, the point on the lane line curve taking which as reference makes the curvature error value of the first curvature set relative to the second curvature set minimal; then outputting that point as the reference point;
S900, subtracting the ordinate of the point in the first credible distance range closest to the vehicle front from the ordinate of the reference point extracted in S800 to obtain the ordinate of the current vehicle position, thereby correcting the longitudinal position of the vehicle;
S1000, intercepting, on the lane line fitting curve and on the lane line curve respectively, the points whose ordinate equals the ordinate of the current vehicle position obtained in S900; outputting the point intercepted on the lane line fitting curve as the first correction point and the point intercepted on the lane line curve as the second correction point;
S1100, finding in the first discrete point set the point with the minimum distance to the current vehicle position as the lateral calibration point; then substituting the lateral calibration point for the first correction point to obtain the abscissa of the current vehicle position, thereby correcting the lateral position of the vehicle;
S1200, calculating the angle between the lane line fitting curve and the y axis as the first included angle; then calculating the absolute heading angle information of the lane centerline on the heading angle calculation curve as the second included angle; then calculating the heading angle of the current vehicle position from the first and second included angles, thereby correcting the heading angle of the vehicle;
S1300, packaging and outputting the corrected ordinate of the current vehicle position, abscissa of the current vehicle position and heading angle of the current vehicle position as the final output of the positioning method.
Preferably, the lane line fitting curve is expressed as:
$x = c_3 y^3 + c_2 y^2 + c_1 y + c_0$
where x is the abscissa and y the ordinate in the vehicle coordinate system; the vehicle coordinate system takes the front of the vehicle as the x axis, with the heading direction of the vehicle head as positive, and satisfies the right-hand rule; $6c_3$ is the curvature change rate at the intersection of the lane line fitting curve with the y axis of the vehicle coordinate system; $2c_2$ is the curvature at that intersection; $c_1$ is the slope at that intersection; and $c_0$ is the intercept of the lane line fitting curve on the y axis of the vehicle coordinate system.
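As a consistency check (standard differential geometry, not additional patent text), the coefficient readings above follow from the derivatives of the cubic:

$$\kappa(y) = \frac{|x''(y)|}{\left(1 + x'(y)^2\right)^{3/2}}, \qquad x'(y) = 3c_3 y^2 + 2c_2 y + c_1, \qquad x''(y) = 6c_3 y + 2c_2$$

At the y-axis intersection ($y = 0$) this gives $\kappa(0) = 2c_2/(1 + c_1^2)^{3/2} \approx 2c_2$ for small slope $c_1$, and the curvature change rate there is approximately $6c_3$.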
Preferably, the lane centerline is expressed as:
$x = c_3 y^3 + c_2 y^2 + c_1 y + c_0'$
where $c_0'$ is the intercept of the lane centerline on the y axis of the vehicle coordinate system.
Preferably, the lane line curve is expressed as:
$x = b_3 y^3 + b_2 y^2 + b_1 y + b_0$
where $6b_3$ is the curvature change rate at the intersection of the lane line curve with the y axis of the vehicle coordinate system; $2b_2$ is the curvature at that intersection; $b_1$ is the slope at that intersection; and $b_0$ is the intercept of the lane line curve on the y axis of the vehicle coordinate system.
Preferably, the sampling interval is 20 cm.
Preferably, the curvature error value is expressed as:
$M = (r'_p - r_1)^2 + (r'_{p+1} - r_2)^2 + \cdots + (r'_{p+n-1} - r_n)^2, \quad 1 \le p \le m - n + 1$
where M is the curvature error value; $[r_1, r_2, r_3, r_4, r_5 \ldots r_n]$ is the first curvature set and n is its counter (number of elements); $r'_p, r'_{p+1}, \ldots, r'_{p+n-1}$ are successive elements of the second curvature set $[r'_1, r'_2, r'_3, r'_4, r'_5 \ldots r'_m]$; m is the counter of the second curvature set, and n < m.
Preferably, the first included angle is expressed as:
$\alpha = \arctan\left(\frac{dx}{dy}\Big|_{y=y_1}\right)$
where $\alpha$ is the first included angle and $\frac{dx}{dy}$ is the derivative of the lane line fitting curve:
$\frac{dx}{dy} = 3c_3 y_1^2 + 2c_2 y_1 + c_1$
where $y_1$ is the ordinate of the position after the lateral and longitudinal corrections.
Preferably, the second included angle is expressed as:
$\beta = \arctan\left(\frac{dx}{dy}\Big|_{y=y_2}\right)$
where $\beta$ is the second included angle and $\frac{dx}{dy}$ is the derivative of the heading angle calculation curve:
$\frac{dx}{dy} = 3e_3 y_2^2 + 2e_2 y_2 + e_1$
where $y_2$ is the ordinate, in the global coordinate system, of the position after the lateral and longitudinal corrections; $6e_3$ is the curvature change rate at the intersection of the heading angle calculation curve with the y axis; $2e_2$ is the curvature at that intersection; and $e_1$ is the slope at that intersection.
Preferably, the heading angle of the current vehicle position is expressed as:
$\theta = \alpha + \beta$
where $\theta$ is the heading angle of the current vehicle position.
Preferably, the distance from the vehicle front to the last of the first discrete points does not exceed the maximum of the first credible distance range;
the first credible distance range extends no more than 20 m from the vehicle front;
the second credible distance range extends no more than 60 m from the vehicle front.
Compared with the prior art, the invention has the following advantages:
1. on the basis of the prior art, the invention takes the polynomial of the current lane's centerline output by the vision sensor together with the corresponding lane centerline shape-point information in the vehicle coordinate system output by the high-precision map, fits a curve to the map's centerline shape points to obtain the corresponding curve equation, and then samples the vision sensor's lane line fitting curve and the high-precision map's lane line curve at equal intervals to obtain two sets of discrete points; a fitting model more accurate than the prior art is thus obtained, and the resulting correction points correct the abscissa of the current vehicle position more accurately;
2. because the cubic polynomial output by the camera is highly reliable only near the vehicle, the invention takes a set of points starting from the point $s_0$ at the vehicle front and extracts the curvature information of each point, obtains the curvatures of a series of lane centerline shape points from the high-precision map, slides the camera-derived curvatures along the map-derived curvatures to compute the minimum error, and from that minimum finally obtains the starting point among the high-precision map's shape points, from which the longitudinal position correction information of the vehicle is deduced; the prior art offers no such correction;
3. the invention obtains a more accurate abscissa of the current vehicle position and calculates the ordinate correction information unobtainable in the prior art, so the current heading angle of the vehicle can be further corrected on this basis, taking the accuracy of fusion positioning one step further, which the prior art does not provide.
Drawings
FIG. 1 is a schematic flow chart of a fusion positioning method according to an embodiment of the present invention;
FIG. 2 is a diagram of the lane line fitting curve, the lane line curve and the credible-distance selection in the vehicle coordinate system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the process of correcting the lateral distance after the longitudinal position has been corrected according to an embodiment of the present invention.
Detailed Description
The present invention is further illustrated by the following examples, which are intended to be purely exemplary and are not intended to limit the scope of the invention, as various equivalent modifications of the invention will occur to those skilled in the art upon reading the present disclosure and fall within the scope of the appended claims.
As shown in fig. 1, a fusion positioning method in intelligent driving includes the following steps:
S100, outputting, through a vision sensor, boundary fitting curves for the two sides of the current lane of the vehicle to be positioned; the boundary fitting curves lie in the vehicle coordinate system.
And S200, calculating a lane line fitting curve for the lane centerline from the boundary fitting curves.
In this embodiment, the lane line fitting curve is expressed by formula (1):
$x = c_3 y^3 + c_2 y^2 + c_1 y + c_0 \quad (1)$
where x is the abscissa and y the ordinate in the vehicle coordinate system; the vehicle coordinate system takes the front of the vehicle as the x axis, with the heading direction of the vehicle head as positive, and satisfies the right-hand rule; $6c_3$ is the curvature change rate at the intersection of the lane line fitting curve with the y axis of the vehicle coordinate system; $2c_2$ is the curvature at that intersection; $c_1$ is the slope at that intersection; and $c_0$ is the intercept of the lane line fitting curve on the y axis of the vehicle coordinate system.
In this embodiment, the lane centerline is obtained by translating the lane line fitting curve within the vehicle coordinate system, and is expressed by formula (2):
$x = c_3 y^3 + c_2 y^2 + c_1 y + c_0' \quad (2)$
where $c_0'$ is the intercept of the lane centerline on the y axis of the vehicle coordinate system.
S300, performing curve fitting on the lane centerline shape points recorded in the high-precision map to obtain a heading angle calculation curve in the global coordinate system; then converting the lane centerline recorded in the high-precision map into the vehicle coordinate system to obtain the corresponding lane centerline shape-point information, which comprises the lane centerline shape points.
And S400, performing curve fitting on the converted lane centerline shape points output by the high-precision map to obtain a lane line curve.
In this embodiment, the lane line curve is expressed by formula (3):
$x = b_3 y^3 + b_2 y^2 + b_1 y + b_0 \quad (3)$
where $6b_3$ is the curvature change rate at the intersection of the lane line curve with the y axis of the vehicle coordinate system; $2b_2$ is the curvature at that intersection; $b_1$ is the slope at that intersection; and $b_0$ is the intercept of the lane line curve on the y axis of the vehicle coordinate system.
S500, sampling the lane line fitting curve from the vision sensor at a manually preset sampling interval to obtain a first discrete point set, which comprises the first discrete points obtained by the sampling.
Sampling the lane line curve from the high-precision map at the same sampling interval to obtain a second discrete point set, which comprises the second discrete points obtained by the sampling.
In this embodiment, the sampling interval is 20 cm.
It should be noted that this step yields two sets of discrete points, the first discrete point set and the second discrete point set; the subsequent steps select suitable points from these discrete points to correct the vehicle position and heading angle, so as to obtain a result close to the true value. A sampling sketch follows.
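A minimal Python sketch of the equal-interval sampling of S500; only the 0.2 m interval comes from the embodiment, and all coefficient values are placeholders invented for illustration.

```python
import numpy as np

def sample_cubic(c3, c2, c1, c0, y_max, step=0.2):
    """Sample x = c3*y^3 + c2*y^2 + c1*y + c0 at equal steps along y (metres)."""
    ys = np.arange(0.0, y_max + step, step)
    xs = c3 * ys**3 + c2 * ys**2 + c1 * ys + c0
    return np.column_stack([xs, ys])  # rows of (abscissa, ordinate)

# First discrete point set: vision-sensor curve, sampled out to 20 m ahead.
first_points = sample_cubic(1e-5, 1e-3, 0.01, 1.8, y_max=20.0)
# Second discrete point set: high-precision-map curve, sampled out to 60 m.
second_points = sample_cubic(1.2e-5, 9e-4, 0.012, 1.75, y_max=60.0)
```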
S600, intercepting all first discrete points whose distance from the vehicle front lies within a first credible distance range, the first credible distance range being manually preset; then extracting one by one the curvature information at each first discrete point's position on the lane line fitting curve; then taking the curvature information corresponding to each first discrete point as a first curvature value; all first curvature values are then packaged, resulting in the first curvature set.
In this embodiment, the distance from the vehicle front to the last of the first discrete points does not exceed the maximum of the first credible distance range.
In this embodiment, the first credible distance range extends no more than 20 m from the vehicle front.
In this embodiment, the second credible distance range extends no more than 60 m from the vehicle front.
It should be noted that the cubic polynomial output by the vision sensor is highly reliable only near the vehicle; therefore a set of points $[x_1, x_2, x_3, x_4, x_5 \ldots x_n]$ is taken starting from the point $s_0$ at the vehicle front.
It is further noted that the position $s_n$ of the last point $x_n$ relative to the vehicle front must lie within the higher-confidence range of the lane line output by the camera, i.e., within the first credible distance range. A curvature-extraction sketch follows.
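Continuing the sampling sketch above, S600/S700 can be read as truncating each point set to its credible range and evaluating the cubic's curvature at every kept ordinate; the exact curvature formula from the consistency check in the summary is used, and all numeric values remain placeholders.

```python
import numpy as np

def curvature_set(points, c3, c2, c1, max_dist):
    """Keep samples within max_dist of the vehicle front, then return the
    curvature of x(y) = c3*y^3 + c2*y^2 + c1*y + c0 at each kept ordinate."""
    ys = points[points[:, 1] <= max_dist][:, 1]
    dx = 3 * c3 * ys**2 + 2 * c2 * ys + c1   # x'(y)
    ddx = 6 * c3 * ys + 2 * c2               # x''(y)
    return np.abs(ddx) / (1 + dx**2) ** 1.5

first_curvatures = curvature_set(first_points, 1e-5, 1e-3, 0.01, max_dist=20.0)
second_curvatures = curvature_set(second_points, 1.2e-5, 9e-4, 0.012, max_dist=60.0)
```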
S700, intercepting all second discrete points whose distance from the vehicle front lies within a second credible distance range, the second credible distance range being manually preset; then extracting one by one the curvature information at each second discrete point's position on the lane line curve; then taking the curvature information corresponding to each second discrete point as a second curvature value; all second curvature values are then packaged, resulting in the second curvature set.
S800, finding, from the first curvature set and the second curvature set, the point on the lane line curve taking which as reference makes the curvature error value of the first curvature set relative to the second curvature set minimal; this point is then output as the reference point.
In this embodiment, the curvature error value is expressed by formula (4):
$M = (r'_p - r_1)^2 + (r'_{p+1} - r_2)^2 + \cdots + (r'_{p+n-1} - r_n)^2, \quad 1 \le p \le m - n + 1 \quad (4)$
where M is the curvature error value; $[r_1, r_2, r_3, r_4, r_5 \ldots r_n]$ is the first curvature set and n is its counter (number of elements); $r'_p, r'_{p+1}, \ldots, r'_{p+n-1}$ are successive elements of the second curvature set $[r'_1, r'_2, r'_3, r'_4, r'_5 \ldots r'_m]$; m is the counter of the second curvature set, and n < m.
It should be noted that the logic of S800 is as follows:
First, the curvature error value M is defined.
Then, trying the candidate offsets one by one according to formula (4), the value p at which M is minimal is found.
And S900, subtracting the ordinate of the point in the first credible distance range closest to the vehicle front from the ordinate of the reference point extracted in S800 to obtain the ordinate of the current vehicle position, thereby correcting the longitudinal position of the vehicle.
It should be noted that the principle of S900 is as follows:
In the high-precision map, the value p corresponds to the first point of the lane line fitting curve output by the vision sensor; therefore subtracting the distance $s_0$ of that first point from the vehicle-front position from the distance $s_p$ of the point indexed by p gives the ordinate of the current vehicle position. A sketch combining the match and this correction follows.
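The S800/S900 logic can be sketched as a brute-force sliding match over the two curvature sets, as formula (4) prescribes, followed by conversion of the best index p into a longitudinal coordinate; the 0.2 m step and the assumption that $s_0 = 0$ (distances measured from the vehicle front) are ours.

```python
import numpy as np

def match_and_correct(first_curv, second_curv, step=0.2):
    """Slide the n camera curvatures along the m map curvatures, minimise
    the squared error M of formula (4), and return the corrected ordinate
    s_p - s_0 (with s_0 taken as 0 here)."""
    n, m = len(first_curv), len(second_curv)
    errors = [np.sum((second_curv[p:p + n] - first_curv) ** 2)
              for p in range(m - n + 1)]
    p_best = int(np.argmin(errors))  # offset with minimal curvature error M
    return p_best * step

y_corrected = match_and_correct(first_curvatures, second_curvatures)
```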
As shown in FIG. 3, S1000: on the lane line fitting curve and on the lane line curve respectively, the points whose ordinate equals the ordinate of the current vehicle position obtained in S900 are intercepted; the point intercepted on the lane line fitting curve is output as the first correction point, and the point intercepted on the lane line curve is output as the second correction point.
The abscissa of the first correction point is $c_0$ in formula (1), and the abscissa of the second correction point is $b_0$ in formula (3).
S1100, finding in the first discrete point set the point with the minimum distance to the current vehicle position as the lateral calibration point; the lateral calibration point then replaces the first correction point to give the abscissa of the current vehicle position, realizing the correction of the lateral position of the vehicle.
It should be noted that the principle of S1100 is as follows:
Near the point $s_p$, the point D closest to the vehicle is found; a closest point to the vehicle can then be found on the lane line fitting curve and another on the lane line curve. For these two points the lateral distances to the vehicle are calculated, denoted $d_1$ and $d_2$. Because the lane line fitting curve is closer to the vehicle, its confidence is higher, and the lateral offset $d_1$ is smaller than $d_2$; this difference in lateral offset between the lane line fitting curve and the lane line curve is exactly what the lateral correction must remove, so on the segment of length $d_2$ the distance is shrunk to $d_1$, yielding the lateral correction point, as sketched below.
S1200, calculating the angle between the lane line fitting curve and the y axis as the first included angle; then calculating the absolute heading angle information of the lane centerline on the heading angle calculation curve as the second included angle; then calculating the heading angle of the current vehicle position from the first and second included angles, thereby correcting the heading angle of the vehicle.
In this embodiment, the first included angle is obtained by differentiating the polynomial of the lane line fitting curve output by the vision sensor, formula (5):
$\alpha = \arctan\left(\frac{dx}{dy}\Big|_{y=y_1}\right) \quad (5)$
where $\alpha$ is the first included angle and $\frac{dx}{dy}$ is the derivative of the lane line fitting curve, formula (6):
$\frac{dx}{dy} = 3c_3 y_1^2 + 2c_2 y_1 + c_1 \quad (6)$
where $y_1$ is the ordinate of the position after the lateral and longitudinal corrections.
In this embodiment, the second included angle is obtained by differentiating the polynomial of the curve fitted from the high-precision map, formula (7):
$\beta = \arctan\left(\frac{dx}{dy}\Big|_{y=y_2}\right) \quad (7)$
where $\beta$ is the second included angle and $\frac{dx}{dy}$ is the derivative of the heading angle calculation curve, formula (8):
$\frac{dx}{dy} = 3e_3 y_2^2 + 2e_2 y_2 + e_1 \quad (8)$
where $y_2$ is the ordinate, in the global coordinate system, of the position after the lateral and longitudinal corrections; $6e_3$ is the curvature change rate at the intersection of the heading angle calculation curve with the y axis; $2e_2$ is the curvature at that intersection; and $e_1$ is the slope at that intersection.
In this embodiment, the heading angle of the current vehicle position is expressed by formula (9):
$\theta = \alpha + \beta \quad (9)$
where $\theta$ is the heading angle of the current vehicle position.
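Formulas (5) through (9) reduce to arctangents of the two cubics' derivatives at the corrected ordinates. A minimal sketch, with all coefficient values and ordinates invented:

```python
import numpy as np

def tangent_angle(a3, a2, a1, y):
    """Angle between the cubic x(y) and the y axis at ordinate y,
    i.e. arctan of dx/dy = 3*a3*y^2 + 2*a2*y + a1."""
    return np.arctan(3 * a3 * y**2 + 2 * a2 * y + a1)

alpha = tangent_angle(1e-5, 1e-3, 0.01, y=12.4)   # first angle, camera curve
beta = tangent_angle(2e-5, 8e-4, 0.02, y=305.6)   # second angle, map curve
theta = alpha + beta                              # heading angle, formula (9)
```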
S1300, packaging and outputting the corrected ordinate of the current vehicle position, abscissa of the current vehicle position and heading angle of the current vehicle position as the final output result of the positioning method.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or the claims is intended to mean a "non-exclusive or".
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. A fusion positioning method in intelligent driving, characterized by comprising the following steps:
S100, outputting, through a vision sensor, boundary fitting curves for the two sides of the current lane of the vehicle to be positioned; the boundary fitting curves lie in the vehicle coordinate system;
S200, calculating a lane line fitting curve for the lane centerline from the boundary fitting curves;
S300, performing curve fitting on the lane centerline shape points recorded in the high-precision map to obtain a heading angle calculation curve in the global coordinate system; then converting the lane centerline recorded in the high-precision map into the vehicle coordinate system to obtain the corresponding lane centerline shape-point information, which comprises the lane centerline shape points;
S400, performing curve fitting on the converted lane centerline shape points output by the high-precision map to obtain a lane line curve;
S500, sampling the lane line fitting curve from the vision sensor at a manually preset sampling interval to obtain a first discrete point set, which comprises the first discrete points obtained by the sampling;
sampling the lane line curve from the high-precision map at the same sampling interval to obtain a second discrete point set, which comprises the second discrete points obtained by the sampling;
S600, intercepting all first discrete points whose distance from the vehicle front lies within a first credible distance range, the first credible distance range being manually preset; then extracting one by one the curvature information at each first discrete point's position on the lane line fitting curve; then taking the curvature information corresponding to each first discrete point as a first curvature value; then packaging all first curvature values to obtain a first curvature set;
S700, intercepting all second discrete points whose distance from the vehicle front lies within a second credible distance range, the second credible distance range being manually preset; then extracting one by one the curvature information at each second discrete point's position on the lane line curve; then taking the curvature information corresponding to each second discrete point as a second curvature value; then packaging all second curvature values to obtain a second curvature set;
S800, finding, from the first curvature set and the second curvature set, the point on the lane line curve taking which as reference makes the curvature error value of the first curvature set relative to the second curvature set minimal; then outputting that point as the reference point;
S900, subtracting the ordinate of the point in the first credible distance range closest to the vehicle front from the ordinate of the reference point extracted in S800 to obtain the ordinate of the current vehicle position, thereby correcting the longitudinal position of the vehicle;
S1000, intercepting, on the lane line fitting curve and on the lane line curve respectively, the points whose ordinate equals the ordinate of the current vehicle position obtained in S900; outputting the point intercepted on the lane line fitting curve as the first correction point and the point intercepted on the lane line curve as the second correction point;
S1100, finding in the first discrete point set the point with the minimum distance to the current vehicle position as the lateral calibration point; then substituting the lateral calibration point for the first correction point to obtain the abscissa of the current vehicle position, thereby correcting the lateral position of the vehicle;
S1200, calculating the angle between the lane line fitting curve and the y axis as the first included angle; then calculating the absolute heading angle information of the lane centerline on the heading angle calculation curve as the second included angle; then calculating the heading angle of the current vehicle position from the first and second included angles, thereby correcting the heading angle of the vehicle;
S1300, packaging and outputting the corrected ordinate of the current vehicle position, abscissa of the current vehicle position and heading angle of the current vehicle position as the final output of the positioning method.
2. The fusion positioning method in intelligent driving according to claim 1, wherein the lane line fitting curve is expressed as:
$x = c_3 y^3 + c_2 y^2 + c_1 y + c_0$
where x is the abscissa and y the ordinate in the vehicle coordinate system; the vehicle coordinate system takes the front of the vehicle as the x axis, with the heading direction of the vehicle head as positive, and satisfies the right-hand rule; $6c_3$ is the curvature change rate at the intersection of the lane line fitting curve with the y axis of the vehicle coordinate system; $2c_2$ is the curvature at that intersection; $c_1$ is the slope at that intersection; and $c_0$ is the intercept of the lane line fitting curve on the y axis of the vehicle coordinate system.
3. The fusion positioning method in intelligent driving according to claim 2, wherein the lane centerline is expressed as:
$x = c_3 y^3 + c_2 y^2 + c_1 y + c_0'$
where $c_0'$ is the intercept of the lane centerline on the y axis of the vehicle coordinate system.
4. The fusion positioning method in intelligent driving according to claim 3, wherein the lane line curve is expressed as:
$x = b_3 y^3 + b_2 y^2 + b_1 y + b_0$
where $6b_3$ is the curvature change rate at the intersection of the lane line curve with the y axis of the vehicle coordinate system; $2b_2$ is the curvature at that intersection; $b_1$ is the slope at that intersection; and $b_0$ is the intercept of the lane line curve on the y axis of the vehicle coordinate system.
5. The fusion positioning method in intelligent driving according to claim 4, wherein the sampling interval is 20 cm.
6. The fusion positioning method in intelligent driving according to claim 5, wherein the curvature error value is expressed as:
$M = (r'_p - r_1)^2 + (r'_{p+1} - r_2)^2 + \cdots + (r'_{p+n-1} - r_n)^2, \quad 1 \le p \le m - n + 1$
where M is the curvature error value; $[r_1, r_2, r_3, r_4, r_5 \ldots r_n]$ is the first curvature set and n is its counter (number of elements); $r'_p, r'_{p+1}, \ldots, r'_{p+n-1}$ are successive elements of the second curvature set $[r'_1, r'_2, r'_3, r'_4, r'_5 \ldots r'_m]$; m is the counter of the second curvature set, and n < m.
7. The fusion positioning method in intelligent driving according to claim 6, wherein the first included angle is expressed as:
$\alpha = \arctan\left(\frac{dx}{dy}\Big|_{y=y_1}\right)$
where $\alpha$ is the first included angle and $\frac{dx}{dy}$ is the derivative of the lane line fitting curve:
$\frac{dx}{dy} = 3c_3 y_1^2 + 2c_2 y_1 + c_1$
where $y_1$ is the ordinate of the position after the lateral and longitudinal corrections.
8. The fusion positioning method in intelligent driving according to claim 7, wherein the second included angle is expressed as:
$\beta = \arctan\left(\frac{dx}{dy}\Big|_{y=y_2}\right)$
where $\beta$ is the second included angle and $\frac{dx}{dy}$ is the derivative of the heading angle calculation curve:
$\frac{dx}{dy} = 3e_3 y_2^2 + 2e_2 y_2 + e_1$
where $y_2$ is the ordinate, in the global coordinate system, of the position after the lateral and longitudinal corrections; $6e_3$ is the curvature change rate at the intersection of the heading angle calculation curve with the y axis; $2e_2$ is the curvature at that intersection; and $e_1$ is the slope at that intersection.
9. The fusion positioning method in intelligent driving according to claim 8, wherein: the heading angle of the current vehicle position is expressed by the following formula:

$\theta = \alpha + \beta$

wherein: $\theta$ is the heading angle of the current vehicle position.
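Read together, claims 7 to 9 compose the heading angle from the two included angles. A minimal Python sketch follows; treating each included angle as the arctangent of the corresponding curve's slope $dx/dy$ is an assumption, since the original formula images are not reproduced in the extracted text:

```python
import math

def heading_angle(y1, c3, c2, c1, y2, e3, e2, e1):
    """theta = alpha + beta (claim 9): alpha from the lane line fitting
    curve's derivative at y1 (claim 7), beta from the heading angle
    calculation curve's derivative at y2 (claim 8)."""
    alpha = math.atan((3 * c3 * y1 + 2 * c2) * y1 + c1)  # arctan x'(y1)
    beta = math.atan((3 * e3 * y2 + 2 * e2) * y2 + e1)   # arctan x'(y2)
    return alpha + beta
```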
10. The fusion positioning method in intelligent driving according to claim 9, wherein: the distance from the last of the first discrete points to the vehicle head does not exceed the maximum of the first credible distance range; the first credible distance range extends no more than 20 m from the vehicle head; and the second credible distance range extends no more than 60 m from the vehicle head.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210761697.9A CN115027482B (en) | 2022-06-29 | 2022-06-29 | Fusion positioning method in intelligent driving |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115027482A true CN115027482A (en) | 2022-09-09 |
CN115027482B CN115027482B (en) | 2024-08-16 |
Family
ID=83128623
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210761697.9A Active CN115027482B (en) | 2022-06-29 | 2022-06-29 | Fusion positioning method in intelligent driving |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115027482B (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103940434A (en) * | 2014-04-01 | 2014-07-23 | 西安交通大学 | Real-time lane line detecting system based on monocular vision and inertial navigation unit |
US20160018229A1 (en) * | 2014-07-16 | 2016-01-21 | GM Global Technology Operations LLC | Accurate curvature estimation algorithm for path planning of autonomous driving vehicle |
JP2016045144A (en) * | 2014-08-26 | 2016-04-04 | アルパイン株式会社 | Traveling lane detection device and driving support system |
JP2019059319A (en) * | 2017-09-26 | 2019-04-18 | 株式会社Subaru | Travel control device of vehicle |
CN109017780A (en) * | 2018-04-12 | 2018-12-18 | 深圳市布谷鸟科技有限公司 | A kind of Vehicular intelligent driving control method |
CN110969837A (en) * | 2018-09-30 | 2020-04-07 | 长城汽车股份有限公司 | Road information fusion system and method for automatic driving vehicle |
US20210362741A1 (en) * | 2018-09-30 | 2021-11-25 | Great Wall Motor Company Limited | Method for constructing driving coordinate system, and application thereof |
CN111516673A (en) * | 2020-04-30 | 2020-08-11 | 重庆长安汽车股份有限公司 | Lane line fusion system and method based on intelligent camera and high-precision map positioning |
CN113682313A (en) * | 2021-08-11 | 2021-11-23 | 中汽创智科技有限公司 | Lane line determination method, lane line determination device and storage medium |
CN113602267A (en) * | 2021-08-26 | 2021-11-05 | 东风汽车有限公司东风日产乘用车公司 | Lane keeping control method, storage medium, and electronic apparatus |
CN114002725A (en) * | 2021-11-01 | 2022-02-01 | 武汉中海庭数据技术有限公司 | Lane line auxiliary positioning method and device, electronic equipment and storage medium |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115993137A (en) * | 2023-02-22 | 2023-04-21 | 禾多科技(北京)有限公司 | Vehicle positioning evaluation method, device, electronic equipment and computer readable medium |
CN115950441A (en) * | 2023-03-08 | 2023-04-11 | 智道网联科技(北京)有限公司 | Fusion positioning method and device for automatic driving vehicle and electronic equipment |
CN116630928A (en) * | 2023-07-25 | 2023-08-22 | 广汽埃安新能源汽车股份有限公司 | Lane line optimization method and device and electronic equipment |
CN116630928B (en) * | 2023-07-25 | 2023-11-17 | 广汽埃安新能源汽车股份有限公司 | Lane line optimization method and device and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110057373B (en) | Method, apparatus and computer storage medium for generating high-definition semantic map | |
CN108802785B (en) | Vehicle self-positioning method based on high-precision vector map and monocular vision sensor | |
CN109696663B (en) | Vehicle-mounted three-dimensional laser radar calibration method and system | |
US11024055B2 (en) | Vehicle, vehicle positioning system, and vehicle positioning method | |
CN115027482A (en) | Fusion positioning method in intelligent driving | |
CN111856491B (en) | Method and apparatus for determining geographic position and orientation of a vehicle | |
JP5157067B2 (en) | Automatic travel map creation device and automatic travel device. | |
WO2018196391A1 (en) | Method and device for calibrating external parameters of vehicle-mounted camera | |
CN108983248A (en) | It is a kind of that vehicle localization method is joined based on the net of 3D laser radar and V2X | |
EP4321898A2 (en) | Detection device and method for adjusting parameter thereof | |
CN116685873A (en) | Vehicle-road cooperation-oriented perception information fusion representation and target detection method | |
JP2018533721A (en) | Method and system for generating and using localization reference data | |
CN112577517A (en) | Multi-element positioning sensor combined calibration method and system | |
CN112669354B (en) | Multi-camera motion state estimation method based on incomplete constraint of vehicle | |
CN103424112A (en) | Vision navigating method for movement carrier based on laser plane assistance | |
CN111077907A (en) | Autonomous positioning method of outdoor unmanned aerial vehicle | |
WO2012097077A1 (en) | Mobile mapping system for road inventory | |
CN112683260A (en) | High-precision map and V2X-based integrated navigation positioning precision improving system and method | |
JP2018077162A (en) | Vehicle position detection device, vehicle position detection method and computer program for vehicle position detection | |
CN114485654A (en) | Multi-sensor fusion positioning method and device based on high-precision map | |
CN114035167A (en) | Target high-precision sensing method based on roadside multi-sensors | |
CN113312403B (en) | Map acquisition method and device, electronic equipment and storage medium | |
CN106292660B (en) | Balance car course corrections device and method based on odometer and gray-scale sensor | |
CN115166721A (en) | Radar and GNSS information calibration fusion method and device in roadside sensing equipment | |
CN114705199A (en) | Lane-level fusion positioning method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||