CN119413185A - Autonomous navigation and positioning system and method for autonomous driving vehicle - Google Patents
- Publication number
- CN119413185A (application CN202411557905.9A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- information
- receiver
- unit
- satellite
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Navigation (AREA)
Abstract
The invention discloses an autonomous navigation and positioning system and method for an autonomous driving vehicle, belonging to the technical field of autonomous driving. Specifically, satellite signals are received through a GPS receiver to preliminarily determine the vehicle position, and vehicle sensors are started for real-time measurement with the results preprocessed; the preprocessed measurements are converted into position information in a global coordinate system, and the position and posture of the vehicle are calculated with a sensor fusion algorithm; a laser radar generates point cloud data that is matched against a high-precision map to determine the vehicle's precise position and to identify obstacles and road features; a vehicle-mounted camera captures environment images, from which visual feature information is extracted and fused with the environment perception information; based on the fused information and distance information, obstacle and road conditions are perceived in real time and an optimal driving route is determined with a path planning algorithm, realizing visual navigation and automatic obstacle avoidance and improving the safety and reliability of automated driving.
Description
Technical Field
The invention belongs to the technical field of automatic driving, and particularly relates to an autonomous navigation and positioning system and an autonomous navigation and positioning method for an automatic driving vehicle.
Background
Automated driving technology enables a vehicle to travel autonomously, avoid obstacles automatically, and plan paths automatically through advanced perception, decision, control and vehicle technologies. Within it, autonomous navigation and positioning technology determines the vehicle's position in space through various sensors and algorithms and plans an optimal driving route. The development of autonomous navigation and positioning for automated vehicles has moved from single-sensor positioning to multi-sensor fusion positioning. Early automated vehicles relied mainly on global navigation satellite systems, whose positioning accuracy is limited in scenes where satellite signals are easily blocked. With continuous progress in sensor technology and falling costs, automated vehicles began to adopt multi-sensor fusion positioning: by fusing information from sensors such as GNSS, INS, LiDAR and cameras, a vehicle can achieve more accurate and reliable positioning. However, this approach faces shortcomings and challenges such as increased system complexity, heavier data processing and computation loads, mutual interference among sensors, sensor fault and redundancy problems, environmental adaptability challenges, and cost. To overcome these, autonomous navigation and positioning algorithms for automated vehicles are continuously being optimized, for example by fusing multi-sensor information with a Kalman filtering algorithm to improve positioning accuracy, and by using deep learning algorithms to recognize and process visual information such as road markings and lane lines to assist vehicle positioning.
The patent with publication number CN118408558A discloses a vehicle autonomous navigation method, device and computer device based on vehicle positioning. The method acquires the target vehicle's current driving parameters during travel, together with lane line information and obstacle information in the driving environment, from image acquisition equipment on the target vehicle; determines a target planned path and the real-time position of the target vehicle from the current driving parameters, the lane line information and the obstacle information; and determines planned driving parameters from the target planned path and the real-time position, which are used for autonomous navigation of the target vehicle.
The patent with authorization publication number CN107462218B discloses a night panoramic vision relative positioning system and method for an autonomous navigation tractor. The system comprises three groups of binocular vision systems, an Ethernet switch and an industrial personal computer; each vision group contains two short-wave infrared cameras horizontally fixed in the same direction, and the three groups are arranged in an equilateral triangle to form a panoramic vision unit with 120° between groups, connected to the industrial personal computer through the Ethernet switch. In the positioning method, the three binocular vision systems simultaneously detect the same target in each direction of the environment, the vehicle motion is inferred from these detections, displacement vectors are formed in different coordinate systems, and the vectors are converted into the same coordinate system to realize relative positioning. The scheme is reasonably designed, simple in structure and convenient to operate; it detects the vehicle motion state from multiple directions and fuses the detection results, effectively improving the positioning accuracy of the tractor at night and providing data support for navigation.
The above prior art suffers from strong sensor dependence and poor environmental adaptability: the camera's field of view and resolution are limited and are affected by ambient light and weather conditions, so although obstacle avoidance and path planning can be performed from visual information, that capability may be limited.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the invention provides an autonomous navigation and positioning system and method for an automated driving vehicle. A GPS receiver receives satellite signals to preliminarily determine the vehicle position; vehicle sensors are started for real-time measurement and the results are preprocessed; the preprocessed measurements are converted into position information in a global coordinate system and the position and posture of the vehicle are calculated with a sensor fusion algorithm; a laser radar generates point cloud data that is matched against a high-precision map to determine the vehicle's precise position and to identify obstacle and road features; a vehicle-mounted camera captures environment images, from which visual feature information is extracted and fused with the environment perception information; based on the fused information and distance information, obstacle and road conditions are perceived in real time and an optimal driving route is determined with a path planning algorithm, realizing visual navigation and automatic obstacle avoidance and improving the safety and reliability of automated driving.
In order to achieve the above purpose, the present invention provides the following technical solutions:
an autonomous navigation positioning method of an autonomous vehicle, comprising:
Step S1, receiving signals transmitted by satellites through a GPS receiver, calculating a distance error between the receiver and the satellites by combining a carrier phase measurement method, thus preliminarily determining the position of a vehicle, starting a vehicle sensor to perform real-time measurement based on the preliminarily determined vehicle position, and preprocessing a measurement result;
S2, converting the preprocessed measurement result into position information in a global coordinate system, and, combined with the preliminarily determined vehicle position, calculating the position and posture of the vehicle in space by using a fusion algorithm;
S3, based on the position and the posture of the vehicle in the space, the laser radar emits laser beams, calculates the relative distance between the vehicle and the surrounding environment according to the reflection time, generates point cloud data, matches the point cloud data with a high-precision map in a vehicle system, determines the accurate position of the vehicle in a global coordinate system through a matching strategy, identifies obstacles and road characteristics in the environment by utilizing the three-dimensional characteristics of the point cloud data, and generates environment perception information;
S4, capturing an environmental image around the vehicle by the vehicle-mounted camera, analyzing the environmental image, extracting visual characteristic information of road marks, lane lines and obstacles, and fusing the extracted visual characteristic information with environment perception information to obtain fused characteristic information;
And S5, determining an optimal driving route by adopting a path planning algorithm based on the fusion characteristic information and the distance information, automatically driving along the extracted lane line by a visual navigation technology, automatically taking obstacle avoidance measures when encountering an obstacle, and updating the vehicle's position and posture information throughout.
Specifically, the specific steps of the step S1 include:
S1.1, a GPS receiver receives radio frequency signals from a plurality of GPS satellites, and the radio frequency signals are subjected to down-conversion and digital processing to obtain baseband signals;
S1.2, extracting carrier phase information from the baseband signal, and calculating from it the phase difference Δφ = φ_sat − φ_loc between the satellite carrier signal received by the receiver and the receiver's local oscillator reference signal, wherein φ_sat represents the phase of the satellite carrier signal received by the receiver and φ_loc represents the phase of the local reference signal generated by the receiver;
S1.3, calculating the accurate distance from the receiver to the satellite from the carrier wavelength and the phase difference obtained in step S1.2, giving the satellite ranging result d_n = λ_n · Δφ_n / (2π), wherein d_n represents the nth satellite ranging result, Δφ_n represents the phase difference between the nth satellite carrier signal received by the receiver and the receiver's local oscillator reference signal, λ_n represents the nth satellite carrier wavelength, and n represents the number of satellites.
Specifically, the specific steps of the step S1 further include:
s1.4, correcting the preliminary position according to the satellite ranging result, wherein the formula is as follows:
Where d n,cor denotes the exact distance of the corrected receiver to the nth satellite, Representing phase errors caused by multipath effects, α dian representing ionospheric delay correction factors, α dian representing tropospheric delay correction factors, α wei,t representing satellite clock correction factors, α jie,t representing receiver clock correction factors, β representing adjustment coefficients;
And S1.5, starting a vehicle sensor based on the corrected preliminary position information, measuring the sensor in real time, acquiring vehicle dynamic information, and preprocessing the vehicle dynamic information measured by the sensor.
Specifically, the specific steps of the step S3 include:
S3.1, based on the position and posture of the vehicle in space, the laser radar scans the vehicle's surroundings in all directions in an array scanning mode, and calculates the relative distance d = c · Δt / 2 between the vehicle and each point in the surrounding environment, using the difference Δt between the laser emission time and reception time and the speed of light c;
S3.2, acquiring scanning angle information of the laser radar, calculating the position of each scanning point in a three-dimensional space according to the scanning angle information and the relative distance data, and presenting the position in a form of point cloud data;
S3.3, matching the generated point cloud data with a high-precision map in a vehicle system, searching for characteristics in the point cloud and corresponding characteristics in the map, and iteratively adjusting the position of the vehicle in a global coordinate system according to a matching result;
S3.4, determining the accurate position of the vehicle in the global coordinate system through a matching strategy according to the position adjustment result in the step S3.3 by combining the preliminary position information and the vehicle sensor data;
S3.5, dividing and classifying the point cloud data by utilizing the three-dimensional characteristics of the point cloud data and combining the accurate position information of the vehicle in a global coordinate system, and identifying the obstacle and road characteristics in the environment;
And S3.6, calculating distance and speed information of the obstacle relative to the vehicle according to the position and the shape of the obstacle, and generating environment sensing information containing the obstacle and the road characteristics by combining the road characteristic information.
Specifically, the specific steps of the matching strategy in S3.4 include:
S3.41, preprocessing the vehicle sensor data, extracting characteristic points from the preprocessed vehicle sensor data by utilizing a characteristic point detection algorithm, and describing the extracted characteristic points to generate a characteristic descriptor;
S3.42, constructing a feature database from feature points and descriptors thereof in the high-precision map, and matching the feature points acquired by the vehicle sensor with the feature points in the feature database by using a distance-based matching algorithm to find the most similar matching pair;
S3.43, verifying the matching result, if verification is successful, calculating the preliminary position of the vehicle in the global coordinate system by using the shortest distance formula according to the feature point pair of the successful verification;
And S3.44, optimizing the preliminary position by utilizing an optimization algorithm by combining the position adjustment result and the preliminary position information in the step S3.3, and outputting the optimized vehicle position information to a vehicle control system.
Specifically, the specific steps of the step S5 include:
s5.1, receiving the fusion characteristic information and the distance information, and integrating the fusion characteristic information and the distance information to form vehicle perception data;
s5.2, abstracting a vehicle moving scene into an environment model based on vehicle perception data, searching in the known environment model by adopting a path planning algorithm, and determining a global optimal driving route from the current position to the target position;
S5.3, carrying out local path planning according to the global path, the real-time position of the vehicle and the environmental perception information, generating a local path planning result, and generating decision information according to the local path planning result and the vehicle state information;
s5.4, controlling the vehicle to run along the global optimal running route by using a path tracking algorithm according to the generated decision information;
s5.5, based on the global optimal driving route, using a visual navigation technology and an edge detection algorithm to extract lane line information from the road image, calculating the position and the posture of the vehicle relative to the lane line according to the lane line information, and simultaneously generating steering and speed control instructions to enable the vehicle to automatically drive along the extracted lane line;
s5.6, detecting the front obstacle in real time in the driving process, judging whether an obstacle avoidance measure is needed according to the position and speed information of the obstacle, and re-planning the driving path according to a local path planning algorithm if the obstacle avoidance measure is needed.
An autonomous navigation positioning system of an autonomous driving vehicle comprises a positioning module, an environment sensing module, a path planning module and a decision control module;
The positioning module is used for starting a sensor to measure in real time according to the preliminarily determined vehicle position, and fusing the measurement results to obtain the accurate position and the accurate posture of the vehicle in space;
The environment sensing module is used for generating point cloud data by utilizing a laser radar, matching the point cloud data with a high-precision map, determining the accurate position of a vehicle, and identifying obstacles and road features in the environment;
The path planning module is used for sensing surrounding obstacles and road conditions in real time according to the fusion characteristic information and the distance information, determining an optimal driving route and performing automatic navigation and obstacle avoidance;
And the decision control module is used for making driving decisions and controlling the vehicle to execute according to the results of the environment awareness and the path planning.
The positioning module comprises a GPS receiver unit, a carrier phase measuring unit, a data converting unit and a position and posture calculating unit;
the GPS receiver unit is used for receiving signals transmitted by satellites and providing preliminary position information for the vehicle;
the carrier phase measuring unit is used for combining GPS signals, calculating the distance error from the receiver to the satellite and improving the positioning precision;
the data conversion unit is used for converting the preprocessed sensor measurement result into position information in a global coordinate system;
The position and posture calculation unit is used for calculating the position and posture of the vehicle in the space by using a sensor fusion algorithm.
The path planning module comprises a path planning unit, a visual navigation unit and an obstacle avoidance execution unit;
the path planning unit is used for determining an optimal driving route according to the fusion characteristic information and the current state of the vehicle;
the visual navigation unit is used for navigating by using visual information;
The obstacle avoidance execution unit is used for automatically taking obstacle avoidance measures when encountering obstacles.
The decision control module comprises a decision unit and a control unit;
the decision unit is used for evaluating the current environmental condition and selecting a driving mode;
and the control unit is used for generating a control command according to the decision result, and directly controlling a hardware system of the vehicle to execute according to the control command.
Compared with the prior art, the invention has the beneficial effects that:
1. According to the autonomous navigation positioning method of the autonomous driving vehicle, the GPS signals, the sensor measurement and the laser radar point cloud data are combined, so that the system can accurately determine the position of the vehicle in the global coordinate system in real time, meanwhile, the obstacle and the road characteristic of the surrounding environment are identified, and the environment sensing capability of the vehicle is improved.
2. In the autonomous navigation and positioning method of an automated driving vehicle provided by the invention, based on the fused feature information and distance data, the system can intelligently plan an optimal driving route, realizing visual navigation and automatic obstacle avoidance; throughout the process the vehicle's position and posture information is continuously updated, achieving accurate navigation and intelligent driving and improving the safety and reliability of automated driving.
Drawings
FIG. 1 is a schematic diagram of an autonomous navigation positioning method for an autonomous driving vehicle according to the present invention;
FIG. 2 is a schematic flow chart of an autonomous navigation positioning method of an autonomous driving vehicle according to the present invention;
FIG. 3 is a schematic diagram of an autonomous navigation positioning system for an autonomous vehicle according to the present invention.
Detailed Description
Example 1
Referring to fig. 1-2, an embodiment of the invention provides an autonomous navigation positioning method for an autonomous driving vehicle, comprising the following steps:
Step S1, receiving signals transmitted by satellites through a GPS receiver, calculating a distance error between the receiver and the satellites by combining a carrier phase measurement method, thus preliminarily determining the position of a vehicle, starting a vehicle sensor to perform real-time measurement based on the preliminarily determined vehicle position, and preprocessing a measurement result;
S2, converting the preprocessed measurement result into position information in a global coordinate system, and, combined with the preliminarily determined vehicle position, calculating the position and posture of the vehicle in space by using a fusion algorithm;
further, the specific steps of step S2 include:
S2.1, determining an origin, a direction and a unit length of a global coordinate system, and converting a preprocessed measurement result into coordinate information in the global coordinate system through coordinate transformation, rotation and translation operations;
S2.2, inputting the converted coordinate information and GPS positioning result into a Kalman filter for fusion, and obtaining the position and the posture of the vehicle in space according to the output result of the Kalman filter, wherein the Kalman filter is the prior art content in the field and is not an inventive scheme of the application, and is not repeated here.
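By way of illustration only, the following is a minimal sketch of the Kalman-filter fusion in step S2.2, assuming a linear constant-velocity motion model and position-only measurements; the state layout, matrices and noise levels are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

# Minimal linear Kalman filter fusing converted sensor coordinates with GPS
# fixes (step S2.2). State: [x, y, vx, vy] under a constant-velocity model.
class KalmanFusion:
    def __init__(self, dt, meas_noise=2.0, proc_noise=0.1):
        self.x = np.zeros(4)                       # state estimate
        self.P = np.eye(4) * 100.0                 # state covariance
        self.F = np.array([[1, 0, dt, 0],          # state transition
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0],           # position is observed
                           [0, 1, 0, 0]], float)
        self.R = np.eye(2) * meas_noise**2         # measurement noise
        self.Q = np.eye(4) * proc_noise**2         # process noise

    def step(self, z):
        # Predict with the motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with a position measurement z = [x, y] in global coords.
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                          # fused position
```

Calling `step` once per measurement epoch with the converted coordinates yields the smoothed position; the posture (heading) would be carried in an extended state in a full implementation.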
S3, based on the position and the posture of the vehicle in the space, the laser radar emits laser beams, calculates the relative distance between the vehicle and the surrounding environment according to the reflection time, generates point cloud data, matches the point cloud data with a high-precision map in a vehicle system, determines the accurate position of the vehicle in a global coordinate system through a matching strategy, identifies obstacles and road characteristics in the environment by utilizing the three-dimensional characteristics of the point cloud data, and generates environment perception information;
S4, capturing an environmental image around the vehicle by using the vehicle-mounted camera, analyzing the environmental image by using a computer vision algorithm, extracting visual characteristic information of road marks, lane lines and obstacles, and fusing the extracted visual characteristic information with environment perception information to obtain fused characteristic information;
Further, the specific steps of step S4 include:
(1) Capturing a high-definition image of the surrounding environment of the vehicle by using the vehicle-mounted camera, and ensuring that the camera has a high resolution and a wide-angle view so as to capture more environment details;
(2) Preprocessing the captured image, including denoising, graying, contrast enhancement and the like, removing noise in the image by using a median filtering method, and converting a color image into a gray image to reduce the calculated amount, wherein the median filtering method and the gray conversion method are prior art contents in the field and are not inventive schemes of the application and are not repeated herein;
(3) Extracting visual characteristic information of road marks, lane lines and barriers by using an edge detection algorithm, and dividing an image into different areas by using a threshold segmentation algorithm so as to extract targets such as the road marks, the lane lines and the barriers, wherein the edge detection algorithm and the threshold segmentation algorithm are prior art contents in the field and are not inventive schemes of the application and are not repeated herein;
(4) Fusing the extracted visual characteristic information with environment perception information provided by a laser radar on a vehicle to obtain more accurate and comprehensive environment perception;
(5) And outputting the fused characteristic information to an automatic driving system of the vehicle.
And S5, based on the fusion characteristic information and the distance information, surrounding obstacles and road conditions are perceived in real time, an optimal driving route is determined by adopting a path planning algorithm, the vehicle automatically drives along the extracted lane line by a visual navigation technology, obstacle avoidance measures are automatically taken when an obstacle is encountered, and the vehicle's position and posture information is updated throughout.
The specific steps of the step S1 include:
S1.1, a GPS receiver receives radio frequency signals from a plurality of GPS satellites, and the radio frequency signals are subjected to down-conversion and digital processing to obtain baseband signals;
further, the specific steps of S1.1 include:
(1) The GPS receiving antenna senses various electromagnetic fields and interference including all visible GPS signals and transmits the radio frequency signals to the radio frequency front-end processing module;
(2) The radio frequency front end processing module filters and amplifies all visible GPS satellite signals received by the antenna, mixes them with a sine-wave local oscillator signal generated by the local oscillator, and down-converts them into intermediate frequency signals; the radio frequency front-end electronics are generally integrated in an application-specific integrated circuit (ASIC) chip, commonly called a radio frequency integrated circuit (RFIC);
(3) And the digital processing is that the analog intermediate frequency signal output by the radio frequency front end is converted into a digital intermediate frequency signal through an analog-to-digital converter so as to facilitate the subsequent digital signal processing.
S1.2, extracting carrier phase information from the baseband signal, and calculating from it the phase difference Δφ = φ_sat − φ_loc between the satellite carrier signal received by the receiver and the receiver's local oscillator reference signal, wherein φ_sat represents the phase of the satellite carrier signal received by the receiver and φ_loc represents the phase of the local reference signal generated by the receiver; when the phase difference is given in degrees, it must be converted into radians;
In calculating the phase difference, the receiver simultaneously generates a local reference signal having the same frequency as the carrier signal transmitted by the satellite, and the phase difference is obtained by comparing the phase of the received satellite carrier signal with the phase of the local reference signal.
It should be noted that, since the carrier signal is a periodic sinusoidal signal, the phase measurement can only measure the portion of less than one wavelength, and thus there is a problem of uncertainty of the whole cycle number, and in step S1.4 of the present invention, the problem of the whole cycle ambiguity needs to be corrected to obtain an accurate phase difference measurement value.
Further, the specific step of extracting carrier phase information from the baseband signal includes:
(1) The GPS satellite sends out the modulation signals containing the ranging codes according to the own satellite-borne clock, and meanwhile, the receiver receives the modulation signals containing the ranging codes and performs preliminary signal processing;
(2) Because the carrier signal is modulated on the ranging code, the ranging code and the satellite text modulated on the carrier are firstly removed, and the carrier is acquired again, namely the carrier is rebuilt, wherein the carrier rebuilding method can adopt a code correlation method which is the prior art content in the field and is not an inventive scheme of the application, and the description is omitted here;
(3) After the carrier is rebuilt, the phase locking is realized by using a phase-locked loop circuit, and after the phase locking, the local signal phase of the receiver is the same as the GPS carrier signal phase, and the carrier phase information is extracted at the moment, wherein the phase-locked loop circuit is the prior art in the field and is not an inventive scheme of the application, and is not repeated here.
S1.3, calculating the accurate distance from the receiver to the satellite from the carrier wavelength and the phase difference obtained in step S1.2, giving the satellite ranging result d_n = λ_n · Δφ_n / (2π), wherein d_n represents the nth satellite ranging result, Δφ_n represents the phase difference between the nth satellite carrier signal received by the receiver and the receiver's local oscillator reference signal, λ_n represents the wavelength of the nth satellite carrier, and n represents the number of satellites;
the carrier wavelength is known, and for the GPS system, the L1 carrier wavelength is about 19.03cm and the L2 carrier wavelength is about 24.42cm.
S1.4, correcting the preliminary position according to the satellite ranging results by the formula:
d_{n,cor} = d_n − β · (ε_duo + α_dian + α_dui + α_wei,t + α_jie,t)
wherein d_{n,cor} denotes the corrected accurate distance from the receiver to the satellite, ε_duo represents the phase error caused by multipath effects, α_dian represents the ionospheric delay correction factor used to correct the effect of the ionosphere on signal propagation, α_dui represents the tropospheric delay correction factor used to correct the effect of the troposphere on signal propagation, α_wei,t represents the satellite clock correction factor used to correct errors of the satellite clock, α_jie,t represents the receiver clock correction factor used to correct errors of the receiver clock, and β represents an adjustment coefficient;
It should be noted that the above formula comprehensively considers the main factors affecting ranging accuracy, including multipath effects, ionospheric delay, tropospheric delay, satellite clock error and receiver clock error, and accounts for other possible error sources through the adjustment coefficient β; this overall treatment makes the ranging result more accurate and reliable and helps to improve satellite navigation and positioning accuracy.
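The carrier-phase ranging of step S1.3 and the correction of step S1.4 can be sketched as follows. This is a hedged illustration: the integer-ambiguity handling and the exact combination of correction terms follow the reconstruction above, which is an assumption about the patent's garbled formula.

```python
import math

L1_WAVELENGTH = 0.1903  # m, approximate GPS L1 carrier wavelength

def carrier_phase_range(delta_phi_deg, wavelength=L1_WAVELENGTH, n_cycles=0):
    # Step S1.3: d_n = lambda_n * (N + delta_phi / 2*pi). The integer
    # ambiguity N (n_cycles) is assumed resolved by a separate step,
    # as the patent notes in its discussion of whole-cycle ambiguity.
    delta_phi = math.radians(delta_phi_deg)        # degrees -> radians
    return wavelength * (n_cycles + delta_phi / (2 * math.pi))

def corrected_range(d_n, eps_duo, a_dian, a_dui, a_wei_t, a_jie_t, beta=1.0):
    # Step S1.4 as reconstructed above: subtract the modelled error terms
    # (multipath, ionosphere, troposphere, satellite and receiver clocks),
    # scaled by the adjustment coefficient beta. Sign conventions and the
    # role of beta are assumptions, not confirmed by the source.
    return d_n - beta * (eps_duo + a_dian + a_dui + a_wei_t + a_jie_t)
```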
And S1.5, starting a vehicle sensor based on the corrected preliminary position information, measuring the sensor in real time, acquiring vehicle dynamic information, and preprocessing the vehicle dynamic information measured by the sensor.
The vehicle sensor comprises a camera, a laser radar and an inertial navigation unit, vehicle dynamic information comprises distance, speed, acceleration, direction and images, and preprocessing operation comprises data filtering, denoising, calibration and time synchronization.
The specific steps of S3 include:
S3.1, based on the position and posture of the vehicle in space, the laser radar scans the vehicle's surroundings in all directions in an array scanning mode, and calculates the relative distance d = c · Δt / 2 between the vehicle and each point in the surrounding environment, using the difference Δt between the laser emission time and reception time and the speed of light c;
In the scanning process, the laser radar emits laser beams to the surrounding environment, and after the laser beams irradiate the target object, part of the laser beams are reflected back and received by a receiving module of the laser radar.
S3.2, acquiring scanning angle information of the laser radar, calculating the position of each scanning point in a three-dimensional space according to the scanning angle information and the relative distance data, and presenting the position in a form of point cloud data;
further, the specific step of S3.2 includes:
(1) Acquiring scanning angle information and relative distance data;
(2) The coordinates of each scanning point relative to the laser radar in three-dimensional space are calculated through geometric transformation using the scanning angle information and the relative distance data, usually by converting polar coordinates into Cartesian coordinates; the specific conversion process is prior art content in the field and is not an inventive scheme of the application, and is not repeated here;
(3) The calculated three-dimensional coordinates are combined into point cloud data and stored in the form of a list or array, wherein each element represents the three-dimensional coordinates of one scan point.
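A minimal sketch of the time-of-flight and polar-to-Cartesian computations in steps S3.1-S3.2 follows; angles are in radians and the function names are illustrative.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_distance(dt):
    # Round-trip time-of-flight: the beam travels out and back,
    # so the one-way distance is c * dt / 2 (step S3.1).
    return C * dt / 2.0

def polar_to_cartesian(r, azimuth, elevation):
    # Convert a scan point (range, horizontal angle, vertical angle)
    # into sensor-frame Cartesian coordinates (step S3.2).
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return np.array([x, y, z])
```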
S3.3, matching the generated point cloud data with a high-precision map in a vehicle system, searching for characteristics in the point cloud and corresponding characteristics in the map, and iteratively adjusting the position of the vehicle in a global coordinate system according to a matching result;
Further, the specific step of S3.3 includes:
(1) Data acquisition: acquiring road data using a high-precision map acquisition vehicle, ensuring that the sensors work normally during acquisition and that the data are accurate and reliable;
(2) Processing the acquired point cloud and image data, including the steps of point cloud splicing, image enhancement and noise removal, so as to obtain high-quality point cloud data and image data, wherein the point cloud splicing, image enhancement and noise removal processes are prior art contents in the field and are not inventive schemes of the application and are not repeated herein;
(3) Extracting features from the processed point cloud data, wherein the features can be road elements such as lane lines, street lamps, telegraph poles and the like, can also be other objects with obvious identification, and simultaneously, extracting corresponding feature information from a high-precision map;
(4) Matching the extracted point cloud features with features in the high-precision map to find their correspondence; in the application, this step adopts the FPFH feature descriptor matching method, which is prior art content in the field and is not an inventive scheme of the application, and is not repeated here;
(5) Position adjustment: iteratively adjusting the position of the vehicle in the global coordinate system according to the matching result, so that the vehicle is positioned more accurately.
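The iterative adjustment in step (5) is commonly realized with an iterative-closest-point scheme (consistent with the iterative closest point algorithm named in S3.4 below). The sketch shows one such iteration under simplifying assumptions — point-to-point pairing, no outlier rejection — and is not the patent's exact procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(scan, map_pts):
    # One point-to-point ICP iteration: pair each scan point with its
    # nearest map point, then solve the rigid transform in closed form
    # (Kabsch/SVD). Repeating until the residual stops improving yields
    # the pose adjustment. scan: (N,3), map_pts: (M,3).
    tree = cKDTree(map_pts)
    _, idx = tree.query(scan)                # nearest map point per scan point
    matched = map_pts[idx]
    mu_s, mu_m = scan.mean(0), matched.mean(0)
    H = (scan - mu_s).T @ (matched - mu_m)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                           # optimal rotation
    if np.linalg.det(R) < 0:                 # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s                      # optimal translation
    return R, t
```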
S3.4, determining the accurate position of the vehicle in the global coordinate system through a matching strategy according to the position adjustment result in the step S3.3 by combining the preliminary position information and the vehicle sensor data;
Further, the specific step of S3.4 includes:
(1) Inputting the position adjustment result, the preliminary position information and the vehicle sensor data in the step S3.3;
(2) Preprocessing and fusing input data to form unified environment perception information;
(3) Extracting features from the fused data, and selecting key features for matching;
(4) Applying a matching strategy to match the extracted features with features in the high-precision map;
(5) Determining the accurate position of the vehicle in the global coordinate system through an optimization algorithm according to the matching result and the sensor data, wherein the optimization algorithm adopts the iterative closest point (ICP) algorithm, which is prior art in the field and is not an inventive scheme of the application and is not repeated herein;
(6) And verifying the determined position and outputting final position information.
S3.5, dividing and classifying the point cloud data by utilizing the three-dimensional characteristics of the point cloud data and combining the accurate position information of the vehicle in a global coordinate system, and identifying the obstacle and road characteristics in the environment;
further, the specific step of S3.5 includes:
(1) Inputting point cloud data acquired by a vehicle sensor and accurate position information of a vehicle in a global coordinate system;
(2) Carrying out preprocessing operations such as denoising, filtering and the like on the point cloud data to improve the data quality, and converting the point cloud data into a global coordinate system according to the vehicle position information;
(3) The method is characterized in that an edge-based segmentation method is adopted, and three-dimensional characteristics of point cloud data are utilized for segmentation, wherein the three-dimensional characteristics of the point cloud data comprise spatial positions and geometric forms, and meanwhile, the edge-based segmentation method is the prior art content in the field and is not an inventive scheme of the application, and details are not repeated herein;
(4) Classifying the segmented point cloud data by adopting a convolutional neural network model, and identifying different objects or features, wherein the convolutional neural network model is the prior art content in the field and is not an inventive scheme of the application and is not repeated herein;
(5) And identifying obstacles and road features in the environment according to the classification result, wherein the obstacles comprise vehicles, pedestrians and trees, and the road features comprise lane lines, road shoulders and traffic signs.
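The patent relies on edge-based segmentation and a convolutional network for classification (steps (3)-(4) above). As a much-simplified stand-in for illustration only, the sketch below shows the general segment-then-group idea using a height threshold and grid clustering; all thresholds are illustrative assumptions.

```python
import numpy as np

def segment_obstacles(points, ground_z=-1.5, cell=0.5, min_pts=10):
    # Crude segmentation sketch for step S3.5: drop near-ground returns
    # with a height threshold, then group the remaining points into 2-D
    # grid cells as proto-clusters. Real systems would use RANSAC ground
    # fitting and Euclidean clustering instead. points: (N,3) array.
    above = points[points[:, 2] > ground_z]          # remove ground returns
    keys = np.floor(above[:, :2] / cell).astype(int) # 2-D occupancy cells
    clusters = {}
    for p, k in zip(above, map(tuple, keys)):
        clusters.setdefault(k, []).append(p)
    # Keep only cells dense enough to be objects rather than noise.
    return [np.array(v) for v in clusters.values() if len(v) >= min_pts]
```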
And S3.6, calculating distance and speed information of the obstacle relative to the vehicle according to the position and the shape of the obstacle, and generating environment sensing information containing the obstacle and the road characteristics by combining the road characteristic information.
The specific steps of the matching strategy in S3.4 include:
S3.41, preprocessing the vehicle sensor data, extracting characteristic points from the preprocessed vehicle sensor data by utilizing a characteristic point detection algorithm, and describing the extracted characteristic points to generate a characteristic descriptor, wherein the characteristic point detection algorithm is the prior art content in the field and is not an inventive scheme of the application and is not repeated herein;
Further, the specific step of S3.41 includes:
(1) Acquiring raw data from vehicle sensors;
(2) Performing filtering, denoising, time alignment, coordinate conversion and other processing on the raw data;
(3) Detecting characteristic points in the preprocessed data by using a characteristic point detection algorithm;
(4) Describing the detected feature points by using SIFT descriptors to generate feature descriptors, wherein the feature descriptors are usually a vector and are used for representing local features of the feature points, and the SIFT descriptors are prior art contents in the field and are not inventive schemes of the application and are not repeated herein;
S3.42, constructing a feature database from feature points and descriptors thereof in the high-precision map, and matching the feature points acquired by the vehicle sensor with the feature points in the feature database by using a distance-based matching algorithm to find the most similar matching pair, wherein the distance-based matching algorithm is the prior art content in the field and is not an inventive scheme of the application and is not repeated herein;
S3.43, verifying the matching result, if the verification is successful, calculating the preliminary position of the vehicle in the global coordinate system according to the feature point pair of the verification success by using a shortest distance formula, wherein the shortest distance formula is the prior art content in the field and is not an inventive scheme of the application, and is not repeated herein;
and S3.44, combining the position adjustment result and the preliminary position information in the step S3.3, optimizing the preliminary position by using an optimization algorithm, and outputting the optimized vehicle position information to a vehicle control system.
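A hedged sketch of the distance-based matching and verification of steps S3.42-S3.43: Euclidean descriptor distance with Lowe's ratio test as the verification criterion. The ratio test is an assumption for illustration; the patent specifies only "distance-based matching" with verification.

```python
import numpy as np

def match_descriptors(query, database, ratio=0.8):
    # query: (N,D) descriptors from the vehicle sensors; database: (M,D)
    # descriptors from the high-precision map (M >= 2 assumed). Returns
    # index pairs (i, j) whose best match is clearly better than the
    # second best, serving as the verification of step S3.43.
    matches = []
    for i, q in enumerate(query):
        d = np.linalg.norm(database - q, axis=1)    # Euclidean distances
        j, k = np.argsort(d)[:2]                    # best and second best
        if d[j] < ratio * d[k]:                     # ratio test
            matches.append((i, j))
    return matches
```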
The specific steps of the step S5 include:
s5.1, receiving the fusion characteristic information and the distance information, and integrating the fusion characteristic information and the distance information to form vehicle perception data;
s5.2, abstracting a vehicle moving scene into an environment model based on vehicle perception data, searching in the known environment model by adopting a path planning algorithm, and determining a global optimal driving route from the current position to the target position;
further, the specific step of S5.2 includes:
(1) Creating the software environment in which the path planning method executes, and, based on this environment, mapping the traditional, abstract path problem in real space onto a quantized problem that a computer can process;
(2) Path searching: an optimal path is found through continuous optimization, with minimum travel time, shortest distance or similar criteria chosen as the evaluation index according to actual requirements, as sketched after this list;
(3) Path optimization: smoothing the searched path to meet requirements such as ride comfort and stability of vehicle running.
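The patent does not fix a particular search algorithm for step S5.2; A* over an occupancy grid is one common choice, sketched here with a Manhattan-distance heuristic. The grid representation and unit move costs are illustrative assumptions.

```python
import heapq

def a_star(grid, start, goal):
    # A* over a 4-connected occupancy grid. grid[r][c] == 1 marks an
    # obstacle; Manhattan distance is admissible for unit-cost moves.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), start)]
    g_cost = {start: 0}
    parent = {start: None}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                          # reconstruct the route
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            inside = 0 <= nb[0] < len(grid) and 0 <= nb[1] < len(grid[0])
            if inside and grid[nb[0]][nb[1]] == 0:
                g_new = g_cost[cur] + 1
                if g_new < g_cost.get(nb, float("inf")):
                    g_cost[nb] = g_new
                    parent[nb] = cur
                    heapq.heappush(open_set, (g_new + h(nb), nb))
    return None                                  # no feasible route
```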
S5.3, carrying out local path planning according to the global path, the real-time position of the vehicle and the environmental perception information, generating a local path planning result, and generating decision information according to the local path planning result and the vehicle state information;
The decision information specifies how to drive along the determined path once it is fixed, including steering, accelerating, decelerating, lane changing, and the like.
S5.4, controlling the vehicle to run along the global optimal running route according to the generated decision information by using a path tracking algorithm, wherein the path tracking algorithm uses a pure tracking algorithm, and the pure tracking algorithm is the prior art content in the field and is not an inventive scheme of the application and is not repeated herein;
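Step S5.4 names the pure tracking (pure pursuit) algorithm; its core geometric steering law can be sketched as follows, assuming the look-ahead point is given in the vehicle frame (x forward, y left).

```python
import math

def pure_pursuit_steer(lookahead_pt, wheelbase, lookahead_dist):
    # Pure-pursuit steering law: steer toward a point on the path at a
    # fixed look-ahead distance. Returns a steering angle in radians.
    x, y = lookahead_pt
    alpha = math.atan2(y, x)                   # bearing to the target point
    # Classic geometric result: delta = atan(2 * L * sin(alpha) / ld)
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead_dist)
```

A larger look-ahead distance gives smoother but lazier tracking; a smaller one tracks tightly at the cost of oscillation, which is why the look-ahead is often scheduled with speed.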
S5.5, based on the global optimal driving route, using a visual navigation technology and an edge detection algorithm to extract lane line information from a road image, calculating the position and the gesture of a vehicle relative to a lane line according to the lane line information, and generating steering and speed control instructions to enable the vehicle to automatically drive along the extracted lane line, wherein the visual navigation technology and the edge detection algorithm are prior art contents in the field and are not inventive schemes of the application and are not repeated herein;
further, the specific steps of calculating the position and posture of the vehicle relative to the lane line include:
(1) Determining the parameters of the lane line, namely its slope, intercept and similar parameters, from the lane line tracking result; for curved lane lines, the shape is approximated using polynomial fitting or similar methods;
(2) Calculating the position of the vehicle, namely converting the position of the vehicle in the image into the position in an actual road by using geometric transformation according to the lane line parameters and the position information of the vehicle in the image;
(3) Calculating the vehicle attitude, namely the vehicle's heading angle (the angle between the vehicle and the lane line direction) from the lane line direction and the vehicle's driving direction, which can be realized by comparing the vehicle's driving-direction vector with the lane line direction (see the sketch after this list); at the same time, attitude information such as the vehicle's roll and pitch angles can be estimated from information such as the lane line curvature and the vehicle's driving speed;
(4) And fusing the global path information, namely fusing the calculated vehicle position and posture information with the global optimal driving route, which is helpful for evaluating whether the vehicle deviates from the preset route and making corresponding adjustment.
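For step (3), the heading angle between the vehicle's travel direction and the lane line direction follows directly from the two direction vectors; a minimal 2-D sketch (result in radians):

```python
import math

def heading_error(vehicle_dir, lane_dir):
    # Signed angle between the vehicle travel direction and the lane line
    # direction, from the 2-D cross and dot products. Both arguments are
    # (dx, dy) direction vectors; positive means the lane lies to the left.
    cross = vehicle_dir[0] * lane_dir[1] - vehicle_dir[1] * lane_dir[0]
    dot = vehicle_dir[0] * lane_dir[0] + vehicle_dir[1] * lane_dir[1]
    return math.atan2(cross, dot)
```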
S5.6, detecting the front obstacle in real time in the driving process, judging whether an obstacle avoidance measure is needed according to the position and speed information of the obstacle, and re-planning the driving path according to a local path planning algorithm if the obstacle avoidance measure is needed.
Further, the specific implementation steps of S5.6 include:
(1) Detecting the obstacle ahead together with its type and position, and continuously tracking it after detection to acquire its motion state, such as position, speed and acceleration;
(2) Judging whether an obstacle avoidance measure is needed according to the position and speed information of the obstacle, the current state of the vehicle and the running target, and if the obstacle avoidance measure is needed, re-planning a running path according to a local path planning algorithm;
(3) After determining that obstacle avoidance is required, generating a new and safe driving path by using a local path planning algorithm, wherein the path can avoid known obstacles and is as close to an original driving target of the vehicle as possible;
(4) After the new travel path is generated, precise path tracking and control of the vehicle using the vehicle control system is required, typically involving precise control of steering, braking, and acceleration operations of the vehicle to ensure that the vehicle is able to travel safely along the new path.
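As an illustration of the obstacle-avoidance trigger in step (2), a simple time-to-collision test is sketched below; the threshold value is an assumption for illustration, not taken from the patent.

```python
def needs_avoidance(gap_m, closing_speed_mps, ttc_threshold_s=3.0):
    # Trigger replanning when the gap to a closing obstacle would vanish
    # within the time-to-collision threshold (step (2) above).
    if closing_speed_mps <= 0:          # obstacle not approaching
        return False
    return gap_m / closing_speed_mps < ttc_threshold_s
```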
Example 2
Referring to FIG. 3, another embodiment of the present invention provides an autonomous navigation positioning system for an autonomous vehicle, comprising:
the system comprises a positioning module, an environment sensing module, a path planning module and a decision control module;
The positioning module is used for starting the sensor to perform real-time measurement according to the preliminarily determined vehicle position, and fusing the measurement results to obtain the accurate position and posture of the vehicle in space;
The environment sensing module is used for generating point cloud data by utilizing a laser radar, matching the point cloud data with a high-precision map, determining the accurate position of a vehicle, and identifying obstacles and road features in the environment;
The path planning module is used for sensing surrounding obstacles and road conditions in real time according to the fusion characteristic information and the distance information, determining an optimal driving route and performing automatic navigation and obstacle avoidance;
and the decision control module is used for making driving decisions and controlling the vehicle to execute according to the results of the environment awareness and the path planning.
The positioning module comprises a GPS receiver unit, a carrier phase measuring unit, a data conversion unit and a position and posture calculating unit;
A GPS receiver unit for receiving signals transmitted by satellites and providing preliminary position information for the vehicle;
the carrier phase measuring unit is used for combining the GPS signals, calculating the distance error from the receiver to the satellite and improving the positioning precision;
the data conversion unit is used for converting the preprocessed sensor measurement result into position information in a global coordinate system;
and the position and posture calculation unit is used for calculating the position and posture of the vehicle in the space by using a sensor fusion algorithm.
The environment sensing module comprises a laser radar unit, a point cloud data processing unit, a map matching unit and an obstacle recognition unit;
The laser radar unit is used for emitting laser beams, calculating the relative distance between the vehicle and the surrounding environment according to the reflection time and generating point cloud data;
The point cloud data processing unit is used for processing the point cloud data, such as filtering, noise reduction and feature extraction;
the map matching unit is used for matching the point cloud data with the high-precision map;
and the obstacle identification unit is used for identifying the obstacle and the road characteristic by utilizing the three-dimensional characteristics of the point cloud data.
The path planning module comprises a path planning unit, a visual navigation unit, an obstacle avoidance execution unit and an updating unit;
the path planning unit is used for determining an optimal driving route according to the fusion characteristic information and the current state of the vehicle;
the visual navigation unit is used for navigating by using visual information, such as lane lines and traffic signs;
The obstacle avoidance execution unit is used for automatically taking obstacle avoidance measures such as speed reduction, steering and parking when encountering an obstacle;
and the updating unit is used for updating the position and posture information of the vehicle according to the real-time running condition of the vehicle and providing more accurate input data for subsequent navigation and obstacle avoidance.
The decision control module comprises a decision unit and a control unit;
The decision unit is used for evaluating the current environmental conditions and selecting driving modes such as following a car, changing a road and stopping a car;
And the control unit is used for generating control commands, such as acceleration, deceleration, steering and the like, according to the decision result, and directly controlling a hardware system of the vehicle to execute according to the control commands, such as a motor, a brake and a steering mechanism.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and variations, modifications, substitutions and alterations can be made to the above-described embodiments by those having ordinary skill in the art without departing from the spirit and scope of the present invention, and these are all within the protection of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411557905.9A CN119413185A (en) | 2024-11-04 | 2024-11-04 | Autonomous navigation and positioning system and method for autonomous driving vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
CN119413185A true CN119413185A (en) | 2025-02-11 |
Family
ID=94474437
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411557905.9A Pending CN119413185A (en) | 2024-11-04 | 2024-11-04 | Autonomous navigation and positioning system and method for autonomous driving vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN119413185A (en) |
- 2024-11-04: application CN202411557905.9A filed in China; published as CN119413185A (status: pending)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12147242B2 (en) | Crowdsourcing a sparse map for autonomous vehicle navigation | |
JP7432285B2 (en) | Lane mapping and navigation | |
CN108572663B (en) | Target tracking | |
US10943355B2 (en) | Systems and methods for detecting an object velocity | |
KR102572219B1 (en) | Navigation information fusion framework (FUSION FRAMEWORK) and batch alignment (BATCH ALIGNMENT) for autonomous driving | |
EP3447528B1 (en) | Automated driving system that merges heterogenous sensor data | |
CN109791052B (en) | Method and system for classifying data points of a point cloud using a digital map | |
Schreiber et al. | Laneloc: Lane marking based localization using highly accurate maps | |
CN111856491B (en) | Method and apparatus for determining geographic position and orientation of a vehicle | |
Wijesoma et al. | Road-boundary detection and tracking using ladar sensing | |
CN115668182A (en) | Autonomous Vehicle Environment Awareness Software Architecture | |
US20210229280A1 (en) | Positioning method and device, path determination method and device, robot and storage medium | |
CN112132896B (en) | Method and system for detecting states of trackside equipment | |
JP4433887B2 (en) | Vehicle external recognition device | |
Kim et al. | Sensor fusion algorithm design in detecting vehicles using laser scanner and stereo vision | |
US12091004B2 (en) | Travel lane estimation device, travel lane estimation method, and computer-readable non-transitory storage medium | |
CN119124193A (en) | Alignment of road information for navigation | |
JP2018092483A (en) | Object recognition device | |
Shunsuke et al. | GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon | |
Sehestedt et al. | Robust lane detection in urban environments | |
Fortin et al. | Feature extraction in scanning laser range data using invariant parameters: Application to vehicle detection | |
Chetan et al. | An overview of recent progress of lane detection for autonomous driving | |
KR20230031344A (en) | System and Method for Detecting Obstacles in Area Surrounding Vehicle | |
JP2018048949A (en) | Object identification device | |
Park et al. | Vehicle localization using an AVM camera for an automated urban driving |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication |