CN119413185A - Autonomous navigation and positioning system and method for autonomous driving vehicle - Google Patents

Info

Publication number
CN119413185A
CN119413185A
Authority
CN
China
Prior art keywords
vehicle
information
receiver
unit
satellite
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411557905.9A
Other languages
Chinese (zh)
Inventor
庄双双
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Bognahua New Energy Technology Co ltd
Original Assignee
Suzhou Bognahua New Energy Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Bognahua New Energy Technology Co ltd filed Critical Suzhou Bognahua New Energy Technology Co ltd
Priority to CN202411557905.9A
Publication of CN119413185A
Legal status: Pending

Landscapes

  • Navigation (AREA)

Abstract

The invention discloses an autonomous navigation and positioning system and method for an autonomous driving vehicle, belonging to the technical field of autonomous driving. The method comprises: receiving satellite signals through a GPS receiver to preliminarily determine the vehicle position, starting the vehicle sensors for real-time measurement, and preprocessing the results; converting the preprocessed measurements into position information in a global coordinate system and calculating the vehicle's position and posture with a sensor fusion algorithm; generating point cloud data with a laser radar and matching it against a high-precision map to determine the vehicle's accurate position and to identify obstacles and road features; capturing environment images with a vehicle-mounted camera, extracting visual feature information, and fusing it with the environment perception information; and, based on the fused information and distance information, perceiving obstacle and road conditions in real time and determining an optimal driving route with a path planning algorithm, thereby realizing visual navigation and automatic obstacle avoidance and improving the safety and reliability of automatic driving.

Description

Autonomous navigation and positioning system and method for autonomous driving vehicle
Technical Field
The invention belongs to the technical field of automatic driving, and particularly relates to an autonomous navigation and positioning system and an autonomous navigation and positioning method for an automatic driving vehicle.
Background
Automatic driving technology realizes autonomous running, automatic obstacle avoidance, and automatic path planning of a vehicle through advanced perception, decision, control, and vehicle technologies; within it, autonomous navigation and positioning technology determines the vehicle's position in space through various sensors and algorithms and plans an optimal driving route. The development of autonomous navigation and positioning for automatic driving vehicles has shifted from single-sensor positioning to multi-sensor fusion positioning. Early automatic driving vehicles were positioned mainly by a global satellite navigation system, whose positioning precision is limited in scenes where satellite signals are easily blocked. With the continuous progress of sensor technology and the reduction of cost, automatic driving vehicles began to adopt multi-sensor fusion positioning: by fusing information from multiple sensors such as GNSS, INS, LiDAR, and cameras, a vehicle can achieve more accurate and reliable positioning. However, this approach faces shortcomings and challenges such as increased system complexity, heavier data processing and computation loads, mutual interference among sensors, sensor fault and redundancy problems, environmental adaptability challenges, and cost. To overcome these, autonomous navigation and positioning algorithms for automatic driving vehicles are continuously optimized: for example, a Kalman filtering algorithm is adopted to fuse the information of multiple sensors and improve positioning precision, and deep learning algorithms are used to recognize and process visual information such as road markings and lane lines to assist vehicle positioning.
The patent with publication number CN118408558A discloses a vehicle autonomous navigation method, device, and computer device based on vehicle positioning. The method comprises: acquiring, during driving, the current driving parameters of a target vehicle and the lane line information and obstacle information in the driving environment based on image acquisition equipment on the target vehicle; determining a target planned path for the target vehicle and the vehicle's real-time position information according to the current driving parameters, lane line information, and obstacle information; and determining planned driving parameters of the target vehicle according to the target planned path and the real-time position information, the planned driving parameters being used for autonomous navigation of the target vehicle.
The patent with authorized bulletin number CN107462218B discloses a night panoramic vision relative positioning system and method for an autonomous navigation tractor. The system comprises three groups of binocular vision systems, an Ethernet switch, and an industrial personal computer; each vision group comprises two short-wave infrared cameras horizontally fixed in the same direction, and the three groups are arranged in an equilateral triangle, 120 degrees apart, to form a panoramic vision unit connected to the industrial personal computer through the Ethernet switch. In the positioning method, the three binocular vision groups simultaneously detect the same target in the environment from each direction, the vehicle motion is inferred accordingly to form displacement vectors in different coordinate systems, and these vectors are converted into the same coordinate system to realize relative positioning. The scheme is reasonable in design, simple in structure, and convenient to operate; it detects the vehicle motion state from multiple directions and fuses the detection results, effectively improving the positioning accuracy of the tractor at night and providing data support for navigation.
The above prior art suffers from strong environmental dependence and poor environmental adaptability: the camera's field of view and resolution are limited and affected by ambient light and weather conditions, so although obstacle avoidance and path planning can be performed through visual information, that capability may be limited.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an autonomous navigation and positioning system and method for an automatic driving vehicle. A GPS receiver receives satellite signals to preliminarily determine the vehicle position; the vehicle sensors are started for real-time measurement and the results are preprocessed; the preprocessed measurements are converted into position information in a global coordinate system, and the vehicle's position and posture are calculated with a sensor fusion algorithm; a laser radar generates point cloud data, which is matched against a high-precision map to determine the vehicle's accurate position and to identify obstacle and road features; a vehicle-mounted camera captures environment images, from which visual feature information is extracted and fused with the environment perception information; and, based on the fused information and distance information, obstacle and road conditions are perceived in real time and a path planning algorithm determines an optimal driving route, thereby realizing visual navigation and automatic obstacle avoidance and improving the safety and reliability of automatic driving.
In order to achieve the above purpose, the present invention provides the following technical solutions:
an autonomous navigation positioning method of an autonomous vehicle, comprising:
Step S1, receiving signals transmitted by satellites through a GPS receiver, calculating a distance error between the receiver and the satellites by combining a carrier phase measurement method, thus preliminarily determining the position of a vehicle, starting a vehicle sensor to perform real-time measurement based on the preliminarily determined vehicle position, and preprocessing a measurement result;
S2, converting the preprocessed measurement result into position information in a global coordinate system, combining it with the preliminarily determined vehicle position, and calculating the position and posture of the vehicle in space by using a fusion algorithm;
S3, based on the position and the posture of the vehicle in the space, the laser radar emits laser beams, calculates the relative distance between the vehicle and the surrounding environment according to the reflection time, generates point cloud data, matches the point cloud data with a high-precision map in a vehicle system, determines the accurate position of the vehicle in a global coordinate system through a matching strategy, identifies obstacles and road characteristics in the environment by utilizing the three-dimensional characteristics of the point cloud data, and generates environment perception information;
S4, capturing an environmental image around the vehicle by the vehicle-mounted camera, analyzing the environmental image, extracting visual characteristic information of road marks, lane lines and obstacles, and fusing the extracted visual characteristic information with environment perception information to obtain fused characteristic information;
And S5, determining an optimal driving route by adopting a path planning algorithm based on the fused feature information and the distance information, automatically driving along the extracted lane line by a visual navigation technology, automatically taking obstacle avoidance measures when encountering an obstacle, and continuously updating the vehicle's position and posture information throughout.
Specifically, the specific steps of the step S1 include:
S1.1, a GPS receiver receives radio frequency signals from a plurality of GPS satellites, and the radio frequency signals are subjected to down-conversion and digital processing to obtain baseband signals;
S1.2, extracting carrier phase information from the baseband signal, and calculating from it the phase difference Δφ = φ_s − φ_L between the satellite carrier signal received by the receiver and the receiver's local oscillator reference signal, wherein φ_s represents the phase of the satellite carrier signal received by the receiver and φ_L represents the phase of the local reference signal generated by the receiver;
S1.3, calculating the accurate distance from the receiver to each satellite according to the carrier wavelength and the phase difference obtained in step S1.2, giving the satellite ranging result d_n = λ_n · Δφ_n / (2π), wherein d_n represents the nth satellite ranging result, Δφ_n represents the phase difference between the nth satellite carrier signal received by the receiver and the receiver's local oscillator reference signal, λ_n represents the nth satellite carrier wavelength, and n represents the number of satellites.
Specifically, the specific steps of the step S1 further include:
S1.4, correcting the preliminary position according to the satellite ranging result, wherein the formula is as follows:
d_{n,cor} = d_n − β(ε_n + α_dian + α_dui + α_wei,t + α_jie,t)
where d_{n,cor} denotes the corrected accurate distance from the receiver to the nth satellite, ε_n represents the phase error caused by multipath effects, α_dian represents the ionospheric delay correction factor, α_dui represents the tropospheric delay correction factor, α_wei,t represents the satellite clock correction factor, α_jie,t represents the receiver clock correction factor, and β represents an adjustment coefficient;
And S1.5, starting a vehicle sensor based on the corrected preliminary position information, measuring the sensor in real time, acquiring vehicle dynamic information, and preprocessing the vehicle dynamic information measured by the sensor.
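The carrier-phase ranging of steps S1.2–S1.4 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the conversion d = λ·Δφ/(2π) assumes the integer-cycle ambiguity is already resolved, and the additive combination of the correction terms under the adjustment coefficient β is an assumption, since the text only lists the terms.

```python
import math

def carrier_phase_range(delta_phi_rad, wavelength_m):
    """Distance implied by a carrier phase difference (step S1.3).
    Only the fractional-cycle part is used; the integer-cycle
    ambiguity is assumed already resolved."""
    return wavelength_m * delta_phi_rad / (2.0 * math.pi)

def corrected_range(d_n, multipath_err, iono, tropo, sat_clock, rcv_clock, beta=1.0):
    """Apply the correction terms of step S1.4. The additive combination
    scaled by beta is an illustrative assumption."""
    return d_n - beta * (multipath_err + iono + tropo + sat_clock + rcv_clock)
```

For example, a full 2π phase difference on the GPS L1 carrier (≈ 0.1903 m wavelength) corresponds to one wavelength of range.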
Specifically, the specific steps of the step S3 include:
S3.1, the laser radar scans the surrounding environment of the vehicle omnidirectionally in an array-scanning mode based on the vehicle's position and posture in space, and calculates the relative distance d between the vehicle and each point in the surrounding environment from the difference Δt between the laser emission time and reception time and the light speed c, as d = c·Δt/2;
S3.2, acquiring scanning angle information of the laser radar, calculating the position of each scanning point in a three-dimensional space according to the scanning angle information and the relative distance data, and presenting the position in a form of point cloud data;
S3.3, matching the generated point cloud data with a high-precision map in a vehicle system, searching for characteristics in the point cloud and corresponding characteristics in the map, and iteratively adjusting the position of the vehicle in a global coordinate system according to a matching result;
S3.4, determining the accurate position of the vehicle in the global coordinate system through a matching strategy according to the position adjustment result in the step S3.3 by combining the preliminary position information and the vehicle sensor data;
S3.5, dividing and classifying the point cloud data by utilizing the three-dimensional characteristics of the point cloud data and combining the accurate position information of the vehicle in a global coordinate system, and identifying the obstacle and road characteristics in the environment;
And S3.6, calculating distance and speed information of the obstacle relative to the vehicle according to the position and the shape of the obstacle, and generating environment sensing information containing the obstacle and the road characteristics by combining the road characteristic information.
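Steps S3.1–S3.2 amount to time-of-flight ranging followed by a polar-to-Cartesian conversion per scan point. A minimal sketch (the azimuth/elevation parameterization of the scan angles is an assumption; real lidar drivers expose angles in device-specific form):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(delta_t_s):
    """Round-trip time of flight -> one-way distance d = c*dt/2 (step S3.1)."""
    return C * delta_t_s / 2.0

def to_point(distance_m, azimuth_rad, elevation_rad):
    """Convert one scan return (range + scan angles, step S3.2) into an
    (x, y, z) point in the sensor frame."""
    x = distance_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = distance_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = distance_m * math.sin(elevation_rad)
    return (x, y, z)
```

A full sweep of (Δt, azimuth, elevation) triples processed this way yields the point cloud that step S3.3 matches against the map.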
Specifically, the specific steps of the matching policy in S3.4 include:
S3.41, preprocessing the vehicle sensor data, extracting characteristic points from the preprocessed vehicle sensor data by utilizing a characteristic point detection algorithm, and describing the extracted characteristic points to generate a characteristic descriptor;
S3.42, constructing a feature database from feature points and descriptors thereof in the high-precision map, and matching the feature points acquired by the vehicle sensor with the feature points in the feature database by using a distance-based matching algorithm to find the most similar matching pair;
S3.43, verifying the matching result, if verification is successful, calculating the preliminary position of the vehicle in the global coordinate system by using the shortest distance formula according to the feature point pair of the successful verification;
And S3.44, optimizing the preliminary position by utilizing an optimization algorithm by combining the position adjustment result and the preliminary position information in the step S3.3, and outputting the optimized vehicle position information to a vehicle control system.
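The distance-based matching of step S3.42 with the verification of S3.43 can be sketched as nearest-neighbor descriptor matching; using a Lowe-style ratio test as the verification step is an assumption, since the patent only says the match is "verified".

```python
def match_descriptors(query, database, ratio=0.8):
    """Match query descriptors against a feature database (step S3.42)
    by Euclidean distance, keeping only matches that pass a ratio test
    (illustrative stand-in for the verification of step S3.43).
    Descriptors are equal-length tuples of floats."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    matches = []
    for qi, q in enumerate(query):
        ranked = sorted(range(len(database)), key=lambda di: dist(q, database[di]))
        d1 = dist(q, database[ranked[0]])
        d2 = dist(q, database[ranked[1]]) if len(ranked) > 1 else float("inf")
        if d1 < ratio * d2:  # accept only clearly-best matches
            matches.append((qi, ranked[0]))
    return matches
```

Verified pairs then feed the position computation of step S3.43 and the optimization of step S3.44.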
Specifically, the specific steps of the step S5 include:
s5.1, receiving the fusion characteristic information and the distance information, and integrating the fusion characteristic information and the distance information to form vehicle perception data;
s5.2, abstracting a vehicle moving scene into an environment model based on vehicle perception data, searching in the known environment model by adopting a path planning algorithm, and determining a global optimal driving route from the current position to the target position;
S5.3, carrying out local path planning according to the global path, the real-time position of the vehicle and the environmental perception information, generating a local path planning result, and generating decision information according to the local path planning result and the vehicle state information;
s5.4, controlling the vehicle to run along the global optimal running route by using a path tracking algorithm according to the generated decision information;
s5.5, based on the global optimal driving route, using a visual navigation technology and an edge detection algorithm to extract lane line information from the road image, calculating the position and the posture of the vehicle relative to the lane line according to the lane line information, and simultaneously generating steering and speed control instructions to enable the vehicle to automatically drive along the extracted lane line;
S5.6, detecting obstacles ahead in real time during driving, judging whether an obstacle avoidance measure is needed according to the position and speed information of the obstacle, and, if so, re-planning the driving path with the local path planning algorithm.
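The global search of step S5.2 can be sketched with A* over a grid environment model; the occupancy-grid abstraction and Manhattan heuristic are illustrative assumptions, as the patent does not name a specific path planning algorithm.

```python
import heapq

def astar(grid, start, goal):
    """Search a known environment model for a globally optimal route
    (step S5.2): A* on a 4-connected occupancy grid (1 = obstacle).
    Returns the cell path from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None
```

The obstacle re-planning of step S5.6 then amounts to marking newly detected cells as occupied and re-running the search from the vehicle's current cell.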
An autonomous navigation positioning system of an autonomous driving vehicle comprises a positioning module, an environment sensing module, a path planning module and a decision control module;
The positioning module is used for starting a sensor to measure in real time according to the preliminarily determined vehicle position, and fusing the measurement results to obtain the accurate position and the accurate posture of the vehicle in space;
The environment sensing module is used for generating point cloud data by utilizing a laser radar, matching the point cloud data with a high-precision map, determining the accurate position of a vehicle, and identifying obstacles and road features in the environment;
The path planning module is used for sensing surrounding obstacles and road conditions in real time according to the fusion characteristic information and the distance information, determining an optimal driving route and performing automatic navigation and obstacle avoidance;
And the decision control module is used for making driving decisions and controlling the vehicle to execute according to the results of the environment awareness and the path planning.
The positioning module comprises a GPS receiver unit, a carrier phase measuring unit, a data converting unit and a position and posture calculating unit;
the GPS receiver unit is used for receiving signals transmitted by satellites and providing preliminary position information for the vehicle;
the carrier phase measuring unit is used for combining GPS signals, calculating the distance error from the receiver to the satellite and improving the positioning precision;
the data conversion unit is used for converting the preprocessed sensor measurement result into position information in a global coordinate system;
The position and posture calculation unit is used for calculating the position and posture of the vehicle in the space by using a sensor fusion algorithm.
The path planning module comprises a path planning unit, a visual navigation unit and an obstacle avoidance execution unit;
the path planning unit is used for determining an optimal driving route according to the fusion characteristic information and the current state of the vehicle;
the visual navigation unit is used for navigating by using visual information;
The obstacle avoidance execution unit is used for automatically taking obstacle avoidance measures when encountering obstacles.
The decision control module comprises a decision unit and a control unit;
the decision unit is used for evaluating the current environmental condition and selecting a driving mode;
and the control unit is used for generating a control command according to the decision result, and directly controlling a hardware system of the vehicle to execute according to the control command.
Compared with the prior art, the invention has the beneficial effects that:
1. According to the autonomous navigation positioning method of the autonomous driving vehicle, the GPS signals, the sensor measurement and the laser radar point cloud data are combined, so that the system can accurately determine the position of the vehicle in the global coordinate system in real time, meanwhile, the obstacle and the road characteristic of the surrounding environment are identified, and the environment sensing capability of the vehicle is improved.
2. The invention provides an autonomous navigation positioning method of an automatic driving vehicle, which is based on fused characteristic information and distance data, and the system can intelligently plan an optimal driving route, so that visual navigation and automatic obstacle avoidance are realized, and in the whole process, the position and posture information of the vehicle are continuously updated, so that accurate navigation and intelligent driving are realized, and the safety and reliability of automatic driving are improved.
Drawings
FIG. 1 is a schematic diagram of an autonomous navigation positioning method for an autonomous driving vehicle according to the present invention;
FIG. 2 is a schematic flow chart of an autonomous navigation positioning method of an autonomous driving vehicle according to the present invention;
FIG. 3 is a schematic diagram of an autonomous navigation positioning system for an autonomous vehicle according to the present invention.
Detailed Description
Example 1
Referring to fig. 1-2, an embodiment of the invention provides an autonomous navigation positioning method for an autonomous driving vehicle, comprising the following steps:
Step S1, receiving signals transmitted by satellites through a GPS receiver, calculating a distance error between the receiver and the satellites by combining a carrier phase measurement method, thus preliminarily determining the position of a vehicle, starting a vehicle sensor to perform real-time measurement based on the preliminarily determined vehicle position, and preprocessing a measurement result;
S2, converting the preprocessed measurement result into position information in a global coordinate system, combining it with the preliminarily determined vehicle position, and calculating the position and posture of the vehicle in space by using a fusion algorithm;
further, the specific steps of step S2 include:
S2.1, determining an origin, a direction and a unit length of a global coordinate system, and converting a preprocessed measurement result into coordinate information in the global coordinate system through coordinate transformation, rotation and translation operations;
S2.2, inputting the converted coordinate information and GPS positioning result into a Kalman filter for fusion, and obtaining the position and the posture of the vehicle in space according to the output result of the Kalman filter, wherein the Kalman filter is the prior art content in the field and is not an inventive scheme of the application, and is not repeated here.
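The Kalman fusion of step S2.2 can be sketched in scalar form; the one-dimensional state and the noise values are illustrative assumptions (a real implementation fuses full position/posture state vectors).

```python
def kalman_fuse(x_est, p_est, z, r, q=1e-3):
    """One scalar Kalman filter update (step S2.2): fuse the predicted
    position x_est (variance p_est) with a measurement z (variance r).
    q is the assumed process noise added in the predict step."""
    p_pred = p_est + q                  # predict: inflate uncertainty
    k = p_pred / (p_pred + r)           # Kalman gain
    x_new = x_est + k * (z - x_est)     # correct with the measurement
    p_new = (1.0 - k) * p_pred          # reduced posterior variance
    return x_new, p_new
```

Fusing a GPS fix and a converted sensor coordinate this way yields an estimate whose variance is smaller than either input's, which is the point of the fusion step.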
S3, based on the position and the posture of the vehicle in the space, the laser radar emits laser beams, calculates the relative distance between the vehicle and the surrounding environment according to the reflection time, generates point cloud data, matches the point cloud data with a high-precision map in a vehicle system, determines the accurate position of the vehicle in a global coordinate system through a matching strategy, identifies obstacles and road characteristics in the environment by utilizing the three-dimensional characteristics of the point cloud data, and generates environment perception information;
S4, capturing an environmental image around the vehicle by using the vehicle-mounted camera, analyzing the environmental image by using a computer vision algorithm, extracting visual characteristic information of road marks, lane lines and obstacles, and fusing the extracted visual characteristic information with environment perception information to obtain fused characteristic information;
Further, the specific steps of step S4 include:
(1) Capturing a high-definition image of the surrounding environment of the vehicle by using the vehicle-mounted camera, and ensuring that the camera has a high resolution and a wide-angle view so as to capture more environment details;
(2) Preprocessing the captured image, including denoising, graying, contrast enhancement and the like, removing noise in the image by using a median filtering method, and converting a color image into a gray image to reduce the calculated amount, wherein the median filtering method and the gray conversion method are prior art contents in the field and are not inventive schemes of the application and are not repeated herein;
(3) Extracting visual characteristic information of road marks, lane lines and barriers by using an edge detection algorithm, and dividing an image into different areas by using a threshold segmentation algorithm so as to extract targets such as the road marks, the lane lines and the barriers, wherein the edge detection algorithm and the threshold segmentation algorithm are prior art contents in the field and are not inventive schemes of the application and are not repeated herein;
(4) Fusing the extracted visual characteristic information with environment perception information provided by a laser radar on a vehicle to obtain more accurate and comprehensive environment perception;
(5) And outputting the fused characteristic information to an automatic driving system of the vehicle.
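The preprocessing and segmentation of steps (2)–(3) can be sketched with NumPy; the luminance weights, 3×3 window, and fixed threshold are illustrative assumptions (the patent names median filtering, graying, and threshold segmentation without parameters).

```python
import numpy as np

def to_gray(rgb):
    """Luminance grayscale conversion (preprocessing step (2))."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def median_filter3(img):
    """3x3 median filter for denoising (step (2)); border pixels kept as-is."""
    out = img.copy()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = np.median(img[r-1:r+2, c-1:c+2])
    return out

def threshold(img, t):
    """Threshold segmentation (step (3)): 1 where brighter than t, else 0."""
    return (img > t).astype(np.uint8)
```

A lone bright pixel (impulse noise) is removed by the median filter, while the threshold map separates bright targets such as lane markings from the road surface.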
And S5, based on the fusion characteristic information and the distance information, surrounding obstacles and road conditions are perceived in real time, an optimal driving route is determined by adopting a path planning algorithm, the vehicle automatically drives along the extracted lane line by a visual navigation technology, obstacle avoidance measures are automatically taken when the obstacle is encountered, and the position and posture information of the obstacle are updated.
The specific steps of the step S1 include:
S1.1, a GPS receiver receives radio frequency signals from a plurality of GPS satellites, and the radio frequency signals are subjected to down-conversion and digital processing to obtain baseband signals;
further, the specific steps of S1.1 include:
(1) The GPS receiving antenna senses various electromagnetic fields and interference including all visible GPS signals and transmits the radio frequency signals to the radio frequency front-end processing module;
(2) The radio frequency front end processing module filters and amplifies all visible GPS satellite signals received by an antenna, mixes with a sine wave local oscillation signal generated by a local oscillator and down-converts the signals into intermediate frequency signals, wherein radio frequency front end electronic devices are generally integrated in an application specific integrated circuit chip ASIC, and are commonly called as radio frequency integrated circuits RFIC;
(3) And the digital processing is that the analog intermediate frequency signal output by the radio frequency front end is converted into a digital intermediate frequency signal through an analog-to-digital converter so as to facilitate the subsequent digital signal processing.
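The mixing of step (2) can be illustrated numerically: multiplying the RF carrier by the local-oscillator sinusoid produces the difference frequency (the IF) plus a sum frequency that the front end filters out. The sample rates and frequencies below are illustrative, not GPS L1 values, and the FFT-based low-pass stands in for the analog filter.

```python
import numpy as np

def downconvert(rf, f_rf, f_lo, fs):
    """Mix an RF signal with a local oscillator (step (2)): the product
    contains f_rf - f_lo (the IF) and f_rf + f_lo; a crude low-pass
    (zeroing high FFT bins) keeps only the IF component."""
    n = len(rf)
    t = np.arange(n) / fs
    lo = np.cos(2 * np.pi * f_lo * t)
    mixed = rf * lo
    spec = np.fft.rfft(mixed)
    cutoff = int((f_rf - f_lo) * 2 * n / fs)  # keep bins up to twice the IF bin
    spec[cutoff + 1:] = 0
    return np.fft.irfft(spec, n)
```

With a 100 Hz "carrier" and a 90 Hz local oscillator, the output's dominant component sits at the 10 Hz intermediate frequency.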
S1.2, extracting carrier phase information from the baseband signal, and calculating from it the phase difference Δφ = φ_s − φ_L between the satellite carrier signal received by the receiver and the receiver's local oscillator reference signal, wherein φ_s represents the phase of the satellite carrier signal received by the receiver and φ_L represents the phase of the local reference signal generated by the receiver; when calculating the ranging result, if the phase difference is given in degrees, it must be converted into radians;
In calculating the phase difference, the receiver simultaneously generates a local reference signal having the same frequency as the carrier signal transmitted by the satellite, and the phase difference is obtained by comparing the phase of the received satellite carrier signal with the phase of the local reference signal.
It should be noted that, since the carrier signal is a periodic sinusoidal signal, the phase measurement can only resolve the fractional part of less than one wavelength, so there is an integer-cycle ambiguity; in step S1.4 of the present invention, this ambiguity needs to be corrected to obtain an accurate phase difference measurement value.
Further, the specific step of extracting carrier phase information from the baseband signal includes:
(1) The GPS satellite sends out the modulation signals containing the ranging codes according to the own satellite-borne clock, and meanwhile, the receiver receives the modulation signals containing the ranging codes and performs preliminary signal processing;
(2) Because the carrier signal is modulated on the ranging code, the ranging code and the satellite text modulated on the carrier are firstly removed, and the carrier is acquired again, namely the carrier is rebuilt, wherein the carrier rebuilding method can adopt a code correlation method which is the prior art content in the field and is not an inventive scheme of the application, and the description is omitted here;
(3) After the carrier is rebuilt, the phase locking is realized by using a phase-locked loop circuit, and after the phase locking, the local signal phase of the receiver is the same as the GPS carrier signal phase, and the carrier phase information is extracted at the moment, wherein the phase-locked loop circuit is the prior art in the field and is not an inventive scheme of the application, and is not repeated here.
S1.3, calculating the accurate distance from the receiver to each satellite according to the carrier wavelength and the phase difference obtained in step S1.2, giving the satellite ranging result d_n = λ_n · Δφ_n / (2π), wherein d_n represents the nth satellite ranging result, Δφ_n represents the phase difference between the nth satellite carrier signal received by the receiver and the receiver's local oscillator reference signal, λ_n represents the wavelength of the nth satellite carrier, and n represents the number of satellites;
the carrier wavelength is known, and for the GPS system, the L1 carrier wavelength is about 19.03cm and the L2 carrier wavelength is about 24.42cm.
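Using these wavelengths, the ranging relation of step S1.3 can be sketched as follows; the function name and the assumption that the integer cycle count is already resolved (the ambiguity step S1.4 addresses) are illustrative:

```python
import math

def carrier_phase_range(wavelength_m, integer_cycles, phase_diff_rad):
    """Carrier-phase range: N full cycles plus the measured fractional
    phase, both scaled by the carrier wavelength."""
    return wavelength_m * (integer_cycles + phase_diff_rad / (2 * math.pi))

# GPS L1 carrier (~0.1903 m) with a hypothetical 105 000 000 resolved
# cycles plus a quarter cycle (pi/2) of measured phase.
d = carrier_phase_range(0.1903, 105_000_000, math.pi / 2)
```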
S1.4, the preliminary position is corrected according to the satellite ranging results, using the formula:
d_{n,cor} = d_n − (λ_n/2π)·δφ_n − α_dian − α_dui + c·(α_{wei,t} − α_{jie,t}) + β
where d_{n,cor} denotes the corrected receiver-to-satellite distance for the n-th satellite, δφ_n denotes the phase error caused by multipath effects, α_dian denotes the ionospheric delay correction factor used to correct the effect of the ionosphere on signal propagation, α_dui denotes the tropospheric delay correction factor used to correct the effect of the troposphere on signal propagation, α_{wei,t} denotes the satellite clock correction factor used to correct satellite clock error, α_{jie,t} denotes the receiver clock correction factor used to correct receiver clock error, c denotes the speed of light, and β denotes an adjustment coefficient covering residual error sources;
It should be noted that this formula comprehensively considers the main factors affecting ranging accuracy, including multipath effects, ionospheric delay, tropospheric delay, satellite clock error and receiver clock error, and accounts for other possible error sources through the adjustment coefficient β. This comprehensive treatment makes the ranging result more accurate and reliable and helps improve satellite navigation and positioning accuracy.
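One plausible way to apply these corrections in code is an additive model with the usual pseudorange sign conventions; the exact form of the patent's formula is not reproduced here, so the function below, its argument names, signs and units are stated assumptions:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def corrected_range(d_n, wavelength_m, multipath_phase_rad,
                    alpha_ion_m, alpha_trop_m,
                    sat_clock_s, rcv_clock_s, beta_m=0.0):
    """Additive correction sketch: subtract the multipath phase error
    (converted to metres) and the atmospheric delays, apply both clock
    offsets, and add a residual adjustment term beta (metres)."""
    return (d_n
            - wavelength_m * multipath_phase_rad / (2 * math.pi)
            - alpha_ion_m - alpha_trop_m
            + C * sat_clock_s - C * rcv_clock_s
            + beta_m)
```

For example, a 20 000 km raw range with 5 m of ionospheric and 2.5 m of tropospheric delay comes out 7.5 m shorter, while equal satellite and receiver clock offsets cancel.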
S1.5, based on the corrected preliminary position information, the vehicle sensors are started, real-time measurements are taken to acquire vehicle dynamic information, and the vehicle dynamic information measured by the sensors is preprocessed.
The vehicle sensors comprise a camera, a laser radar and an inertial navigation unit; the vehicle dynamic information comprises distance, speed, acceleration, direction and images; and the preprocessing operations comprise data filtering, denoising, calibration and time synchronization.
The specific steps of S3 include:
S3.1, based on the position and attitude of the vehicle in space, the laser radar scans the vehicle's surroundings in all directions in an array scanning mode, and the relative distance between the vehicle and each point in the surrounding environment is calculated from the difference Δt between the laser emission time and the laser reception time and the speed of light c as d = c·Δt/2;
In the scanning process, the laser radar emits laser beams to the surrounding environment, and after the laser beams irradiate the target object, part of the laser beams are reflected back and received by a receiving module of the laser radar.
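The time-of-flight relation here is the standard round-trip form: because Δt covers the beam's path out and back, the one-way range is half the product with the speed of light. A minimal sketch (function name is illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_range(delta_t_s):
    """One-way distance from a round-trip time of flight: d = c * dt / 2."""
    return C * delta_t_s / 2.0

# A 400 ns round trip corresponds to a target roughly 60 m away.
r = lidar_range(400e-9)
```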
S3.2, acquiring scanning angle information of the laser radar, calculating the position of each scanning point in a three-dimensional space according to the scanning angle information and the relative distance data, and presenting the position in a form of point cloud data;
further, the specific step of S3.2 includes:
(1) Acquiring scanning angle information and relative distance data;
(2) Using the scanning angle information and the relative distance data, the coordinates of each scanning point relative to the laser radar in three-dimensional space are computed by geometric transformation, typically by converting polar coordinates into Cartesian coordinates; the specific conversion is existing art in the field rather than an inventive feature of this application and is not described further here;
(3) The calculated three-dimensional coordinates are combined into point cloud data and stored in the form of a list or array, wherein each element represents the three-dimensional coordinates of one scan point.
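The polar-to-Cartesian conversion mentioned in step (2) can be sketched for a single return as follows, assuming the sensor reports a range plus horizontal (azimuth) and vertical (elevation) angles; the frame convention is an assumption:

```python
import math

def polar_to_cartesian(r, azimuth_rad, elevation_rad):
    """One lidar return (range, horizontal angle, vertical angle) converted
    to sensor-frame Cartesian coordinates (x forward, y left, z up)."""
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# Building a small point cloud as a list of coordinate triples.
cloud = [polar_to_cartesian(10.0, 0.0, 0.0),
         polar_to_cartesian(10.0, math.pi / 2, 0.0)]
```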
S3.3, matching the generated point cloud data with a high-precision map in a vehicle system, searching for characteristics in the point cloud and corresponding characteristics in the map, and iteratively adjusting the position of the vehicle in a global coordinate system according to a matching result;
Further, the specific step of S3.3 includes:
(1) Data acquisition: road data is collected using a high-precision map acquisition vehicle, with the sensors confirmed to be working normally during acquisition so that the data is accurate and reliable;
(2) Processing the acquired point cloud and image data, including the steps of point cloud splicing, image enhancement and noise removal, so as to obtain high-quality point cloud data and image data, wherein the point cloud splicing, image enhancement and noise removal processes are prior art contents in the field and are not inventive schemes of the application and are not repeated herein;
(3) Extracting features from the processed point cloud data, wherein the features can be road elements such as lane lines, street lamps, telegraph poles and the like, can also be other objects with obvious identification, and simultaneously, extracting corresponding feature information from a high-precision map;
(4) Feature matching: the extracted point cloud features are matched with features in the high-precision map to find the correspondence between them. In this application, this step uses the FPFH feature-descriptor matching method, which is existing art in the field rather than an inventive feature of this application and is not described further here;
(5) Position adjustment: the position of the vehicle in the global coordinate system is iteratively adjusted according to the matching result, so that the vehicle is positioned more accurately.
S3.4, determining the accurate position of the vehicle in the global coordinate system through a matching strategy according to the position adjustment result in the step S3.3 by combining the preliminary position information and the vehicle sensor data;
Further, the specific step of S3.4 includes:
(1) Inputting the position adjustment result, the preliminary position information and the vehicle sensor data in the step S3.3;
(2) Preprocessing and fusing input data to form unified environment perception information;
(3) Extracting features from the fused data, and selecting key features for matching;
(4) Applying a matching strategy to match the extracted features with features in the high-precision map;
(5) Determining the accurate position of the vehicle in the global coordinate system through an optimization algorithm according to the matching result and the sensor data, wherein the optimization algorithm adopts an iterative nearest-neighbor algorithm which is the prior art in the field and is not an inventive scheme of the application and is not repeated herein;
(6) And verifying the determined position and outputting final position information.
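The iterative nearest-neighbour idea named in step (5) can be illustrated with a translation-only toy version (a full implementation also estimates rotation and uses a spatial index); all names and the two tiny point sets are illustrative:

```python
import numpy as np

def icp_translation(source, target, iterations=10):
    """Translation-only sketch of iterative nearest-neighbour alignment:
    pair each source point with its closest target point, shift the source
    by the mean residual, and repeat. Returns the accumulated shift."""
    src = source.astype(float).copy()
    shift = np.zeros(source.shape[1])
    for _ in range(iterations):
        dists = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        nearest = target[np.argmin(dists, axis=1)]  # closest target per point
        step = (nearest - src).mean(axis=0)
        src += step
        shift += step
    return shift

# Map features seen from a pose that is offset by (0.5, -0.3).
target = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
source = target + np.array([0.5, -0.3])
correction = icp_translation(source, target)   # converges to ~[-0.5, 0.3]
```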
S3.5, dividing and classifying the point cloud data by utilizing the three-dimensional characteristics of the point cloud data and combining the accurate position information of the vehicle in a global coordinate system, and identifying the obstacle and road characteristics in the environment;
further, the specific step of S3.5 includes:
(1) Inputting point cloud data acquired by a vehicle sensor and accurate position information of a vehicle in a global coordinate system;
(2) Carrying out preprocessing operations such as denoising, filtering and the like on the point cloud data to improve the data quality, and converting the point cloud data into a global coordinate system according to the vehicle position information;
(3) The method is characterized in that an edge-based segmentation method is adopted, and three-dimensional characteristics of point cloud data are utilized for segmentation, wherein the three-dimensional characteristics of the point cloud data comprise spatial positions and geometric forms, and meanwhile, the edge-based segmentation method is the prior art content in the field and is not an inventive scheme of the application, and details are not repeated herein;
(4) Classifying the segmented point cloud data by adopting a convolutional neural network model, and identifying different objects or features, wherein the convolutional neural network model is the prior art content in the field and is not an inventive scheme of the application and is not repeated herein;
(5) And identifying obstacles and road features in the environment according to the classification result, wherein the obstacles comprise vehicles, pedestrians and trees, and the road features comprise lane lines, road shoulders and traffic signs.
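As a crude stand-in for the segmentation step, points near the road plane can be split from raised points by a height threshold; real pipelines fit the ground plane and use learned classifiers, so the threshold and names here are illustrative:

```python
def split_ground_obstacles(points, ground_z=0.0, tol=0.15):
    """Height-threshold segmentation sketch: points whose z lies within
    `tol` of the ground plane are road surface, the rest are candidate
    obstacle points. `points` is an iterable of (x, y, z) triples."""
    ground, obstacles = [], []
    for x, y, z in points:
        (ground if abs(z - ground_z) <= tol else obstacles).append((x, y, z))
    return ground, obstacles

cloud = [(0.0, 0.0, 0.05), (1.0, 2.0, 0.10), (3.0, 1.0, 1.20), (2.0, 2.0, -0.05)]
ground, obstacles = split_ground_obstacles(cloud)
```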
S3.6, calculating the distance and speed information of each obstacle relative to the vehicle from the obstacle's position and shape, and generating environment perception information containing the obstacles and road features in combination with the road feature information.
The specific steps of the matching strategy in S3.4 include:
S3.41, preprocessing the vehicle sensor data, extracting characteristic points from the preprocessed vehicle sensor data by utilizing a characteristic point detection algorithm, and describing the extracted characteristic points to generate a characteristic descriptor, wherein the characteristic point detection algorithm is the prior art content in the field and is not an inventive scheme of the application and is not repeated herein;
Further, the specific step of S3.41 includes:
(1) Acquiring raw data from vehicle sensors;
(2) The method comprises the steps of performing filtering, denoising, time alignment, coordinate conversion and other processing on original data;
(3) Detecting characteristic points in the preprocessed data by using a characteristic point detection algorithm;
(4) Describing the detected feature points by using SIFT descriptors to generate feature descriptors, wherein the feature descriptors are usually a vector and are used for representing local features of the feature points, and the SIFT descriptors are prior art contents in the field and are not inventive schemes of the application and are not repeated herein;
S3.42, constructing a feature database from feature points and descriptors thereof in the high-precision map, and matching the feature points acquired by the vehicle sensor with the feature points in the feature database by using a distance-based matching algorithm to find the most similar matching pair, wherein the distance-based matching algorithm is the prior art content in the field and is not an inventive scheme of the application and is not repeated herein;
S3.43, verifying the matching result, if the verification is successful, calculating the preliminary position of the vehicle in the global coordinate system according to the feature point pair of the verification success by using a shortest distance formula, wherein the shortest distance formula is the prior art content in the field and is not an inventive scheme of the application, and is not repeated herein;
S3.44, combining the position adjustment result of step S3.3 with the preliminary position information, optimizing the preliminary position using an optimization algorithm, and outputting the optimized vehicle position information to the vehicle control system.
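The distance-based matching of step S3.42 can be sketched as nearest-neighbour search over descriptor vectors with a ratio test to reject ambiguous pairs; the tiny stand-in descriptors and the ratio value are assumptions:

```python
import numpy as np

def match_descriptors(query, database, ratio=0.8):
    """Accept a (query, database) pair only when the best match is clearly
    better than the second best (Lowe's ratio test)."""
    matches = []
    for i, q in enumerate(query):
        d = np.linalg.norm(database - q, axis=1)   # distance to every entry
        order = np.argsort(d)
        best, second = order[0], order[1]
        if d[best] < ratio * d[second]:
            matches.append((i, int(best)))
    return matches

db = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
qs = np.array([[0.1, 0.0], [5.0, 0.0]])   # second query is ambiguous
pairs = match_descriptors(qs, db)          # only the first query matches
```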
The specific steps of the step S5 include:
s5.1, receiving the fusion characteristic information and the distance information, and integrating the fusion characteristic information and the distance information to form vehicle perception data;
s5.2, abstracting a vehicle moving scene into an environment model based on vehicle perception data, searching in the known environment model by adopting a path planning algorithm, and determining a global optimal driving route from the current position to the target position;
further, the specific step of S5.2 includes:
(1) Environment modelling: a software environment for the path planning method is created, and based on it the traditionally abstract path problem in real space is mapped onto a quantized problem that a computer can process;
(2) Path search: an optimal path is found through iterative optimization; according to actual requirements, minimum travel time, shortest distance or a similar criterion can be chosen as the evaluation index;
(3) Path optimization: the found path is smoothed to meet requirements such as ride comfort and stability of vehicle running.
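The search in step (2) is typically done with a graph algorithm such as A*; the grid world, unit cost per move and Manhattan heuristic below are illustrative assumptions, not the patent's planner:

```python
import heapq

def astar_steps(grid, start, goal):
    """A*-style search on a 4-connected grid (1 = obstacle). Returns the
    number of steps on a shortest path, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start)]
    best = {start: 0}
    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if cur == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best.get(nxt, float("inf"))):
                best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
steps = astar_steps(grid, (0, 0), (0, 2))   # detour around the wall: 6 steps
```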
S5.3, carrying out local path planning according to the global path, the real-time position of the vehicle and the environmental perception information, generating a local path planning result, and generating decision information according to the local path planning result and the vehicle state information;
The decision information specifies how the vehicle should drive along the determined path, including steering, accelerating, decelerating, changing lanes and the like.
S5.4, controlling the vehicle to run along the global optimal running route according to the generated decision information by using a path tracking algorithm, wherein the path tracking algorithm uses a pure tracking algorithm, and the pure tracking algorithm is the prior art content in the field and is not an inventive scheme of the application and is not repeated herein;
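The geometry behind pure tracking (pure pursuit) can be sketched in a few lines: the controller steers the vehicle along the circular arc through a look-ahead point on the path. The bicycle-model assumption and the vehicle-frame convention (x forward, y left) are illustrative:

```python
import math

def pure_pursuit_steer(lookahead_x, lookahead_y, wheelbase):
    """Front-wheel steering angle (radians) that drives a bicycle-model
    vehicle along the arc through a look-ahead point (vehicle frame):
    curvature kappa = 2*y / Ld^2, steer = atan(L * kappa)."""
    ld_sq = lookahead_x ** 2 + lookahead_y ** 2
    curvature = 2.0 * lookahead_y / ld_sq
    return math.atan(wheelbase * curvature)

straight = pure_pursuit_steer(5.0, 0.0, 2.6)   # point dead ahead: no steering
left = pure_pursuit_steer(5.0, 1.0, 2.6)       # point to the left: positive angle
```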
S5.5, based on the global optimal driving route, using a visual navigation technology and an edge detection algorithm to extract lane line information from a road image, calculating the position and the gesture of a vehicle relative to a lane line according to the lane line information, and generating steering and speed control instructions to enable the vehicle to automatically drive along the extracted lane line, wherein the visual navigation technology and the edge detection algorithm are prior art contents in the field and are not inventive schemes of the application and are not repeated herein;
further, the specific steps of calculating the position and posture of the vehicle relative to the lane line include:
(1) Determining the lane line parameters: the slope, intercept and similar parameters of the lane line are determined from the lane line tracking result; for curved lane lines, the shape is approximated using polynomial fitting or similar methods;
(2) Calculating the vehicle position: based on the lane line parameters and the position of the vehicle in the image, the vehicle's position in the image is converted into its position on the actual road by geometric transformation;
(3) Calculating the vehicle attitude: the heading angle of the vehicle, i.e. the angle between the vehicle and the lane line direction, is calculated from the lane line direction and the vehicle's travel direction, which can be done by comparing the travel-direction vector with the lane line direction; at the same time, attitude information such as the roll and pitch angles of the vehicle can be estimated from information such as the lane line curvature and the vehicle's speed;
(4) Fusing the global path information: the calculated vehicle position and attitude are fused with the globally optimal driving route, which helps evaluate whether the vehicle has deviated from the preset route and make corresponding adjustments.
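Steps (1)-(3) reduce, in the planar case, to a signed heading error and a lateral offset relative to the lane line; the 2D simplification and argument conventions below are assumptions:

```python
import math

def lane_relative_pose(lane_dir, vehicle_dir, lane_point, vehicle_pos):
    """Planar sketch: heading error = signed angle from the lane direction
    to the vehicle's travel direction; lateral offset = perpendicular
    distance of the vehicle from the lane line (positive = left of line)."""
    heading = (math.atan2(vehicle_dir[1], vehicle_dir[0])
               - math.atan2(lane_dir[1], lane_dir[0]))
    dx = vehicle_pos[0] - lane_point[0]
    dy = vehicle_pos[1] - lane_point[1]
    # 2D cross product with the unit lane direction gives the signed offset
    lateral = (lane_dir[0] * dy - lane_dir[1] * dx) / math.hypot(*lane_dir)
    return heading, lateral

# Lane along +x through the origin; vehicle 1 m left of it, driving parallel.
h, lat = lane_relative_pose((1.0, 0.0), (1.0, 0.0), (0.0, 0.0), (3.0, 1.0))
```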
S5.6, detecting the front obstacle in real time in the driving process, judging whether an obstacle avoidance measure is needed according to the position and speed information of the obstacle, and re-planning the driving path according to a local path planning algorithm if the obstacle avoidance measure is needed.
Further, the specific implementation steps of S5.6 include:
(1) An obstacle ahead is detected and its type and position are acquired; once detected, the obstacle is tracked continuously to obtain its motion state, such as position, speed and acceleration;
(2) Judging whether an obstacle avoidance measure is needed according to the position and speed information of the obstacle, the current state of the vehicle and the running target, and if the obstacle avoidance measure is needed, re-planning a running path according to a local path planning algorithm;
(3) After determining that obstacle avoidance is required, generating a new and safe driving path by using a local path planning algorithm, wherein the path can avoid known obstacles and is as close to an original driving target of the vehicle as possible;
(4) After the new travel path is generated, precise path tracking and control of the vehicle using the vehicle control system is required, typically involving precise control of steering, braking, and acceleration operations of the vehicle to ensure that the vehicle is able to travel safely along the new path.
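A common way to make the "is avoidance needed" decision of step (2) concrete is a time-to-collision test against the tracked obstacle; the threshold value and function name are illustrative assumptions:

```python
def needs_avoidance(gap_m, closing_speed_mps, ttc_threshold_s=3.0):
    """Trigger local replanning when the time to collision with the
    obstacle ahead drops below a threshold. A non-positive closing speed
    means the gap is not shrinking, so no avoidance is needed."""
    if closing_speed_mps <= 0.0:
        return False
    return gap_m / closing_speed_mps < ttc_threshold_s

# 30 m gap closing at 15 m/s -> 2 s to collision -> replan.
urgent = needs_avoidance(30.0, 15.0)
```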
Example 2
Referring to FIG. 3, another embodiment of the present invention provides an autonomous navigation and positioning system for an autonomous driving vehicle, comprising: a positioning module, an environment sensing module, a path planning module and a decision control module;
The positioning module is used for starting the sensor to perform real-time measurement according to the preliminarily determined vehicle position, and fusing the measurement results to obtain the accurate position and posture of the vehicle in space;
The environment sensing module is used for generating point cloud data by utilizing a laser radar, matching the point cloud data with a high-precision map, determining the accurate position of a vehicle, and identifying obstacles and road features in the environment;
The path planning module is used for sensing surrounding obstacles and road conditions in real time according to the fusion characteristic information and the distance information, determining an optimal driving route and performing automatic navigation and obstacle avoidance;
and the decision control module is used for making driving decisions and controlling the vehicle to execute according to the results of the environment awareness and the path planning.
The positioning module comprises a GPS receiver unit, a carrier phase measuring unit, a data conversion unit and a position and posture calculating unit;
A GPS receiver unit for receiving signals transmitted by satellites and providing preliminary position information for the vehicle;
the carrier phase measurement unit is used for calculating the receiver-to-satellite distance error from the GPS signals, improving the positioning accuracy;
the data conversion unit is used for converting the preprocessed sensor measurement result into position information in a global coordinate system;
and the position and posture calculation unit is used for calculating the position and posture of the vehicle in the space by using a sensor fusion algorithm.
The environment sensing module comprises a laser radar unit, a point cloud data processing unit, a map matching unit and an obstacle recognition unit;
The laser radar unit is used for emitting laser beams, calculating the relative distance between the vehicle and the surrounding environment according to the reflection time and generating point cloud data;
The point cloud data processing unit is used for processing the point cloud data, such as filtering, noise reduction and feature extraction;
the map matching unit is used for matching the point cloud data with the high-precision map;
and the obstacle identification unit is used for identifying the obstacle and the road characteristic by utilizing the three-dimensional characteristics of the point cloud data.
The path planning module comprises a path planning unit, a visual navigation unit, an obstacle avoidance execution unit and an updating unit;
the path planning unit is used for determining an optimal driving route according to the fusion characteristic information and the current state of the vehicle;
the visual navigation unit is used for navigating by using visual information, such as lane lines and traffic signs;
The obstacle avoidance execution unit is used for automatically taking obstacle avoidance measures such as speed reduction, steering and parking when encountering an obstacle;
and the updating unit is used for updating the position and posture information of the vehicle according to the real-time running condition of the vehicle and providing more accurate input data for subsequent navigation and obstacle avoidance.
The decision control module comprises a decision unit and a control unit;
The decision unit is used for evaluating the current environmental conditions and selecting a driving manoeuvre, such as car following, lane changing or stopping;
And the control unit is used for generating control commands, such as acceleration, deceleration, steering and the like, according to the decision result, and directly controlling a hardware system of the vehicle to execute according to the control commands, such as a motor, a brake and a steering mechanism.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and variations, modifications, substitutions and alterations can be made to the above-described embodiments by those having ordinary skill in the art without departing from the spirit and scope of the present invention, and these are all within the protection of the present invention.

Claims (10)

1.一种自动驾驶车辆自主导航定位方法,其特征在于,包括:1. An autonomous navigation and positioning method for an autonomous driving vehicle, comprising: 步骤S1:通过GPS接收器接收卫星发射的信号,结合载波相位测量方法计算接收机至卫星的距离误差,从而初步确定车辆的位置,基于初步确定的车辆位置,启动车辆传感器进行实时测量,并对测量结果进行预处理;Step S1: Receive the signal transmitted by the satellite through the GPS receiver, and calculate the distance error from the receiver to the satellite by combining the carrier phase measurement method, so as to preliminarily determine the position of the vehicle, start the vehicle sensor to perform real-time measurement based on the preliminarily determined vehicle position, and pre-process the measurement results; 步骤S2:将预处理后的测量结果转换为全局坐标系中的位置信息,结合初步确定车辆的位置,利用融合算法计算出车辆在空间中的位置和姿态;Step S2: convert the preprocessed measurement results into position information in the global coordinate system, combine with the preliminary determination of the vehicle's position, and use the fusion algorithm to calculate the position and posture of the vehicle in space; 步骤S3:基于车辆在空间中的位置和姿态,激光雷达发射激光束,根据反射时间计算车辆与周围环境的相对距离,生成点云数据,并将点云数据与车辆系统中的高精度地图进行匹配,通过匹配策略确定车辆在全球坐标系中的精确位置,并利用点云数据的三维特性识别出环境中的障碍物和道路特征,生成环境感知信息;Step S3: Based on the position and posture of the vehicle in space, the laser radar emits a laser beam, calculates the relative distance between the vehicle and the surrounding environment according to the reflection time, generates point cloud data, and matches the point cloud data with the high-precision map in the vehicle system. 
The precise position of the vehicle in the global coordinate system is determined through the matching strategy, and the three-dimensional characteristics of the point cloud data are used to identify obstacles and road features in the environment to generate environmental perception information; 步骤S4:车载摄像头捕捉车辆周围的环境图像,并对环境图像进行分析,提取出道路标志、车道线和障碍物的视觉特征信息,并将提取出的视觉特征信息与环境感知信息进行融合,获得融合特征信息;Step S4: The on-board camera captures the environment image around the vehicle, analyzes the environment image, extracts the visual feature information of road signs, lane lines and obstacles, and fuses the extracted visual feature information with the environment perception information to obtain fused feature information; 步骤S5:基于融合特征信息和距离信息,采用路径规划算法确定最佳行驶路线,并通过视觉导航技术,自动沿着提取的车道线行驶,在遇到障碍物时自动采取避障措施,更新其位置和姿态信息。Step S5: Based on the fusion feature information and distance information, a path planning algorithm is used to determine the best driving route, and through visual navigation technology, the vehicle automatically drives along the extracted lane line, automatically takes obstacle avoidance measures when encountering obstacles, and updates its position and posture information. 2.如权利要求1所述的一种自动驾驶车辆自主导航定位方法,其特征在于,所述步骤S1的具体步骤包括:2. The autonomous navigation and positioning method for an automatic driving vehicle according to claim 1, wherein the specific steps of step S1 include: S1.1:GPS接收器接收来自多颗GPS卫星的射频信号,并将射频信号经过下变频和数字化处理,得到基带信号;S1.1: The GPS receiver receives radio frequency signals from multiple GPS satellites, and down-converts and digitizes the radio frequency signals to obtain baseband signals; S1.2:从基带信号中提取载波相位信息,根据载波相位信息,计算接收机接收的卫星载波信号与接收机本振参考信号的相位差其中,表示接收机接收到的卫星载波信号相位,表示接收机产生的本地参考信号相位;S1.2: Extract carrier phase information from the baseband signal, and calculate the phase difference between the satellite carrier signal received by the receiver and the receiver local oscillator reference signal based on the carrier phase information. 
in, Indicates the phase of the satellite carrier signal received by the receiver. Represents the phase of the local reference signal generated by the receiver; S1.3:根据载波相位信息中的波长和步骤S1.2中获得的相位差计算接收机到卫星的精确距离,得到卫星测距结果其中,dn表示第n颗卫星测距结果,表示接收机接收的第n颗卫星载波信号与接收机本振参考信号的相位差,λn表示第n颗卫星载波波长,n表示卫星数量。S1.3: Calculate the precise distance from the receiver to the satellite based on the wavelength in the carrier phase information and the phase difference obtained in step S1.2 to obtain the satellite ranging result Among them, d n represents the ranging result of the nth satellite, It represents the phase difference between the nth satellite carrier signal received by the receiver and the receiver local oscillator reference signal, λn represents the wavelength of the nth satellite carrier, and n represents the number of satellites. 3.如权利要求2所述的一种自动驾驶车辆自主导航定位方法,其特征在于,所述步骤S1的具体步骤还包括:3. The autonomous navigation and positioning method for an automatic driving vehicle according to claim 2, wherein the specific steps of step S1 further include: S1.4:根据卫星测距结果,对初步位置进行校正,公式为:S1.4: According to the satellite ranging results, correct the initial position, the formula is: 其中,dn,cor表示经过校正后的接收机到第n颗卫星的精确距离,表示多路径效应引起的相位误差,αdian表示电离层延迟校正因子,αdian表示对流层延迟校正因子,αwei,t表示卫星钟差校正因子,αjie,t表示接收机钟差校正因子,β表示调整系数;Where d n,cor represents the accurate distance from the receiver to the nth satellite after correction. 
represents the phase error caused by the multipath effect, α dian represents the ionospheric delay correction factor, α dian represents the tropospheric delay correction factor, α wei,t represents the satellite clock error correction factor, α jie,t represents the receiver clock error correction factor, and β represents the adjustment coefficient; S1.5:基于修正后的初步位置信息,启动车辆传感器,传感器进行实时测量,获取车辆动态信息,并对传感器测量的车辆动态信息进行预处理操作。S1.5: Based on the corrected preliminary position information, the vehicle sensor is started, the sensor performs real-time measurement, obtains the vehicle dynamic information, and performs preprocessing operations on the vehicle dynamic information measured by the sensor. 4.如权利要求3所述的一种自动驾驶车辆自主导航定位方法,其特征在于,所述步骤S3的具体步骤包括:4. The autonomous navigation and positioning method for an automatic driving vehicle according to claim 3, wherein the specific steps of step S3 include: S3.1:激光雷达基于车辆在空间中的位置和姿态,通过阵列扫描方式对车辆周围环境进行全方位扫描,并利用发送激光的时间和接收激光的时间之间的差值Δt和光速c,计算出车辆与周围环境中各个点的相对距离 S3.1: Based on the position and posture of the vehicle in space, the laser radar performs an all-round scan of the vehicle's surrounding environment through array scanning, and uses the difference Δt between the time of sending the laser and the time of receiving the laser and the speed of light c to calculate the relative distance between the vehicle and each point in the surrounding environment. 
S3.2:获取激光雷达的扫描角度信息,根据扫描角度信息和相对距离数据,计算出每个扫描点在三维空间中的位置,并以点云数据的形式呈现;S3.2: Obtain the scanning angle information of the laser radar, calculate the position of each scanning point in the three-dimensional space according to the scanning angle information and relative distance data, and present it in the form of point cloud data; S3.3:将生成的点云数据与车辆系统中的高精度地图进行匹配,寻找点云中的特征与地图中的对应特征,并根据匹配结果,迭代调整车辆在全球坐标系中的位置;S3.3: Match the generated point cloud data with the high-precision map in the vehicle system, find the features in the point cloud and the corresponding features in the map, and iteratively adjust the position of the vehicle in the global coordinate system based on the matching results; S3.4:根据步骤S3.3中的位置调整结果,结合初步位置信息和车辆传感器数据,通过匹配策略确定车辆在全球坐标系中的精确位置;S3.4: According to the position adjustment result in step S3.3, the precise position of the vehicle in the global coordinate system is determined by a matching strategy in combination with the preliminary position information and the vehicle sensor data; S3.5:利用点云数据的三维特性,结合车辆在全球坐标系中的精确位置信息,对点云数据进行分割、分类,识别出环境中的障碍物和道路特征;S3.5: Using the three-dimensional characteristics of point cloud data and the precise position information of the vehicle in the global coordinate system, segment and classify the point cloud data to identify obstacles and road features in the environment; S3.6:根据障碍物的位置和形状,计算障碍物相对于车辆的距离、速度信息,结合道路特征信息,生成包含障碍物和道路特征的环境感知信息。S3.6: Calculate the distance and speed of the obstacle relative to the vehicle based on the position and shape of the obstacle, and combine it with the road feature information to generate environmental perception information including the obstacle and road features. 5.如权利要求4所述的一种自动驾驶车辆自主导航定位方法,其特征在于,所述S3.4中匹配策略的具体步骤包括:5. 
The autonomous navigation and positioning method for an automatic driving vehicle according to claim 4, wherein the specific steps of the matching strategy in S3.4 include: S3.41:对车辆传感器数据进行预处理,并利用特征点检测算法从预处理后的车辆传感器数据中提取特征点,同时,对提取出的特征点进行描述,生成特征描述子;S3.41: preprocessing the vehicle sensor data, extracting feature points from the preprocessed vehicle sensor data using a feature point detection algorithm, and describing the extracted feature points to generate feature descriptors; S3.42:将高精度地图中的特征点及其描述子构建成特征数据库,并使用基于距离的匹配算法将车辆传感器采集到的特征点与特征数据库中的特征点进行匹配,找到最相似的匹配对;S3.42: Construct feature points and their descriptors in the high-precision map into a feature database, and use a distance-based matching algorithm to match the feature points collected by the vehicle sensor with the feature points in the feature database to find the most similar matching pairs; S3.43:对匹配结果进行验证,若验证成功,则根据验证成功的特征点对,利用最短距离公式计算车辆在全球坐标系中的初步位置;S3.43: Verify the matching result. If the verification is successful, calculate the initial position of the vehicle in the global coordinate system using the shortest distance formula based on the successfully verified feature point pairs; S3.44:结合步骤S3.3中的位置调整结果和初步位置信息,利用优化算法对初步位置进行优化,并将优化后的车辆位置信息输出给车辆控制系统。S3.44: Combining the position adjustment result in step S3.3 and the preliminary position information, the preliminary position is optimized using an optimization algorithm, and the optimized vehicle position information is output to the vehicle control system. 6.如权利要求5所述的一种自动驾驶车辆自主导航定位方法,其特征在于,所述步骤S5的具体步骤包括:6. 
The autonomous navigation and positioning method for an autonomous driving vehicle according to claim 5, wherein step S5 comprises:
S5.1: Receive the fused feature information and the distance information, and integrate them into vehicle perception data;
S5.2: Based on the vehicle perception data, abstract the driving scene into an environment model, and search the known environment model with a path planning algorithm to determine the globally optimal driving route from the current position to the target position;
S5.3: Perform local path planning from the global path, the vehicle's real-time position, and the environmental perception information to generate a local path planning result, and derive decision information from that result and the vehicle state information;
S5.4: Use a path tracking algorithm to control the vehicle along the globally optimal driving route according to the generated decision information;
S5.5: Based on the globally optimal driving route, use visual navigation technology and an edge detection algorithm to extract lane line information from the road image, compute the vehicle's position and attitude relative to the lane lines, and generate steering and speed control commands so that the vehicle automatically follows the extracted lane lines;
S5.6: While driving, detect obstacles ahead in real time and, from their position and speed, determine whether avoidance measures are needed; if avoidance is required, replan the driving path with the local path planning algorithm.
7. An autonomous navigation and positioning system for an autonomous driving vehicle, used to implement the autonomous navigation and positioning method of any one of claims 1-6, characterized by comprising a positioning module, an environment perception module, a path planning module, and a decision control module;
the positioning module is configured to start the sensors for real-time measurement from the initially determined vehicle position and fuse the measurements to obtain the vehicle's precise position and attitude in space;
the environment perception module is configured to generate point cloud data with the lidar, match it against the high-precision map to determine the vehicle's precise position, and identify obstacles and road features in the environment;
the path planning module is configured to perceive surrounding obstacles and road conditions in real time from the fused feature information and the distance information, determine the optimal driving route, and perform automatic navigation and obstacle avoidance;
the decision control module is configured to make driving decisions and control vehicle execution based on the results of environment perception and path planning.
8. The autonomous navigation and positioning system for an autonomous driving vehicle according to claim 7, wherein the positioning module comprises a GPS receiver unit, a carrier phase measurement unit, a data conversion unit, and a position and attitude calculation unit;
the GPS receiver unit is configured to receive signals transmitted by satellites and provide preliminary position information for the vehicle;
the carrier phase measurement unit is configured to calculate the receiver-to-satellite range error from the GPS signal to improve positioning accuracy;
the data conversion unit is configured to convert the preprocessed sensor measurements into position information in the global coordinate system;
the position and attitude calculation unit is configured to compute the vehicle's position and attitude in space with a sensor fusion algorithm.
9. The autonomous navigation and positioning system for an autonomous driving vehicle according to claim 8, wherein the path planning module comprises a path planning unit, a visual navigation unit, and an obstacle avoidance execution unit;
the path planning unit is configured to determine the optimal driving route from the fused feature information and the vehicle's current state;
the visual navigation unit is configured to navigate using visual information;
the obstacle avoidance execution unit is configured to take obstacle avoidance measures automatically when an obstacle is encountered.
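The path planning unit above searches a known environment model for a globally optimal route (step S5.2). The patent does not name the search algorithm it uses; the sketch below uses grid-based A* with a Manhattan-distance heuristic purely as an illustrative stand-in, and the occupancy grid, unit move cost, and function names are assumptions rather than part of the claims.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """Illustrative grid A* search; grid[r][c] == 1 marks an occupied cell.

    Returns the list of (row, col) cells from start to goal inclusive,
    or None when the goal is unreachable. The Manhattan-distance
    heuristic is admissible for 4-connected moves of unit cost.
    """
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()          # breaks ties so the heap never compares nodes
    frontier = [(heuristic(start), next(tie), start, None)]
    came_from = {}                   # node -> predecessor on the best path found
    g_cost = {start: 0}              # cheapest known cost to reach each node
    while frontier:
        _, _, current, parent = heapq.heappop(frontier)
        if current in came_from:
            continue                 # stale heap entry; node already expanded
        came_from[current] = parent
        if current == goal:          # walk predecessors to reconstruct the path
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_g = g_cost[current] + 1
                if new_g < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = new_g
                    heapq.heappush(
                        frontier,
                        (new_g + heuristic(nxt), next(tie), nxt, current),
                    )
    return None
```

Because the heuristic never overestimates the remaining cost, the first time the goal is popped from the heap the returned path is cost-optimal, which matches the "globally optimal route" requirement of S5.2 on this simplified grid model.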
10. The autonomous navigation and positioning system for an autonomous driving vehicle according to claim 9, wherein the decision control module comprises a decision unit and a control unit;
the decision unit is configured to evaluate the current environmental conditions and select a driving mode;
the control unit is configured to generate control commands according to the decision result and directly drive the vehicle's hardware systems to execute those commands.
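The position and attitude calculation unit of claim 8 fuses sensor measurements with an unspecified sensor fusion algorithm. A textbook realization of such fusion is a Kalman-style predict/update cycle; the 1-D sketch below (function name, variable names, and noise values are illustrative assumptions, not the patent's specified method) shows how a dead-reckoned odometry prediction and a GPS fix are weighted by their variances.

```python
def fuse_position(est, est_var, odo_delta, odo_var, gps, gps_var):
    """One predict/update cycle of a 1-D Kalman filter along the road axis.

    Predict with the odometry displacement, then correct with the GPS fix;
    the Kalman gain weights the measurement by relative uncertainty.
    """
    # Predict: dead-reckon forward and grow the uncertainty.
    pred = est + odo_delta
    pred_var = est_var + odo_var
    # Update: blend in the GPS measurement.
    k = pred_var / (pred_var + gps_var)   # Kalman gain in [0, 1]
    new_est = pred + k * (gps - pred)
    new_var = (1 - k) * pred_var
    return new_est, new_var
```

The gain `k` approaches 1 when the GPS fix is much more certain than the prediction (the estimate snaps to GPS) and approaches 0 when odometry dominates; the posterior variance is always smaller than the predicted variance, which is the sense in which fusion "improves positioning accuracy" as the claim states.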
CN202411557905.9A 2024-11-04 2024-11-04 Autonomous navigation and positioning system and method for autonomous driving vehicle Pending CN119413185A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411557905.9A CN119413185A (en) 2024-11-04 2024-11-04 Autonomous navigation and positioning system and method for autonomous driving vehicle

Publications (1)

Publication Number Publication Date
CN119413185A true CN119413185A (en) 2025-02-11

Family

ID=94474437


Similar Documents

Publication Publication Date Title
US12147242B2 (en) Crowdsourcing a sparse map for autonomous vehicle navigation
JP7432285B2 (en) Lane mapping and navigation
CN108572663B (en) Target tracking
US10943355B2 (en) Systems and methods for detecting an object velocity
KR102572219B1 (en) Navigation information fusion framework (FUSION FRAMEWORK) and batch alignment (BATCH ALIGNMENT) for autonomous driving
EP3447528B1 (en) Automated driving system that merges heterogenous sensor data
CN109791052B (en) Method and system for classifying data points of a point cloud using a digital map
Schreiber et al. Laneloc: Lane marking based localization using highly accurate maps
CN111856491B (en) Method and apparatus for determining geographic position and orientation of a vehicle
Wijesoma et al. Road-boundary detection and tracking using ladar sensing
CN115668182A (en) Autonomous Vehicle Environment Awareness Software Architecture
US20210229280A1 (en) Positioning method and device, path determination method and device, robot and storage medium
CN112132896B (en) Method and system for detecting states of trackside equipment
JP4433887B2 (en) Vehicle external recognition device
Kim et al. Sensor fusion algorithm design in detecting vehicles using laser scanner and stereo vision
US12091004B2 (en) Travel lane estimation device, travel lane estimation method, and computer-readable non-transitory storage medium
CN119124193A (en) Alignment of road information for navigation
JP2018092483A (en) Object recognition device
Shunsuke et al. GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon
Sehestedt et al. Robust lane detection in urban environments
Fortin et al. Feature extraction in scanning laser range data using invariant parameters: Application to vehicle detection
Chetan et al. An overview of recent progress of lane detection for autonomous driving
KR20230031344A (en) System and Method for Detecting Obstacles in Area Surrounding Vehicle
JP2018048949A (en) Object identification device
Park et al. Vehicle localization using an AVM camera for an automated urban driving

Legal Events

Date Code Title Description
PB01 Publication