CN111551186B - Real-time vehicle positioning method and system and vehicle - Google Patents
- Publication number
- CN111551186B (application CN201911205844.9A)
- Authority
- CN
- China
- Prior art keywords
- information
- vehicle
- feature
- positioning
- pose
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3407—Route searching; Route guidance specially adapted for specific applications
- G01C21/3415—Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C22/00—Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/53—Determining attitude
Abstract
The invention provides a real-time vehicle positioning method comprising the following steps: obtaining a feature truth-value database from map information, where the map is global map data in the form of a point cloud map; acquiring surrounding-environment information to obtain feature position information; searching the feature truth-value database by the attributes of the feature position information to obtain positioning feature positions; solving the position and attitude from the positioning feature positions to obtain a solved pose; and fusing the GPS information, the vehicle motion attitude, and the solved pose, and outputting the fused pose. Because the global information is known, the search is fast; once the static global positions are obtained, the vehicle's global position is deduced from the geometric relationship between the vehicle and those static positions. The method uses only a small amount of static information and accumulates no error.
Description
Technical field:
The invention relates to the field of automatic driving, and in particular to a real-time positioning method for automatic driving.
Background art:
The invention belongs to the technical field of positioning, is mainly used for the real-time positioning of unmanned vehicles, and is applicable to various situations including but not limited to satellite positioning.
As an emerging intelligent tool, the unmanned vehicle is an application of systems engineering that integrates multiple technologies. It must plan its driving path and control its motion on the basis of self-positioning and environment perception, so positioning under diverse road conditions is critical. Positioning typically combines satellites with an IMU; because the position fix comes primarily from the satellites, positioning offsets occur in scenarios where satellite reception is poor. Unmanned operation requires the perceived positioning result to be as accurate as possible, and redundant where necessary; that is, the positioning result must be reliable in every scenario. The invention adds a map and distance sensors such as lidar and cameras to increase the amount of global and local positioning information, which provides redundancy on one hand, and on the other hand realizes positioning in a manner analogous to the satellite-plus-IMU combination when satellite positioning is poor.
The main information sources in positioning technology include global sources such as satellite positioning systems, high-precision maps, cellular signals, and WiFi, and local sources such as inertial measurement units, lidar, cameras (monocular, binocular, depth, etc.), magnetic nails and magnetic strips, ultra-wideband, RFID, infrared, and Bluetooth.
Patent application No. 2018113600098 is the closest prior implementation in the field. It mainly discloses: when the vehicle senses a base station, a positioning technique combining satellite positioning and strapdown inertial navigation; when no base station is sensed, a technique matching lidar point clouds against a high-precision map; and in tunnels, or at night when the ambient lighting is stable, a visual-odometry technique. That method improves automatic-driving positioning accuracy. However, it uses satellite positioning as its global data source, so when satellite accuracy is poor it depends only on local positioning, and its local positioning obtains the pose by inter-frame matching.
The invention comprises the following steps:
The present application has been made to solve the above technical problems. Embodiments of the present application provide a pose-information determination method, apparatus, mobile device, computer program product, and computer-readable storage medium that can efficiently and accurately obtain the pose information of the mobile device.
According to one aspect of the present application, there is provided a pose-information determination method. The application mainly provides a real-time vehicle positioning method characterized by: obtaining a feature truth-value database from map information, where the map is global map data in the form of a point cloud map; acquiring surrounding-environment information to obtain feature position information; searching the feature truth-value database by the attributes of the feature position information to obtain positioning feature positions; solving the position and attitude from the positioning feature positions to obtain a solved pose; and fusing the GPS information, the vehicle motion attitude, and the solved pose, and outputting the fused pose.
Preferably, the real-time vehicle positioning method obtains the feature truth-value database from the map information, including the point cloud map, where the features include geometric features, probability features, and semantic features.
Preferably, according to the real-time vehicle positioning method, feature position information is extracted from the surrounding-environment data and used as detected feature information.
Feature-type matching is then performed between the detected feature information and the feature truth values in the point cloud map.
Preferably, after feature-type matching between the detected feature information and the feature truth values in the point cloud map, the matching proceeds by transforming the detected feature information into the global coordinate system and then looking up values, keyed by feature attribute, in a dictionary built from the surrounding feature truth values, yielding corrected positioning feature positions.
The solved pose is obtained from the positioning feature positions as follows: using the static feature-point information among the positioning feature positions, the lidar's current global position is solved by triangulation from the feature points' current global positions and their positions relative to the lidar, and the vehicle's current position is then obtained from the lidar position.
Preferably, the GPS information is the global position obtained by the satellite positioning system, and the current positioning position is output to the fused-pose output module.
Preferably, the vehicle motion attitude is obtained from the vehicle's inertial device: the motion-attitude measurements are processed to compute the vehicle attitude, and the vehicle pose is output to the fused-pose output module.
Preferably, the real-time vehicle positioning method fuses the GPS information, the vehicle motion pose, and the solved pose, with the solved pose serving as the measurement; Kalman filtering fuses the three and outputs the fused pose.
The invention also relates to a system for real-time vehicle positioning, comprising: a point cloud map module, where the map contains all predefined semantic data and is used to obtain the surrounding feature truth values; a lidar module for acquiring surrounding-environment data; an inertial device for acquiring the vehicle's own motion state; a satellite positioning module for acquiring global positioning coordinates; and a pose fusion module that processes and fuses the data obtained from the point cloud map module, the lidar module, the inertial device, and the satellite positioning module, and outputs an accurate vehicle pose.
The invention further provides a vehicle comprising the real-time vehicle positioning system, which measures and outputs the pose of the vehicle.
The invention has the following advantages:
When there is no satellite fix, or satellite positioning accuracy is poor, the satellite is an unreliable global positioning source. In that case the invention fuses information from the high-precision map to obtain static global information that does not change over time; this information is used to deduce the vehicle pose, and each final position can serve as input to the next positioning cycle. Because the position is expressed in the global coordinate system, positioning accuracy is improved even when satellite positioning is unavailable.
Methods that localize only with lidar and/or vision extract surrounding-environment features from each frame, register adjacent frames, and solve for the pose change as seen from the vehicle's viewpoint; accumulating these increments yields odometry-like information, whose accumulated error grows over long runs. The invention instead obtains the static information around the ego vehicle and searches for it in the high-precision map. Because the global information is known, the search is fast; once the static global positions are found, the vehicle's global position is deduced from the geometric relationship between the vehicle and those static positions. The amount of static information used is small; only single-shot measurement and search errors exist, and there is no accumulated error.
Description of the drawings:
Fig. 1 illustrates a flow chart of real-time vehicle positioning according to the first embodiment of the present application.
Fig. 2 illustrates a system block diagram of the real-time positioning method according to an embodiment of the application.
Fig. 3 illustrates a flow chart of vehicle positioning according to the second embodiment of the present application.
Fig. 4 illustrates a flow chart of vehicle positioning according to the third embodiment of the present application.
Detailed description of the embodiments:
To solve the above technical problems, the application provides a vehicle positioning method. When there is no satellite fix or satellite accuracy is poor, the satellite is an unreliable global positioning source; in that situation the application draws more information from the high-precision map to obtain static global information that does not change over time, uses it to deduce the vehicle pose, and feeds each final position into the next positioning cycle. Because the position is in the global coordinate system, the requirements on satellite positioning accuracy and continuous availability are low, which also allows a lower-cost inertial navigation device.
In order to achieve the above purpose, the present invention provides the following technical solutions:
The real-time vehicle positioning method comprises: obtaining a feature truth-value database from map information, where the map is global map data in the form of a point cloud map; acquiring surrounding-environment information to obtain feature position information; searching the feature truth-value database by the attributes of the feature position information to obtain positioning feature positions; solving the position and attitude from the positioning feature positions to obtain a solved pose; and fusing the GPS information, the vehicle motion pose, and the solved pose, and outputting the fused pose.
According to the real-time vehicle positioning method, the feature truth-value database is obtained from the map information, where the features include geometric features, probability features, and semantic features.
According to the real-time vehicle positioning method, feature position information is extracted from the surrounding-environment data and used as detected feature information.
Feature-type matching is then performed between the detected feature information and the feature truth values in the point cloud map.
After feature-type matching between the detected feature information and the feature truth values in the point cloud map, the matching proceeds by transforming the detected feature information into the global coordinate system and then looking up values, keyed by feature attribute, in a dictionary built from the surrounding feature truth values, yielding corrected positioning feature positions.
The solved pose is obtained from the positioning feature positions as follows: using the static feature-point information among the positioning feature positions, the lidar's current global position is solved by triangulation from the feature points' current global positions and their positions relative to the lidar, and the vehicle's current position is then obtained from the lidar position.
The GPS information is the global position obtained from the satellite positioning system; the current positioning position is output to the fused-pose output module.
The vehicle motion attitude is measured by the vehicle's inertial device; the vehicle attitude is computed from these measurements, and the vehicle pose is output to the fused-pose output module.
In the real-time vehicle positioning method, the GPS information, the vehicle motion pose, and the solved pose are fused, with the solved pose serving as the measurement; Kalman filtering fuses the three and outputs the fused pose.
The invention also relates to a system for real-time vehicle positioning, comprising: a point cloud map module, where the map contains all predefined semantic data and is used to obtain the surrounding feature truth values; a lidar module for acquiring surrounding-environment data; an inertial device for acquiring the vehicle's own motion state; a satellite positioning module for acquiring global positioning coordinates; and a pose fusion module that processes and fuses the data obtained from the point cloud map module, the lidar module, the inertial device, and the satellite positioning module, and outputs an accurate vehicle pose.
The invention further provides a vehicle comprising the real-time vehicle positioning system, which measures and outputs the pose of the vehicle.
The scheme of the invention achieves real-time positioning with a small amount of static information; only single-shot measurement and search errors exist, and there is no accumulated error.
Example 1
As shown in Fig. 1, the reference numerals are: point cloud map 101; local map update 102 (global map data and current map data); surrounding-feature truth-value acquisition 103; lidar 104; surrounding-feature extraction 105; feature position information 106; position-and-attitude solving 107; inertial device 108; motion-pose measurement 109; fused-pose output 110; global position acquisition 111; and GPS receiver 112.
The ADAS system is provided with a point cloud map 101; global map data is obtained in advance from the point cloud map 101, and a local map update 102 is derived from it: the longitude and latitude of the global map data are converted into position information, and a sufficiently large range is clipped out as the current map data. From the global map data, the surrounding feature truth values 103 are obtained, and a surrounding-feature truth-value dictionary is constructed with the feature attribute as the key and the feature position as the value.
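The attribute-keyed truth-value dictionary described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the attribute encoding (hashable descriptor tuples) and the example features are assumptions for demonstration.

```python
def build_feature_truth_dict(map_features):
    """Build the surrounding-feature truth-value dictionary:
    key = feature attribute, value = global feature position.
    map_features: iterable of (attribute, (x, y, z)) pairs from the current map data."""
    truth = {}
    for attribute, position in map_features:
        truth[attribute] = position
    return truth

# Hypothetical feature encodings, purely illustrative.
features = [
    (("pole", 0.3), (120.5, 48.2, 0.0)),   # pole-like feature: (type, radius)
    (("corner", 90), (118.0, 52.7, 0.0)),  # building corner: (type, angle)
]
truth_dict = build_feature_truth_dict(features)
```

A detected feature whose attribute matches a key can then recover its true global position with a single dictionary lookup, which is what makes the search fast.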
The global map data is a point cloud map stored in a specified format; the data contains all map points conforming to the specified features, where a feature may be a geometric, probability, or semantic feature.
The feature truth values are retrieved from the current map data by equidistant (radius) search or fixed-count search around the last position before the positioning system last exited, or around the most recently updated fused pose of the current positioning system; each feature truth value corresponds to one feature attribute and one feature position.
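A minimal sketch of the equidistant (radius) search, assuming a 2D position and the truth-value dictionary sketched earlier; the patent does not fix the search structure, and a production system would likely index the map with a KD-tree rather than a linear scan.

```python
import math

def radius_search(center_xy, truth_dict, radius):
    """Return the feature truths whose positions lie within `radius`
    of the last fused position (center_xy). Linear scan for clarity."""
    x0, y0 = center_xy
    return {attr: pos for attr, pos in truth_dict.items()
            if math.hypot(pos[0] - x0, pos[1] - y0) <= radius}
```

A fixed-count ("equal target") search would instead sort candidates by distance and keep the nearest N.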
The ADAS system includes a lidar 104 for acquiring surrounding-environment information; the lidar 104 acquires surrounding-environment data, which is processed to extract features, i.e., surrounding-feature extraction 105. Feature recognition is performed on each point of the three-dimensional point cloud to obtain feature points with strong robustness, which serve as the detected feature information.
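The patent does not fix the feature-recognition criterion. One common choice, sketched here as an assumption, is to keep points whose local smoothness (curvature) exceeds a threshold, in the spirit of LOAM-style edge extraction; the window size and threshold are illustrative.

```python
import math

def extract_feature_points(scan_line, k=2, edge_thresh=0.5):
    """Pick high-curvature points from one scan line of a 3D point cloud.
    scan_line: list of (x, y, z) in scan order. The curvature proxy is the
    norm of the summed offsets from the k neighbors on each side."""
    feats = []
    for i in range(k, len(scan_line) - k):
        cx = cy = cz = 0.0
        for j in range(i - k, i + k + 1):
            if j != i:
                cx += scan_line[j][0] - scan_line[i][0]
                cy += scan_line[j][1] - scan_line[i][1]
                cz += scan_line[j][2] - scan_line[i][2]
        if math.sqrt(cx * cx + cy * cy + cz * cz) > edge_thresh:
            feats.append(scan_line[i])
    return feats
```

On a locally smooth surface the offsets cancel and the point is rejected; an isolated protrusion such as a pole edge scores high and is kept as a robust feature point.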
The surrounding-feature extraction 105 yields the detected feature information, whose feature types must be consistent with those contained in the point cloud map: for example, the detected features may be described by computing geometric features of the surrounding data, or characterized by a probability distribution function of the surrounding data. The feature detection method is not limited.
The detected feature information is a set of features in the local coordinate system (for example, 128 feature sets). Using the last input fused pose, the detected features are transformed into the global coordinate system; values are then looked up by feature attribute in the dictionary built from the feature truth values, yielding corrected feature position information 106.
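The transform-then-lookup step can be sketched in 2D as follows. This is a simplified illustration under the assumption of a planar pose (x, y, yaw); the real system would use a full SE(3) transform.

```python
import math

def to_global(feat_local, fused_pose):
    """Rotate and translate a feature from the vehicle frame into the
    global frame using the last fused pose (x, y, yaw)."""
    x, y, yaw = fused_pose
    fx, fy = feat_local
    gx = x + fx * math.cos(yaw) - fy * math.sin(yaw)
    gy = y + fx * math.sin(yaw) + fy * math.cos(yaw)
    return (gx, gy)

def correct_position(attr, feat_local, fused_pose, truth_dict):
    """Look up the feature's true global position by attribute;
    fall back to the transformed detection if the attribute is unknown."""
    return truth_dict.get(attr, to_global(feat_local, fused_pose))
```

When the attribute is found, the dictionary's truth value replaces the (noisy) transformed detection, which is the correction the patent describes.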
Meanwhile, the surrounding feature truth values obtained at 103 are matched with the feature position information and the detected feature information extracted from the lidar data to obtain the positioning feature positions. Static feature points are used: the lidar's current global position is solved by triangulation from the feature points' current global positions and their positions relative to the lidar, and the vehicle's current position is then obtained using the lidar's extrinsic calibration, i.e., position-and-attitude solving 107.
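The triangulation step can be illustrated with a 2D trilateration sketch: given three static features at known global positions and the measured ranges from the lidar to each, subtracting the first circle equation from the others gives a 2x2 linear system solvable in closed form. This is an assumed minimal formulation, not the patent's exact solver.

```python
def trilaterate(anchors, ranges):
    """anchors: three (x, y) global feature positions; ranges: measured
    distances from the lidar to each. Returns the lidar's global (x, y).
    Subtracting circle equations linearizes the problem; Cramer's rule solves it."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    r0, r1, r2 = ranges
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = r0**2 - r1**2 + x1**2 - x0**2 + y1**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = r0**2 - r2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With more than three features the same linearization becomes an overdetermined system solved by least squares, which averages out single-shot measurement errors.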
Meanwhile, in the ADAS system, the vehicle's own motion state, including acceleration and angular velocity, is obtained from the inertial device 108. The motion-pose measurement 109 is computed from this motion state by recursively estimating the vehicle's current pose.
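One recursion step of this dead reckoning can be sketched with a planar model; the state layout (x, y, yaw, speed) and the first-order integration are simplifying assumptions for illustration.

```python
import math

def dead_reckon(pose, accel, yaw_rate, dt):
    """Propagate (x, y, yaw, v) one step from body-frame longitudinal
    acceleration and yaw rate measured by the inertial device."""
    x, y, yaw, v = pose
    x_new = x + v * math.cos(yaw) * dt   # advance along current heading
    y_new = y + v * math.sin(yaw) * dt
    yaw_new = yaw + yaw_rate * dt        # integrate angular velocity
    v_new = v + accel * dt               # integrate acceleration
    return (x_new, y_new, yaw_new, v_new)
```

Each step compounds the previous estimate, which is why pure inertial recursion drifts and needs the map-based correction described below it in the pipeline.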
At the same time, the GPS receiver 112 receives signals to obtain global positioning coordinates; the supported positioning systems include but are not limited to GPS, GLONASS, Galileo, and BeiDou. The global position is acquired 111 from these coordinates, giving the current positioning position.
The solved pose obtained at 107 is fused 110 with the vehicle's own motion state and with the current positioning position obtained by global position acquisition 111 to produce the fused pose. The pose solved from the lidar and point cloud map alone is complemented by the output of the global positioning receiver; when the GPS signal is good, the fused pose can be further optimized, making the positioning system's result more reliable.
The vehicle's pose is predicted from its motion state (acceleration, angular velocity, etc.) to obtain a recursive pose; using the solved pose as the measurement, the fused pose is obtained by Kalman filtering.
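The correction step of this Kalman fusion reduces, per state component, to the scalar update below; this is a textbook sketch with assumed variances, not the patent's full filter.

```python
def kalman_update(x_pred, p_pred, z, r):
    """Scalar Kalman correction: fuse the dead-reckoned prediction x_pred
    (variance p_pred) with the lidar/map-solved pose z (variance r)."""
    k = p_pred / (p_pred + r)        # Kalman gain: trust ratio of the measurement
    x = x_pred + k * (z - x_pred)    # fused estimate
    p = (1 - k) * p_pred             # reduced uncertainty after the update
    return (x, p)
```

With equal variances the gain is 0.5 and the fused estimate is the midpoint of prediction and measurement; a more confident measurement (smaller r) pulls the estimate toward the map-solved pose.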
The fusion factor is mainly characterized by the satellite positioning state, including the number of received satellites and the delay time. When the satellite positioning result is unreliable, the recursive pose is used as the fused pose; when the satellite positioning result is good, the current positioning position further corrects the fused pose.
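The gating logic above can be sketched as follows; the thresholds and the simple averaging correction are illustrative assumptions, as the patent specifies only the switching behavior.

```python
def select_fusion(sat_count, delay_s, recursive_pose, fused_pose, gps_pose,
                  min_sats=6, max_delay=0.2):
    """Gate on satellite health: with too few satellites or stale data,
    fall back to the recursive (dead-reckoned) pose; otherwise let the
    GPS fix correct the fused pose."""
    if sat_count < min_sats or delay_s > max_delay:
        return recursive_pose
    # Illustrative correction: pull the fused pose toward the GPS fix.
    return tuple(0.5 * (f + g) for f, g in zip(fused_pose, gps_pose))
```

In a deployed filter the correction would typically enter as a weighted measurement rather than a fixed 50/50 blend.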
The invention provides a real-time positioning system as shown in Fig. 2, comprising a positioning-information acquisition module 201 and a positioning-information fusion module 202. Module 201 outputs the solved pose to module 202; module 202 outputs the fused pose, which is fed back to module 201, forming an information-processing loop. The positioning-information acquisition module obtains surrounding-environment data from the sensing devices and global map data from the map. The positioning-information fusion module 202 obtains the vehicle's own motion state and the global positioning coordinates, fuses them with the solved pose, and outputs the fused pose.
Example 2
As shown in FIG. 3: a point cloud map 301, global map data, a local map update, current map data, feature truth value acquisition 304, a laser radar 305, surrounding-environment data, surrounding feature extraction 306, detected feature information, feature position information 307, positioning feature positions, pose solving 309, the solved pose, an inertial device 310, motion pose measurement 311, fused pose output 312, a GPS receiver 313, global position acquisition 314, and the fused pose.
The ADAS system is provided with a point cloud map 301; global map data are acquired in advance from the point cloud map 301, and a local map update 302 is derived from it. The longitude and latitude of the global map data are converted into position information to obtain the local map update 302, and a sufficiently large range is cropped out as the current map data. From the global map data, surrounding feature truth values are acquired 304, and a surrounding-feature truth dictionary is constructed with feature attributes as keys and feature positions as values.
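The truth dictionary described above can be sketched as follows; the attribute labels and coordinates are made-up examples, and storing a list of positions per attribute is an assumption (several map points may share one attribute).

```python
# Surrounding-feature truth dictionary: feature attributes are keys,
# feature positions are values, as the embodiment describes.
feature_truth = {}

def add_feature_truth(attribute, position):
    # Several map points can share one attribute, so collect a list.
    feature_truth.setdefault(attribute, []).append(position)

# Illustrative entries (coordinates are invented, e.g. UTM metres).
add_feature_truth("lamp_post", (431021.5, 4392011.2, 0.0))
add_feature_truth("lamp_post", (431045.9, 4392013.7, 0.0))
add_feature_truth("traffic_sign", (431030.1, 4392020.4, 2.1))

positions = feature_truth["lamp_post"]  # look up positions by attribute key
```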
The global map data are a point cloud map stored in a specified format; the data contain all map points that conform to the specified features, where a feature may be a geometric feature, a probability feature, or a semantic feature.
The feature truth values are obtained by an equidistant search (or a search for an equal number of targets) in the current map data, starting from the last position before the positioning system last exited, or from the most recently updated fused pose of the current positioning system; each feature truth value corresponds to one feature attribute and one feature position.
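One plausible reading of the equidistant search is a fixed-radius query around the last known pose; the radius and the sample map features below are assumptions for illustration only.

```python
import math

SEARCH_RADIUS_M = 50.0  # assumed search radius around the last pose

def search_feature_truths(last_pose_xy, map_features):
    """map_features: list of (attribute, (x, y)) pairs from the current map data.
    Returns the truth values within SEARCH_RADIUS_M of the last pose; each hit
    pairs one feature attribute with one feature position, as the text states."""
    px, py = last_pose_xy
    hits = []
    for attribute, (x, y) in map_features:
        if math.hypot(x - px, y - py) <= SEARCH_RADIUS_M:
            hits.append((attribute, (x, y)))
    return hits

features = [("pole", (10.0, 5.0)), ("sign", (120.0, 80.0)), ("pole", (-20.0, 30.0))]
truths = search_feature_truths((0.0, 0.0), features)
```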
The ADAS system includes a laser radar 305 for acquiring surrounding-environment information; the laser radar 305 acquires the surrounding-environment data. The data acquired by the laser radar 305 are processed and the features of the surrounding-environment data are extracted, i.e., surrounding feature extraction 306. Feature recognition is performed on each point of the three-dimensional point cloud in the surrounding-environment data to obtain feature points with strong robustness, which serve as the detected feature information.
Surrounding feature extraction 306 yields the detected feature information, which must be consistent with the feature types contained in the point cloud map. For example, the detected feature information may be described by computing geometric features of the surrounding data, or characterized by a probability distribution function over the surrounding data; the feature detection method is not limited.
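The patent does not fix a detection method; one common way (an assumption here, not the patent's prescription) to compute geometric features of a point-cloud neighborhood is via the eigenvalues of its covariance matrix, which yield linearity/planarity/sphericity scores suitable for matching against geometric feature types in the map.

```python
import numpy as np

def geometric_features(points):
    """points: (N, 3) array of a local point-cloud neighborhood.
    Returns eigenvalue-based shape descriptors of the neighborhood."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    w = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3 >= 0
    l1, l2, l3 = w + 1e-12                      # guard against division by zero
    return {
        "linearity": (l1 - l2) / l1,   # high for pole-like features
        "planarity": (l2 - l3) / l1,   # high for wall/ground patches
        "sphericity": l3 / l1,         # high for scattered clutter
    }

# A synthetic pole: points spread along z only, so linearity should dominate.
pole = np.column_stack([np.zeros(50), np.zeros(50), np.linspace(0.0, 3.0, 50)])
f = geometric_features(pole)
```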
Meanwhile, the camera module in step 308 also acquires surrounding-environment data, which enhances the accuracy of the feature position information module.
The image obtained by the camera module captures environmental objects within the FOV, whose positions are defined in real-world three-dimensional space. The global positioning coordinates therefore need to be converted into the UTM coordinate system to obtain the current positioning position in real-world three-dimensional space. This position is used to crop a partial map from the semantic map, in which the detected feature information (for example, 32 semantic features) is searched; each feature corresponds to a current static position, and these static positions form the positioning feature positions.
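The coordinate conversion mentioned above is normally done with a proper UTM projection via a geodesy library; the sketch below is only a simplified local-tangent-plane (equirectangular) approximation around a reference point, adequate for small areas and intended purely as an illustration.

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def latlon_to_local_xy(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg):
    """Approximate planar metric coordinates (east, north) of a GNSS fix
    relative to a reference point. A real system would use a UTM projection;
    this equirectangular approximation is illustrative only."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    rlat, rlon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
    x = (lon - rlon) * math.cos(rlat) * EARTH_RADIUS_M  # east, metres
    y = (lat - rlat) * EARTH_RADIUS_M                   # north, metres
    return x, y

# 0.01 degrees of latitude is roughly 1.1 km.
x, y = latlon_to_local_xy(31.01, 121.0, 31.0, 121.0)
```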
The positioning feature positions are then used to obtain the solved pose.
The detected feature information is a set of features in the local coordinate system (for example, 128 feature sets). The detected feature information is transformed into the global coordinate system using the last input fused pose, and values are then looked up in the feature truth dictionary according to the feature attributes and feature truth values, yielding the corrected feature position information 307.
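This transform-then-lookup step can be sketched as follows; snapping to the nearest truth position of the same attribute is an assumption about how the correspondence is chosen, and all poses and coordinates are invented examples.

```python
import math

def local_to_global(feature_xy, pose):
    """Transform a feature from the vehicle's local frame to the global frame
    using the last fused pose (x, y, heading in radians)."""
    px, py, yaw = pose
    fx, fy = feature_xy
    gx = px + fx * math.cos(yaw) - fy * math.sin(yaw)
    gy = py + fx * math.sin(yaw) + fy * math.cos(yaw)
    return gx, gy

def correct_position(attribute, feature_xy, pose, truth_dict):
    """Look up the truth dictionary by attribute, then (assumed) pick the
    truth position nearest to the transformed detection."""
    gx, gy = local_to_global(feature_xy, pose)
    candidates = truth_dict.get(attribute, [])
    if not candidates:
        return None
    return min(candidates, key=lambda t: math.hypot(t[0] - gx, t[1] - gy))

truth = {"pole": [(105.0, 52.0), (205.0, 150.0)]}
corrected = correct_position("pole", (5.0, 2.0), (100.0, 50.0, 0.0), truth)
```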
Meanwhile, the feature truth value acquisition 304 yields the surrounding feature truth values and the feature position information; static feature points are used as the positioning feature positions. From the current global positions of the feature points and their positions relative to the laser radar, the current global position of the laser radar is solved by triangulation, and the extrinsic calibration of the laser radar is then applied to obtain the current pose of the vehicle, i.e., the solved pose 309.
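A reduced sketch of this solve, assuming the heading is already known from the fused pose (the patent's triangulation also constrains orientation): each static feature's global truth position minus its measured position relative to the lidar gives one estimate of the lidar origin, and averaging them is the equal-weight least-squares solution; a fixed extrinsic offset then yields the vehicle position. All numbers below are invented.

```python
import numpy as np

def solve_vehicle_position(truth_positions, relative_positions, lidar_to_vehicle):
    """truth_positions: (N, 2) global truth positions of static features.
    relative_positions: (N, 2) positions of those features in the lidar frame
    (already rotated into global axes under the known-heading assumption).
    lidar_to_vehicle: extrinsic offset from lidar origin to vehicle origin."""
    truths = np.asarray(truth_positions, dtype=float)
    rels = np.asarray(relative_positions, dtype=float)
    lidar_global = (truths - rels).mean(axis=0)   # least-squares lidar origin
    return lidar_global + np.asarray(lidar_to_vehicle, dtype=float)

truths = [(110.0, 50.0), (100.0, 60.0), (95.0, 45.0)]
rels = [(10.0, 0.0), (0.0, 10.0), (-5.0, -5.0)]
vehicle_xy = solve_vehicle_position(truths, rels, lidar_to_vehicle=(-1.0, 0.0))
```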
Meanwhile, in the ADAS system, the vehicle's own motion state, including its acceleration, angular velocity, and other information, is obtained from the inertial device 310. The motion pose measurement 311 is computed from this motion state, and the current vehicle pose is deduced recursively.
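The recursive deduction above amounts to dead reckoning; a minimal sketch with a simple Euler integration of yaw rate and longitudinal acceleration follows. The state layout and integration scheme are illustrative choices, not the patent's.

```python
import math

def dead_reckon(state, accel, yaw_rate, dt):
    """Advance the pose one timestep from IMU readings.
    state: (x, y, yaw, speed); accel: longitudinal acceleration (m/s^2);
    yaw_rate: angular velocity (rad/s); dt: timestep (s)."""
    x, y, yaw, v = state
    yaw_new = yaw + yaw_rate * dt
    v_new = v + accel * dt
    x_new = x + v_new * math.cos(yaw_new) * dt
    y_new = y + v_new * math.sin(yaw_new) * dt
    return (x_new, y_new, yaw_new, v_new)

# Drive straight east at a constant 10 m/s for one 0.1 s step.
state = dead_reckon((0.0, 0.0, 0.0, 10.0), accel=0.0, yaw_rate=0.0, dt=0.1)
```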
At the same time, the global positioning receiver 313 receives information to obtain the global positioning coordinates; the supported positioning systems include, but are not limited to, GPS, GLONASS, Galileo, and BeiDou. The global position acquisition 314 then yields the current positioning position.
The solved pose obtained in step 309 is fused in step 312 with the vehicle's own motion state and with the current positioning position obtained from the global position acquisition, yielding the fused pose. The solved pose comes only from the laser radar and the point cloud map, whereas the fused pose also takes the output of the global positioning receiver into account; when the GPS signal is good, the fused pose can therefore be further optimized, making the result of the positioning system more reliable.
The vehicle's pose is predicted from its own motion state (acceleration, angular velocity, etc.) to obtain a recursive, dead-reckoned pose; the solved pose is then used as the measurement in a Kalman filter to obtain the fused pose.
The fusion is mainly gated by the satellite positioning state, which includes indicators such as the number of received satellites and the delay time. When the reliability of the satellite fix is low, the dead-reckoned pose is used as the fused pose; when the satellite fix is good, the current positioning position is used to further correct the fused pose.
Example 3
As shown in FIG. 4: a point cloud map 401, global map data, a local map update, current map data, feature truth value acquisition 402, a laser radar 403, surrounding-environment data, surrounding feature extraction 404, detected feature information, feature position information 405, positioning feature positions, pose solving 406, the solved pose, an inertial device 407, motion pose measurement 408, and fused pose output 409.
The ADAS system is provided with a point cloud map 401; global map data are acquired in advance from the point cloud map 401, and a local map update is derived from it. The longitude and latitude of the global map data are converted into position information to obtain the local map update, and a sufficiently large range is cropped out as the current map data. From the global map data, surrounding feature truth values 402 are acquired, and a surrounding-feature truth dictionary is constructed with feature attributes as keys and feature positions as values.
The global map data are a point cloud map stored in a specified format; the data contain all map points that conform to the specified features, where a feature may be a geometric feature, a probability feature, or a semantic feature.
The feature truth values are obtained by an equidistant search (or a search for an equal number of targets) in the current map data, starting from the last position before the positioning system last exited, or from the most recently updated fused pose of the current positioning system; each feature truth value corresponds to one feature attribute and one feature position.
The ADAS system includes a laser radar 403 for acquiring surrounding-environment information; the laser radar 403 acquires the surrounding-environment data. The data acquired by the laser radar 403 are processed and the features of the surrounding-environment data are extracted, i.e., surrounding feature extraction 404. Feature recognition is performed on each point of the three-dimensional point cloud in the surrounding-environment data to obtain feature points with strong robustness, which serve as the detected feature information.
Surrounding feature extraction 404 yields the detected feature information, which must be consistent with the feature types contained in the point cloud map. For example, the detected feature information may be described by computing geometric features of the surrounding data, or characterized by a probability distribution function over the surrounding data; the feature detection method is not limited.
The detected feature information is a set of features in the local coordinate system (for example, 128 feature sets). The detected feature information is transformed into the global coordinate system using the last input fused pose, and values are then looked up in the feature truth dictionary according to the feature attributes and feature truth values, yielding the corrected feature position information 405.
Meanwhile, the feature truth value acquisition 402 yields the surrounding feature truth values and the feature position information; static feature points are used as the positioning feature positions. From the current global positions of the feature points and their positions relative to the laser radar, the current global position of the laser radar is solved by triangulation, and the extrinsic calibration of the laser radar is then applied to obtain the current pose of the vehicle, i.e., the solved pose 406.
Meanwhile, in the ADAS system, the vehicle's own motion state, including its acceleration, angular velocity, and other information, is obtained from the inertial device 407. The motion pose measurement 408 is computed from this motion state, and the current vehicle pose is deduced recursively.
The solved pose obtained in step 406 is fused in step 409 with the vehicle's own motion state and the current positioning position obtained from the global position acquisition, yielding the fused pose. The solved pose comes only from the laser radar and the point cloud map; the output of the global positioning information receiver is also taken into account in the fused pose.
The vehicle's pose is predicted from its own motion state (acceleration, angular velocity, etc.) to obtain a recursive, dead-reckoned pose; the solved pose is then used as the measurement in a Kalman filter to obtain the fused pose. The fusion is mainly gated by the satellite positioning state, which includes indicators such as the number of received satellites and the delay time. When the reliability of the satellite fix is low, the dead-reckoned pose is used as the fused pose; when the satellite fix is good, the current positioning position is used to further correct the fused pose.
The technical solution of the invention addresses the accumulated-error problem of the prior-art schemes, whose drawback is that accumulated error grows large over long-term use. The invention acquires static information around the ego vehicle and then searches for it in the high-precision map; because the global information is known, the search is fast. After the static global positions are obtained, the global position of the ego vehicle is deduced from the geometric relationship between the vehicle and those static global positions. The amount of static information used in the method is small; only single-measurement and search errors exist, and there is no accumulated error.
Meanwhile, the invention also provides a system for real-time vehicle positioning, comprising: a point cloud map module, in which the map contains all predefined semantic data and which is used to acquire the surrounding feature truth values; a laser radar module for acquiring the surrounding-environment data; an inertial device for acquiring the vehicle's own motion state; a satellite positioning module for acquiring the global positioning coordinates; and a pose fusion module for processing and fusing the data obtained by the point cloud map module, the laser radar module, the inertial device, and the satellite positioning module, and outputting an accurate vehicle pose.
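The cooperation of these modules can be sketched as a class skeleton; every class, method, and interface name below is invented for illustration (the patent specifies modules and data flow, not code), and the `fuse` placeholder stands in for the Kalman-filter fusion described earlier.

```python
class RealTimeLocalizer:
    """Hypothetical skeleton tying the claimed modules together: map, lidar,
    inertial device, and satellite receiver feed a fusion step each cycle."""

    def __init__(self, point_cloud_map, lidar, imu, gnss):
        self.map = point_cloud_map  # supplies surrounding feature truth values
        self.lidar = lidar          # supplies surrounding-environment data
        self.imu = imu              # supplies the vehicle's own motion state
        self.gnss = gnss            # supplies global positioning coordinates
        self.fused_pose = None      # last fused pose, fed back each cycle

    def step(self):
        truths = self.map.feature_truths(self.fused_pose)
        solved = self.lidar.solve_pose(truths)        # lidar + map solved pose
        predicted = self.imu.dead_reckon(self.fused_pose)
        fix = self.gnss.current_fix()
        self.fused_pose = self.fuse(predicted, solved, fix)
        return self.fused_pose

    def fuse(self, predicted, solved, fix):
        # Placeholder for the Kalman-filter fusion with satellite-state gating.
        return solved if solved is not None else predicted
```

Each call to `step` closes the loop the embodiments describe: the previous fused pose seeds the truth search, and the new fused pose is stored for the next cycle.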
Finally, it is further noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between the entities or actions. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements, but may include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented as an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. With such an understanding, all or part of the technical solution of the present application that contributes over the background art may be embodied in the form of a software product stored in a storage medium, such as ROM/RAM, a magnetic disk, or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the method described in the embodiments, or in some parts of the embodiments, of the present application.
Claims (4)
1. A real-time vehicle positioning method, characterized by comprising the following steps:
Acquiring a feature truth value database according to the map information;
the map is global map data and is a point cloud map;
the feature truth value database is obtained according to the map information, which comprises the point cloud map, wherein the features comprise geometric features, probability features, and semantic features;
acquiring surrounding-environment information to obtain feature position information: extracting the feature position information from the surrounding-environment data, and taking it as the detected feature information;
performing feature type matching between the detected feature information and the feature truth values in the point cloud map; transforming the detected feature information into the global coordinate system, and then looking up values in the feature truth dictionary according to the feature attributes and the surrounding feature truth values to obtain the positioning feature positions;
performing pose solving on the positioning feature positions to obtain the solved pose: according to the static feature point information in the positioning feature positions, calculating by triangulation the current global positions of the static feature points and their positions relative to the laser radar, solving the current global position of the laser radar, and thereby obtaining the solved pose of the vehicle from the positioning feature positions;
performing data fusion on the GPS information, the vehicle motion pose, and the solved pose, and outputting the fused pose;
wherein the GPS information is the global position of the vehicle obtained by the satellite positioning module, which outputs the current positioning position to the fused pose output module;
and the vehicle motion pose is obtained through motion pose measurement by the vehicle's inertial device, from which the vehicle pose is computed and output to the fused pose output module.
2. The real-time vehicle positioning method according to claim 1, wherein the step of performing data fusion on the GPS information, the vehicle motion pose, and the solved pose comprises: using the solved pose as the measurement in a Kalman filter to fuse the GPS information, the vehicle motion pose, and the solved pose, and outputting the fused pose.
3. A real-time vehicle positioning system, characterized by comprising:
a point cloud map module, used to acquire the surrounding feature truth values, wherein the map contains all predefined semantic data, and to acquire a feature truth value database according to the map information;
the map is global map data and is a point cloud map;
the feature truth value database is obtained according to the map information, which comprises the point cloud map, wherein the features comprise geometric features, probability features, and semantic features;
a laser radar module, used to acquire the surrounding-environment data: acquiring surrounding-environment information to obtain feature position information, extracting the feature position information from the surrounding-environment data, and taking it as the detected feature information;
an inertial device, used to acquire the vehicle's own motion state;
a satellite positioning module, used to acquire the global positioning coordinates;
a positioning information acquisition module, used to output the solved pose: performing pose solving on the positioning feature positions to obtain the solved pose; performing feature type matching between the detected feature information and the feature truth values in the point cloud map; transforming the detected feature information into the global coordinate system, and then looking up values in the feature truth dictionary according to the feature attributes and the surrounding feature truth values to obtain the positioning feature positions; and, according to the static feature point information in the positioning feature positions, calculating by triangulation the current global positions of the static feature points and their positions relative to the laser radar, solving the current global position of the laser radar, and thereby obtaining the solved pose of the vehicle from the positioning feature positions;
and a pose fusion module, used to process and fuse the data obtained by the point cloud map module, the laser radar module, the inertial device, and the satellite positioning module, fusing the GPS information, the vehicle motion pose, and the solved pose, and outputting the fused pose; wherein the GPS information is the global position of the vehicle obtained by the satellite positioning module, which outputs the current positioning position to the pose fusion module; and the vehicle motion pose is obtained through motion pose measurement by the vehicle's inertial device, from which the vehicle pose is computed and output to the pose fusion module.
4. A vehicle, comprising the vehicle real-time positioning system of claim 3, for measuring the pose of the vehicle and outputting the pose of the vehicle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911205844.9A CN111551186B (en) | 2019-11-29 | 2019-11-29 | Real-time vehicle positioning method and system and vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911205844.9A CN111551186B (en) | 2019-11-29 | 2019-11-29 | Real-time vehicle positioning method and system and vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111551186A CN111551186A (en) | 2020-08-18 |
CN111551186B true CN111551186B (en) | 2024-09-27 |
Family
ID=71998043
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911205844.9A Active CN111551186B (en) | 2019-11-29 | 2019-11-29 | Real-time vehicle positioning method and system and vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111551186B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112050825A (en) * | 2020-09-21 | 2020-12-08 | 金陵科技学院 | Navigation control system based on LGC-MDL nonlinear information anti-interference recognition |
CN112129297B (en) * | 2020-09-25 | 2024-04-30 | 重庆大学 | Multi-sensor information fusion self-adaptive correction indoor positioning method |
CN112162560A (en) * | 2020-10-10 | 2021-01-01 | 金陵科技学院 | Regression error anti-interference navigation control system based on nonlinear dictionary |
CN113066303B (en) * | 2021-03-25 | 2022-09-06 | 上海智能新能源汽车科创功能平台有限公司 | Intelligent bus stop combined positioning system based on vehicle-road cloud cooperation |
WO2022252337A1 (en) * | 2021-06-04 | 2022-12-08 | 华为技术有限公司 | Encoding method and apparatus for 3d map, and decoding method and apparatus for 3d map |
CN113390422B (en) * | 2021-06-10 | 2022-06-10 | 奇瑞汽车股份有限公司 | Automobile positioning method and device and computer storage medium |
CN113703446B (en) * | 2021-08-17 | 2023-11-07 | 泉州装备制造研究所 | Guide vehicle navigation method and dispatch system based on magnetic nails |
CN113899363B (en) * | 2021-09-29 | 2022-10-21 | 北京百度网讯科技有限公司 | Vehicle positioning method and device and automatic driving vehicle |
CN114001742B (en) * | 2021-10-21 | 2024-06-04 | 广州小鹏自动驾驶科技有限公司 | Vehicle positioning method, device, vehicle and readable storage medium |
CN113978512B (en) * | 2021-11-03 | 2023-11-24 | 北京埃福瑞科技有限公司 | Rail train positioning method and device |
CN114111774B (en) * | 2021-12-06 | 2024-04-16 | 纵目科技(上海)股份有限公司 | Vehicle positioning method, system, equipment and computer readable storage medium |
CN115060276B (en) * | 2022-06-10 | 2023-05-12 | 江苏集萃清联智控科技有限公司 | Multi-environment adaptive automatic driving vehicle positioning equipment, system and method |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160057755A (en) * | 2014-11-14 | 2016-05-24 | 재단법인대구경북과학기술원 | Map-based positioning system and method thereof |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106525057A (en) * | 2016-10-26 | 2017-03-22 | 陈曦 | Generation system for high-precision road map |
CN108732603B (en) * | 2017-04-17 | 2020-07-10 | 百度在线网络技术(北京)有限公司 | Method and device for locating a vehicle |
CN108226883B (en) * | 2017-11-28 | 2020-04-28 | 深圳市易成自动驾驶技术有限公司 | Method and device for testing millimeter wave radar performance and computer readable storage medium |
CN108759833B (en) * | 2018-04-25 | 2021-05-25 | 中国科学院合肥物质科学研究院 | Intelligent vehicle positioning method based on prior map |
CN109116397B (en) * | 2018-07-25 | 2022-12-30 | 吉林大学 | Vehicle-mounted multi-camera visual positioning method, device, equipment and storage medium |
CN108958266A (en) * | 2018-08-09 | 2018-12-07 | 北京智行者科技有限公司 | A kind of map datum acquisition methods |
CN109696172B (en) * | 2019-01-17 | 2022-11-01 | 福瑞泰克智能系统有限公司 | Multi-sensor track fusion method and device and vehicle |
- 2019-11-29 CN CN201911205844.9A patent/CN111551186B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160057755A (en) * | 2014-11-14 | 2016-05-24 | 재단법인대구경북과학기술원 | Map-based positioning system and method thereof |
Also Published As
Publication number | Publication date |
---|---|
CN111551186A (en) | 2020-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111551186B (en) | Real-time vehicle positioning method and system and vehicle | |
CN112639502B (en) | Robot pose estimation | |
CN110617821B (en) | Positioning method, positioning device and storage medium | |
JP6656886B2 (en) | Information processing apparatus, control method, program, and storage medium | |
CN109341706A (en) | A kind of production method of the multiple features fusion map towards pilotless automobile | |
KR102130687B1 (en) | System for information fusion among multiple sensor platforms | |
JP6950832B2 (en) | Position coordinate estimation device, position coordinate estimation method and program | |
CN113721248B (en) | Fusion positioning method and system based on multi-source heterogeneous sensor | |
CN111796315A (en) | Indoor and outdoor positioning method and device for unmanned aerial vehicle | |
CN113822944B (en) | External parameter calibration method and device, electronic equipment and storage medium | |
CN112946681B (en) | Laser radar positioning method fusing combined navigation information | |
US20220338014A1 (en) | Trustworthiness evaluation for gnss-based location estimates | |
US20220179038A1 (en) | Camera calibration for localization | |
CN110851545A (en) | Map drawing method, device and equipment | |
CN113063425A (en) | Vehicle positioning method and device, electronic equipment and storage medium | |
CN115290071A (en) | Relative positioning fusion method, device, equipment and storage medium | |
Lucks et al. | Improving trajectory estimation using 3D city models and kinematic point clouds | |
TW202018256A (en) | Multiple-positioning-system switching and fusion calibration method and device thereof capable of setting different positioning information weights to fuse the positioning information generated by different devices and calibrate the positioning information | |
Das et al. | Pose-graph based crowdsourced mapping framework | |
CN111397602A (en) | High-precision positioning method and device integrating broadband electromagnetic fingerprint and integrated navigation | |
CN118089758A (en) | Collaborative awareness method, apparatus, device, storage medium, and program product | |
CN114281832A (en) | High-precision map data updating method and device based on positioning result and electronic equipment | |
CN113227713A (en) | Method and system for generating environment model for positioning | |
US20220122316A1 (en) | Point cloud creation | |
Verentsov et al. | Bayesian framework for vehicle localization using crowdsourced data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||