Disclosure of Invention
The application provides a positioning method, a positioning device, positioning equipment and a storage medium, which can solve the problem of inaccurate positioning of an automatic driving vehicle in the related art. The technical scheme is as follows:
In one aspect, a positioning method is provided, which is applied to an autonomous vehicle, and includes:
predicting the pose information of the autonomous vehicle at the current moment to obtain predicted pose information;
generating a first grid map according to the predicted pose information, wherein the first grid map comprises a plurality of grids, each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid;
determining a first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of grids in the first grid map and the global off-line grid map;
determining a second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean value of grids including lanes in the first grid map and the global off-line grid map;
determining position information of the autonomous vehicle in the global offline grid map at the current moment based on the first matching probability distribution and the second matching probability distribution.
In one possible implementation manner of the present application, the predicting the pose information of the autonomous vehicle at the current time includes:
predicting the pose information of the automatic driving vehicle at the current moment according to the speed of the automatic driving vehicle, IMU (Inertial Measurement Unit) data, historical pose information of the previous moment and the time difference between the current moment and the previous moment, wherein the historical pose information comprises historical position information and historical attitude information.
In a possible implementation manner of the present application, the generating a first grid map according to the predicted pose information includes:
acquiring first point cloud data at the current moment, wherein the first point cloud data at least comprises detected echo reflection intensity values and elevation values of all points;
converting first point clouds corresponding to the first point cloud data into a world coordinate system based on the first point cloud data and the predicted pose information to obtain second point cloud data;
and generating the first grid map based on the second point cloud data and first historical point cloud data in a specified time period before the current time, wherein the first historical point cloud data is historical point cloud data in a world coordinate system.
In a possible implementation manner of the present application, the determining a first matching probability distribution of the first grid map and the global offline grid map according to the elevation mean of the grids in the first grid map and the global offline grid map includes:
acquiring a second grid map with a first size by taking a position corresponding to the predicted pose information as a center in the first grid map, and acquiring a third grid map with a second size by taking a position corresponding to the predicted pose information as a center in the global offline grid map, wherein the first size is smaller than the second size;
and determining a first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the second grid map and the third grid map.
In a possible implementation manner of the present application, the determining a first matching probability distribution of the first grid map and the global offline grid map according to an elevation mean of grids in the second grid map and the third grid map includes:
moving the second grid map on the third grid map by taking a specified offset as a moving step length so as to traverse the third grid map;
After each designated offset is moved, determining a first matching probability corresponding to a current moving position coordinate based on the elevation mean values of grids in the second grid map and the third grid map, wherein the moving position coordinate is used for indicating the displacement of a first designated point in the second grid map relative to a second designated point in the third grid map after the second grid map is moved at this time;
and determining the first matching probability distribution based on all the mobile position coordinates determined in the traversal process and the first matching probabilities corresponding to all the mobile position coordinates.
In a possible implementation manner of the present application, before determining a second matching probability distribution of the first grid map and the global offline grid map according to an echo reflection intensity mean of grids including lanes in the first grid map and the global offline grid map, the method further includes:
respectively determining grids comprising lanes in the second grid map and the third grid map;
correspondingly, the determining a second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean values of the grids including the lanes in the first grid map and the global off-line grid map includes:
Moving the second grid map on the third grid map by taking a specified offset as a moving step so as to traverse the third grid map;
after the designated offset is moved every time, determining a second matching probability corresponding to a current moving position coordinate based on the echo reflection intensity mean value of grids including lanes in the second grid map and the third grid map, wherein the moving position coordinate is used for indicating the displacement of a first designated point in the second grid map relative to a second designated point in the third grid map after the second grid map is moved this time;
and determining second matching probability distribution based on all the mobile position coordinates determined in the traversal process and the second matching probabilities corresponding to all the mobile position coordinates.
In one possible implementation manner of the present application, the determining, based on the first matching probability distribution and the second matching probability distribution, the position information of the autonomous vehicle in the global offline grid map at the current time includes:
determining a fusion matching probability distribution of the first grid map and the global off-line grid map according to the first matching probability distribution and the second matching probability distribution;
And determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the fusion matching probability distribution and the predicted pose information.
In a possible implementation manner of the present application, the determining, according to the first matching probability distribution and the second matching probability distribution, a fusion matching probability distribution of the first grid map and the global offline grid map includes:
determining the variances of the first matching probability distribution in the x direction and the y direction respectively to obtain a first variance and a second variance; determining the variances of the second matching probability distribution in the x direction and the y direction respectively to obtain a third variance and a fourth variance;
determining a first weight of the first matching probability distribution and a second weight of the second matching probability distribution according to the first variance, the second variance, the third variance and the fourth variance;
and determining the fusion matching probability distribution according to the first weight, the second weight, the first matching probability distribution and the second matching probability distribution.
In one possible implementation manner of the present application, the determining, based on the fusion matching probability distribution and the predicted pose information, the position information of the autonomous vehicle in the global offline grid map at the current moment includes:
Selecting a plurality of target fusion matching probabilities from the fusion matching probability distribution;
determining a standard deviation corresponding to each target fusion matching probability according to the fusion matching probability distribution;
determining a matching value corresponding to each target fusion matching probability according to each target fusion matching probability and a standard deviation corresponding to each target fusion matching probability;
and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment according to the matching value corresponding to each target fusion matching probability and the predicted pose information.
In a possible implementation manner of the present application, the selecting a plurality of target fusion matching probabilities from the fusion matching probability distribution includes:
determining a maximum fusion matching probability from the plurality of fusion matching probabilities;
determining fusion matching probabilities, among the fusion matching probabilities, that are greater than N times the maximum fusion matching probability as the target fusion matching probabilities, where N is a value greater than 0 and less than 1.
In another aspect, a positioning apparatus is provided, the apparatus comprising:
the prediction module is used for predicting the pose information of the automatic driving vehicle at the current moment to obtain predicted pose information;
The generation module is used for generating a first grid map according to the predicted pose information, the first grid map comprises a plurality of grids, each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid;
the first determining module is used for determining first matching probability distribution of the first grid map and a global off-line grid map according to the elevation mean value of grids in the first grid map and the global off-line grid map;
the second determining module is used for determining second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean value of grids including lanes in the first grid map and the global off-line grid map;
and the third determination module is used for determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the first matching probability distribution and the second matching probability distribution.
In one possible implementation manner of the present application, the prediction module is configured to:
And predicting the pose information of the automatic driving vehicle at the current moment according to the speed of the automatic driving vehicle, IMU data, historical pose information of the automatic driving vehicle at the last moment and the time difference between the current moment and the last moment, wherein the historical pose information comprises historical position information and historical attitude information.
In one possible implementation manner of the present application, the generating module is configured to:
acquiring first point cloud data at the current moment, wherein the first point cloud data at least comprises detected echo reflection intensity values and elevation values of all points;
converting first point clouds corresponding to the first point cloud data into a world coordinate system based on the first point cloud data and the predicted pose information to obtain second point cloud data;
and generating the first grid map based on the second point cloud data and first historical point cloud data in a specified time period before the current time, wherein the first historical point cloud data is historical point cloud data in a world coordinate system.
In one possible implementation manner of the present application, the first determining module is configured to:
acquiring a second grid map with a first size by taking a position corresponding to the predicted pose information as a center in the first grid map, and acquiring a third grid map with a second size by taking a position corresponding to the predicted pose information as a center in the global off-line grid map, wherein the first size is smaller than the second size;
And determining a first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the second grid map and the third grid map.
In one possible implementation manner of the present application, the first determining module is configured to:
moving the second grid map on the third grid map by taking a specified offset as a moving step so as to traverse the third grid map;
after the designated offset is moved every time, determining a first matching probability corresponding to a current moving position coordinate based on the elevation mean values of grids in the second grid map and the third grid map, wherein the moving position coordinate is used for indicating the displacement of a first designated point in the second grid map relative to a second designated point in the third grid map after the second grid map is moved this time;
and determining the first matching probability distribution based on all the mobile position coordinates determined in the traversal process and the first matching probabilities corresponding to all the mobile position coordinates.
In a possible implementation manner of the present application, the second determining module is further configured to:
respectively determining grids comprising lanes in the second grid map and the third grid map;
Moving the second grid map on the third grid map by taking a specified offset as a moving step length so as to traverse the third grid map;
after the specified offset is moved every time, determining a second matching probability corresponding to a current moving position coordinate based on the echo reflection intensity mean value of grids including lanes in the second grid map and the third grid map, wherein the moving position coordinate is used for indicating the displacement of a first specified point in the second grid map relative to a second specified point in the third grid map after the second grid map is moved this time;
and determining second matching probability distribution based on all the mobile position coordinates determined in the traversal process and second matching probabilities corresponding to all the mobile position coordinates.
In one possible implementation manner of the present application, the third determining module is configured to:
determining a fusion matching probability distribution of the first grid map and the global off-line grid map according to the first matching probability distribution and the second matching probability distribution;
and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the fusion matching probability distribution and the predicted pose information.
In one possible implementation manner of the present application, the third determining module is configured to:
determining the variances of the first matching probability distribution in the x direction and the y direction respectively to obtain a first variance and a second variance; determining the variances of the second matching probability distribution in the x direction and the y direction respectively to obtain a third variance and a fourth variance;
determining a first weight of the first matching probability distribution and a second weight of the second matching probability distribution according to the first variance, the second variance, the third variance and the fourth variance;
and determining the fusion matching probability distribution according to the first weight, the second weight, the first matching probability distribution and the second matching probability distribution.
In one possible implementation manner of the present application, the third determining module is configured to:
selecting a plurality of target fusion matching probabilities from the fusion matching probability distribution;
determining a standard deviation corresponding to each target fusion matching probability according to the fusion matching probability distribution;
determining a matching value corresponding to each target fusion matching probability according to each target fusion matching probability and a standard deviation corresponding to each target fusion matching probability;
And determining the position information of the automatic driving vehicle in the global offline grid map at the current moment according to the matching value corresponding to each target fusion matching probability and the predicted pose information.
In one possible implementation manner of the present application, the third determining module is configured to:
determining a maximum fusion matching probability from the plurality of fusion matching probabilities;
determining fusion matching probabilities, among the fusion matching probabilities, that are greater than N times the maximum fusion matching probability as the target fusion matching probabilities, wherein N is a value greater than 0 and less than 1.
In another aspect, an apparatus is provided, which includes a memory for storing a computer program and a processor for executing the computer program stored in the memory to implement the steps of the positioning method described above.
In another aspect, a computer-readable storage medium is provided, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the positioning method described above.
In another aspect, a computer program product comprising instructions is provided, which when run on a computer, causes the computer to perform the steps of the positioning method described above.
The technical scheme provided by the application can bring the following beneficial effects at least:
The pose information of the autonomous vehicle at the current moment is predicted to obtain predicted pose information. A first grid map comprising a plurality of grids is generated according to the predicted pose information, wherein each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of the echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of the elevation values of all points in the single grid. A first matching probability distribution of the first grid map and the global offline grid map is determined according to the elevation mean values of the grids in the first grid map and the global offline grid map. A second matching probability distribution of the first grid map and the global offline grid map is determined according to the echo reflection intensity mean values of the grids including lanes in the first grid map and the global offline grid map. The position information of the autonomous vehicle in the global offline grid map at the current moment is then determined based on the first matching probability distribution and the second matching probability distribution. That is to say, the position information of the autonomous vehicle in the global offline grid map at the current moment is determined according to both the elevation mean values and the echo reflection intensity mean values of the grids; compared with positioning the autonomous vehicle using the elevation mean alone, this reduces the probability of inaccurate positioning and improves the positioning accuracy of the autonomous vehicle.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before explaining the positioning method provided by the embodiment of the present application in detail, an implementation environment provided by the embodiment of the present application is introduced.
The positioning method provided by the embodiments of the present application is applied to an autonomous vehicle. A LIDAR (Light Detection and Ranging) system may be installed in the autonomous vehicle. The LIDAR system may include a laser radar, a GPS (Global Positioning System), an IMU, and a device. The laser radar, the GPS, and the IMU each establish a communication connection with the device, where the communication connection may be a wired connection or a wireless connection; this is not limited in the embodiments of the present application.
The laser radar may include a transmitter, a receiver, and an information processor, among other components. The transmitter converts an electrical pulse into an optical pulse and emits it; the optical pulse strikes an object and is reflected back; the receiver receives the reflected optical pulse and converts it back into an electrical pulse; and the information processor processes the electrical pulse to obtain point cloud data. The point cloud data may include the three-dimensional coordinates x, y, z of each point of the object in a local coordinate system and an echo reflection intensity value I, where z may also be referred to as the elevation value. The local coordinate system is a coordinate system established with the laser radar as its origin.
Wherein the GPS may be used to coarsely locate the autonomous vehicle and send coarse location information to the device.
The IMU is used for measuring three-axis attitude angles and acceleration of the automatic driving vehicle and calculating attitude information of the automatic driving vehicle according to the three-axis attitude angles and the acceleration. The IMU may include three single-axis accelerometers for detecting acceleration of the autonomous vehicle and three single-axis gyroscopes for detecting angular velocity of the autonomous vehicle.
The device can be used for processing point cloud data output by the laser radar, calculating attitude information of the automatic driving vehicle according to the acceleration and the angular velocity output by the IMU, and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment according to the processed point cloud data, the rough position information and the attitude information.
The device may be any electronic product that can perform human-computer interaction with a user through one or more modes such as a keyboard, a touch pad, a touch screen, a remote controller, voice interaction, or handwriting, for example, a PC (Personal Computer), a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a pocket PC, a tablet computer, a smart car, a smart television, and the like.
Those skilled in the art will appreciate that the above described LIDAR systems are merely exemplary and that other existing or future LIDAR systems, as may be suitable for use with the present application, are intended to be encompassed within the scope of the present application and are hereby incorporated by reference.
After introducing the implementation environment of the embodiment of the present application, a detailed explanation is provided next for the positioning method provided in the embodiment of the present application.
FIG. 1 is a flow chart illustrating a positioning method according to an exemplary embodiment, as applied to an autonomous vehicle in the environment of implementation described above. Referring to fig. 1, the method may include the following steps:
step 101: and predicting the pose information of the automatic driving vehicle at the current moment to obtain predicted pose information.
The pose information comprises position information and attitude information. The position information may include three-dimensional coordinates, and the attitude information may include at least a yaw angle of the autonomous vehicle.
In implementation, the specific implementation of predicting the pose information of the autonomous vehicle at the current time may include: and predicting the pose information of the automatic driving vehicle at the current moment according to the speed of the automatic driving vehicle, IMU data, historical pose information of the last moment and the time difference between the current moment and the last moment.
The IMU data may include acceleration and angular velocity of the autonomous vehicle, and the historical pose information includes historical position information and historical pose information.
The historical pose information at the previous moment is the high-precision pose information of the automatic driving vehicle determined by the technical scheme of the application, and can be obtained by a multi-sensor fusion filter.
As an example, after the pose information at each time is determined by the technical solution of the present application, the pose information at that time may be stored; for convenience of description, pose information determined before the current time is referred to as historical pose information. Therefore, when predicting the pose information at the current time, the historical pose information of the previous time can be used directly.
That is, in implementations, the pose information for the autonomous vehicle at the current time may be predicted based on the speed of the autonomous vehicle, the acceleration of the autonomous vehicle, the angular velocity of the autonomous vehicle, the time difference between the current time and the previous time, and historical pose information for the previous time.
As an example, attitude change information of the autonomous vehicle, which indicates how the attitude of the autonomous vehicle changes from the previous time to the current time, may be obtained from the acceleration and the angular velocity of the autonomous vehicle, and the attitude information at the current time may be obtained from the attitude change information and the historical attitude information of the previous time. The distance travelled and the traveling direction of the autonomous vehicle from the previous time to the current time can be determined based on the acceleration and the angular velocity of the autonomous vehicle, in combination with the speed of the autonomous vehicle and the time difference between the current time and the previous time, and then the position information of the autonomous vehicle at the current time can be determined. The position information and the attitude information of the autonomous vehicle at the current time are determined as the pose information at the current time, that is, the predicted pose information is obtained.
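By way of illustration only, the following Python sketch shows one possible implementation of the dead-reckoning prediction described above; the function name, the parameter names, and the constant-acceleration assumption are illustrative and do not limit the embodiments of the present application.

```python
import numpy as np

def predict_pose(prev_position, prev_yaw, speed, acceleration, yaw_rate, dt):
    """Minimal dead-reckoning sketch of the pose prediction described above.

    prev_position: np.array([x, y]) at the previous time (world frame)
    prev_yaw:      yaw angle at the previous time (rad)
    speed:         vehicle speed at the previous time (m/s)
    acceleration:  forward acceleration from the IMU (m/s^2)
    yaw_rate:      angular velocity about the z axis from the IMU (rad/s)
    dt:            time difference between the current and the previous time (s)
    """
    # Attitude change from the previous time to the current time (yaw only here).
    pred_yaw = prev_yaw + yaw_rate * dt

    # Distance travelled over dt, assuming constant acceleration.
    distance = speed * dt + 0.5 * acceleration * dt ** 2

    # Advance the position along the averaged heading.
    heading = 0.5 * (prev_yaw + pred_yaw)
    pred_position = prev_position + distance * np.array([np.cos(heading), np.sin(heading)])

    return pred_position, pred_yaw
```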
Further, if GPS position information is acquired at the current time, the predicted pose information at the current time may also be determined in combination with that GPS position information. In implementation, the predicted pose information may be determined based on the IMU data, the historical pose information of the previous time, and the GPS position information acquired at the current time.
It should be noted that, the present application is only described by taking an example in which the predicted pose information is determined by an autonomous vehicle. In one possible implementation, determining the predicted pose information may also be performed by a multi-sensor fusion filter.
Step 102: and generating a first grid map according to the predicted pose information.
The first grid map comprises a plurality of grids, each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid.
The echo reflection intensity value is the reflection intensity of the light pulse after irradiating the object, and the numerical range of the echo reflection intensity value can be 0-255.
In implementation, referring to fig. 2, a specific implementation of generating the first grid map according to the predicted pose information may include: acquiring first point cloud data at the current moment, wherein the first point cloud data at least comprises the detected echo reflection intensity value and the detected elevation value of each point; converting the first point cloud corresponding to the first point cloud data into a world coordinate system based on the first point cloud data and the predicted pose information to obtain second point cloud data; and generating the first grid map based on the second point cloud data and the first historical point cloud data in a specified time period before the current time.
The first historical point cloud data is historical point cloud data in a world coordinate system.
That is to say, the first point cloud data at the current time may be acquired, the coordinate system of the first point cloud data may be converted to obtain the second point cloud data in the world coordinate system, and then the first grid map may be generated according to the second point cloud data and the first historical point cloud data in the world coordinate system.
As an example, the first point cloud data at the current time may be acquired by the laser radar. The first point cloud data includes the three-dimensional coordinates x, y, and z of each detected point in the local coordinate system and the echo reflection intensity value of each point, and z in the three-dimensional coordinates may be taken as the elevation value of the point. For example, the point cloud data of a certain point can be represented as (x, y, z, I), where (x, y, z) are the three-dimensional coordinates and I is the echo reflection intensity value of the point.
As an example, after the first point cloud data at the current time is obtained, coordinate conversion may be performed on the first point cloud data according to the predicted pose information, and the point cloud under the local coordinate system is converted into the world coordinate system, so as to obtain the second point cloud data. Exemplarily, the first point cloud data may be converted by formula (1) to obtain the second point cloud data.
P_i^g = T_i · P_i^l    (1)

Wherein P_i^g represents the point cloud data of the point cloud at time i in the world coordinate system, P_i^l represents the point cloud data of the point cloud at time i in the local coordinate system, and T_i represents the pose information of the autonomous vehicle at time i, where T_i = [T_0, T_1, ..., T_n] and n is the number of points in the point cloud at time i.
The first point cloud data in the local coordinate system can be converted into the second point cloud data in the world coordinate system by the above formula (1).
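By way of illustration only, the following Python sketch applies formula (1) to one scan; representing the pose as a single 4 × 4 homogeneous transform (rather than one transform per point) and the array layout are illustrative assumptions.

```python
import numpy as np

def to_world_frame(points_local, pose_T):
    """Apply formula (1), P_g = T * P_l, to every point of the current scan.

    points_local: (N, 4) array of [x, y, z, intensity] in the lidar (local) frame
    pose_T:       (4, 4) homogeneous transform of the predicted pose (local -> world)
    """
    xyz_local = points_local[:, :3]
    # Homogeneous coordinates so one matrix product applies rotation and translation.
    ones = np.ones((xyz_local.shape[0], 1))
    xyz_world = (pose_T @ np.hstack([xyz_local, ones]).T).T[:, :3]
    # Keep the echo reflection intensity value of each point untouched.
    return np.hstack([xyz_world, points_local[:, 3:4]])
```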
As an example, first historical point cloud data in a world coordinate system at a plurality of times within a specified time period before the current time may be stored. The specified time period may be set by a user according to actual needs, or may be set by a device, which is not limited in this embodiment of the present application.
Illustratively, the first historical point cloud data may be stored in the form of a buffer queue. The buffer queue may be Q = [F_0, F_1, F_2, ..., F_i], where F = (point_cloud, timestamp, pose): point_cloud is the first historical point cloud data at a certain time in the world coordinate system, timestamp is the timestamp of that time, and pose is the pose information of the autonomous vehicle at that time.
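By way of illustration only, the following Python sketch keeps such a buffer queue; the class name, the window parameter (corresponding to the "specified time period"), and the eviction policy are illustrative assumptions.

```python
from collections import deque

class HistoryBuffer:
    """Sketch of the buffer queue Q = [F_0, F_1, ...] with F = (point_cloud, timestamp, pose)."""

    def __init__(self, window: float):
        self.window = window      # length of the specified time period, in seconds (assumed)
        self.frames = deque()     # each element: (point_cloud, timestamp, pose)

    def push(self, point_cloud, timestamp, pose):
        self.frames.append((point_cloud, timestamp, pose))
        # Drop frames that fall outside the specified time period.
        while self.frames and timestamp - self.frames[0][1] > self.window:
            self.frames.popleft()

    def clouds(self):
        """Return the stored historical point clouds, oldest first."""
        return [frame[0] for frame in self.frames]
```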
As another example, historical point cloud data under the local coordinate system at multiple times within a specified time period before the current time may be stored, and then the historical point cloud data under the local coordinate system may be converted into first historical point cloud data under the world coordinate system according to formula (1) based on the historical point cloud data and its corresponding historical pose information.
As an example, after the second point cloud data and the first historical point cloud data are obtained, the second point cloud data and the first historical point cloud data may be superimposed, each point is projected onto a two-dimensional plane according to a three-dimensional coordinate of each point after the superimposition, the two-dimensional plane including each point after the superimposition is divided into a plurality of grids having the same size and shape, and the first grid map may be obtained.
Wherein, in the generated first grid map, each grid may include a plurality of points therein, and each grid corresponds to an echo reflection intensity average and an elevation average. The echo reflection intensity average value corresponding to a single grid may be obtained by summing and averaging echo reflection intensity values of all points included in the grid, and the elevation average value corresponding to a single grid may be obtained by summing and averaging elevation values of all points included in the grid.
As an example, the information corresponding to the grid in the first grid map may be referred to as a grid element, and the grid element may include a position coordinate of the grid in the first grid map, an echo reflected intensity average value corresponding to the grid, and an elevation average value corresponding to the grid.
Illustratively, the grid element set of the first grid map may be denoted as Localmap = [G_0, G_1, G_2, ..., G_l], where Localmap represents the first grid map and G = (x, y, mean_I, mean_z) may be used to represent a grid element: (x, y) is the position coordinate, in the first grid map, of the grid corresponding to the grid element, mean_I represents the echo reflection intensity mean value corresponding to the grid element, and mean_z represents the elevation mean value corresponding to the grid element.
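By way of illustration only, the following Python sketch builds such a set of grid elements from superimposed world-frame points; the grid_size parameter and the dictionary representation of the map are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def build_grid_map(points_world, grid_size):
    """Sketch of building grid elements G = (x, y, mean_I, mean_z).

    points_world: (N, 4) array of [x, y, z, intensity] in the world frame
                  (current scan plus historical point clouds, already superimposed)
    grid_size:    edge length of one grid cell (assumed parameter)
    Returns a dict mapping the grid position coordinate to (mean_I, mean_z).
    """
    sums = defaultdict(lambda: np.zeros(3))  # per grid: [sum_I, sum_z, point count]
    for x, y, z, intensity in points_world:
        # Project the point onto the two-dimensional plane and find its grid.
        key = (int(np.floor(x / grid_size)), int(np.floor(y / grid_size)))
        sums[key] += (intensity, z, 1.0)

    # Echo reflection intensity mean and elevation mean per grid.
    return {key: (s[0] / s[2], s[1] / s[2]) for key, s in sums.items()}
```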
Step 103: and determining first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the first grid map and the global off-line grid map.
As an example, the global offline grid map is a map that is generated in advance and generally does not change, and it can be used to describe the layout of the roads and the surrounding environment of a certain area. The global offline grid map may be generated by the method for generating the first grid map in step 102, or may be generated by other methods, which is not limited in this embodiment.
In some embodiments, a two-dimensional plane of an area in a world coordinate system may be divided into a plurality of grids, and each grid has the same shape and size. Referring to fig. 2, performing SLAM (Simultaneous Localization and Mapping, instant positioning and map building) optimization processing on all point clouds acquired by a laser radar to obtain pose information of the point clouds at each moment, converting the point clouds under a local coordinate system into a world coordinate system through a formula (1) according to the point cloud data and the pose information, and then projecting points in the area into a grid of a two-dimensional plane to obtain a global offline grid map. Each grid of the global offline grid map corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid.
As an example, information corresponding to a grid in the global offline grid map may be referred to as a grid element, and the grid element may include a position coordinate of the grid in the global offline grid map, an echo reflection intensity average value corresponding to the grid, and an elevation average value corresponding to the grid.
Illustratively, the grid element set of the global offline grid map may be denoted as base = [B_0, B_1, B_2, ..., B_l], where base represents the global offline grid map and B = (x, y, mean_I, mean_z) may be used to represent a grid element: (x, y) is the position coordinate, in the global offline grid map, of the grid corresponding to the grid element, mean_I represents the echo reflection intensity mean value corresponding to the grid element, and mean_z represents the elevation mean value corresponding to the grid element.
Because the first grid map and the global offline grid map are both relatively large maps and include a large number of grids, matching them directly requires a large amount of calculation and wastes device resources. Moreover, the global offline grid map covers an entire area, while the position of the autonomous vehicle at the current time usually falls within a small range; the parts of the global offline grid map outside this small range contribute little to locating the autonomous vehicle and may not be needed. Therefore, a part of the map can be acquired from the first grid map and from the global offline grid map respectively for matching.
In implementation, determining a specific implementation of the first matching probability distribution of the first grid map and the global offline grid map according to the elevation mean of the grids in the first grid map and the global offline grid map may include: in the first grid map, a second grid map of a first size is acquired with a position corresponding to the predicted pose information as a center, and in the global offline grid map, a third grid map of a second size is acquired with the position corresponding to the predicted pose information as a center. And determining a first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the second grid map and the third grid map.
The first size is smaller than the second size, and the first size and the second size may be set by a user according to actual needs or may be set by default by a device, which is not limited in the embodiment of the present application.
That is, referring to fig. 2, a second grid map may be acquired in the first grid map with the position corresponding to the predicted pose information as the center, and a third grid map may be acquired in the global offline grid map with the position corresponding to the predicted pose information as the center, where the size of the second grid map is smaller than that of the third grid map. And then determining a first matching probability of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the second grid map and the third grid map.
Because the position corresponding to the predicted pose information may not be very accurate, the third grid map obtained from the global offline grid map according to the predicted pose information is larger than the second grid map, so that even when the predicted pose information is far from the real position, the first matching probability distribution of the first grid map and the global offline grid map can still be determined accurately.
As an example, in the first grid map, a grid in which a position corresponding to the predicted pose information is located is determined, with the grid as a center and the first size as a radius, grid elements of all grids located within the radius are acquired, and the second grid map is generated from all grids and the grid elements of each grid. Illustratively, the first size may be 50, that is, the second grid map may include 101 × 101 grids.
Similarly, a third grid map may be obtained from the global offline grid map.
Illustratively, the second grid map may be a square map whose side length is 2 times the first size, or the second grid map may be a circular map whose radius is the first size. When the second grid map is square, the third grid map is a square map whose side length is 2 times the second size; when the second grid map is circular, the third grid map is a circular map whose radius is the second size. That is, the shape of the second grid map is the same as the shape of the third grid map.
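By way of illustration only, the following Python sketch cuts such a window out of a dense grid representation; it assumes the window lies entirely within the map and that the grid values are stored in a 2-D array.

```python
import numpy as np

def crop_window(grid_map, center_cell, half_size):
    """Cut a (2*half_size+1) x (2*half_size+1) window out of a dense grid map.

    grid_map:    2-D array of grid values (e.g. elevation means), indexed by grid cell
    center_cell: (row, col) of the grid containing the position of the predicted pose
    half_size:   the "first size" (second grid map) or "second size" (third grid map),
                 expressed in grid cells
    """
    r, c = center_cell
    return grid_map[r - half_size : r + half_size + 1,
                    c - half_size : c + half_size + 1]

# Example: with a first size of 50 the second grid map has 101 x 101 grids, and the
# third grid map uses a larger, assumed second size (e.g. 100 -> 201 x 201 grids).
```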
In some embodiments, after obtaining the second grid map and the third grid map, a first matching probability distribution of the first grid map and the global offline grid map may be determined according to an elevation mean value of grids in the second grid map and the third grid map, and the specific implementation may include:
and moving the second grid map on the third grid map by taking the designated offset as a moving step so as to traverse the third grid map. After each designated offset is moved, determining a first matching probability corresponding to a current moving position coordinate based on the elevation mean values of grids in the second grid map and the third grid map, wherein the moving position coordinate is used for indicating the displacement of a first designated point in the second grid map relative to a second designated point in the third grid map after the second grid map is moved at this time. And determining a first matching probability distribution based on all the mobile position coordinates determined in the traversal process and the first matching probabilities corresponding to all the mobile position coordinates.
Wherein the specified offset may be the same as the size of the grid, and the specified offset may represent a movement distance in the x-direction or a movement distance in the y-direction. In implementation, it may be determined whether to move in the x direction or the y direction each time according to actual situations, as long as the third grid map can be traversed.
The first designated point may be any point in the second grid map, and the second designated point may be a point in the third grid map that coincides with the first designated point when the vertex on the upper left corner of the second grid map coincides with the vertex on the upper left corner of the third grid map.
That is, the second grid map may be moved on the third grid map with a specified offset as a movement step, one movement position coordinate per movement, and a first matching probability corresponding to the movement position coordinate may be determined, and the first matching probability distribution may be determined based on the movement position coordinate determined per time and the first matching probability determined per time.
As an example, the top left corner vertex of the second grid map and the top left corner vertex of the third grid map may be coincident, the position coordinate of the coincident point may be determined to be (0,0), and the second grid map may be moved from the top left corner vertex of the third grid map, each time by a specified offset, to traverse the third grid map.
As an example, the top left vertex in the second grid map may be used as the first designated point, the top left vertex in the third grid map may be used as the second designated point, and the mobile position coordinate may be determined according to the mobile direction and the mobile distance each time the mobile designated offset is moved, and then the first matching probability corresponding to the mobile position coordinate at this time may be determined based on the elevation mean value of each grid in the second grid map and the elevation mean value of each grid in the third grid map corresponding to the second grid map in an overlapping manner.
For example, referring to fig. 3, it is assumed that the vertex at the top left corner of the second grid map is the first designated point, the vertex at the top left corner of the third grid map is the second designated point, the second grid map includes 5 × 5 grids, the third grid map includes 10 × 10 grids, and the designated offset is 1. After the second grid map is moved by 1 grid in the x direction for the first time, the displacement of the first designated point relative to the second designated point is 1 grid along the x direction, as shown in fig. 3, so the current moving position coordinate may be determined to be (1, 0).
Continuing with the above example, referring to fig. 4, after the second grid map is moved by another grid in the x direction, the displacement of the first designated point relative to the second designated point is 2 grids along the x direction, as shown in fig. 4, so the current moving position coordinate may be determined to be (2, 0). By analogy, one moving position coordinate can be determined after each movement.
Illustratively, the first matching probability corresponding to the single movement position coordinate may be determined according to the following formula (2).
R_Z(x, y) = Σ_{x'} Σ_{y'} [ T(x', y') − I(x + x', y + y') ]²    (2)

Wherein R_Z(x, y) is the first matching probability corresponding to the moving position coordinate (x, y), T(x', y') is the elevation mean value corresponding to the grid with position coordinate (x', y') in the second grid map, and I(x + x', y + y') is the elevation mean value corresponding to the grid with position coordinate (x + x', y + y') in the third grid map.
For example, assuming that the moving position coordinate is (0, 1), the second grid map includes 5 × 5 grids, and the third grid map includes 10 × 10 grids, the first matching probability R_Z(0, 1) corresponding to the moving position coordinate (0, 1) may be determined according to the elevation mean value corresponding to each grid in the second grid map and the elevation mean value corresponding to each grid, in the third grid map, that overlaps the second grid map.
The above formula (2) is a formula for performing template matching using the SSD (Sum of Squared Differences) algorithm. That is, the first matching probability of the first grid map and the global offline grid map may be determined using SSD template matching.
According to the formula (2), a first matching probability corresponding to the moving position coordinates of each movement in the traversal process can be determined, so that a plurality of first matching probabilities are determined, and a first matching probability distribution can be determined according to all the moving position coordinates and the first matching probabilities corresponding to all the moving position coordinates.
Further, after the first matching probabilities corresponding to all the mobile position coordinates are determined, normalization processing may be performed on the first matching probability corresponding to each mobile position coordinate through the following formula (3), so as to obtain the normalized first matching probability.
R'_Z(x, y) = 1.0 − (R_Z(x, y) − R_Zmin) / (R_Zmax − R_Zmin)    (3)

Wherein R'_Z(x, y) is the normalized first matching probability corresponding to the moving position coordinate (x, y), R_Zmax is the maximum first matching probability among the first matching probabilities corresponding to all the moving position coordinates, and R_Zmin is the minimum first matching probability among the first matching probabilities corresponding to all the moving position coordinates.
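By way of illustration only, the following Python sketch traverses the third grid map with the second grid map and evaluates formulas (2) and (3) at every moving position coordinate; dense 2-D arrays of elevation means and a step of one grid are illustrative assumptions.

```python
import numpy as np

def elevation_match_distribution(second_map_z, third_map_z):
    """Sketch of formulas (2) and (3).

    second_map_z: (h, w)  elevation means of the second grid map
    third_map_z:  (H, W)  elevation means of the third grid map, H >= h and W >= w
    Returns an (H-h+1, W-w+1) array; entry (y, x) is the normalized first matching
    probability R'_Z for moving position coordinate (x, y).
    """
    h, w = second_map_z.shape
    H, W = third_map_z.shape
    ssd = np.empty((H - h + 1, W - w + 1))
    for y in range(H - h + 1):            # traverse with a step of one grid
        for x in range(W - w + 1):
            diff = third_map_z[y:y + h, x:x + w] - second_map_z
            ssd[y, x] = np.sum(diff ** 2)          # formula (2)
    # Formula (3): a small SSD (good match) becomes a probability close to 1.
    return 1.0 - (ssd - ssd.min()) / (ssd.max() - ssd.min())
```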
Step 104: and determining second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean value of the grids of the lanes in the first grid map and the global off-line grid map.
Before this step is executed, the second grid map may have been obtained from the first grid map and the third grid map may have been obtained from the global offline grid map in step 103, where the size of the third grid map is larger than that of the second grid map.
Since the autonomous vehicle generally travels on a lane, in order to reduce the calculation amount of matching, grids including lanes may be determined in the second grid map and the third grid map, respectively.
As an example, referring to fig. 2, lane lines in the second grid map and the third grid map may be extracted through a lane line extraction model obtained by a deep learning method. Then, according to the lane lines, the grids located between two lane lines are determined as grids including lanes. Further, the grids including lanes may be marked, for example, labeled.
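By way of illustration only, the following Python sketch marks the grids lying between two already-extracted lane lines; the per-row representation of the lane lines is an illustrative assumption, and the lane line extraction model itself is not shown.

```python
import numpy as np

def mark_lane_grids(grid_shape, left_lane_cells, right_lane_cells):
    """Sketch of marking the grids located between two extracted lane lines.

    grid_shape:       (rows, cols) of the grid map
    left_lane_cells:  dict mapping row index -> column of the left lane line in that row
    right_lane_cells: dict mapping row index -> column of the right lane line in that row
    Returns a boolean mask; True means "grid including a lane".
    """
    mask = np.zeros(grid_shape, dtype=bool)
    for row in range(grid_shape[0]):
        if row in left_lane_cells and row in right_lane_cells:
            lo, hi = sorted((left_lane_cells[row], right_lane_cells[row]))
            mask[row, lo:hi + 1] = True   # grids between the two lane lines
    return mask
```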
In an implementation, the second matching probability of the first grid map and the global offline grid map may be determined according to echo reflection intensity means of grids including lanes in the second grid map and the third grid map. The specific implementation can include:
and moving the second grid map on the third grid map by taking the specified offset as a moving step to traverse the third grid map. After each specified offset is moved, determining a second matching probability corresponding to the current moving position coordinate based on the echo reflection intensity mean value of the grids including the lane in the second grid map and the third grid map, wherein the moving position coordinate is used for indicating the displacement of a first specified point in the second grid map relative to a second specified point in the third grid map after the second grid map is moved at this time. And determining second matching probability distribution based on all the mobile position coordinates determined in the traversal process and the second matching probabilities corresponding to all the mobile position coordinates.
That is, the second grid map may be moved on the third grid map with a specified offset as a moving step, one moving position coordinate per movement, a second matching probability corresponding to the moving position coordinate may be determined according to the echo reflection intensity average of the grids including lanes in the second grid map and the third grid map, and the second matching probability distribution may be determined based on the moving position coordinate determined per time and the second matching probability determined per time.
As an example, the vertex at the top left corner in the second grid map may be used as a first designated point, the vertex at the top left corner in the third grid map may be used as a second designated point, and each time the designated offset is moved, the mobile position coordinate may be determined according to the moving direction and the moving distance, and then the second matching probability corresponding to the mobile position coordinate at this time may be determined based on the echo reflection intensity average value of the grid including the lane in the second grid map and the echo reflection intensity average value of the grid including the lane in the third grid map that is overlapped with the second grid map.
As an example, the process of moving the second grid map on the third grid map is the same as that in step 103, and for the specific implementation, reference may be made to the related description of step 103.
As an example, after the specified offset amount is moved each time, when the second matching probability corresponding to the current movement position coordinate is determined based on the echo reflection intensity average values of the grids including the lanes in the second grid map and the third grid map, the determination may be performed by the following formula (4).
Wherein R_I(x, y) is the second matching probability corresponding to the moving position coordinate (x, y), T'(x', y') is the echo reflection intensity mean value corresponding to the grid including a lane with position coordinate (x', y') in the second grid map, and I'(x + x', y + y') is the echo reflection intensity mean value corresponding to the grid including a lane with position coordinate (x + x', y + y') in the third grid map.
In the above formula (4), a term is accumulated only when both the grid with position coordinate (x', y') in the second grid map and the corresponding grid with position coordinate (x + x', y + y') in the third grid map are grids including lanes; otherwise, the value of T'(x', y') · I'(x + x', y + y') is 0.
According to the formula (4), a second matching probability corresponding to the mobile position coordinates of each movement in the traversal process can be determined, so that a plurality of second matching probabilities are determined, and a second matching probability distribution can be determined according to all the mobile position coordinates and the second matching probabilities corresponding to all the mobile position coordinates.
Further, after the second matching probabilities corresponding to all the mobile position coordinates are determined, normalization processing may be performed on the second matching probability corresponding to each mobile position coordinate through the following formula (5), so as to obtain the normalized second matching probability.
R'_I(x, y) = 1.0 − (R_I(x, y) − R_Imin) / (R_Imax − R_Imin)    (5)

Wherein R'_I(x, y) is the normalized second matching probability corresponding to the moving position coordinate (x, y), R_Imax is the maximum second matching probability among the second matching probabilities corresponding to all the moving position coordinates, and R_Imin is the minimum second matching probability among the second matching probabilities corresponding to all the moving position coordinates.
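By way of illustration only, the following Python sketch evaluates a lane-restricted matching distribution and the formula (5) normalization. Since formula (4) itself is not reproduced above, the score used here is an assumed SSD-style score accumulated only over grid pairs in which both grids include a lane; the actual formula (4) may differ.

```python
import numpy as np

def lane_intensity_match_distribution(second_map_I, third_map_I,
                                      second_lane_mask, third_lane_mask):
    """Sketch of the lane-restricted intensity matching and formula (5) normalization.

    second_map_I / third_map_I:         echo reflection intensity means of the two maps
    second_lane_mask / third_lane_mask: boolean masks of grids including lanes
    """
    h, w = second_map_I.shape
    H, W = third_map_I.shape
    score = np.empty((H - h + 1, W - w + 1))
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            # Only grid pairs where both grids include a lane contribute to the score.
            both_lane = second_lane_mask & third_lane_mask[y:y + h, x:x + w]
            diff = (third_map_I[y:y + h, x:x + w] - second_map_I) * both_lane
            score[y, x] = np.sum(diff ** 2)
    # Formula (5): min-max normalize and invert, as for the elevation distribution.
    return 1.0 - (score - score.min()) / (score.max() - score.min())
```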
Step 105: and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the first matching probability distribution and the second matching probability distribution.
The first matching probability and the second matching probability are combined to determine the position information of the automatic driving vehicle in the global off-line map at the current moment, namely, the elevation value and the echo reflection intensity value of the point cloud data are combined to position the automatic driving vehicle, so that the positioning result is more accurate.
In an implementation, determining the specific implementation of the position information of the autonomous vehicle in the global offline grid map at the current moment based on the first matching probability distribution and the second matching probability distribution may include: and determining fusion matching probability distribution of the first grid map and the global off-line grid map according to the first matching probability distribution and the second matching probability distribution. And determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the fusion matching probability distribution and the predicted pose information.
That is, the first matching probability distribution and the second matching probability distribution may be combined according to a certain weight to determine a fused matching probability distribution of the first grid map and the global offline grid map.
In some embodiments, determining a fused matching probability distribution of the first grid map and the global offline grid map according to the first matching probability distribution and the second matching probability distribution may include the following steps:
(1) Determining the variances of the first matching probability distribution in the x direction and the y direction respectively to obtain a first variance and a second variance, and determining the variances of the second matching probability distribution in the x direction and the y direction respectively to obtain a third variance and a fourth variance.
As an example, the variance of the first matching probability distribution in the x-direction, i.e. the first variance, and the variance of the second matching probability distribution in the x-direction, i.e. the third variance, can be determined by equation (6).
Wherein, (x, y) is the mobile position coordinate; X is the value range of x in the mobile position coordinate, and its maximum value may be the difference between the number of grids in the x direction of the third grid map and the number of grids in the x direction of the second grid map plus one; Y is the value range of y in the mobile position coordinate, and its maximum value may be the difference between the number of grids in the y direction of the third grid map and the number of grids in the y direction of the second grid map plus one; x is the value in the x direction in the mobile position coordinate. When the above equation (6) is used to determine the first variance, R(x, y) is RZ(x, y); when the above equation (6) is used to determine the third variance, R(x, y) is RI(x, y). The mean value in the x direction used in equation (6) can be calculated by formula (7).
As an example, the variance of the first matching probability distribution in the y direction, i.e., the second variance, and the variance of the second matching probability distribution in the y direction, i.e., the fourth variance, may be determined by equation (8).
Wherein, (x, y) is the mobile position coordinate; X is the value range of x in the mobile position coordinate, and its maximum value may be the difference between the number of grids in the x direction of the third grid map and the number of grids in the x direction of the second grid map plus one; Y is the value range of y in the mobile position coordinate, and its maximum value may be the difference between the number of grids in the y direction of the third grid map and the number of grids in the y direction of the second grid map plus one; y is the value in the y direction in the mobile position coordinate. When the above equation (8) is used to determine the second variance, R(x, y) is RZ(x, y); when the above equation (8) is used to determine the fourth variance, R(x, y) is RI(x, y). The mean value in the y direction used in equation (8) can be calculated by formula (9).
From the above equations (6), (7), (8) and (9), the first variance, the second variance, the third variance and the fourth variance can be determined.
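Equations (6) through (9) are not reproduced in the text above. A plausible reading consistent with the surrounding definitions is a probability-weighted mean and variance of the offsets along each axis, sketched below; this interpretation, and the array representation, are assumptions.

```python
import numpy as np

def distribution_variances(R):
    """Hypothetical sketch of formulas (6)-(9): treat the matching probability
    distribution R over mobile position coordinates as weights and compute the
    weighted mean and variance of the offset along each axis.
    R : 2D array, R[x, y] = matching probability for mobile position (x, y)."""
    xs = np.arange(R.shape[0])
    ys = np.arange(R.shape[1])
    total = R.sum()
    mean_x = (R.sum(axis=1) * xs).sum() / total                  # analogue of formula (7)
    mean_y = (R.sum(axis=0) * ys).sum() / total                  # analogue of formula (9)
    var_x = (R.sum(axis=1) * (xs - mean_x) ** 2).sum() / total   # analogue of formula (6)
    var_y = (R.sum(axis=0) * (ys - mean_y) ** 2).sum() / total   # analogue of formula (8)
    return var_x, var_y
```

Under this reading, applying the routine to RZ would yield the first and second variances, and applying it to RI would yield the third and fourth variances.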
(2) Determining a first weight of the first matching probability distribution and a second weight of the second matching probability distribution according to the first variance, the second variance, the third variance and the fourth variance.
As an example, the second weight of the second match probability distribution may be determined according to equation (10).
Wherein, γ is the second weight, and the remaining symbols in equation (10) denote the first variance, the second variance, the third variance and the fourth variance, respectively.
The first weight of the first matching probability distribution is 1 - γ, where γ is the second weight.
(3) Determining the fusion matching probability distribution according to the first weight, the second weight, the first matching probability distribution and the second matching probability distribution.
As an example, the fused match probability may be determined according to equation (11).
Wherein, (x, y) is the mobile position coordinate, P(x, y) is the fusion matching probability corresponding to the mobile position coordinate, 1 - γ is the first weight, γ is the second weight, RZ(x, y) is the first matching probability corresponding to the mobile position coordinate, and RI(x, y) is the second matching probability corresponding to the mobile position coordinate.
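Equation (11) itself is not reproduced above; given these symbol definitions and the weighted combination described earlier, a plausible form (stated here as an assumption, not the patent's exact formula) is:

P(x, y) = (1 - γ)·RZ(x, y) + γ·RI(x, y)    (11)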
According to the three steps, the fusion matching probability distribution of the first grid map and the global off-line grid map can be determined.
In some embodiments, after determining the fusion matching probability distribution, the position information of the autonomous vehicle in the global offline grid map at the current time may be determined based on the fusion matching probability distribution and the predicted pose information, and specifically, the method may include the following steps:
(1) A plurality of target fusion matching probabilities are selected from the fusion matching probability distribution.
As an example, the specific implementation of selecting the multiple target fusion matching probabilities from the fusion matching probability distribution may include: determining the maximum fusion matching probability from the multiple fusion matching probabilities, and determining the fusion matching probabilities that are greater than N times the maximum fusion matching probability among the multiple fusion matching probabilities as the multiple target fusion matching probabilities.
Wherein N is a value greater than 0 and less than 1. For example, N may be 0.85.
Illustratively, the set of multiple target fusion matching probabilities may be represented by equation (12).
P=[P0(x0,y0),P1(x1,y1),P2(x2,y2),…,Pn(xn,yn)] (12)
Wherein, P0(x0, y0) is the maximum fusion matching probability, (x0, y0) is the mobile position coordinate corresponding to the maximum fusion matching probability, and P1(x1, y1), P2(x2, y2), …, Pn(xn, yn) are fusion matching probabilities greater than N times the maximum fusion matching probability.
Wherein, Pn(xn,yn)>N*P0(x0,y0)。
Illustratively, assuming that the plurality of fusion matching probabilities are 90, 80, 90, 30, 60, 75, 30, and 20, respectively, and N is 0.8, the maximum fusion matching probability can be determined to be 90 and N times the maximum fusion matching probability is 72, so the plurality of target fusion matching probabilities can be determined to be 90, 80, 90, and 75.
As another example, the specific implementation of selecting the multiple target fusion matching probabilities from the fusion matching probability distribution may further include: and acquiring fusion matching probability corresponding to each mobile position coordinate, and arranging the fusion matching probabilities corresponding to all the mobile position coordinates according to the sequence of the fusion matching probabilities from high to low to obtain a plurality of arranged fusion matching probabilities. And then determining the first M fusion matching probabilities from the arranged multiple fusion matching probabilities, and determining the first M fusion matching probabilities as multiple target fusion matching probabilities.
Wherein M is an integer greater than 1. For example, M may be 10.
Continuing the above example, assuming that M is 5, the ranked plurality of fusion matching probabilities is 90, 90, 80, 75, 60, 30, 30, 20, and the top M fusion matching probabilities are 90, 90, 80, 75, and 60, i.e., the plurality of target fusion matching probabilities is 90, 90, 80, 75, and 60.
It should be noted that both M and N may be set by a user according to actual needs, or may be set by default of a device, which is not limited in this embodiment of the application.
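Both selection strategies can be sketched briefly as follows, assuming the fusion matching probabilities are collected in a flat NumPy array; the names are illustrative and the example values are taken from the text above.

```python
import numpy as np

def select_targets_by_ratio(P, N=0.85):
    """First strategy: keep fusion matching probabilities greater than
    N times the maximum (0 < N < 1)."""
    return P[P > N * P.max()]

def select_targets_top_m(P, M=10):
    """Second strategy: keep the top-M fusion matching probabilities
    after sorting in descending order (M > 1)."""
    return np.sort(P)[::-1][:M]

# Example from the text: probabilities 90, 80, 90, 30, 60, 75, 30, 20
P = np.array([90, 80, 90, 30, 60, 75, 30, 20], dtype=float)
print(select_targets_by_ratio(P, N=0.8))  # [90. 80. 90. 75.]
print(select_targets_top_m(P, M=5))       # [90. 90. 80. 75. 60.]
```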
(2) Determining a standard deviation corresponding to each target fusion matching probability according to the fusion matching probability distribution.
When the multiple target fusion matching probabilities are relatively close in size, it is difficult to determine the target mobile position coordinate directly from the mobile position coordinates corresponding to these probabilities; the standard deviation around each candidate is therefore determined to measure how sharply the fusion matching probability distribution is peaked near it.
The target moving position coordinate is the deviation between the position corresponding to the predicted pose information and the position information of the autonomous vehicle in the global offline grid map.
As an example, the standard deviation of the single target fusion matching probability in the x direction can be calculated by formula (13).
Wherein, P(x, y) is the fusion matching probability corresponding to each mobile position coordinate within the area that is centered on the mobile position coordinate corresponding to the target fusion matching probability and has the first threshold as its radius; X' is the set of values in the x direction of all the mobile position coordinates within that area; Y' is the set of values in the y direction of all the mobile position coordinates within that area; and x0 is the value in the x direction of the mobile position coordinate corresponding to the target fusion matching probability.
The first threshold may be set by a user according to actual needs, or may be set by default by a device, which is not limited in the embodiment of the present application.
For example, assuming that the mobile position coordinate corresponding to the target fusion matching probability is (2, 2) and the first threshold is 1, it can be determined that x0 is 2 and that the (x, y) in P(x, y) are (1,2), (1,1), (3,2), (2,3), (3,1), (1,3), and the like. These moving position coordinates, the fusion matching probabilities corresponding to them, the moving position coordinate (2, 2), and the target fusion matching probability can be substituted into formula (13) to obtain the standard deviation in the x direction corresponding to the target fusion matching probability at the moving position coordinate (2, 2).
The standard deviation of the single target fusion matching probability in the y direction can be calculated by formula (14).
Wherein, P(x, y) is the fusion matching probability corresponding to each mobile position coordinate within the area that is centered on the mobile position coordinate corresponding to the target fusion matching probability and has the first threshold as its radius; X' is the set of values in the x direction of all the mobile position coordinates within that area; Y' is the set of values in the y direction of all the mobile position coordinates within that area; and y0 is the value in the y direction of the mobile position coordinate corresponding to the target fusion matching probability.
According to the above formula (13) and formula (14), the standard deviation of each target fusion matching probability in the x direction and the y direction can be determined.
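Formulas (13) and (14) are not reproduced above. A common choice consistent with the surrounding definitions is a probability-weighted standard deviation of the offsets within the neighborhood, sketched below; the exact formula, the circular neighborhood, and the array layout are assumptions.

```python
import numpy as np

def neighborhood_std(P, x0, y0, radius):
    """Hypothetical sketch of formulas (13)/(14): probability-weighted standard
    deviation of the x and y offsets within the given radius around the mobile
    position coordinate (x0, y0) of a target fusion matching probability.
    P : 2D array, P[x, y] = fusion matching probability for mobile position (x, y)."""
    xs, ys = np.meshgrid(np.arange(P.shape[0]), np.arange(P.shape[1]), indexing="ij")
    in_area = (xs - x0) ** 2 + (ys - y0) ** 2 <= radius ** 2
    w = P[in_area]
    dx = xs[in_area] - x0
    dy = ys[in_area] - y0
    sigma_x = np.sqrt((w * dx ** 2).sum() / w.sum())  # analogue of formula (13)
    sigma_y = np.sqrt((w * dy ** 2).sum() / w.sum())  # analogue of formula (14)
    return sigma_x, sigma_y
```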
(3) Determining a matching value corresponding to each target fusion matching probability according to each target fusion matching probability and the standard deviation corresponding to each target fusion matching probability.
When the target fusion matching probability corresponding to a mobile position coordinate is high and the fusion matching probabilities corresponding to the other mobile position coordinates within the area centered on that coordinate with the first threshold as its radius are low, the standard deviation corresponding to that mobile position coordinate is low. When the target fusion matching probability corresponding to a mobile position coordinate is high and the fusion matching probabilities corresponding to the other mobile position coordinates within that area are also high, the standard deviation corresponding to that mobile position coordinate is high. In this step, the matching value of the target fusion matching probability in the first case (a high probability surrounded by low ones) may be increased, so that the subsequent determination of the target mobile position coordinate is more accurate.
For example, assume that the target fusion matching probability corresponding to the mobile position coordinate (x, y) is 99 and the fusion matching probabilities corresponding to the other mobile position coordinates within the area centered on (x, y) with the first threshold as its radius are 99, 98, 99, 96, and 99, while the target fusion matching probability corresponding to the mobile position coordinate (x', y') is 90 and the fusion matching probabilities corresponding to the other mobile position coordinates within the area centered on (x', y') with the first threshold as its radius are 60, 30, 20, and 30. Although the target fusion matching probability corresponding to (x, y) is greater than that corresponding to (x', y'), the target fusion matching probability corresponding to (x', y') is much greater than the fusion matching probabilities in its vicinity, so the matching value of the target fusion matching probability corresponding to (x', y') can be increased, making the subsequently determined target mobile position coordinate more accurate.
As an example, the matching value corresponding to the single target fusion matching probability can be determined by formula (15).
P'(x, y) = P(x, y)·(σx·σy)^β    (15)
Wherein, P'(x, y) is the matching value corresponding to the target fusion matching probability P(x, y), σx is the standard deviation of the target fusion matching probability P(x, y) in the x direction, σy is the standard deviation of the target fusion matching probability P(x, y) in the y direction, and β is a parameter that can be adjusted according to actual needs, with β < 0.
When P (x, y) is different target fusion matching probabilities, a matching value corresponding to each target fusion matching probability may be determined.
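A minimal sketch of formula (15); since β < 0, a candidate with small standard deviations (a sharp local peak) receives a larger matching value. The default value of β below is an illustrative assumption.

```python
def matching_value(P_xy, sigma_x, sigma_y, beta=-1.0):
    """Formula (15): P'(x, y) = P(x, y) * (sigma_x * sigma_y) ** beta, with beta < 0."""
    return P_xy * (sigma_x * sigma_y) ** beta
```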
(4) Determining the position information of the autonomous vehicle in the global offline grid map at the current moment according to the matching value corresponding to each target fusion matching probability and the predicted pose information.
Because the predicted pose information is predicted and may be inaccurate, it is necessary to determine a deviation between a position corresponding to the predicted pose information and a position corresponding to the position information of the autonomous vehicle, that is, to determine a target moving position coordinate, and then determine the position information of the autonomous vehicle in the global offline grid map at the current time according to the target moving position coordinate and the predicted pose information.
As an example, the target movement position coordinates may be determined according to equation (16).
Wherein, the two quantities determined by equation (16) are the value of x and the value of y in the target mobile position coordinate; P'(x, y) is the matching value of the target fusion matching probability corresponding to the mobile position coordinate (x, y); XP is the set of x values of the multiple target fusion matching probabilities in equation (12); and YP is the set of y values of the multiple target fusion matching probabilities in equation (12).
As an example, after the target moving position coordinate is determined, the position coordinate of the grid where the position corresponding to the predicted pose information is located may be determined in the global offline grid map. The determined grid position coordinate and the target moving position coordinate may be added to obtain the position coordinate of the grid where the autonomous vehicle is located in the global offline grid map, and that grid position coordinate may then be converted, according to the conversion relationship between the position coordinate of a grid and the plane coordinate of the midpoint of the grid, to obtain the position information of the autonomous vehicle in the global offline grid map.
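A minimal sketch of this last step, assuming a uniform grid whose resolution and origin are known; the resolution, the origin, and the function names are illustrative assumptions.

```python
def grid_to_plane(grid_x, grid_y, resolution=0.2, origin=(0.0, 0.0)):
    """Convert a grid position coordinate to the plane coordinate of the grid
    midpoint, assuming a uniform grid with the given resolution and origin."""
    return (origin[0] + (grid_x + 0.5) * resolution,
            origin[1] + (grid_y + 0.5) * resolution)

def locate_vehicle(predicted_grid_xy, target_offset_xy, resolution=0.2):
    """Add the target moving position coordinate to the grid coordinate of the
    predicted pose, then convert the result to plane coordinates."""
    gx = predicted_grid_xy[0] + target_offset_xy[0]
    gy = predicted_grid_xy[1] + target_offset_xy[1]
    return grid_to_plane(gx, gy, resolution)
```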
Further, the autonomous vehicle may be equipped with a wheel speed meter that may be used to estimate the pose information of the autonomous vehicle. Referring to fig. 2, the autonomous driving vehicle receives wheel speed meter data including pose information sent by a wheel speed meter, and the wheel speed meter data, IMU data, and the determined position information of the autonomous driving vehicle in the global offline grid map may be input into a multi-sensor fusion filter for fusion, so as to obtain more accurate pose information of the autonomous driving vehicle in the global offline grid map, i.e., obtain high-precision pose information of the autonomous driving vehicle.
In the embodiment of the application, the position and pose information of the automatic driving vehicle at the current moment is predicted to obtain the predicted position and pose information. And generating a first grid map comprising a plurality of grids according to the predicted pose information, wherein each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid. And determining a first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the first grid map and the global off-line grid map. And determining second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean value of the grids of the lanes in the first grid map and the global off-line grid map. And determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the first matching probability distribution and the second matching probability distribution. That is to say, the position information of the automatic driving vehicle in the global offline grid map at the current moment is determined according to the elevation average value of the grid and the echo reflection intensity average value of the grid, and compared with the method for positioning the automatic driving vehicle by using a single elevation average value, the method reduces the probability of inaccurate positioning and improves the accuracy rate of positioning the automatic driving vehicle.
Fig. 5 is a schematic diagram illustrating a structure of a positioning apparatus according to an exemplary embodiment, which may be implemented by software, hardware or a combination of the two as part or all of a device. Referring to fig. 5, the apparatus includes: a prediction module 501, a generation module 502, a first determination module 503, a second determination module 504 and a third determination module 505.
The prediction module 501 is configured to predict pose information of the autonomous vehicle at the current time to obtain predicted pose information;
a generating module 502, configured to generate a first grid map according to the predicted pose information, where the first grid map includes multiple grids, each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid;
a first determining module 503, configured to determine a first matching probability distribution of the first grid map and the global offline grid map according to an elevation mean of grids in the first grid map and the global offline grid map;
a second determining module 504, configured to determine a second matching probability distribution of the first grid map and the global offline grid map according to an echo reflection intensity average of grids including lanes in the first grid map and the global offline grid map;
And a third determining module 505, configured to determine, based on the first matching probability distribution and the second matching probability distribution, position information of the autonomous vehicle in the global offline grid map at the current moment.
In one possible implementation manner of the present application, the prediction module 501 is configured to:
predicting the pose information of the automatic driving vehicle at the current moment according to the speed of the automatic driving vehicle, IMU data, historical pose information of the automatic driving vehicle at the previous moment and the time difference between the current moment and the previous moment, wherein the historical pose information comprises historical position information and historical posture information.
In one possible implementation manner of the present application, the generating module 502 is configured to:
acquiring first point cloud data at the current moment, wherein the first point cloud data at least comprise detected echo reflection intensity values and elevation values of all points;
converting first point clouds corresponding to the first point cloud data into a world coordinate system based on the first point cloud data and the predicted pose information to obtain second point cloud data;
and generating a first raster map based on the second point cloud data and first historical point cloud data in a specified time period before the current time, wherein the first historical point cloud data is historical point cloud data in a world coordinate system.
In one possible implementation manner of the present application, the first determining module 503 is configured to:
acquiring a second grid map with a first size by taking a position corresponding to the predicted pose information as a center in the first grid map, and acquiring a third grid map with a second size by taking a position corresponding to the predicted pose information as a center in the global off-line grid map, wherein the first size is smaller than the second size;
and determining first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the second grid map and the third grid map.
In one possible implementation manner of the present application, the first determining module 503 is configured to:
moving the second grid map on a third grid map by taking the designated offset as a moving step length so as to traverse the third grid map;
after the designated offset is moved every time, determining a first matching probability corresponding to a current moving position coordinate based on the elevation mean values of grids in the second grid map and the third grid map, wherein the moving position coordinate is used for indicating the displacement of a first designated point in the second grid map relative to a second designated point in the third grid map after the second grid map is moved this time;
And determining first matching probability distribution based on all the mobile position coordinates determined in the traversal process and the first matching probabilities corresponding to all the mobile position coordinates.
In one possible implementation manner of the present application, the second determining module 504 is further configured to:
respectively determining grids comprising lanes in the second grid map and the third grid map;
moving the second grid map on the third grid map by taking the designated offset as a moving step length so as to traverse the third grid map;
after each movement by the designated offset, determining a second matching probability corresponding to a current movement position coordinate based on the echo reflection intensity mean values of grids including lanes in the second grid map and the third grid map, wherein the movement position coordinate is used for indicating the displacement of a first designated point in the second grid map relative to a second designated point in the third grid map after the current movement of the second grid map;
and determining second matching probability distribution based on all the mobile position coordinates determined in the traversal process and the second matching probabilities corresponding to all the mobile position coordinates.
In one possible implementation manner of the present application, the third determining module 505 is configured to:
determining fusion matching probability distribution of the first grid map and the global off-line grid map according to the first matching probability distribution and the second matching probability distribution;
And determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the fusion matching probability distribution and the predicted pose information.
In one possible implementation manner of the present application, the third determining module 505 is configured to:
determining variances of the first matching probability distribution in the x direction and the y direction respectively to obtain a first variance and a second variance; determining the variances of the second matching probability distribution in the x direction and the y direction respectively to obtain a third variance and a fourth variance;
determining a first weight of the first matching probability distribution and a second weight of the second matching probability distribution according to the first variance, the second variance, the third variance and the fourth variance;
and determining fusion matching probability distribution according to the first weight, the second weight, the first matching probability distribution and the second matching probability distribution.
In one possible implementation manner of the present application, the third determining module 505 is configured to:
selecting a plurality of target fusion matching probabilities from the fusion matching probability distribution;
determining a standard deviation corresponding to each target fusion matching probability according to the fusion matching probability distribution;
determining a matching value corresponding to each target fusion matching probability according to each target fusion matching probability and a standard deviation corresponding to each target fusion matching probability;
And determining the position information of the automatic driving vehicle in the global offline grid map at the current moment according to the matching value corresponding to each target fusion matching probability and the predicted pose information.
In one possible implementation manner of the present application, the third determining module 505 is configured to:
determining a maximum fusion matching probability from the plurality of fusion matching probabilities;
and determining the fusion matching probabilities that are greater than N times the maximum fusion matching probability among the fusion matching probabilities as the plurality of target fusion matching probabilities, wherein N is a value greater than 0 and less than 1.
In the embodiment of the application, the pose information of the automatic driving vehicle at the current moment is predicted to obtain the predicted pose information. And generating a first grid map comprising a plurality of grids according to the predicted pose information, wherein each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid. And determining a first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the first grid map and the global off-line grid map. And determining second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean value of the grids of the lanes in the first grid map and the global off-line grid map. And determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the first matching probability distribution and the second matching probability distribution. That is to say, the position information of the automatic driving vehicle in the global offline grid map at the current moment is determined according to the elevation average value of the grid and the echo reflection intensity average value of the grid, and compared with the method for positioning the automatic driving vehicle by using a single elevation average value, the method reduces the probability of inaccurate positioning and improves the accuracy rate of positioning the automatic driving vehicle.
It should be noted that: in the positioning device provided in the above embodiment, only the division of the above functional modules is used for illustration in positioning, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the above described functions. In addition, the positioning apparatus and the positioning method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments and are not described herein again.
Fig. 6 is a block diagram illustrating the structure of a device 600 according to an example embodiment. The device 600 may be a portable mobile terminal such as: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Device 600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
In general, the apparatus 600 includes: a processor 601 and a memory 602.
Processor 601 may include one or more processing cores, such as 4-core processors, 8-core processors, and so forth. The processor 601 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 601 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 602 is used to store at least one instruction for execution by the processor 601 to implement the positioning method provided by the method embodiments of the present application.
In some embodiments, the apparatus 600 may further optionally include: a peripheral interface 603 and at least one peripheral. The processor 601, memory 602, and peripheral interface 603 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 603 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 604, a touch screen display 605, a camera 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral interface 603 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 601 and the memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 604 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 604 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 605 is a touch display screen, the display screen 605 also has the ability to capture touch signals on or over the surface of the display screen 605. The touch signal may be input to the processor 601 as a control signal for processing. At this point, the display 605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 605 may be one, providing the front panel of the device 600; in other embodiments, the display 605 may be at least two, respectively disposed on different surfaces of the device 600 or in a folded design; in still other embodiments, the display 605 may be a flexible display disposed on a curved surface or on a folded surface of the device 600. Even more, the display 605 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 605 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 606 is used to capture images or video. Optionally, camera assembly 606 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 606 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 607 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 601 for processing or inputting the electric signals to the radio frequency circuit 604 to realize voice communication. The microphones may be provided in a plurality, respectively at different locations of the device 600 for stereo sound acquisition or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 607 may also include a headphone jack.
The positioning component 608 is used to locate the current geographic location of the device 600 for navigation or LBS (Location Based Service). The positioning component 608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou System of China, or the GALILEO System of the European Union.
A power supply 609 is used to provide power to the various components in the device 600. The power supply 609 may be ac, dc, disposable or rechargeable. When the power supply 609 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the device 600 also includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyro sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.
The acceleration sensor 611 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the apparatus 600. For example, the acceleration sensor 611 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 601 may control the touch screen display 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 612 may detect a body direction and a rotation angle of the device 600, and the gyro sensor 612 may acquire a 3D motion of the user on the device 600 in cooperation with the acceleration sensor 611. The processor 601 may implement the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization while shooting, game control, and inertial navigation.
The pressure sensors 613 can be disposed on the side bezel of the device 600 and/or underneath the touch display screen 605. When the pressure sensor 613 is disposed on the side frame of the device 600, the holding signal of the user to the device 600 can be detected, and the processor 601 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 613. When the pressure sensor 613 is arranged at the lower layer of the touch display screen 605, the processor 601 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 605. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 614 is used for collecting a fingerprint of a user, and the processor 601 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 601 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 614 may be disposed on the front, back, or side of the device 600. When a physical key or vendor Logo is provided on the device 600, the fingerprint sensor 614 may be integrated with the physical key or vendor Logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, processor 601 may control the display brightness of touch display 605 based on the ambient light intensity collected by optical sensor 615. Specifically, when the ambient light intensity is higher, the display brightness of the touch display screen 605 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 605 is turned down. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
Proximity sensor 616, also known as a distance sensor, is typically disposed on the front panel of device 600. The proximity sensor 616 is used to capture the distance between the user and the front of the device 600. In one embodiment, the processor 601 controls the touch display 605 to switch from the bright screen state to the dark screen state when the proximity sensor 616 detects that the distance between the user and the front surface of the device 600 is gradually decreased; when the proximity sensor 616 detects that the distance between the user and the front of the device 600 is gradually increasing, the touch display screen 605 is controlled by the processor 601 to switch from the breath screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 6 does not constitute a limitation of the device 600, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
In some embodiments, a computer-readable storage medium is also provided, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the positioning method in the above embodiments. For example, the computer readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is noted that the computer-readable storage medium referred to herein may be a non-volatile storage medium, in other words, a non-transitory storage medium.
It should be understood that all or part of the steps for implementing the above embodiments may be implemented by software, hardware, firmware or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The computer instructions may be stored in the computer-readable storage medium described above.
That is, in some embodiments, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the positioning method described above.
The above-mentioned embodiments are provided by way of example and should not be construed as limiting the present application, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present application should be included in the protection scope of the present application.