CN111784798A - Map generation method and device, electronic equipment and storage medium - Google Patents
Map generation method and device, electronic equipment and storage medium
- Publication number: CN111784798A
- Application number: CN202010622443.XA
- Authority
- CN
- China
- Prior art keywords
- road image
- position information
- reference road
- dimensional position
- feature point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/20—Drawing from basic elements, e.g. lines or circles
- G06T11/206—Drawing of charts or graphs
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Traffic Control Systems (AREA)
Abstract
The embodiment of the application provides a map generation method and device, a computer device, and a storage medium, wherein the method comprises the following steps: determining at least one feature point pair based on attribute information of feature points included in a base road image and attribute information of feature points included in a reference road image; for each reference road image, determining first three-dimensional position information of a second feature point in the reference road image according to three-dimensional position information of a first feature point in the base road image in a virtual three-dimensional space, two-dimensional position information of the first feature point in the base road image, and two-dimensional position information of the second feature point in the reference road image; determining second three-dimensional position information of ground traffic elements in the virtual three-dimensional space from the base road image and each reference road image respectively; and generating a map for the target road based on the determined first three-dimensional position information and second three-dimensional position information.
Description
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a map generation method and apparatus, an electronic device, and a storage medium.
Background
When a map is constructed, a dedicated collection vehicle is generally used: the vehicle is equipped with a laser radar, a positioning module, and the like, the pose of the camera and the accurate distance from objects to the camera are acquired, and a three-dimensional map scene is constructed. Alternatively, a multi-view camera is used: the depth of field, i.e., the distance from an object to the camera, is obtained through calibration, the pose of the camera is solved, and the three-dimensional map scene is constructed.
When the map is constructed by these methods, the cost of acquiring data and maintaining the map is high, so large-scale application is difficult. If a monocular camera is instead used to reconstruct the three-dimensional scene, reconstruction is affected by factors such as camera imaging quality and positioning accuracy, so the finally reconstructed scene suffers from scale drift or the reconstructed map has low accuracy.
Disclosure of Invention
In view of the above, an object of the embodiments of the present application is to provide a method and an apparatus for generating map data, an electronic device, and a storage medium, so as to improve the accuracy of the obtained map data.
In a first aspect, an embodiment of the present application provides a map generation method, where the method includes:
determining at least one feature point pair based on attribute information of feature points included in a base road image and attribute information of feature points included in a reference road image; the two feature points in the same feature point pair belong to the base road image and the reference road image respectively; the base road image and the reference road image are obtained by shooting a target road;
for each reference road image, determining a pose variation of the reference road image relative to the base road image according to three-dimensional position information of a first feature point in the base road image in a virtual three-dimensional space, two-dimensional position information of the first feature point in the base road image, and two-dimensional position information of a second feature point in the reference road image; the first feature point and the second feature point belong to the same feature point pair;
for each reference road image, determining first three-dimensional position information of the second feature point in the reference road image according to the pose variation of the reference road image, the pose of the base road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the base road image;
determining second three-dimensional position information of ground traffic elements in the virtual three-dimensional space from the base road image and each reference road image respectively;
generating a map for the target road based on the determined first three-dimensional position information and the second three-dimensional position information.
In one embodiment, the base road image and the reference road images are determined according to the following steps:
acquiring historical driving track data of a vehicle driving in the target road;
selecting a target collection vehicle with a running speed falling within a preset speed range from a plurality of collection vehicles based on the historical running track data;
and acquiring the base road image and the reference road images shot of the target road while the target collection vehicle travels.
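The speed-based screening above can be sketched as a simple filter; the function name, data layout, and the 30-60 km/h defaults are illustrative assumptions rather than part of the disclosure:

```python
def select_target_vehicles(avg_speeds_kmh, v_min=30.0, v_max=60.0):
    """Keep collection vehicles whose average speed on the target road
    falls inside the preset range [v_min, v_max] in km/h."""
    return [vid for vid, v in sorted(avg_speeds_kmh.items()) if v_min <= v <= v_max]

# A vehicle crawling in congestion (25 km/h) or driving too fast (72 km/h)
# is filtered out; only the 45 km/h vehicle is kept as a target vehicle.
print(select_target_vehicles({"veh_a": 25.0, "veh_b": 45.0, "veh_c": 72.0}))
```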
In one embodiment, determining at least one feature point pair based on the attribute information of the feature points included in the base road image and the attribute information of the feature points included in the reference road image includes:
determining similarities between the feature points in the base road image and the feature points in the reference road image based on the attribute information of the feature points included in the base road image and the attribute information of the feature points included in the reference road image;
and determining at least one feature point pair based on the similarities and a preset threshold.
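A minimal sketch of the similarity-and-threshold matching described above, assuming cosine similarity over per-point descriptor vectors; the metric and the greedy one-to-one strategy are illustrative choices, not prescribed by the embodiment:

```python
import math

def cos_sim(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def match_feature_points(desc_base, desc_ref, threshold=0.9):
    """Greedily pair base/reference descriptors whose similarity exceeds
    the preset threshold; each reference point is used at most once."""
    pairs, used = [], set()
    for i, da in enumerate(desc_base):
        best_j, best_s = None, threshold
        for j, db in enumerate(desc_ref):
            if j in used:
                continue
            s = cos_sim(da, db)
            if s > best_s:
                best_j, best_s = j, s
        if best_j is not None:
            pairs.append((i, best_j))
            used.add(best_j)
    return pairs
```

Each returned pair `(i, j)` is one candidate feature point pair spanning the two images.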
In one embodiment, determining at least one feature point pair includes:
extracting two-dimensional position information of the feature points and two-dimensional position information of the traffic elements included in each road image;
for each similarity, if the similarity is greater than the preset threshold, judging whether the two feature points corresponding to the similarity belong to the same traffic element according to the two-dimensional position information of the two feature points and the two-dimensional position information of the traffic elements;
and if the two feature points corresponding to the similarity belong to the same traffic element, determining the two feature points corresponding to the similarity as a feature point pair.
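The same-traffic-element check can be sketched as a bounding-box membership test. Representing each traffic element by an axis-aligned box and assuming element correspondence across the two images via a shared id are simplifying assumptions for illustration only:

```python
def in_box(pt, box):
    """True if 2D point pt = (x, y) lies inside box = (x0, y0, x1, y1)."""
    (x, y), (x0, y0, x1, y1) = pt, box
    return x0 <= x <= x1 and y0 <= y <= y1

def same_element(pt_base, pt_ref, elems_base, elems_ref):
    """elems_*: dict mapping element id -> bounding box in that image.
    A matched pair is kept only if both points fall inside boxes of the
    same element id in their respective images."""
    for eid, box_b in elems_base.items():
        box_r = elems_ref.get(eid)
        if box_r and in_box(pt_base, box_b) and in_box(pt_ref, box_r):
            return True
    return False
```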
In one embodiment, determining the pose variation of the reference road image relative to the base road image based on the three-dimensional position information of the first feature point in the base road image in the virtual three-dimensional space, the two-dimensional position information of the first feature point in the base road image, and the two-dimensional position information of the second feature point in the reference road image includes:
determining target feature point pairs belonging to the same ground traffic element from the at least one feature point pair based on the two-dimensional position information of the ground traffic elements included in the base road image, the two-dimensional position information of the ground traffic elements included in the reference road image, and the two-dimensional position information of the feature points;
determining an initial pose variation of the reference road image relative to the base road image based on the two-dimensional position information of the second feature point included in the target feature point pair, the two-dimensional position information and the three-dimensional position information of the first feature point included in the target feature point pair, and a preset pose estimation algorithm;
and calibrating the initial pose variation based on the three-dimensional position information of the first feature point in the base road image in the virtual three-dimensional space, the two-dimensional position information of the first feature point in the base road image, and the two-dimensional position information of the second feature point in the reference road image, and taking the calibrated pose variation as the pose variation of the reference road image relative to the base road image.
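One common way to calibrate an initial pose is to minimize reprojection error: the pixel distance between observed feature points and known 3D points projected under the candidate pose. A sketch of that residual under a simple pinhole camera model follows; the embodiment does not fix a particular camera model or optimizer, so this is an illustrative assumption:

```python
import math

def project(pt3d, pose, fx, fy, cx, cy):
    """Pinhole projection of a 3D point under pose (R, t): X_cam = R X + t."""
    R, t = pose
    X = [sum(R[i][k] * pt3d[k] for k in range(3)) + t[i] for i in range(3)]
    return (fx * X[0] / X[2] + cx, fy * X[1] / X[2] + cy)

def mean_reproj_error(pts3d, pts2d, pose, fx, fy, cx, cy):
    """Average pixel distance between observed 2D points and reprojected
    3D points; a calibration step would adjust `pose` to drive this down."""
    errs = [math.dist(project(P, pose, fx, fy, cx, cy), uv)
            for P, uv in zip(pts3d, pts2d)]
    return sum(errs) / len(errs)
```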
In one embodiment, for each reference road image, determining the first three-dimensional position information of the second feature point in the reference road image according to the pose variation of the reference road image, the pose of the base road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the base road image includes:
performing triangulation on the pose variation of the reference road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the base road image by using a triangulation algorithm, to obtain the first three-dimensional position information of the second feature point in the reference road image.
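Triangulation recovers a 3D point from two 2D observations once the relative pose between the images is known. The embodiment does not specify the triangulation algorithm; the midpoint method on two viewing rays is one common choice, sketched here:

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: find the point halfway between the closest
    points of the two viewing rays c_i + s * d_i, by solving the 2x2
    normal equations for the ray parameters s and t."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    r = [x - y for x, y in zip(c2, c1)]           # baseline between cameras
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(d1, r), dot(d2, r)
    denom = a * c - b * b                          # ~0 for parallel rays
    s = (c * e - b * f) / denom
    t = (b * e - a * f) / denom
    p1 = [c1[i] + s * d1[i] for i in range(3)]     # closest point on ray 1
    p2 = [c2[i] + t * d2[i] for i in range(3)]     # closest point on ray 2
    return [(p1[i] + p2[i]) / 2 for i in range(3)]
```

In practice the rays are obtained by back-projecting the matched pixels through each camera's intrinsics and the estimated relative pose.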
In one embodiment, determining the second three-dimensional position information of the ground traffic elements in the virtual three-dimensional space from the base road image and each reference road image respectively includes:
converting the two-dimensional position information of the ground traffic elements included in the base road image based on the intrinsic and extrinsic parameters of the camera that shot the base road image, to obtain the second three-dimensional position information of the ground traffic elements included in the base road image;
and converting the two-dimensional position information of the ground traffic elements included in each reference road image based on the intrinsic and extrinsic parameters of the camera that shot the reference road image, to obtain the second three-dimensional position information of the ground traffic elements included in the reference road image.
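Converting a ground traffic element's 2D pixel position into 3D using the camera parameters can be sketched with a flat-ground assumption: the pixel's viewing ray is intersected with the ground plane lying at the camera's mounting height. The level-camera geometry (x right, y down, z forward) and the parameter names are illustrative assumptions:

```python
def ground_pixel_to_3d(u, v, fx, fy, cx, cy, cam_height):
    """Back-project a ground pixel into camera coordinates, assuming a
    level camera at `cam_height` metres above a flat ground plane."""
    x = (u - cx) / fx          # normalized image coordinates
    y = (v - cy) / fy
    if y <= 0:
        raise ValueError("pixel is at or above the horizon")
    depth = cam_height / y     # intersect the viewing ray with the ground
    return (x * depth, cam_height, depth)
```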
In one embodiment, generating a map for the target road based on the determined first three-dimensional position information and the second three-dimensional position information includes:
determining a first type identifier corresponding to the first three-dimensional position information and a second type identifier corresponding to the second three-dimensional position information respectively;
segmenting the first three-dimensional position information based on the first three-dimensional position information corresponding to each first type identifier and a first plane to which the traffic element under each first type identifier belongs, to obtain first three-dimensional position information corresponding to the traffic element under each first type identifier;
segmenting the second three-dimensional position information based on the second three-dimensional position information corresponding to each second type identifier and a second plane to which the traffic element under each second type identifier belongs, to obtain second three-dimensional position information corresponding to the traffic element under each second type identifier;
and vectorizing the first three-dimensional position information corresponding to the traffic element under each first type identifier and the second three-dimensional position information corresponding to the traffic element under each second type identifier, to obtain the map of the target road.
In one embodiment, segmenting the first three-dimensional position information based on the first three-dimensional position information corresponding to each first type identifier and the first plane to which the traffic element under each first type identifier belongs, to obtain the first three-dimensional position information corresponding to the traffic element under each first type identifier, includes:
respectively calculating first distances from the first three-dimensional position information to each first plane corresponding to each first type identifier;
and segmenting the first three-dimensional position information based on the first distances and a preset threshold, to obtain the first three-dimensional position information corresponding to the traffic element under each first type identifier.
Segmenting the second three-dimensional position information based on the second three-dimensional position information corresponding to each second type identifier and the second plane to which the traffic element under each second type identifier belongs, to obtain the second three-dimensional position information corresponding to the traffic element under each second type identifier, includes:
respectively calculating second distances from the second three-dimensional position information to each second plane corresponding to each second type identifier;
and segmenting the second three-dimensional position information based on the second distances and the preset threshold, to obtain the second three-dimensional position information corresponding to the traffic element under each second type identifier.
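The distance-and-threshold segmentation above can be sketched as a point-to-plane test, with the plane given in the form ax + by + cz + d = 0 (the representation is an illustrative choice):

```python
def point_plane_distance(p, plane):
    """Distance from 3D point p to plane (a, b, c, d): a x + b y + c z + d = 0."""
    a, b, c, d = plane
    norm = (a * a + b * b + c * c) ** 0.5
    return abs(a * p[0] + b * p[1] + c * p[2] + d) / norm

def segment_points(points, plane, threshold):
    """Keep only the 3D points within `threshold` of the element's plane."""
    return [p for p in points if point_plane_distance(p, plane) <= threshold]
```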
In a second aspect, an embodiment of the present application provides a map generating apparatus, including:
a first determining module, configured to determine at least one feature point pair based on attribute information of feature points included in a base road image and attribute information of feature points included in a reference road image, wherein the two feature points in the same feature point pair belong to the base road image and the reference road image respectively, and the base road image and the reference road image are obtained by shooting a target road;
a second determining module, configured to determine, for each reference road image, a pose variation of the reference road image relative to the base road image according to three-dimensional position information of a first feature point in the base road image in a virtual three-dimensional space, two-dimensional position information of the first feature point in the base road image, and two-dimensional position information of a second feature point in the reference road image, wherein the first feature point and the second feature point belong to the same feature point pair;
a third determining module, configured to determine, for each reference road image, first three-dimensional position information of the second feature point in the reference road image according to the pose variation of the reference road image, the pose of the base road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the base road image;
a fourth determining module, configured to determine second three-dimensional position information of ground traffic elements in the virtual three-dimensional space from the base road image and each reference road image respectively;
and a generating module, configured to generate a map for the target road based on the determined first three-dimensional position information and the second three-dimensional position information.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a storage medium, and a bus, wherein the storage medium stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the storage medium communicate through the bus, and the processor executes the machine-readable instructions to perform the steps of the map generation method.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the map generation method.
According to the map generation method provided by the embodiment of the application, at least one feature point pair is determined based on the attribute information of the feature points included in the base road image and the attribute information of the feature points included in the reference road image. For each reference road image, the pose variation of the reference road image relative to the base road image is determined according to the three-dimensional position information of the first feature point in the base road image in the virtual three-dimensional space and the two-dimensional position information of the feature point pair in the base and reference road images. Then, for each reference road image, the first three-dimensional position information of the second feature point in the reference road image is determined according to the pose variation of the reference road image, the pose of the base road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the base road image. The second three-dimensional position information of the ground traffic elements is determined from the base road image and each reference road image respectively, and the map for the target road is generated based on the determined first three-dimensional position information and second three-dimensional position information.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart illustrating a map generation method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a final generated map provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a map generating apparatus provided in an embodiment of the present application;
fig. 4 shows a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. It should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application; additionally, the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flowcharts may be performed out of order, and steps with no logical dependency may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowcharts.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
The map generation method of the embodiment of the application can be applied to a server of a travel service platform and can also be applied to any other computing equipment with a processing function. In some embodiments, the server or computing device may include a processor. The processor may process information and/or data related to the service request to perform one or more of the functions described herein.
In the related art, construction methods for a high-precision map fall into three types. In the first method, a dedicated collection vehicle is used: the vehicle is equipped with a laser radar or a Real-Time Kinematic (RTK) based Global Positioning System (GPS) device and the like, the pose of the camera and the accurate distance from objects to the camera are acquired, and a three-dimensional map scene is constructed. In the second method, a multi-view camera is used: the depth of field, i.e., the distance from an object to the camera, is obtained through calibration, the pose of the camera is solved, and the three-dimensional map scene is constructed. In the third method, the three-dimensional map scene is constructed in a crowdsourcing manner using a monocular camera and a Structure-from-Motion (SfM) three-dimensional reconstruction algorithm.
The three-dimensional map scenes constructed by the first and second methods have high precision, but the cost of data acquisition and scene maintenance is high, and large-scale application is difficult. The third method is cheaper but has the following disadvantages: the imaging quality of the camera in a driving recorder is poor, lens distortion is obvious, and the large number of driving recorders running online cannot be calibrated; the GPS module in a driving recorder has poor precision and cannot provide an accurate position; and when images are collected, practical factors such as severe occlusion by vehicles and pedestrians, excessive vehicle speed, and fast turns at intersections greatly reduce the usability of the images. As a result, the third method can suffer from reconstruction failure, scale drift, and heavy dependence of the reconstructed scene on texture, and is difficult to put into practical application.
Based on this, the present application proposes a map generation method: at least one feature point pair is determined based on the attribute information of the feature points included in the base road image and the attribute information of the feature points included in the reference road image; for each reference road image, the pose variation of the reference road image relative to the base road image is determined from the three-dimensional position information of the first feature point in the base road image in the virtual three-dimensional space and the two-dimensional position information of the feature point pair in the base and reference road images; for each reference road image, the first three-dimensional position information of the second feature point in the reference road image is determined from the pose variation of the reference road image, the pose of the base road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the base road image; the second three-dimensional position information of the ground traffic elements is determined from the base road image and each reference road image respectively; and the map for the target road is generated based on the determined first three-dimensional position information and second three-dimensional position information. This is described in detail below.
An embodiment of the present application provides a map generation method, as shown in fig. 1, where the method is applied to a server, and the method specifically includes the following steps:
s101, determining at least one characteristic point pair based on attribute information of the characteristic points included in the reference road image and attribute information of the characteristic points included in the reference road image; two feature points in the same feature point pair respectively belong to a reference road image and a reference road image; the reference road image and the reference road image are obtained by shooting the target road.
Here, the target road may be any road in a road network, for example the road network of a certain city. A road image may be captured by a vehicle registered with the travel service platform, or may be captured manually; in a specific implementation, it is captured by a registered vehicle. A road image may include moving objects, ground traffic elements, and aerial traffic elements: the moving objects may include vehicles, pedestrians, motorcycles, bicycles, and the like; the ground traffic elements include traffic markings on the road surface, such as lane lines, solid yellow lines, stop lines, sidewalks, guide arrows, and traffic guidance areas; and the aerial traffic elements may include traffic signs, traffic lights, electronic eyes, and the like.
Considering that a vehicle generally travels along the design direction of the target road, the road images captured for the target road usually form an image sequence ordered by shooting time, and when the initial base road image is selected, the first image in the sequence may be used as the candidate.
After the candidate is selected, the virtual three-dimensional space (i.e., a three-dimensional coordinate system) is initialized using that image and the next road image in the sequence. If the initialization succeeds, the selected image is used as the initial base road image; if it fails, the next road image in the sequence is taken as the new candidate and initialization is attempted again, until it succeeds and the image selected at that point is used as the initial base road image.
It should be noted that the base road image is not fixed. When selected for the first time, the first image in the image sequence serves as the initial base road image; new road images are then introduced one by one during map construction, each newly introduced road image serving as a reference road image, and each previously introduced road image can in turn serve as a base road image, as described in detail below.
In a specific implementation, traffic congestion may exist when a vehicle travels on the target road, for example because a traffic accident has occurred there. When the vehicle travels too slowly, the traffic elements in the captured road images are blocked by other vehicles; when it travels too fast, the captured traffic elements may be blurred. Therefore, after a large number of road images are acquired, they are screened; the screening process is described in detail below.
Historical driving track data of vehicles driving on the target road is acquired; a target collection vehicle whose driving speed falls within a preset speed range is selected from the plurality of collection vehicles based on the historical driving track data; and the base road image and the reference road images shot of the target road while the target collection vehicle travels are acquired.
Here, the historical travel track data can be obtained from a historical completion order of the travel service platform, and when the collected vehicle completes the historical order, the order data is uploaded, and the order data generally comprises the historical travel track data of the vehicle; the preset speed range may be preset, for example, the speed range may be greater than or equal to 30km/h and less than or equal to 60 km/h.
In a specific implementation process, after the historical driving track of a collection vehicle on the target road is obtained, the distance between two track points can be calculated from the coordinate information of the track points, and the travel time between the two track points can be calculated from the times at which the vehicle passed each track point; the driving speed of the collection vehicle on the target road is then the distance divided by the travel time. To ensure the accuracy of the calculated driving speed, the track points used in the calculation may be points on the target road separated by at least a preset distance threshold.
The driving speed is then compared with the preset speed range; if it falls within the range, the collection vehicle is taken as a target collection vehicle, and the road images it captured of the target road are acquired.
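The screening step above can be sketched as follows. The track-point format (local metric x/y coordinates plus a timestamp in seconds), the 50 m sampling gap, and all function names are illustrative assumptions; real trajectories would be GPS fixes taken from the order data.

```python
import math

MIN_GAP_M = 50.0                # assumed preset distance threshold between sampled points
SPEED_RANGE_KMH = (30.0, 60.0)  # preset speed range from the example above

def travel_speed_kmh(p1, p2):
    """Speed between two (x_m, y_m, t_s) track points, in km/h."""
    dist_m = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    dt_s = p2[2] - p1[2]
    return dist_m / dt_s * 3.6

def is_target_vehicle(track):
    """True if every sampled speed falls inside the preset range."""
    speeds = []
    last = track[0]
    for p in track[1:]:
        # Only sample point pairs at least MIN_GAP_M apart, as described above.
        if math.hypot(p[0] - last[0], p[1] - last[1]) >= MIN_GAP_M:
            speeds.append(travel_speed_kmh(last, p))
            last = p
    return bool(speeds) and all(SPEED_RANGE_KMH[0] <= s <= SPEED_RANGE_KMH[1] for s in speeds)
```

A vehicle covering 100 m every 8 s (45 km/h) would pass the filter; one crawling through a traffic jam would not.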
After a series of road images is obtained, attribute information of feature points is extracted from each road image. The feature points are points on fixed objects in the road images; a fixed object may be a traffic sign, a traffic light, and the like, and the attribute information may be the color, shape, represented semantics, and so on of the fixed object. Executing S101 may include the following steps:
determining the similarity between the feature points in the base road image and the feature points in the reference road image, based on the attribute information of the feature points included in each image; and determining at least one feature point pair based on the similarity and a preset threshold.
Here, the preset threshold is preset and may be determined according to actual conditions.
Taking one base road image and one reference road image as an example, a plurality of feature points may be extracted from each image, and after the attribute information of the feature points is obtained, a feature vector may be generated for each feature point. The feature vectors may be generated by a pre-trained feature-vector generation model; the training process of this model is not described in detail.
For each feature point in the base road image, the similarity between it and each feature point in the reference road image is calculated from their feature vectors. The similarity may be computed from metrics such as the Euclidean distance, Manhattan distance, or Chebyshev distance, which are not described in detail herein.
After the similarity is obtained, it is compared with the preset threshold; if the similarity is greater than the preset threshold, the two corresponding feature points are determined to be a feature point pair. The determination of feature point pairs for the other road images may refer to the above example and is not repeated here.
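The matching step can be sketched as follows. The distance-to-similarity mapping (1 / (1 + Euclidean distance)) and all names are assumptions for illustration; the patent leaves the vector model and the specific metric open.

```python
import numpy as np

def similarity(f1, f2):
    """Map Euclidean distance between two feature vectors to a (0, 1] score."""
    return 1.0 / (1.0 + np.linalg.norm(f1 - f2))

def match_feature_points(feats_base, feats_ref, threshold):
    """Return index pairs (i, j) whose similarity exceeds the preset threshold."""
    pairs = []
    for i, fb in enumerate(feats_base):
        for j, fr in enumerate(feats_ref):
            if similarity(fb, fr) > threshold:
                pairs.append((i, j))
    return pairs
```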
Feature point pairs obtained by similarity alone contain a certain number of errors. To improve the accuracy of the obtained feature point pairs, outliers (incorrectly matched feature point pairs) can be removed. Outliers may be removed using one or a combination of a visual-similarity method, a local-motion-consistency method, and a geometric-verification method. In addition to these methods, outliers can also be removed according to the traffic elements included in the road images, as follows:
extracting, from each road image, the two-dimensional position information of its feature points and the two-dimensional position information of the traffic elements it includes; for each similarity greater than the preset threshold, judging whether the two corresponding feature points belong to the same traffic element according to their two-dimensional position information and the two-dimensional position information of the traffic elements; and if the two feature points belong to the same traffic element, determining them to be a feature point pair.
Here, the two-dimensional position information is the pixel coordinates of a feature point in its road image; the traffic elements include ground traffic elements and aerial traffic elements, as described above. The two-dimensional position information of the feature points can be acquired through a coordinate picker, and the two-dimensional position information of the traffic elements can be acquired through a trained information recognition model.
Taking the similarity between one feature point in the base road image and one feature point in the reference road image as an example: if the similarity is greater than the preset similarity threshold, the traffic element to which the feature point in the base road image belongs is determined from the two-dimensional position information of that feature point and the two-dimensional position information of the traffic elements in the base road image; the traffic element to which the feature point in the reference road image belongs is determined likewise in the reference road image; and it is then judged whether the two traffic elements are the same.
When the traffic element to which the feature point in the base road image belongs and the traffic element to which the feature point in the reference road image belongs are the same, the two feature points corresponding to the similarity may be determined to be a feature point pair; otherwise, the candidate pair is rejected. The other candidate feature point pairs are screened in the same way, which is not repeated here.
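The element-consistency check can be sketched as below, under the simplifying assumption that each traffic element's region is an axis-aligned pixel box with a type label; real boundaries may be polygons, and all names are illustrative.

```python
def element_of(point, elements):
    """Return the label of the element whose region contains the pixel, else None."""
    x, y = point
    for label, (x0, y0, x1, y1) in elements:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label
    return None

def keep_pair(pt_base, elems_base, pt_ref, elems_ref):
    """Keep a candidate pair only if both points fall in the same element type."""
    a = element_of(pt_base, elems_base)
    b = element_of(pt_ref, elems_ref)
    return a is not None and a == b
```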
S102, for each reference road image, determining the pose variation of the reference road image relative to the base road image according to the three-dimensional position information of the first feature point of the base road image in the virtual three-dimensional space, the two-dimensional position information of the first feature point in the base road image, and the two-dimensional position information of the second feature point in the reference road image; the first feature point and the second feature point belong to the same feature point pair.
Here, the virtual three-dimensional space is a virtual three-dimensional coordinate system established from an initial base road image and an initial reference road image (the first reference road image introduced in the calculation process) selected from the image sequence, and the three-dimensional position information is the spatial position, in the virtual three-dimensional space, of the spatial point corresponding to a feature point. The pose variation comprises a rotation matrix and an offset vector, namely the rotation matrix and the offset vector of the reference road image relative to the base road image in the virtual three-dimensional space.
In step S102, the virtual three-dimensional space of the map is first determined from the initial base road image and the initial reference road image, and the remaining reference road images are then introduced in turn to determine the spatial points of the map in the virtual three-dimensional space, a spatial point being the three-dimensional point corresponding to a feature point in the virtual three-dimensional space. The rotation matrix of the initial base road image is the identity matrix, and its offset vector is a unit vector.
When the map is initialized from the initial base road image and the initial reference road image, the two-dimensional position information of the feature point pairs in the two images is processed with an epipolar constraint algorithm to obtain the pose variation of the initial reference road image relative to the initial base road image. The epipolar constraint algorithm is not described in detail.
For example, take a feature point pair across the initial base road image and the initial reference road image, consisting of feature point A in the initial base road image and feature point B in the initial reference road image. The two-dimensional position information of feature point A in the initial base road image and the two-dimensional position information of feature point B in the initial reference road image are processed with the epipolar constraint algorithm to obtain the pose variation of the initial reference road image relative to the initial base road image.
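The patent does not name a specific epipolar-constraint algorithm; one common instantiation is the normalized eight-point algorithm, sketched here for already-normalized image coordinates (the recovered essential matrix would then be decomposed into the rotation matrix and offset vector). All names are illustrative.

```python
import numpy as np

def essential_eight_point(x1, x2):
    """Estimate E from >= 8 normalized correspondences via x2^T E x1 = 0."""
    # Each correspondence contributes one linear equation in the 9 entries of E.
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Enforce the essential-matrix structure: two equal singular values, one zero.
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt
```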
After the pose variation of the initial reference road image is obtained, triangulation is used to process the initial pose of the initial base road image, the pose variation of the initial reference road image, and the two-dimensional position information of the feature point pairs in the two images, yielding the three-dimensional position information of the feature point pairs in the virtual three-dimensional space. The triangulation technique may be a Delaunay algorithm; the implementation principle of triangulation is described below.
For example, continuing the previous example, triangulation processes the initial pose of the initial base road image, the pose variation of the initial reference road image, the two-dimensional position information of feature point A in the initial base road image, and the two-dimensional position information of feature point B in the initial reference road image, to obtain the three-dimensional position information, in the virtual three-dimensional space, of the spatial point corresponding to feature points A and B.
After the map is initialized, new road images are introduced in turn; each newly introduced image is taken as the reference road image, with the previously introduced images serving as base road images, and the three-dimensional position information of the feature points in the newly introduced reference road image in the virtual three-dimensional space is calculated, as described in detail below.
Determining, from the at least one feature point pair, target feature point pairs belonging to the same ground traffic element, based on the two-dimensional position information of the ground traffic elements included in the base road image and in the reference road image, and the two-dimensional position information of the feature points included in each image;
determining the initial pose variation of the reference road image relative to the base road image based on the two-dimensional position information of the two feature points included in a target feature point pair, the two-dimensional position information of the second feature point, the two-dimensional position information of the first feature point, the three-dimensional position information of the first feature point, and a preset pose estimation algorithm;
and calibrating the initial pose variation based on the three-dimensional position information of the first feature point of the base road image in the virtual three-dimensional space, the two-dimensional position information of the first feature point in the base road image, and the two-dimensional position information of the second feature point in the reference road image, and taking the calibrated pose variation as the pose variation of the reference road image relative to the base road image.
Here, the pose estimation algorithm may be a P3P algorithm, a PnP algorithm, and the like, determined according to the actual situation; in practical applications, the PnP algorithm may be selected.
The description below takes one base road image and one reference road image as an example.
After the two-dimensional position information of the ground traffic elements and of the feature points included in the base road image and in the reference road image is obtained, the ground traffic element to which each feature point of a feature point pair belongs is determined, based on the two-dimensional position information of the feature point in its image and the two-dimensional position information of the ground traffic elements in that image (e.g., the two-dimensional position information of the boundary points of each ground traffic element): if the two-dimensional position of the feature point falls within the position region of a ground traffic element, the feature point is determined to belong to that element. When there are at least two base road images, the feature point must fall within the region of the ground traffic element in each base road image, the first feature points in the base road images belonging to the same feature point pair.
Based on the two-dimensional position information of the other feature point of the pair in the reference road image and the two-dimensional position information of the ground traffic elements in the reference road image, the ground traffic element to which that feature point belongs is determined (the process is the same as for the base road image). It is then judged whether the ground traffic element of the feature point in the base road image and the ground traffic element of the other feature point in the reference road image are the same ground traffic element.
If the ground traffic element of one feature point in each base road image and the ground traffic element of the other feature point in the reference road image are the same ground traffic element, the feature point pair is determined to be a target feature point pair.
After the target feature point pairs are determined, a first target feature point pair matching the first feature point and the second feature point is determined from among them, based on the two-dimensional position information included in each target feature point pair, the two-dimensional position information of the first feature point in the base road image, and the two-dimensional position information of the second feature point in the reference road image. That is, when one feature point of a target feature point pair has the same two-dimensional position information as the first feature point and the other has the same two-dimensional position information as the second feature point, the target feature point pair is considered to match the first and second feature points. When there are at least two base road images, one feature point of the target feature point pair must have the same two-dimensional position information as the first feature point included in each base road image.
After the first target feature point pair is determined, the three-dimensional position information corresponding to the matched first and second feature points is obtained, and the two-dimensional position information of the two feature points included in the first target feature point pair, together with the obtained three-dimensional position information, is input into the preset pose estimation algorithm (such as the PnP algorithm) to obtain the initial pose variation of the reference road image relative to the base road image. The calculation process of the PnP algorithm is not described in detail here.
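The PnP computation itself is not detailed in the text. As an illustrative stand-in, the sketch below uses the closely related DLT resectioning, which recovers the full 3x4 projection matrix from six or more 3D-2D correspondences in normalized coordinates; a production pipeline would typically use P3P/EPnP with RANSAC instead. All names are assumptions.

```python
import numpy as np

def dlt_projection(X3d, x2d):
    """Solve the 3x4 projection matrix from >= 6 exact 3D-2D correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        P = [X, Y, Z, 1.0]
        # Two rows of the standard DLT system per correspondence.
        rows.append([0.0] * 4 + [-c for c in P] + [v * c for c in P])
        rows.append(P + [0.0] * 4 + [-u * c for c in P])
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)

def reproject(P, X):
    """Project a 3D point with projection matrix P into normalized 2D coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```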
Because the initial pose variation is determined from ground traffic elements, its accuracy is relatively poor. To improve the accuracy of the obtained pose variation, the initial pose variation can be optimized: the two-dimensional position information and corresponding three-dimensional position information of the feature point pairs in the base road image and the reference road image are input into a pose optimization algorithm (such as the Bundle Adjustment algorithm), and the initial pose variation is optimized to obtain a pose variation of higher accuracy.
It should be noted that the calculation of the pose variation of each newly introduced reference road image relative to the base road image is similar to the above process and is not repeated here; it continues until the pose variation corresponding to each road image of the target road has been determined.
S103, for each reference road image, determining the first three-dimensional position information of the second feature point in the reference road image according to the pose variation of the reference road image, the pose of the base road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the base road image.
In step S103, the pose variation of the reference road image, the pose of the base road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the base road image may be processed by triangulation to obtain the first three-dimensional position information of the second feature point in the reference road image.
Taking one base road image and one reference road image as an example, triangulation processes the obtained pose variation of the reference road image relative to the base road image, the pose of the base road image, the two-dimensional and three-dimensional position information of the first feature point in the base road image, and the two-dimensional position information of the second feature point in the reference road image, to obtain the three-dimensional position information of the second feature point in the virtual three-dimensional space. Note that when at least two road images have been introduced previously, the base road images participating in this calculation may be only some of the previously introduced images, for example, those previously introduced images that share feature point pairs with the reference road image.
The principle of triangulation is as follows:
taking a feature point pair across two road images as an example, the two-dimensional point in the first road image is x_1 and the two-dimensional point in the second road image is x_2; they form a feature point pair, that is, observations of the same three-dimensional spatial point X in the two road images.
From the projection relationship, x_1 = P_1 * X and x_2 = P_2 * X, where P_1 is the camera pose of the road image containing x_1 and may be written [R_1 | t_1]: R_1 is the rotation matrix of that image relative to the base road image and t_1 is its translation vector; similarly, P_2 = [R_2 | t_2] is the camera pose of the road image containing x_2, with rotation matrix R_2 and translation vector t_2 relative to the base road image.
Because the two sides of each equation differ by a scale factor (all points along a ray from the camera center project to the same pixel), the two sides can be regarded as parallel vectors, and a new equation can be constructed by a cross product. Taking the road image containing x_1 as an example: x_1 × (P_1 * X) = 0. This equation provides two linearly independent scalar equations, and the equation for the road image containing x_2 is constructed the same way. With two observations there are four linearly independent equations, which can be combined to solve for X, i.e., to obtain the position of the feature point pair in the virtual three-dimensional space.
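The four-equation construction above translates directly into code: the sketch below stacks the two cross-product rows per view and solves the homogeneous system by SVD. Coordinates are assumed already normalized, and all names are illustrative.

```python
import numpy as np

def triangulate(P1, x1, P2, x2):
    """Solve X from x1 ~ P1 X and x2 ~ P2 X via the cross-product equations."""
    u1, v1 = x1
    u2, v2 = x2
    # Two linearly independent rows per view, four equations in total.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # homogeneous solution in the null space of A
    return X[:3] / X[3]
```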
When the three-dimensional position information of the second feature point in the virtual three-dimensional space is determined, only the constraint relations between some spatial points and camera poses are used, so the constructed map may not be optimal. To solve this problem, the obtained spatial point cloud is generally optimized with the Bundle Adjustment algorithm: the pose variation of each road image and the three-dimensional position information corresponding to the feature points in that image are input into the Bundle Adjustment algorithm to construct the optimization problem. Besides the reprojection error between the pixel coordinates of an optimized spatial point reprojected into a road image and its observed pixel coordinates in that image, the distance error between the camera position (i.e., the coordinates, in the virtual three-dimensional space, of the optical center of the camera that captured the current road image) and the GPS coordinates at which the road image was captured is also considered. To use the GPS coordinates, both the three-dimensional position information in the virtual three-dimensional space and the GPS coordinates must be transferred into the ECEF coordinate system; the optimization objective is the sum of the reprojection error and the position error, which is minimized to optimize the spatial point cloud.
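A sketch of the combined objective is given below. It only builds the residual vector (reprojection terms plus camera-position-versus-GPS terms) that a nonlinear least-squares optimizer would then minimize; the GPS positions are assumed to be already transformed into the same ECEF-aligned frame as the map points, and the weight and all names are illustrative.

```python
import numpy as np

def ba_residuals(poses, points, observations, gps_ecef, w_gps=1.0):
    """Stack reprojection residuals and camera-position-vs-GPS residuals.

    poses: list of (R, t) per image (world-to-camera, normalized coordinates).
    points: list of 3D map points in the world frame.
    observations: list of (img_idx, pt_idx, (u, v)) observed projections.
    gps_ecef: per-image GPS position, assumed already in the world frame.
    """
    res = []
    for img, pt, (u, v) in observations:
        R, t = poses[img]
        Xc = R @ points[pt] + t
        # Reprojection error of the optimized spatial point.
        res.extend([Xc[0] / Xc[2] - u, Xc[1] / Xc[2] - v])
    for (R, t), g in zip(poses, gps_ecef):
        cam_center = -R.T @ t   # optical centre of the camera in the world frame
        res.extend(w_gps * (cam_center - g))
    return np.array(res)
```

With a perfectly consistent pose, point, observation, and GPS set, every residual is zero.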
Because ground traffic elements are painted markings, it is difficult to extract feature points from them, or only a few feature points can be extracted, so little of the finally obtained first three-dimensional position information corresponds to ground traffic elements. Ground traffic elements are, however, indispensable in a map. Therefore, to make the map better fit the real scene, the three-dimensional position information of the ground traffic elements included in each road image in the virtual three-dimensional space is determined, as described in detail below.
S104, determining second three-dimensional position information of the ground traffic elements in the virtual three-dimensional space from the base road image and each reference road image respectively.
Here, the ground traffic elements have been described in detail above and will not be described here.
In step S104, the two-dimensional position information of the ground traffic elements included in the base road image may be converted based on the internal and external parameters of the camera that captured the base road image, to obtain the second three-dimensional position information of those ground traffic elements;
and the two-dimensional position information of the ground traffic elements included in the reference road image is converted based on the internal and external parameters of the camera that captured the reference road image, to obtain the second three-dimensional position information of those ground traffic elements.
Here, the camera may be the camera in the driving recorder of the vehicle that captured the road images. The internal and external parameters comprise intrinsic parameters and extrinsic parameters: the intrinsic parameters include the focal length and radial distortion parameters of the camera, and the extrinsic parameters include the height of the camera above the plane of the target road and the pitch angle of the camera. The road images acquired by the camera may be processed by an SFM algorithm to obtain the internal and external parameters; the calibration process is not described in detail here.
In a specific implementation process, a plurality of man-made traffic markings are present on the target road surface; these markings yield few matched feature points between images, so it is difficult to recover the spatial coordinates of their three-dimensional points by triangulation, which is why the camera parameters are used for the conversion instead.
It should be noted that when processing different road images, the internal and external parameters of the camera that captured that image must be used; for example, when processing a base road image, the internal and external parameters of the camera that captured the base road image are used to convert the two-dimensional position information of the traffic elements it includes. This improves the accuracy of the obtained three-dimensional position information.
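The conversion from a ground pixel to a three-dimensional point can be sketched as a ray-ground-plane intersection using the intrinsic parameters and the extrinsic camera height and pitch described above. The frame convention and all names below are assumptions (camera at the origin, X right, Y down, Z forward; ground plane at Y = cam_height), and radial distortion is ignored.

```python
import numpy as np

def pixel_to_ground(u, v, fx, fy, cx, cy, cam_height, pitch):
    """Intersect a pixel's viewing ray with the ground plane Y = cam_height."""
    # Back-project the pixel through the intrinsics (no distortion model here).
    d_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Rotate the ray into the world frame using the extrinsic downward pitch.
    c, s = np.cos(pitch), np.sin(pitch)
    R_wc = np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])
    d_w = R_wc @ d_cam
    if d_w[1] <= 0:
        return None            # ray at or above the horizon never hits the ground
    t = cam_height / d_w[1]
    return np.array([t * d_w[0], cam_height, t * d_w[2]])
```

For instance, with a 1.5 m camera height, a pixel looking 0.2 rad below the optical axis lands 7.5 m ahead of the camera.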
And S105, generating a map aiming at the target road based on the determined first three-dimensional position information and the second three-dimensional position information.
Here, the generated map of the target road is a vector map.
Executing S105 may include the following steps: determining a first type identifier corresponding to each piece of first three-dimensional position information and a second type identifier corresponding to each piece of second three-dimensional position information; segmenting the first three-dimensional position information based on the first three-dimensional position information corresponding to each first type identifier and the first plane to which the traffic elements under that identifier belong, to obtain the first three-dimensional position information corresponding to the traffic elements under each first type identifier; segmenting the second three-dimensional position information based on the second three-dimensional position information corresponding to each second type identifier and the second plane to which the traffic elements under that identifier belong, to obtain the second three-dimensional position information corresponding to the traffic elements under each second type identifier; and vectorizing the first three-dimensional position information corresponding to the traffic elements under each first type identifier and the second three-dimensional position information corresponding to the traffic elements under each second type identifier, to obtain the map of the target road.
Here, the first type identifier and the second type identifier both represent the attribute to which feature points in a road image belong; for example, a type identifier may be a traffic sign or a traffic light among the aerial traffic elements, or a traffic marking among the ground traffic elements. The same first type identifier may correspond to the same kind of traffic element at different positions; for example, when the first type identifier is a traffic sign, traffic signs may be located at different positions along the target road. Different type identifiers, including first type identifiers and second type identifiers, may be distinguished by different colors or shapes.
In a specific implementation process, since the type identifiers of feature points belonging to the same feature point pair are the same, the first type identifier of a piece of first three-dimensional position information may be determined from the base road image; of course, it may also be determined from the reference road image, or from the base road image and the reference road image in combination, which is not limited by the present disclosure.
Take the feature point corresponding to a piece of first three-dimensional position information as an example; for convenience of description, such a feature point is referred to as a first target feature point hereinafter. The type identifier and the two-dimensional position information of the feature points are extracted from each road image. For the first three-dimensional position information, according to the two-dimensional position information of the first target feature point in each road image that includes it and the two-dimensional position information of the feature points included in those road images, the feature points having the same position as the first target feature point are determined from the road images that include the first target feature point. If the ratio of the number of the determined feature points to the number of road images including the first target feature point is greater than a preset ratio threshold (which may be set according to actual circumstances), the type identifier of the determined feature points is used as the first type identifier of the first three-dimensional position information. For example, suppose the feature point corresponding to a piece of first three-dimensional position information is A, and feature point A is included in 10 road images. The feature points matching the two-dimensional position information of feature point A are determined from the feature points included in these 10 road images. If 8 of the road images include a feature point with the same two-dimensional position information as feature point A, the type identifiers of those feature points are all the same, and 8/10 is greater than the preset ratio threshold α, then that type identifier is used as the first type identifier of the first three-dimensional position information to which feature point A corresponds. The process of determining the first type identifier of the first three-dimensional position information using the reference road images is similar and is not described in detail.
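The ratio-based voting described above can be sketched as follows. This is a minimal illustration only; the function name, data layout, and default threshold are assumptions, not part of the embodiment:

```python
from collections import Counter

def vote_type_identifier(observed_types, num_images, ratio_threshold=0.6):
    """Assign a type identifier to a 3D point by majority vote over the
    road images in which a feature point at the same 2D position was found.

    observed_types: list of type identifiers, one per matching image.
    num_images: total number of road images containing the target feature point.
    Returns the winning identifier, or None if no identifier's share of the
    images exceeds the ratio threshold.
    """
    if not observed_types:
        return None
    type_id, count = Counter(observed_types).most_common(1)[0]
    return type_id if count / num_images > ratio_threshold else None
```

In the text's example, feature point A appears in 10 road images and 8 of them agree on the same type identifier; since 8/10 exceeds the threshold α, that identifier wins the vote.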
When determining the second type identifier of the second three-dimensional position information, the second type identifier may be determined using the benchmark road image alone, or by combining the benchmark road image with the reference road images, which is not limited in the present application.
Take the feature point corresponding to a piece of second three-dimensional position information as an example; for convenience of description, such a feature point is referred to as a second target feature point hereinafter. The type identifier and the two-dimensional position information of the feature points are extracted from each road image. For the second three-dimensional position information, based on the two-dimensional position information of the second target feature point in each road image that includes it and the two-dimensional position information of the feature points included in those road images, the feature points having the same position as the second target feature point are determined from the road images that include the second target feature point. If the ratio of the number of the determined feature points to the number of road images including the second target feature point is greater than the preset ratio threshold (which may be set according to the actual situation), the type identifier of the determined feature points is used as the second type identifier of the second three-dimensional position information. The process of determining the second type identifier of the second three-dimensional position information using the reference road images is not repeated here.
After obtaining the first type identifier corresponding to the first three-dimensional position information and the second type identifier corresponding to the second three-dimensional position information, segmenting the first three-dimensional position information corresponding to each first type identifier, which may include the following steps:
respectively calculating first distances from the first three-dimensional position information to each first plane corresponding to each first type identification; and based on the first distances and a preset threshold value, carrying out segmentation processing on the first three-dimensional position information to obtain first three-dimensional position information corresponding to the traffic elements under each first type of identification.
Here, each first plane corresponding to each first type identifier may be a plane to which air traffic elements at different positions corresponding to the first type identifier belong, and the number of the first planes corresponding to the first type identifier is the same as the number of the air traffic elements corresponding to the first type identifier.
In a specific implementation process, for each first type identifier, performing multi-plane estimation on each piece of first three-dimensional position information corresponding to the first type identifier by using a RANSAC algorithm to obtain a first plane corresponding to each air traffic element under the first type identifier. The process of performing multi-plane estimation using the RANSAC algorithm will not be described in detail.
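The multi-plane estimation step can be sketched as sequential RANSAC: repeatedly fit the dominant plane, peel off its inliers, and continue until too few points remain. This is a sketch under assumed parameter values (distance threshold, minimum inlier count, iteration budget), not the embodiment's exact procedure:

```python
import numpy as np

def fit_plane(pts):
    # Least-squares plane through 3+ points: returns (n, d) with the plane
    # defined by n . x + d = 0 and |n| = 1.
    centroid = pts.mean(axis=0)
    _, _, vh = np.linalg.svd(pts - centroid)
    normal = vh[-1]
    return normal, -normal @ centroid

def ransac_planes(points, dist_thresh=0.05, min_inliers=20, iters=200, seed=0):
    """Sequential RANSAC multi-plane estimation over an (N, 3) point array."""
    rng = np.random.default_rng(seed)
    remaining = points.copy()
    planes = []
    while len(remaining) >= min_inliers:
        best_mask = None
        for _ in range(iters):
            # Hypothesize a plane from a minimal sample of three points.
            sample = remaining[rng.choice(len(remaining), 3, replace=False)]
            n, d = fit_plane(sample)
            mask = np.abs(remaining @ n + d) < dist_thresh
            if best_mask is None or mask.sum() > best_mask.sum():
                best_mask = mask
        if best_mask is None or best_mask.sum() < min_inliers:
            break
        # Refit on all inliers of the best hypothesis, then remove them.
        planes.append(fit_plane(remaining[best_mask]))
        remaining = remaining[~best_mask]
    return planes
```

The number of planes recovered then corresponds to the number of traffic elements under the type identifier, as the text describes.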
For each piece of first three-dimensional position information, the first distance from that position information to each first plane is calculated. For each first plane, the first distances to that plane are compared with a preset distance threshold, and the first three-dimensional position information whose first distance is smaller than the preset distance threshold is determined as the first three-dimensional position information of the air traffic element corresponding to that first plane. The first three-dimensional position information corresponding to each air traffic element is then projected into the corresponding first plane, and a convex hull is solved to vectorize each piece of first three-dimensional position information, that is, to obtain a vectorized expression of the plane corresponding to each air traffic element, which may refer to fig. 2.
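The projection and convex-hull step above can be sketched as follows: drop each 3D point onto its plane, express it in a 2D basis of that plane, and take the convex hull of the 2D points as the vectorized outline. The basis construction and the monotone-chain hull are conventional choices assumed here, not prescribed by the embodiment:

```python
import numpy as np

def project_to_plane(points3d, normal, d):
    """Project (N, 3) points onto the plane n . x + d = 0 and express them
    in a 2D basis spanning that plane."""
    n = normal / np.linalg.norm(normal)
    # Any fixed vector not parallel to n yields a tangent basis (u, v).
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, a); u /= np.linalg.norm(u)
    v = np.cross(n, u)
    foot = points3d - np.outer(points3d @ n + d, n)  # remove the normal component
    return np.c_[foot @ u, foot @ v]

def convex_hull_2d(points):
    """Monotone-chain convex hull of 2D points (hull vertices only)."""
    pts = sorted(map(tuple, points))
    if len(pts) <= 2:
        return [list(p) for p in pts]
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return [list(p) for p in lower[:-1] + upper[:-1]]
```

The hull vertices, mapped back through the plane basis, give the vectorized expression of the traffic element's plane.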
The segmenting of the second three-dimensional position information corresponding to each second type identifier may include the following steps:
respectively calculating a second distance from the second three-dimensional position information to each second plane corresponding to each second type identifier; and based on the second distances and the preset threshold, performing segmentation processing on the second three-dimensional position information to obtain the second three-dimensional position information corresponding to the traffic elements under each second type identifier.
Here, each second plane corresponding to a second type identifier may be a plane to which the ground traffic elements at different positions corresponding to that second type identifier belong, and the number of second planes corresponding to a second type identifier is the same as the number of ground traffic elements corresponding to that identifier.
In a specific implementation process, for each second type identifier, performing multi-plane estimation on each piece of second three-dimensional position information corresponding to the second type identifier by using a RANSAC algorithm to obtain a second plane corresponding to each ground traffic element under the second type identifier. The process of performing multi-plane estimation using the RANSAC algorithm will not be described in detail.
For each piece of second three-dimensional position information, the second distance from that position information to each second plane is calculated. For each second plane, the second distances to that plane are compared with the preset distance threshold, and the second three-dimensional position information whose second distance is smaller than the preset distance threshold is determined as the second three-dimensional position information of the ground traffic element corresponding to that second plane. The second three-dimensional position information corresponding to each ground traffic element is then projected into the corresponding second plane, and a convex hull is solved to vectorize each piece of second three-dimensional position information, that is, to obtain a vectorized expression of the plane corresponding to each ground traffic element. The process of solving the convex hull will not be described in detail.
After the map of the target road is constructed, when vehicles later travel on the road, road images can be collected again and the three-dimensional position information of the traffic elements regenerated and compared with the three-dimensional position information in the map; if a change in the traffic elements is detected, the map is updated.
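The comparison step can be sketched as a nearest-neighbor check between mapped and newly observed element positions: a mapped element with no observation nearby may have been removed, and an observation with no mapped element nearby may be new. The function name, tolerance, and position-only comparison are illustrative assumptions:

```python
import numpy as np

def detect_changes(map_elements, observed_elements, tol=0.5):
    """Compare mapped 3D element positions against freshly generated ones.
    Returns (removed_indices, added_indices) under a distance tolerance."""
    map_pts = np.asarray(map_elements, dtype=float)
    obs_pts = np.asarray(observed_elements, dtype=float)
    # Pairwise distances between every mapped and every observed element.
    dists = np.linalg.norm(map_pts[:, None, :] - obs_pts[None, :, :], axis=2)
    removed = [i for i in range(len(map_pts)) if dists[i].min() > tol]
    added = [j for j in range(len(obs_pts)) if dists[:, j].min() > tol]
    return removed, added
```

If either list is non-empty, a change has been detected and the corresponding map entries can be updated.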
Based on the same inventive concept, an embodiment of the present application further provides a map generation apparatus corresponding to the map generation method. Since the principle by which the apparatus solves the problem is similar to that of the map generation method in the embodiments of the present application, the implementation of the apparatus may refer to the implementation of the method, and repeated details are omitted.
An embodiment of the present application provides a map generating apparatus, as shown in fig. 3, the apparatus includes:
a first determining module 31, configured to determine at least one feature point pair based on the attribute information of the feature points included in the benchmark road image and the attribute information of the feature points included in each reference road image; the two feature points in the same feature point pair belong to the benchmark road image and a reference road image respectively; the benchmark road image and the reference road images are captured of the target road;
a second determining module 32, configured to determine, for each reference road image, the pose variation of the reference road image relative to the benchmark road image according to the three-dimensional position information of the first feature point in the benchmark road image in the virtual three-dimensional space, the two-dimensional position information of the first feature point in the benchmark road image, and the two-dimensional position information of the second feature point in the reference road image; the first feature point and the second feature point belong to the same feature point pair;
a third determining module 33, configured to determine, for each reference road image, the first three-dimensional position information of the second feature point in the reference road image according to the pose variation of the reference road image, the pose of the benchmark road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the benchmark road image;
a fourth determining module 34, configured to determine the second three-dimensional position information of the ground traffic elements in the virtual three-dimensional space from the benchmark road image and each reference road image respectively;
a generating module 35, configured to generate a map of the target road based on the determined first three-dimensional position information and second three-dimensional position information.
In one embodiment, the first determining module 31 is configured to determine the benchmark road image and the reference road images according to the following steps:
acquiring historical driving track data of vehicles driving on the target road;
selecting, from a plurality of collection vehicles, target collection vehicles whose driving speed falls within a preset speed range based on the historical driving track data;
and acquiring the benchmark road image and the reference road images captured of the target road while the target collection vehicles were driving.
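The speed-based vehicle selection can be sketched as a filter over per-vehicle track data. The data layout (a mapping from vehicle id to speed samples), the averaging, and the range values are assumptions for illustration:

```python
def select_target_vehicles(track_data, speed_range=(20.0, 60.0)):
    """track_data: {vehicle_id: [speed samples]}. A vehicle qualifies when
    its average speed falls inside the preset range, which tends to exclude
    parked or stop-and-go collection runs whose images are less useful."""
    lo, hi = speed_range
    return [vid for vid, speeds in track_data.items()
            if speeds and lo <= sum(speeds) / len(speeds) <= hi]
```

Images captured by the selected vehicles then serve as the benchmark and reference road images.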
In one embodiment, the first determining module 31 is configured to determine at least one characteristic point pair according to the following steps:
determining the similarity between the feature points in the benchmark road image and the feature points in the reference road image based on the attribute information of the feature points included in the benchmark road image and the attribute information of the feature points included in the reference road image;
and determining at least one characteristic point pair based on the similarity and a preset threshold value.
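The similarity-and-threshold matching can be sketched with cosine similarity between feature descriptors; the embodiment does not prescribe the similarity measure, so the descriptor form, the best-match rule, and the threshold value here are assumptions:

```python
import numpy as np

def match_feature_points(desc_a, desc_b, sim_threshold=0.9):
    """Cosine similarity between every descriptor in the benchmark image (A)
    and the reference image (B); the best match per A-point is kept as a
    candidate feature point pair when it exceeds the threshold."""
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T                       # (len(A), len(B)) similarity matrix
    pairs = []
    for i in range(len(a)):
        j = int(np.argmax(sim[i]))
        if sim[i, j] > sim_threshold:
            pairs.append((i, j, float(sim[i, j])))
    return pairs
```

Each returned tuple is one candidate feature point pair together with its similarity score.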
In one embodiment, the first determining module 31 is configured to determine the at least one feature point pair based on the similarity and the preset threshold according to the following steps:
extracting the two-dimensional position information of the feature points and the two-dimensional position information of the traffic elements included in each road image;
for each similarity, if the similarity is greater than the preset threshold, judging whether the two feature points corresponding to the similarity belong to the same traffic element according to the two-dimensional position information of those two feature points and the two-dimensional position information of the traffic elements;
and if the two feature points corresponding to the similarity belong to the same traffic element, determining those two feature points as a feature point pair.
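The same-traffic-element check can be sketched by testing whether both feature points fall inside corresponding traffic-element regions in their images. Representing elements as axis-aligned bounding boxes, and pairing boxes by index, are assumptions made purely for illustration:

```python
def in_bbox(point, bbox):
    # bbox = (x0, y0, x1, y1) in image coordinates.
    x, y = point
    x0, y0, x1, y1 = bbox
    return x0 <= x <= x1 and y0 <= y <= y1

def same_traffic_element(pt_a, bboxes_a, pt_b, bboxes_b):
    """Keep a candidate pair only when both feature points lie inside
    traffic-element boxes that correspond to each other in the two images."""
    for box_a, box_b in zip(bboxes_a, bboxes_b):
        if in_bbox(pt_a, box_a) and in_bbox(pt_b, box_b):
            return True
    return False
```

A candidate pair that passes both the similarity threshold and this check becomes a feature point pair.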
In one embodiment, the second determining module 32 is configured to determine the pose variation of the reference road image relative to the benchmark road image according to the following steps:
determining, from the at least one feature point pair, target feature point pairs belonging to the same ground traffic element based on the two-dimensional position information of the ground traffic elements included in the benchmark road image, the two-dimensional position information of the ground traffic elements included in the reference road image, and the two-dimensional position information of the feature points included in the two images;
determining the initial pose variation of the reference road image relative to the benchmark road image based on the two-dimensional position information of the two feature points included in each target feature point pair, the three-dimensional position information of the first feature point, and a preset pose estimation algorithm;
and calibrating the initial pose variation based on the three-dimensional position information of the first feature point in the benchmark road image in the virtual three-dimensional space, the two-dimensional position information of the first feature point in the benchmark road image, and the two-dimensional position information of the second feature point in the reference road image, and taking the calibrated pose variation as the pose variation of the reference road image relative to the benchmark road image.
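One conventional way to calibrate an initial pose is to minimize the reprojection error of the known 3D feature points in the reference image; the helper below computes that error for a candidate pose. The embodiment does not specify its cost function, so treat this as a sketch of one plausible choice:

```python
import numpy as np

def reprojection_error(K, R, t, points3d, points2d):
    """Mean pixel error of projecting 3D points (world frame, (N, 3)) into
    an image under intrinsics K and pose (R, t) mapping world -> camera.
    Minimizing this over (R, t) is one way to refine an initial pose."""
    cam = R @ points3d.T + t[:, None]   # world -> camera coordinates
    proj = K @ cam                      # camera -> homogeneous pixels
    px = (proj[:2] / proj[2]).T         # perspective divide, (N, 2)
    return float(np.mean(np.linalg.norm(px - points2d, axis=1)))
```

A correct pose drives this error toward zero on noise-free correspondences; the calibrated pose variation is the candidate with the smallest error.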
In one embodiment, the third determining module 33 is configured to determine the first three-dimensional position information of the second feature point in the reference road image according to the following step:
performing triangulation, using a triangulation algorithm, on the pose variation of the reference road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the benchmark road image to obtain the first three-dimensional position information of the second feature point in the reference road image.
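A standard triangulation algorithm of the kind referred to above is linear (DLT) two-view triangulation; given the two cameras' 3x4 projection matrices (which encode the pose variation) and the pixel coordinates of a feature point pair, it recovers the 3D point. This is a textbook sketch, not necessarily the embodiment's exact algorithm:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature point pair.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (x, y)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3D point is the null vector of A in homogeneous coordinates.
    _, _, vh = np.linalg.svd(A)
    X = vh[-1]
    return X[:3] / X[3]
```

With exact correspondences the recovered point reprojects onto both input pixels.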
In one embodiment, the fourth determining module 34 is configured to determine the second three-dimensional position information of the ground traffic elements in the virtual three-dimensional space from the benchmark road image and each reference road image according to the following steps:
converting the two-dimensional position information of the ground traffic elements included in the benchmark road image based on the intrinsic and extrinsic parameters of the camera that captured the benchmark road image, to obtain the second three-dimensional position information of the ground traffic elements included in the benchmark road image;
and converting the two-dimensional position information of the ground traffic elements included in each reference road image based on the intrinsic and extrinsic parameters of the camera that captured that reference road image, to obtain the second three-dimensional position information of the ground traffic elements included in the reference road image.
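Because the elements lie on the ground, the 2D-to-3D conversion above can be sketched as back-projecting each pixel through the camera onto a ground plane. The flat-ground assumption (z = 0 in world coordinates) and the world-to-camera extrinsic convention are assumptions for illustration:

```python
import numpy as np

def pixel_to_ground(K, R, t, pixel, ground_z=0.0):
    """Back-project a pixel onto the ground plane z = ground_z, given
    intrinsics K and extrinsics (R, t) mapping world -> camera coordinates."""
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_world = R.T @ ray_cam            # rotate the viewing ray into the world frame
    origin = -R.T @ t                    # camera centre in world coordinates
    s = (ground_z - origin[2]) / ray_world[2]  # intersect ray with the plane
    return origin + s * ray_world
```

Applying this to the 2D positions of the ground traffic elements in each image yields their second three-dimensional position information.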
In one embodiment, the generating module 35 is configured to generate a map for the target road according to the following steps:
respectively calculating a first type identifier corresponding to the first three-dimensional position information and a second type identifier corresponding to the second three-dimensional position information;
based on first three-dimensional position information corresponding to each type of first type identification and a first plane to which a traffic element under each type of first type identification belongs, carrying out segmentation processing on the first three-dimensional position information to obtain first three-dimensional position information corresponding to the traffic element under each type of first type identification;
based on the second three-dimensional position information corresponding to each second type identifier and the second plane to which the traffic elements under each second type identifier belong, performing segmentation processing on the second three-dimensional position information to obtain the second three-dimensional position information corresponding to the traffic elements under each second type identifier;
and vectorizing the first three-dimensional position information corresponding to the traffic element under each first type of identifier and the second three-dimensional position information corresponding to the traffic element under each second type of identifier to obtain the map of the target road.
In one embodiment, the generating module 35 is configured to obtain the first three-dimensional position information corresponding to the traffic element under each first type identifier according to the following steps:
respectively calculating first distances from the first three-dimensional position information to each first plane corresponding to each first type identification;
based on the first distances and a preset threshold value, carrying out segmentation processing on the first three-dimensional position information to obtain first three-dimensional position information corresponding to the traffic elements under each first type of identification;
respectively calculating a second distance from the second three-dimensional position information to each second plane corresponding to each second type identifier;
and based on the second distances and the preset threshold, performing segmentation processing on the second three-dimensional position information to obtain second three-dimensional position information corresponding to the traffic elements under each second type identifier.
An embodiment of the present application further provides a computer device 40. As shown in fig. 4, a schematic structural diagram of the computer device 40 provided in the embodiment of the present application, the device includes: a processor 41, a memory 42, and a bus 43. The memory 42 stores machine-readable instructions executable by the processor 41 (for example, execution instructions corresponding to the first determining module 31, the second determining module 32, the third determining module 33, the fourth determining module 34, and the generating module 35 in the apparatus in fig. 3). When the computer device 40 runs, the processor 41 communicates with the memory 42 through the bus 43, and when executing the instructions the processor 41 performs the following processing:
determining at least one feature point pair based on the attribute information of the feature points included in the benchmark road image and the attribute information of the feature points included in each reference road image; the two feature points in the same feature point pair belong to the benchmark road image and a reference road image respectively; the benchmark road image and the reference road images are captured of the target road;
for each reference road image, determining the pose variation of the reference road image relative to the benchmark road image according to the three-dimensional position information of the first feature point in the benchmark road image in the virtual three-dimensional space, the two-dimensional position information of the first feature point in the benchmark road image, and the two-dimensional position information of the second feature point in the reference road image; the first feature point and the second feature point belong to the same feature point pair;
for each reference road image, determining the first three-dimensional position information of the second feature point in the reference road image according to the pose variation of the reference road image, the pose of the benchmark road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the benchmark road image;
determining the second three-dimensional position information of the ground traffic elements in the virtual three-dimensional space from the benchmark road image and each reference road image respectively;
and generating a map of the target road based on the determined first three-dimensional position information and second three-dimensional position information.
In one possible embodiment, the processor 41 executes instructions for determining the benchmark road image and the reference road images according to the following steps:
acquiring historical driving track data of vehicles driving on the target road;
selecting, from a plurality of collection vehicles, target collection vehicles whose driving speed falls within a preset speed range based on the historical driving track data;
and acquiring the benchmark road image and the reference road images captured of the target road while the target collection vehicles were driving.
In one possible embodiment, in the instructions executed by the processor 41, determining at least one feature point pair based on the attribute information of the feature points included in the benchmark road image and the attribute information of the feature points included in the reference road image includes:
determining the similarity between the feature points in the benchmark road image and the feature points in the reference road image based on the attribute information of the feature points included in each image;
and determining the at least one feature point pair based on the similarity and a preset threshold.
In one possible embodiment, in the instructions executed by the processor 41, determining the at least one feature point pair based on the similarity and the preset threshold includes:
extracting the two-dimensional position information of the feature points and the two-dimensional position information of the traffic elements included in each road image;
for each similarity, if the similarity is greater than the preset threshold, judging whether the two feature points corresponding to the similarity belong to the same traffic element according to the two-dimensional position information of those two feature points and the two-dimensional position information of the traffic elements;
and if the two feature points corresponding to the similarity belong to the same traffic element, determining those two feature points as a feature point pair.
In one possible embodiment, in the instructions executed by the processor 41, determining the pose variation of the reference road image relative to the benchmark road image according to the three-dimensional position information of the first feature point in the benchmark road image in the virtual three-dimensional space, the two-dimensional position information of the first feature point in the benchmark road image, and the two-dimensional position information of the second feature point in the reference road image includes:
determining, from the at least one feature point pair, target feature point pairs belonging to the same ground traffic element based on the two-dimensional position information of the ground traffic elements included in the benchmark road image, the two-dimensional position information of the ground traffic elements included in the reference road image, and the two-dimensional position information of the feature points included in the two images;
determining the initial pose variation of the reference road image relative to the benchmark road image based on the two-dimensional position information of the two feature points included in each target feature point pair, the three-dimensional position information of the first feature point, and a preset pose estimation algorithm;
and calibrating the initial pose variation based on the three-dimensional position information of the first feature point in the benchmark road image in the virtual three-dimensional space, the two-dimensional position information of the first feature point in the benchmark road image, and the two-dimensional position information of the second feature point in the reference road image, and taking the calibrated pose variation as the pose variation of the reference road image relative to the benchmark road image.
In one possible embodiment, in the instructions executed by the processor 41, determining, for each reference road image, the first three-dimensional position information of the second feature point in the reference road image according to the pose variation of the reference road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the benchmark road image includes:
performing triangulation, using a triangulation algorithm, on the pose variation of the reference road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the benchmark road image to obtain the first three-dimensional position information of the second feature point in the reference road image.
In one possible embodiment, in the instructions executed by the processor 41, determining the second three-dimensional position information of the ground traffic elements in the virtual three-dimensional space from the benchmark road image and each reference road image includes:
converting the two-dimensional position information of the ground traffic elements included in the benchmark road image based on the intrinsic and extrinsic parameters of the camera that captured the benchmark road image, to obtain the second three-dimensional position information of the ground traffic elements included in the benchmark road image;
and converting the two-dimensional position information of the ground traffic elements included in each reference road image based on the intrinsic and extrinsic parameters of the camera that captured that reference road image, to obtain the second three-dimensional position information of the ground traffic elements included in the reference road image.
In a possible embodiment, the instructions executed by processor 41 for generating a map for the target road based on the determined first three-dimensional position information and the second three-dimensional position information include:
respectively calculating a first type identifier corresponding to the first three-dimensional position information and a second type identifier corresponding to the second three-dimensional position information;
based on first three-dimensional position information corresponding to each type of first type identification and a first plane to which a traffic element under each type of first type identification belongs, carrying out segmentation processing on the first three-dimensional position information to obtain first three-dimensional position information corresponding to the traffic element under each type of first type identification;
based on the second three-dimensional position information corresponding to each second type identifier and the second plane to which the traffic elements under each second type identifier belong, performing segmentation processing on the second three-dimensional position information to obtain the second three-dimensional position information corresponding to the traffic elements under each second type identifier;
and vectorizing the first three-dimensional position information corresponding to the traffic element under each first type of identifier and the second three-dimensional position information corresponding to the traffic element under each second type of identifier to obtain the map of the target road.
In one possible embodiment, in the instructions executed by the processor 41, based on the first three-dimensional position information corresponding to each first type identifier and the first plane to which the traffic element under each first type identifier belongs, the dividing process of the first three-dimensional position information is performed to obtain the first three-dimensional position information corresponding to the traffic element under each first type identifier, which includes:
respectively calculating first distances from the first three-dimensional position information to each first plane corresponding to each first type identification;
based on the first distances and a preset threshold value, carrying out segmentation processing on the first three-dimensional position information to obtain first three-dimensional position information corresponding to the traffic elements under each first type of identification;
based on the second three-dimensional position information corresponding to each second type identifier and the second plane to which the traffic elements under each second type identifier belong, performing segmentation processing on the second three-dimensional position information to obtain the second three-dimensional position information corresponding to the traffic elements under each second type identifier, including:
respectively calculating a second distance from the second three-dimensional position information to each second plane corresponding to each second type identifier;
and based on the second distances and the preset threshold, performing segmentation processing on the second three-dimensional position information to obtain second three-dimensional position information corresponding to the traffic elements under each second type identifier.
An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the steps of the map generation method are performed.
Specifically, the storage medium may be a general-purpose storage medium such as a removable disk or a hard disk. When executed, the computer program on the storage medium performs the above map generation method, addressing the problem of low map generation accuracy in the related art. The map generation method provided by the embodiments of the present application determines at least one feature point pair based on the attribute information of the feature points included in the benchmark road image and in each reference road image; determines, for each reference road image, the pose variation of that image relative to the benchmark road image from the three-dimensional position information of the first feature point in the virtual three-dimensional space and the two-dimensional position information of the feature point pair in the two images; determines the first three-dimensional position information of the second feature point from the pose variation, the pose of the benchmark road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the benchmark road image; determines the second three-dimensional position information of the ground traffic elements from the benchmark road image and each reference road image respectively; and generates a map of the target road based on the determined first and second three-dimensional position information.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to corresponding processes in the method embodiments, and are not described in detail in this application. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and there may be other divisions in actual implementation, and for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or modules through some communication interfaces, and may be in an electrical, mechanical or other form.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the part of the technical solution of the present application that in essence contributes over the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (12)
1. A map generation method, characterized in that the method comprises:
determining at least one feature point pair based on attribute information of feature points included in a base road image and attribute information of feature points included in a reference road image; the two feature points in the same feature point pair belong to the base road image and a reference road image, respectively; the base road image and the reference road image are both captured of a target road;
for each reference road image, determining a pose variation of the reference road image relative to the base road image according to three-dimensional position information, in a virtual three-dimensional space, of a first feature point in the base road image, two-dimensional position information of the first feature point in the base road image, and two-dimensional position information of a second feature point in the reference road image; the first feature point and the second feature point belong to the same feature point pair;
for each reference road image, determining first three-dimensional position information of the second feature point in the reference road image according to the pose variation of the reference road image, the pose of the base road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the base road image;
determining second three-dimensional position information of ground traffic elements in the virtual three-dimensional space from the base road image and each reference road image, respectively;
generating a map for the target road based on the determined first three-dimensional position information and the second three-dimensional position information.
2. The method of claim 1, wherein the base road image and the reference road image are determined according to the following steps:
acquiring historical driving track data of vehicles driving on the target road;
selecting, from a plurality of collection vehicles, a target collection vehicle whose driving speed falls within a preset speed range, based on the historical driving track data;
and acquiring the base road image and the reference road image captured of the target road while the target collection vehicle is driving.
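The vehicle-selection step above can be sketched as a filter on average speed. This is an illustrative sketch only: the dictionary layout of the track data, the bounds of `speed_range`, and the function name are assumptions, not part of the claimed method.

```python
def select_target_vehicles(track_data, speed_range=(20.0, 60.0)):
    """Return ids of collection vehicles whose average speed from
    historical track data falls within the preset range."""
    lo, hi = speed_range
    selected = []
    for vehicle_id, speeds in track_data.items():
        if not speeds:
            continue  # no track samples recorded for this vehicle
        avg = sum(speeds) / len(speeds)
        if lo <= avg <= hi:
            selected.append(vehicle_id)
    return selected

# Only vehicle "a" has an average speed inside the preset range
tracks = {"a": [30.0, 45.0], "b": [85.0, 90.0], "c": [5.0, 10.0]}
print(select_target_vehicles(tracks))  # → ['a']
```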
3. The method according to claim 1, wherein determining at least one feature point pair based on the attribute information of the feature points included in the base road image and the attribute information of the feature points included in the reference road image comprises:
determining similarities between the feature points in the base road image and the feature points in the reference road image based on the attribute information of the feature points included in the base road image and the attribute information of the feature points included in the reference road image;
and determining at least one feature point pair based on the similarities and a preset threshold.
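A minimal sketch of the similarity-and-threshold matching, assuming the attribute information of each feature point takes the form of a fixed-length descriptor vector (the claim does not fix a concrete representation):

```python
import numpy as np

def match_feature_points(desc_base, desc_ref, threshold=0.9):
    """Pair each base-image descriptor with its most similar
    reference-image descriptor, keeping pairs above the threshold."""
    a = desc_base / np.linalg.norm(desc_base, axis=1, keepdims=True)
    b = desc_ref / np.linalg.norm(desc_ref, axis=1, keepdims=True)
    sim = a @ b.T  # cosine similarity matrix, base rows x reference cols
    pairs = []
    for i in range(sim.shape[0]):
        j = int(np.argmax(sim[i]))
        if sim[i, j] >= threshold:
            pairs.append((i, j))
    return pairs

base = np.eye(3)  # three toy unit descriptors
ref = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])
print(match_feature_points(base, ref))  # → [(0, 1), (1, 0), (2, 2)]
```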
4. The method of claim 3, wherein determining at least one feature point pair comprises:
extracting two-dimensional position information of the feature points and two-dimensional position information of the traffic elements included in each road image;
for each similarity, if the similarity is greater than the preset threshold, determining whether the two feature points corresponding to the similarity belong to the same traffic element according to the two-dimensional position information of the two feature points and the two-dimensional position information of the traffic elements;
and if the two feature points corresponding to the similarity belong to the same traffic element, determining the two feature points as a feature point pair.
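One way to realize the same-element check is to test whether both matched points fall inside the 2D bounding box of the same traffic element. The box representation `(x0, y0, x1, y1)` and the availability of matching element ids across the two images are assumptions for illustration; the claim only speaks of two-dimensional position information.

```python
def element_of(point, elements):
    """Return the id of the traffic element whose box contains the point,
    or None. `elements` maps element id -> (x0, y0, x1, y1)."""
    x, y = point
    for eid, (x0, y0, x1, y1) in elements.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return eid
    return None

def same_traffic_element(pt_base, pt_ref, elems_base, elems_ref):
    eid_base = element_of(pt_base, elems_base)
    eid_ref = element_of(pt_ref, elems_ref)
    return eid_base is not None and eid_base == eid_ref

# Both points land inside the box of the same element in their image
boxes = {"lane_mark_3": (10, 10, 50, 30)}
print(same_traffic_element((20, 15), (25, 12), boxes, boxes))  # → True
```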
5. The method of claim 1, wherein determining the pose variation of the reference road image relative to the base road image based on the three-dimensional position information, in the virtual three-dimensional space, of the first feature point in the base road image, the two-dimensional position information of the first feature point in the base road image, and the two-dimensional position information of the second feature point in the reference road image comprises:
determining, from the at least one feature point pair, a target feature point pair belonging to the same ground traffic element, based on the two-dimensional position information of the ground traffic elements included in the base road image, the two-dimensional position information of the ground traffic elements included in the reference road image, and the two-dimensional position information of the feature points;
determining an initial pose variation of the reference road image relative to the base road image based on the two-dimensional position information of the second feature point included in the target feature point pair, the two-dimensional position information and three-dimensional position information of the first feature point included in the target feature point pair, and a preset pose estimation algorithm;
and calibrating the initial pose variation based on the three-dimensional position information, in the virtual three-dimensional space, of the first feature point in the base road image, the two-dimensional position information of the first feature point in the base road image, and the two-dimensional position information of the second feature point in the reference road image, and taking the calibrated pose variation as the pose variation of the reference road image relative to the base road image.
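The claim leaves the preset pose estimation algorithm open. As one concrete possibility, a Direct Linear Transform (DLT) recovers a 3×4 projection matrix from six or more 2D-3D correspondences; the numpy-only helper below is a sketch of that standard technique, not the patented procedure itself.

```python
import numpy as np

def estimate_projection_dlt(pts3d, pts2d):
    """Solve the 3x4 projection matrix P (up to scale) from >= 6
    non-degenerate 3D-2D correspondences via the DLT system A p = 0."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    P = vt[-1].reshape(3, 4)
    return P / P[2, 3]  # normalize scale and sign for comparison

# Synthetic check: identity rotation, translation t = (0.1, 0.2, 2.0)
P_true = np.hstack([np.eye(3), [[0.1], [0.2], [2.0]]])
rng = np.random.default_rng(0)
pts3d = rng.uniform(-1.0, 1.0, (8, 3)) + np.array([0.0, 0.0, 5.0])
proj = np.hstack([pts3d, np.ones((8, 1))]) @ P_true.T
pts2d = proj[:, :2] / proj[:, 2:3]
P_est = estimate_projection_dlt(pts3d, pts2d)
print(np.allclose(P_est, P_true / P_true[2, 3], atol=1e-6))  # → True
```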
6. The method of claim 1, wherein determining, for each reference road image, the first three-dimensional position information of the second feature point in the reference road image based on the pose variation of the reference road image, the pose of the base road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the base road image comprises:
performing triangulation, using a triangulation algorithm, on the pose variation of the reference road image, the pose of the base road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the base road image, to obtain the first three-dimensional position information of the second feature point in the reference road image.
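Linear triangulation of one matched point from two views can be sketched as follows; the two projection matrices stand in for the base-image pose and the pose obtained by applying the pose variation, and are assumptions chosen for a simple synthetic check:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Recover a 3D point from its projections x1, x2 (normalized image
    coordinates) under the 3x4 camera matrices P1 and P2 (linear DLT)."""
    u1, v1 = x1
    u2, v2 = x2
    A = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Base camera at the origin; second camera translated 1 unit along -x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])
point = triangulate_point(P1, P2, (0.0, 0.0), (-0.2, 0.0))
print(point)  # close to [0, 0, 5]
```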
7. The method of claim 1, wherein determining the second three-dimensional position information of the ground traffic elements in the virtual three-dimensional space from the base road image and each reference road image, respectively, comprises:
converting the two-dimensional position information of the ground traffic elements included in the base road image based on the intrinsic and extrinsic parameters of the camera that captured the base road image, to obtain second three-dimensional position information of the ground traffic elements included in the base road image;
and converting the two-dimensional position information of the ground traffic elements included in the reference road image based on the intrinsic and extrinsic parameters of the camera that captured the reference road image, to obtain second three-dimensional position information of the ground traffic elements included in the reference road image.
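A common way to lift 2D ground-element pixels into 3D using camera intrinsics and extrinsics is to intersect each pixel's viewing ray with the ground plane. The sketch below assumes a camera at the origin looking along +z and mounted at height `h` above a flat ground; these assumptions reduce the extrinsics to a single height parameter and are not stated in the claim.

```python
import numpy as np

def backproject_to_ground(pixel, K, cam_height):
    """Intersect the viewing ray of a pixel with the ground plane
    y = cam_height (camera frame: x right, y down, z forward)."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    if ray[1] <= 0:
        raise ValueError("pixel is at or above the horizon")
    s = cam_height / ray[1]  # scale so the ray reaches y = cam_height
    return s * ray

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
# A ground point at (x=2, y=1.5, z=10) projects to pixel (420, 315)
print(backproject_to_ground((420.0, 315.0), K, 1.5))  # close to [2, 1.5, 10]
```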
8. The method of claim 1, wherein generating a map for the target road based on the determined first three-dimensional location information and the second three-dimensional location information comprises:
determining a first type identifier corresponding to the first three-dimensional position information and a second type identifier corresponding to the second three-dimensional position information, respectively;
segmenting the first three-dimensional position information based on the first three-dimensional position information corresponding to each first type identifier and a first plane to which the traffic elements under each first type identifier belong, to obtain first three-dimensional position information corresponding to the traffic elements under each first type identifier;
segmenting the second three-dimensional position information based on the second three-dimensional position information corresponding to each second type identifier and a second plane to which the traffic elements under each second type identifier belong, to obtain second three-dimensional position information corresponding to the traffic elements under each second type identifier;
and vectorizing the first three-dimensional position information corresponding to the traffic elements under each first type identifier and the second three-dimensional position information corresponding to the traffic elements under each second type identifier, to obtain the map of the target road.
9. The method of claim 8, wherein segmenting the first three-dimensional position information based on the first three-dimensional position information corresponding to each first type identifier and the first plane to which the traffic elements under each first type identifier belong, to obtain the first three-dimensional position information corresponding to the traffic elements under each first type identifier, comprises:
respectively calculating first distances from the first three-dimensional position information to each first plane corresponding to each first type identifier;
and segmenting the first three-dimensional position information based on the first distances and a preset threshold, to obtain first three-dimensional position information corresponding to the traffic elements under each first type identifier;
and wherein segmenting the second three-dimensional position information based on the second three-dimensional position information corresponding to each second type identifier and the second plane to which the traffic elements under each second type identifier belong, to obtain the second three-dimensional position information corresponding to the traffic elements under each second type identifier, comprises:
respectively calculating second distances from the second three-dimensional position information to each second plane corresponding to each second type identifier;
and segmenting the second three-dimensional position information based on the second distances and the preset threshold, to obtain second three-dimensional position information corresponding to the traffic elements under each second type identifier.
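The distance-and-threshold segmentation in this claim can be sketched as a point-to-plane test. The plane representation `(a, b, c, d)` with `ax + by + cz + d = 0` is an assumption for illustration:

```python
import numpy as np

def segment_by_plane(points, plane, threshold=0.1):
    """Split Nx3 points into those within `threshold` of the plane
    (assigned to the element) and the rest, by point-to-plane distance."""
    n = np.asarray(plane[:3], dtype=float)
    d = float(plane[3])
    dist = np.abs(points @ n + d) / np.linalg.norm(n)
    mask = dist <= threshold
    return points[mask], points[~mask]

pts = np.array([[0.0, 0.0, 0.05],   # near the ground plane z = 0
                [1.0, 1.0, 2.00]])  # far from it
inliers, outliers = segment_by_plane(pts, (0.0, 0.0, 1.0, 0.0))
print(len(inliers), len(outliers))  # → 1 1
```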
10. A map generation apparatus, characterized by comprising:
a first determining module configured to determine at least one feature point pair based on attribute information of feature points included in a base road image and attribute information of feature points included in a reference road image; the two feature points in the same feature point pair belong to the base road image and a reference road image, respectively; the base road image and the reference road image are both captured of a target road;
a second determining module configured to determine, for each reference road image, a pose variation of the reference road image relative to the base road image according to three-dimensional position information, in a virtual three-dimensional space, of a first feature point in the base road image, two-dimensional position information of the first feature point in the base road image, and two-dimensional position information of a second feature point in the reference road image; the first feature point and the second feature point belong to the same feature point pair;
a third determining module configured to determine, for each reference road image, first three-dimensional position information of the second feature point in the reference road image according to the pose variation of the reference road image, the pose of the base road image, the two-dimensional position information of the second feature point in the reference road image, and the two-dimensional position information of the first feature point in the base road image;
a fourth determining module configured to determine second three-dimensional position information of ground traffic elements in the virtual three-dimensional space from the base road image and each reference road image, respectively;
and a generating module configured to generate a map for the target road based on the determined first three-dimensional position information and the second three-dimensional position information.
11. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is operating, the processor executing the machine-readable instructions to perform the steps of the map generation method according to any one of claims 1 to 9.
12. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the map generation method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010622443.XA CN111784798B (en) | 2020-06-30 | 2020-06-30 | Map generation method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111784798A true CN111784798A (en) | 2020-10-16 |
CN111784798B CN111784798B (en) | 2021-04-09 |
Family
ID=72761640
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010622443.XA Active CN111784798B (en) | 2020-06-30 | 2020-06-30 | Map generation method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111784798B (en) |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105224582A (en) * | 2014-07-03 | 2016-01-06 | 联想(北京)有限公司 | Information processing method and equipment |
CN105674993A (en) * | 2016-01-15 | 2016-06-15 | 武汉光庭科技有限公司 | Binocular camera-based high-precision visual sense positioning map generation system and method |
CN106485744A (en) * | 2016-10-10 | 2017-03-08 | 成都奥德蒙科技有限公司 | A kind of synchronous superposition method |
US20170228933A1 (en) * | 2016-02-04 | 2017-08-10 | Autochips Inc. | Method and apparatus for updating navigation map |
WO2018027206A1 (en) * | 2016-08-04 | 2018-02-08 | Reification Inc. | Methods for simultaneous localization and mapping (slam) and related apparatus and systems |
CN107705333A (en) * | 2017-09-21 | 2018-02-16 | 歌尔股份有限公司 | Space-location method and device based on binocular camera |
CN107741233A (en) * | 2017-11-10 | 2018-02-27 | 邦鼓思电子科技(上海)有限公司 | A kind of construction method of the outdoor map of three-dimensional |
CN108694882A (en) * | 2017-04-11 | 2018-10-23 | 百度在线网络技术(北京)有限公司 | Method, apparatus and equipment for marking map |
CN109544443A (en) * | 2018-11-30 | 2019-03-29 | 北京小马智行科技有限公司 | A kind of route drawing generating method and device |
CN109887053A (en) * | 2019-02-01 | 2019-06-14 | 广州小鹏汽车科技有限公司 | A kind of SLAM map joining method and system |
CN109934862A (en) * | 2019-02-22 | 2019-06-25 | 上海大学 | A kind of binocular vision SLAM method that dotted line feature combines |
CN110033489A (en) * | 2018-01-12 | 2019-07-19 | 华为技术有限公司 | A kind of appraisal procedure, device and the equipment of vehicle location accuracy |
CN110148099A (en) * | 2019-05-29 | 2019-08-20 | 北京百度网讯科技有限公司 | Modification method and device, electronic equipment, the computer-readable medium of projection relation |
CN110288710A (en) * | 2019-06-26 | 2019-09-27 | Oppo广东移动通信有限公司 | A kind of processing method of three-dimensional map, processing unit and terminal device |
CN110617821A (en) * | 2018-06-19 | 2019-12-27 | 北京嘀嘀无限科技发展有限公司 | Positioning method, positioning device and storage medium |
CN110686686A (en) * | 2019-06-04 | 2020-01-14 | 北京嘀嘀无限科技发展有限公司 | System and method for map matching |
CN111210518A (en) * | 2020-01-15 | 2020-05-29 | 西安交通大学 | Topological map generation method based on visual fusion landmark |
Non-Patent Citations (1)
Title |
---|
LI Yicheng et al.: "Road Environment Visual Map Construction for Intelligent Vehicle Localization", China Journal of Highway and Transport (《中国公路学报》) *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114111813A (en) * | 2021-10-18 | 2022-03-01 | 阿波罗智能技术(北京)有限公司 | High-precision map element updating method and device, electronic equipment and storage medium |
CN114111813B (en) * | 2021-10-18 | 2024-06-18 | 阿波罗智能技术(北京)有限公司 | High-precision map element updating method and device, electronic equipment and storage medium |
CN114689036A (en) * | 2022-03-29 | 2022-07-01 | 深圳海星智驾科技有限公司 | Map updating method, automatic driving method, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111784798B (en) | 2021-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112785702B (en) | SLAM method based on tight coupling of 2D laser radar and binocular camera | |
CN109166149B (en) | Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU | |
CN111563415B (en) | Binocular vision-based three-dimensional target detection system and method | |
CN110853075B (en) | Visual tracking positioning method based on dense point cloud and synthetic view | |
CN111612728B (en) | 3D point cloud densification method and device based on binocular RGB image | |
CN108519102B (en) | Binocular vision mileage calculation method based on secondary projection | |
CN111274847B (en) | Positioning method | |
CN105160702A (en) | Stereoscopic image dense matching method and system based on LiDAR point cloud assistance | |
CN116295412A (en) | Depth camera-based indoor mobile robot dense map building and autonomous navigation integrated method | |
Parra et al. | Robust visual odometry for vehicle localization in urban environments | |
Zhou et al. | Lane information extraction for high definition maps using crowdsourced data | |
CN111784798B (en) | Map generation method and device, electronic equipment and storage medium | |
EP4455875A1 (en) | Feature map generation method and apparatus, storage medium, and computer device | |
CN112257668A (en) | Main and auxiliary road judging method and device, electronic equipment and storage medium | |
Bartl et al. | Optinopt: Dual optimization for automatic camera calibration by multi-target observations | |
Giosan et al. | Superpixel-based obstacle segmentation from dense stereo urban traffic scenarios using intensity, depth and optical flow information | |
Hermann et al. | Real-time dense 3d reconstruction from monocular video data captured by low-cost uavs | |
CN113227713A (en) | Method and system for generating environment model for positioning | |
CN113781639B (en) | Quick construction method for digital model of large-scene road infrastructure | |
CN115496873A (en) | Monocular vision-based large-scene lane mapping method and electronic equipment | |
Kang et al. | 3D urban reconstruction from wide area aerial surveillance video | |
Yabuuchi et al. | VMVG-Loc: Visual Localization for Autonomous Driving using Vector Map and Voxel Grid Map | |
CN114708321A (en) | Semantic-based camera pose estimation method and system | |
Unger et al. | Efficient stereo matching for moving cameras and decalibrated rigs | |
Busch et al. | High definition mapping using lidar traced trajectories |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||