CN112241010A - Positioning method, positioning device, computer equipment and storage medium - Google Patents
Positioning method, positioning device, computer equipment and storage medium
- Publication number
- CN112241010A (application CN201910873938.7A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- target body
- time point
- real
- global
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 64
- 230000009466 transformation Effects 0.000 claims description 28
- 238000012545 processing Methods 0.000 claims description 19
- 238000004590 computer program Methods 0.000 claims description 13
- 238000004422 calculation algorithm Methods 0.000 claims description 11
- 238000006243 chemical reaction Methods 0.000 claims description 9
- 238000001914 filtration Methods 0.000 claims description 8
- 238000005070 sampling Methods 0.000 claims description 8
- 230000005484 gravity Effects 0.000 claims description 6
- 238000005457 optimization Methods 0.000 claims description 6
- 238000011176 pooling Methods 0.000 claims description 6
- 238000005259 measurement Methods 0.000 claims description 4
- 230000008569 process Effects 0.000 description 8
- 238000005516 engineering process Methods 0.000 description 6
- 238000000605 extraction Methods 0.000 description 5
- 238000010586 diagram Methods 0.000 description 4
- 238000013135 deep learning Methods 0.000 description 3
- 239000011159 matrix material Substances 0.000 description 3
- 238000004364 calculation method Methods 0.000 description 2
- 230000006870 function Effects 0.000 description 2
- 238000013507 mapping Methods 0.000 description 2
- 230000011218 segmentation Effects 0.000 description 2
- 230000001360 synchronised effect Effects 0.000 description 2
- 238000013519 translation Methods 0.000 description 2
- 230000003044 adaptive effect Effects 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 230000002349 favourable effect Effects 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 230000004807 localization Effects 0.000 description 1
- 238000012821 model calculation Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000012549 training Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/45—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
- G01S19/47—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Computer Networks & Wireless Communication (AREA)
- Automation & Control Theory (AREA)
- Navigation (AREA)
Abstract
The application relates to a positioning method, a positioning device, a computer device and a storage medium. The method comprises the following steps: obtaining a global point cloud map and real-time point cloud data of a target body; determining, according to the global point cloud map and the target body real-time point cloud data, global point cloud feature information corresponding to the map and target body real-time point cloud feature information corresponding to the real-time data; performing feature matching between the global point cloud feature information and the target body real-time point cloud feature information to determine feature matching point pairs; and finally determining the pose information of the target body according to the feature matching point pairs. With this method, the position and attitude of the vehicle are obtained by matching the features observed by the laser radar against the stored map features, achieving high-precision and highly reliable positioning.
Description
Technical Field
The present application relates to the field of automatic driving technologies, and in particular, to a positioning method, an apparatus, a computer device, and a storage medium.
Background
While driving, an autonomous vehicle needs to be accurately positioned in real time to ensure the safety of the people inside it and of other vehicles on the road. Positioning is one of the key technologies in the field of automatic driving and a basic technology by which an unmanned vehicle perceives its surrounding environment.
In the prior art, Global Navigation Satellite System (GNSS) positioning is generally adopted.
However, when there is an obstacle or the signal is weak, GNSS positioning suffers from problems such as inaccurate positioning and poor reliability.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a positioning method, an apparatus, a computer device and a storage medium that can achieve high-precision and highly reliable positioning.
A method of positioning, the method comprising:
acquiring a global point cloud map and real-time point cloud data of a target body;
determining global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data according to the global point cloud map and the target body real-time point cloud data;
performing feature matching on the global point cloud feature information and the target body real-time point cloud feature information, and determining feature matching point pairs;
and determining the pose information of the target body according to the feature matching point pairs.
In one embodiment, the determining, according to the global point cloud map and the target real-time point cloud data, global point cloud feature information corresponding to the global point cloud map and target real-time point cloud feature information corresponding to the target real-time point cloud data includes:
performing down-sampling processing on the target body real-time point cloud data by adopting a voxel grid filtering method to obtain filtered target body real-time point cloud data;
and respectively extracting the characteristics of the global point cloud map and the filtered target body real-time point cloud data by utilizing a PointNet network to obtain global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data.
In one embodiment, the down-sampling the real-time point cloud data of the target volume by using a voxel grid filtering method to obtain the filtered real-time point cloud data of the target volume includes:
eliminating noise points and outliers in the target body real-time point cloud data to obtain the eliminated target body real-time point cloud data;
according to the removed target body real-time point cloud data, creating a three-dimensional voxel grid corresponding to the removed target body real-time point cloud data;
and calculating and summarizing the gravity center point of each voxel in the three-dimensional voxel grid, and determining the filtered real-time point cloud data of the target body.
In one embodiment, the performing, by using a PointNet network, feature extraction on the global point cloud map and the filtered target body real-time point cloud data respectively to obtain global point cloud feature information corresponding to the global point cloud map and target body real-time point cloud feature information corresponding to the target body real-time point cloud data includes:
and respectively carrying out point cloud data angle conversion, multilayer convolution processing, characteristic data conversion, multilayer convolution processing, pooling operation and full connection layer processing on the global point cloud map and the filtered target body real-time point cloud data in sequence to obtain global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data.
In one embodiment, the performing feature matching on the global point cloud feature information and the target real-time point cloud feature information, and determining a pair of feature matching points includes:
acquiring an initial pose transformation relation;
calculating initial position information corresponding to the target body real-time point cloud characteristic information in the global point cloud characteristic information according to the initial pose transformation relation;
performing feature matching on the target body real-time point cloud feature information and initial position information in a preset range by adopting the FLANN algorithm, and determining homonymous point pairs;
and eliminating wrong point pairs in the homonymous point pairs by using the RANSAC method to obtain feature matching point pairs.
In one embodiment, the acquiring the initial pose transformation relationship includes:
acquiring positioning data of a target body through a global navigation satellite system and acquiring attitude data of the target body through an inertial measurement unit;
and establishing an initial pose transformation relation according to the positioning data of the target body, the attitude data of the target body and the pose information of the target body at the previous moment.
In one embodiment, the determining pose information of the target body according to the feature matching point pairs includes:
calculating a target pose transformation relation between the global point cloud characteristic information and the target body real-time point cloud characteristic information according to the characteristic matching point pairs;
and determining the pose information of the target body according to the target pose transformation relation between the global point cloud characteristic information and the real-time point cloud characteristic information of the target body.
In one embodiment, the obtaining the global point cloud map comprises:
acquiring global data returned by a sensor;
and sequentially carrying out adjacent frame matching and data optimization on the global data returned by the sensor by adopting a laser SLAM algorithm to obtain a global point cloud map.
A positioning device, the device comprising:
the data acquisition module is used for acquiring a global point cloud map and real-time point cloud data of a target body;
the characteristic information determining module is used for determining global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data according to the global point cloud map and the target body real-time point cloud data;
the matching point pair determining module is used for performing feature matching on the global point cloud feature information and the target body real-time point cloud feature information to determine a feature matching point pair;
and the pose information determining module is used for determining the pose information of the target body according to the feature matching point pairs.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the method as claimed in any one of the above when the computer program is executed.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of the preceding claims.
According to the positioning method, positioning device, computer device and storage medium described above, a global point cloud map and target body real-time point cloud data are obtained; global point cloud feature information corresponding to the global point cloud map and target body real-time point cloud feature information corresponding to the real-time point cloud data are determined from them; feature matching is then performed between the global and real-time feature information to determine feature matching point pairs; and finally the pose information of the target body is determined from the feature matching point pairs. By matching the features observed by the laser radar against the stored map features, the method obtains the position and attitude of the vehicle and can achieve high-precision, highly reliable positioning.
Drawings
FIG. 1 is a diagram of an application environment of a positioning method in one embodiment;
FIG. 2 is a flow chart illustrating a positioning method according to an embodiment;
FIG. 3 is a flowchart illustrating step S22 according to an embodiment;
FIG. 4 is a block diagram of a positioning device in accordance with one embodiment;
FIG. 5 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The positioning method provided by the application can be applied to the application environment shown in fig. 1, in which the terminal 102 and the server 104 communicate via a network. The terminal 102 acquires the global point cloud map and the target body real-time point cloud data and transmits them to the server 104 through the network. The server 104 determines global point cloud feature information corresponding to the global point cloud map and target body real-time point cloud feature information corresponding to the target body real-time point cloud data, then performs feature matching between the two, determines feature matching point pairs, and finally determines the pose information of the target body according to the feature matching point pairs. The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smart phone, tablet computer or portable wearable device, and the server 104 may be implemented as an independent server or as a server cluster formed by a plurality of servers.
In one embodiment, as shown in fig. 2, a positioning method is provided. The method is described here as applied to the server 104 in fig. 1 and includes the following steps:
step S1: acquiring a global point cloud map and real-time point cloud data of a target body;
step S2: determining global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data according to the global point cloud map and the target body real-time point cloud data;
step S3: performing feature matching on the global point cloud feature information and the target body real-time point cloud feature information, and determining feature matching point pairs;
step S4: and determining the pose information of the target body according to the feature matching point pairs.
In steps S1-S4, the global point cloud map refers to the point cloud map corresponding to the surrounding environment in which the target body is located. The target body real-time point cloud data refer to the point cloud data corresponding to any object at any time. The real-time point cloud data are acquired with a laser radar (lidar). Lidar achieves accurate ranging and positioning, and offers long detection range, high accuracy, simple measurement models and stable operation under different illumination intensities; it is an indispensable sensor in an automatic-driving environment sensing system and, in comprehensive performance, the strongest of the three main environment-sensing sensors.
Further, a feature matching point pair is a pair of points that meets the matching requirement. The pose information of the target body refers to its position and attitude, where the target body may be a vehicle or another object.
Conventional laser positioning generally extracts features that are designed by hand, and the resulting feature descriptors have limited discriminative power. This application introduces deep learning into the laser positioning module for automatic driving: the strong feature description capability and high robustness of deep learning methods provide a new approach to real-time positioning for automatic driving. Deep learning is mainly used here to extract features from the global point cloud map and the target body real-time point cloud data.
According to the positioning method, a global point cloud map and target body real-time point cloud data are obtained; global point cloud feature information corresponding to the global point cloud map and target body real-time point cloud feature information corresponding to the real-time point cloud data are determined from them; feature matching is performed between the global and real-time point cloud feature information to determine feature matching point pairs; and the pose information of the target body is finally determined from these pairs. The position and attitude of the vehicle are thus obtained by matching the features observed by the laser radar against the stored map features, achieving high-precision and highly reliable positioning.
In one embodiment, the step S2 includes:
step S21: performing down-sampling processing on the target body real-time point cloud data by adopting a voxel grid filtering method to obtain filtered target body real-time point cloud data;
step S22: and respectively extracting the characteristics of the global point cloud map and the filtered target body real-time point cloud data by utilizing a PointNet network to obtain global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data.
In steps S21-S22, the geometric structure of a point cloud comprises not only its macroscopic shape but also its microscopic arrangement, for example similar spacing in the transverse direction and equal distances in the longitudinal direction. Point clouds sampled with high-resolution devices are often dense, and an excessive number of points makes subsequent segmentation difficult. For these reasons the point cloud is downsampled with a voxel grid filter, which reduces the number of points without destroying the geometric structure of the point cloud.
Further, PointNet is a neural network that processes point clouds directly: it takes the raw point cloud as input and outputs either a classification label for the whole input or a segment/part label for each input point. The basic architecture is very simple: in the initial stages each point is processed identically and independently, and in the basic setting each point is represented only by its three coordinates (x, y, z); further dimensions can be added by computing normals or other local or global features. PointNet respects the permutation invariance of the input points and provides a unified architecture for tasks ranging from object classification and part segmentation to scene semantic parsing. Although simple, PointNet is efficient and effective.
In one embodiment, the step S21 includes:
step S211: eliminating noise points and outliers in the target body real-time point cloud data to obtain the eliminated target body real-time point cloud data;
step S212: according to the removed target body real-time point cloud data, creating a three-dimensional voxel grid corresponding to the removed target body real-time point cloud data;
step S213: and calculating and summarizing the gravity center point of each voxel in the three-dimensional voxel grid, and determining the filtered real-time point cloud data of the target body.
In steps S211-S213, the voxel grid unit, such as its size, must be set when the three-dimensional voxel grid is created and can be chosen according to specific needs. The center of gravity of each voxel then represents all points within that voxel.
Downsampling with a voxelized grid reduces the number of points and thus the volume of point cloud data while keeping the shape characteristics of the cloud, which is very practical for speeding up algorithms such as registration, surface reconstruction and shape recognition.
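As an illustration of this step, the following is a minimal NumPy sketch of voxel grid downsampling that replaces the points in each voxel with their center of gravity. The function name and the voxel size are illustrative choices, not taken from the patent:

```python
import numpy as np

def voxel_grid_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Replace all points falling into the same voxel by their center of gravity."""
    # Map each point to an integer voxel index.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel; `inverse` maps every point to its voxel's row.
    uniq, inverse, counts = np.unique(
        voxel_idx, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)   # guard against NumPy version differences
    # Accumulate coordinates per voxel, then divide by each voxel's point count.
    centroids = np.zeros((len(uniq), 3))
    np.add.at(centroids, inverse, points)
    return centroids / counts[:, None]

if __name__ == "__main__":
    cloud = np.random.rand(100000, 3) * 50.0            # stand-in for a lidar scan
    filtered = voxel_grid_downsample(cloud, voxel_size=0.5)
    print(cloud.shape, "->", filtered.shape)            # far fewer points
```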
In one embodiment, referring to fig. 3, the step S22 includes:
step S221: and respectively carrying out point cloud data angle conversion, multilayer convolution processing, characteristic data conversion, multilayer convolution processing, pooling operation and full connection layer processing on the global point cloud map and the filtered target body real-time point cloud data in sequence to obtain global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data.
Specifically, the angle conversion of the point cloud data rotates the point cloud to an angle more favourable for classification. The pooling operation uses max pooling to form a 1 × 1024 feature vector, and the fully connected layers then map this vector to the final feature values, trained with a softmax loss function.
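As a rough sketch of this feature extraction backbone, the following PyTorch module shows the shared per-point MLP followed by the order-invariant max pooling that produces the 1 × 1024 global feature. It omits the input and feature transform networks (the "angle conversion" and "feature data conversion" steps) and the classification head; the layer sizes follow the original PointNet paper and are an assumption here:

```python
import torch
import torch.nn as nn

class PointNetEncoder(nn.Module):
    """Shared per-point MLPs (1x1 convolutions) + symmetric max pooling."""

    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),       # per-point, weights shared
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (batch, 3, num_points) -> per-point features (batch, 1024, num_points)
        feat = self.mlp(xyz)
        # Max pooling over the point dimension is invariant to point order.
        return feat.max(dim=2).values             # (batch, 1024)

if __name__ == "__main__":
    points = torch.rand(2, 3, 2048)               # two clouds of 2048 points each
    print(PointNetEncoder()(points).shape)        # torch.Size([2, 1024])
```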
In one embodiment, the step S3 includes:
step S31: acquiring an initial pose transformation relation;
step S32: calculating initial position information corresponding to the target body real-time point cloud characteristic information in the global point cloud characteristic information according to the initial pose transformation relation;
step S33: performing feature matching on the target body real-time point cloud feature information and the initial position information in a preset range by adopting the FLANN algorithm, and determining homonymous point pairs;
step S34: and eliminating wrong point pairs in the homonymous point pairs by using the RANSAC method to obtain feature matching point pairs.
In steps S31-S34, the initial pose transformation relation serves in the present application as auxiliary positioning that improves accuracy. FLANN (Fast Library for Approximate Nearest Neighbors) performs fast nearest-neighbour search in high-dimensional data; it essentially finds consistency between frames by locating, for each descriptor in one set (the query set), its nearest neighbours in another set (equivalent to a training set), and feature matching with FLANN improves matching efficiency. In addition, because the initial position information is a point marked with specific geographic information and the target body moves over time, points within a certain range must be selected for gradual matching; the preset range may therefore be circular, rectangular or another shape, adjusted as needed.
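The sketch below illustrates this matching pattern, with a KD-tree from SciPy standing in for FLANN (FLANN itself is exposed through bindings such as OpenCV's FlannBasedMatcher); the ratio-test threshold is an illustrative choice:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_descriptors(map_desc: np.ndarray, scan_desc: np.ndarray, ratio: float = 0.8):
    """Nearest-neighbour matching with Lowe's ratio test.

    map_desc:  (M, D) descriptors from the map region around the predicted pose.
    scan_desc: (N, D) descriptors from the current real-time scan.
    Returns index pairs (scan_idx, map_idx) of candidate homonymous points.
    """
    tree = cKDTree(map_desc)
    dist, idx = tree.query(scan_desc, k=2)        # two nearest neighbours each
    good = dist[:, 0] < ratio * dist[:, 1]        # keep only unambiguous matches
    return np.flatnonzero(good), idx[good, 0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    map_f, scan_f = rng.normal(size=(500, 64)), rng.normal(size=(200, 64))
    scan_idx, map_idx = match_descriptors(map_f, scan_f)
    print(len(scan_idx), "candidate pairs")
```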
Random Sample Consensus (RANSAC) is a fine-matching method: random samples of the candidate pairs are repeatedly used to fit the parameters of the transformation model (the entries of its matrix), and the hypothesis with the largest consensus set identifies the same object across different observations, so that erroneous point pairs can be rejected.
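A minimal sketch of this rejection step, assuming rigid 3-D motion; the iteration count and inlier threshold are illustrative, and the least-squares fit inside is the standard SVD-based (Kabsch) method:

```python
import numpy as np

def ransac_rigid(src: np.ndarray, dst: np.ndarray,
                 iters: int = 500, thresh: float = 0.3, seed: int = 0) -> np.ndarray:
    """Keep the largest set of pairs consistent with a single rigid transform.

    src, dst: (N, 3) matched points (real-time scan -> global map).
    Returns a boolean inlier mask over the candidate pairs.
    """
    rng = np.random.default_rng(seed)

    def fit(P, Q):                       # closed-form rigid fit (Kabsch/SVD)
        Pc, Qc = P - P.mean(0), Q - Q.mean(0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T               # proper rotation (det = +1)
        return R, Q.mean(0) - R @ P.mean(0)

    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        sample = rng.choice(len(src), size=3, replace=False)
        R, t = fit(src[sample], dst[sample])
        inliers = np.linalg.norm(src @ R.T + t - dst, axis=1) < thresh
        if inliers.sum() > best.sum():   # keep the largest consensus set
            best = inliers
    return best
```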
In one embodiment, the step S31 includes:
step S311: acquiring positioning data of a target body through a global navigation satellite system and acquiring attitude data of the target body through an inertial measurement unit;
step S312: and establishing an initial pose transformation relation according to the positioning data of the target body, the attitude data of the target body and the pose information of the target body at the previous moment.
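One plausible reading of these steps is sketched below under loud assumptions: GNSS supplies a position, the IMU supplies an attitude, and the previous pose serves as a fallback. The patent does not specify the fusion rule, so the function and its 4 × 4 homogeneous-matrix representation are purely illustrative:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def initial_pose_guess(gnss_xyz, imu_rpy, last_pose=None):
    """Hypothetical fusion: build a 4x4 initial pose from GNSS position and
    IMU roll/pitch/yaw, falling back to the previous pose when GNSS is lost."""
    if gnss_xyz is None:
        return last_pose if last_pose is not None else np.eye(4)
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", imu_rpy).as_matrix()
    T[:3, 3] = gnss_xyz
    return T

if __name__ == "__main__":
    T0 = initial_pose_guess([10.0, 5.0, 0.2], [0.0, 0.0, np.pi / 4])
    print(initial_pose_guess(None, None, last_pose=T0)[:3, 3])  # falls back
```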
In one embodiment, the step S4 includes:
step S41: calculating a target pose transformation relation between the global point cloud characteristic information and the target body real-time point cloud characteristic information according to the characteristic matching point pairs;
step S42: and determining the pose information of the target body according to the target pose transformation relation between the global point cloud characteristic information and the real-time point cloud characteristic information of the target body.
In steps S41-S42, the target pose transformation relation (R, T) between the global point cloud feature information and the target body real-time point cloud feature information is calculated from the matched feature point pairs, yielding the position and attitude information of the target body, where R denotes the rotation matrix and T the translation vector.
Specifically, R and T take the form:

R = R(α, β, γ),   T = [tx ty tz]^T

where α, β and γ are the rotation angles about the x-, y- and z-axes, respectively, and tx, ty and tz are the translation components along those axes.
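A short NumPy sketch of this parameterization; the Rz·Ry·Rx multiplication order is one common convention and an assumption here, since the patent does not fix it:

```python
import numpy as np

def rotation_from_euler(alpha: float, beta: float, gamma: float) -> np.ndarray:
    """R = Rz(gamma) @ Ry(beta) @ Rx(alpha): rotations about x, y, then z."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def apply_pose(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map scan-frame points into the global map frame: p' = R p + T."""
    return points @ R.T + t

if __name__ == "__main__":
    R = rotation_from_euler(0.0, 0.0, np.pi / 2)          # 90 deg about z
    print(apply_pose(np.array([[1.0, 0.0, 0.0]]), R, np.zeros(3)))
    # -> approximately [[0., 1., 0.]]
```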
In one embodiment, the step S1 includes:
step S11: acquiring global data returned by a sensor;
step S12: and sequentially carrying out adjacent frame matching and data optimization on the global data returned by the sensor by adopting a laser SLAM algorithm to obtain a global point cloud map.
In steps S11-S12, SLAM (Simultaneous Localization And Mapping) works as follows: the robot starts to move from an unknown position in an unknown environment, localizes itself during motion from its pose estimates and the map, and at the same time builds an incremental map on the basis of this self-localization, thereby achieving autonomous localization and navigation.
Acquiring the global point cloud map requires front-end processing and back-end optimization: the front end matches and compares adjacent point cloud frames with the Iterative Closest Point (ICP) algorithm, and the back end optimizes the front-end output with an extended Kalman filter.
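For the front end, the following is a bare-bones point-to-point ICP sketch (nearest neighbours via a KD-tree, closed-form SVD update per iteration); the iteration count and tolerance are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src: np.ndarray, dst: np.ndarray, iters: int = 30, tol: float = 1e-6):
    """Align frame `src` (N, 3) to frame `dst` (M, 3); returns R, t."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    prev_err = np.inf
    for _ in range(iters):
        moved = src @ R.T + t
        dist, idx = tree.query(moved)             # current correspondences
        P, Q = moved, dst[idx]
        Pc, Qc = P - P.mean(0), Q - Q.mean(0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)       # closed-form rigid update
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = Q.mean(0) - dR @ P.mean(0)
        R, t = dR @ R, dR @ t + dt                # compose with running estimate
        if abs(prev_err - dist.mean()) < tol:     # stop when error plateaus
            break
        prev_err = dist.mean()
    return R, t
```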
It should be understood that, although the steps in the flowchart of fig. 2 are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the order of these steps is not strictly limited and they may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4, there is provided a positioning device comprising: the data acquisition module 110, the feature information determination module 210, the matching point pair determination module 310, and the pose information determination module 410, wherein:
a data acquisition module 110, configured to acquire a global point cloud map and target real-time point cloud data;
a feature information determining module 210, configured to determine, according to the global point cloud map and the target real-time point cloud data, global point cloud feature information corresponding to the global point cloud map and target real-time point cloud feature information corresponding to the target real-time point cloud data;
a matching point pair determining module 310, configured to perform feature matching on the global point cloud feature information and the target real-time point cloud feature information, and determine a feature matching point pair;
and a pose information determining module 410, configured to determine pose information of the target body according to the feature matching point pairs.
In one embodiment, the feature information determining module 210 includes:
a down-sampling module 2101, configured to perform down-sampling processing on the target real-time point cloud data by using a voxel grid filtering method to obtain filtered target real-time point cloud data;
a feature information extraction module 2102, configured to perform feature extraction on the global point cloud map and the filtered target real-time point cloud data by using a PointNet network, respectively, to obtain global point cloud feature information corresponding to the global point cloud map and target real-time point cloud feature information corresponding to the target real-time point cloud data.
In one embodiment, the down-sampling module 2101 comprises:
the eliminating module 2101a is used for eliminating noise points and outliers in the target body real-time point cloud data to obtain the eliminated target body real-time point cloud data;
a grid establishing module 2101b, configured to create a three-dimensional voxel grid corresponding to the removed target body real-time point cloud data according to the removed target body real-time point cloud data;
and a center of gravity point calculation module 2101c, configured to calculate and summarize a center of gravity point of each voxel in the three-dimensional voxel grid, and determine filtered real-time point cloud data of the target volume.
In one embodiment, the feature information extraction module 2102 comprises:
a data processing module 2102a, configured to perform point cloud data angle conversion, multilayer convolution processing, feature data conversion, multilayer convolution processing, pooling operation, and full link layer processing on the global point cloud map and the filtered target real-time point cloud data in sequence, respectively, to obtain global point cloud feature information corresponding to the global point cloud map and target real-time point cloud feature information corresponding to the target real-time point cloud data.
In one embodiment, the matching point pair determining module 310 includes:
an initial transformation relation obtaining module 3101, configured to obtain an initial pose transformation relation;
a position information determining module 3102, configured to calculate, according to the initial pose transformation relationship, initial position information corresponding to the target real-time point cloud feature information in the global point cloud feature information;
a homonymous point pair determining module 3103, configured to perform feature matching on the target real-time point cloud feature information and initial position information within a preset range by using the FLANN algorithm, and determine homonymous point pairs;
and an erroneous point pair removing module 3104, configured to remove erroneous point pairs from the homonymous point pairs by using the RANSAC method to obtain feature matching point pairs.
In one embodiment, the transformation relation obtaining module 3101 includes:
an attitude data acquisition module 3101a, configured to acquire positioning data of the target body through a global navigation satellite system and acquire attitude data of the target body through an inertial measurement unit;
the initial transformation relation establishing module 3101b is configured to establish an initial pose transformation relation according to the positioning data of the target body, the attitude data of the target body, and the pose information of the target body at the previous moment.
In one embodiment, the pose information determination module 410 includes:
a target transformation relation calculation module 4101, configured to calculate a target pose transformation relation between the global point cloud feature information and the target real-time point cloud feature information according to the feature matching point pairs;
a pose information calculating module 4102, configured to determine pose information of the target object according to a target pose transformation relationship between the global point cloud feature information and the target object real-time point cloud feature information.
In one embodiment, the data acquisition module 110 includes:
a global data acquisition module 1101, configured to acquire global data returned by the sensor;
and the global point cloud map determining module 1102 is configured to sequentially perform adjacent frame matching and data optimization on the global data returned by the sensor by using a laser SLAM algorithm to obtain a global point cloud map.
For the specific definition of the positioning device, reference may be made to the definition of the positioning method above, which is not repeated here. The modules in the positioning device may be implemented in whole or in part by software, hardware or a combination of the two. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server whose internal structure may be as shown in fig. 5. The computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device stores the relevant data. The network interface of the computer device communicates with an external terminal through a network connection. The computer program, when executed by a processor, implements a positioning method.
Those skilled in the art will appreciate that the architecture shown in fig. 5 is merely a block diagram of part of the structure related to the present solution and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a global point cloud map and real-time point cloud data of a target body;
determining global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data according to the global point cloud map and the target body real-time point cloud data;
performing feature matching on the global point cloud feature information and the target body real-time point cloud feature information, and determining feature matching point pairs;
and determining the pose information of the target body according to the feature matching point pairs.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a global point cloud map and real-time point cloud data of a target body;
determining global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data according to the global point cloud map and the target body real-time point cloud data;
performing feature matching on the global point cloud feature information and the target body real-time point cloud feature information, and determining feature matching point pairs;
and determining the pose information of the target body according to the feature matching point pairs.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium and which, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination contains no contradiction, it should be considered within the scope of this specification.
The above embodiments merely express several implementations of the present application, and although their description is specific and detailed, they are not to be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A method of positioning, the method comprising:
acquiring a global point cloud map and real-time point cloud data of a target body;
determining global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data according to the global point cloud map and the target body real-time point cloud data;
performing feature matching on the global point cloud feature information and the target body real-time point cloud feature information, and determining feature matching point pairs;
and determining the pose information of the target body according to the feature matching point pairs.
2. The method of claim 1, wherein determining global point cloud feature information corresponding to the global point cloud map and target real-time point cloud feature information corresponding to the target real-time point cloud data from the global point cloud map and the target real-time point cloud data comprises:
performing down-sampling processing on the target body real-time point cloud data by adopting a voxel grid filtering method to obtain filtered target body real-time point cloud data;
and respectively extracting the characteristics of the global point cloud map and the filtered target body real-time point cloud data by utilizing a PointNet network to obtain global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data.
3. The method of claim 2, wherein the down-sampling the real-time point cloud data of the target volume by using a voxel grid filtering method to obtain the filtered real-time point cloud data of the target volume comprises:
eliminating noise points and outliers in the target body real-time point cloud data to obtain the eliminated target body real-time point cloud data;
according to the removed target body real-time point cloud data, creating a three-dimensional voxel grid corresponding to the removed target body real-time point cloud data;
and calculating and summarizing the gravity center point of each voxel in the three-dimensional voxel grid, and determining the filtered real-time point cloud data of the target body.
4. The method of claim 3, wherein the extracting the features of the global point cloud map and the filtered real-time point cloud data of the target object by using a PointNet network to obtain the global point cloud feature information corresponding to the global point cloud map and the real-time point cloud feature information corresponding to the real-time point cloud data of the target object comprises:
respectively and sequentially carrying out point cloud data angle conversion, multilayer convolution processing, characteristic data conversion, multilayer convolution processing, pooling operation and full connection layer processing on the global point cloud map and the filtered target body real-time point cloud data to obtain global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data;
and/or performing feature matching on the global point cloud feature information and the target body real-time point cloud feature information, and determining feature matching point pairs comprises:
acquiring an initial pose transformation relation;
calculating initial position information corresponding to the target body real-time point cloud characteristic information in the global point cloud characteristic information according to the initial pose transformation relation;
performing feature matching on the target body real-time point cloud feature information and initial position information in a preset range by adopting the FLANN algorithm, and determining homonymous point pairs;
and eliminating wrong point pairs in the homonymous point pairs by using the RANSAC method to obtain feature matching point pairs.
5. The method according to claim 4, wherein the acquiring an initial pose transformation relationship comprises:
acquiring positioning data of a target body through a global navigation satellite system and acquiring attitude data of the target body through an inertial measurement unit;
and establishing an initial pose transformation relation according to the positioning data of the target body, the attitude data of the target body and the pose information of the target body at the previous moment.
6. The method according to claim 4, wherein the determining pose information of the target volume according to the pair of feature matching points comprises:
calculating a target pose transformation relation between the global point cloud characteristic information and the target body real-time point cloud characteristic information according to the characteristic matching point pairs;
and determining the pose information of the target body according to the target pose transformation relation between the global point cloud characteristic information and the real-time point cloud characteristic information of the target body.
7. The method of claim 1, wherein the obtaining a global point cloud map comprises:
acquiring global data returned by a sensor;
and sequentially carrying out adjacent frame matching and data optimization on the global data returned by the sensor by adopting a laser SLAM algorithm to obtain a global point cloud map.
8. A positioning device, the device comprising:
the data acquisition module is used for acquiring a global point cloud map and real-time point cloud data of a target body;
the characteristic information determining module is used for determining global point cloud characteristic information corresponding to the global point cloud map and target body real-time point cloud characteristic information corresponding to the target body real-time point cloud data according to the global point cloud map and the target body real-time point cloud data;
the matching point pair determining module is used for performing feature matching on the global point cloud feature information and the target body real-time point cloud feature information to determine a feature matching point pair;
and the pose information determining module is used for determining the pose information of the target body according to the feature matching point pairs.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910873938.7A CN112241010A (en) | 2019-09-17 | 2019-09-17 | Positioning method, positioning device, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910873938.7A CN112241010A (en) | 2019-09-17 | 2019-09-17 | Positioning method, positioning device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112241010A true CN112241010A (en) | 2021-01-19 |
Family
ID=74168161
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910873938.7A Pending CN112241010A (en) | 2019-09-17 | 2019-09-17 | Positioning method, positioning device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112241010A (en) |
- 2019-09-17: Application CN201910873938.7A filed; publication CN112241010A, status Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109308737A (en) * | 2018-07-11 | 2019-02-05 | 重庆邮电大学 | A kind of mobile robot V-SLAM method of three stage point cloud registration methods |
CN110047142A (en) * | 2019-03-19 | 2019-07-23 | 中国科学院深圳先进技术研究院 | No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
赵兴东 (Zhao Xingdong) et al.: "矿用三维激光数字测量原理及其工程应用" [Principles of Mine 3D Laser Digital Measurement and Its Engineering Applications], Metallurgical Industry Press (冶金工业出版社), 31 January 2016, pages 51-54 *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112950708A (en) * | 2021-02-05 | 2021-06-11 | 深圳市优必选科技股份有限公司 | Positioning method, positioning device and robot |
CN112950708B (en) * | 2021-02-05 | 2023-12-15 | 深圳市优必选科技股份有限公司 | Positioning method, positioning device and robot |
CN113008274A (en) * | 2021-03-19 | 2021-06-22 | 奥特酷智能科技(南京)有限公司 | Vehicle initialization positioning method, system and computer readable medium |
CN115201833A (en) * | 2021-04-08 | 2022-10-18 | 中强光电股份有限公司 | Object positioning method and object positioning system |
WO2022247306A1 (en) * | 2021-05-28 | 2022-12-01 | 同济大学 | Unmanned aerial vehicle positioning method based on millimeter wave radar |
CN113447032A (en) * | 2021-06-29 | 2021-09-28 | 东软睿驰汽车技术(沈阳)有限公司 | Positioning method, positioning device, electronic equipment and storage medium |
CN113763475A (en) * | 2021-09-24 | 2021-12-07 | 北京百度网讯科技有限公司 | Positioning method, device, equipment, system, medium and automatic driving vehicle |
CN114353780A (en) * | 2021-12-31 | 2022-04-15 | 高德软件有限公司 | Attitude optimization method and equipment |
CN114353780B (en) * | 2021-12-31 | 2024-04-02 | 高德软件有限公司 | Gesture optimization method and device |
CN117670785A (en) * | 2022-08-31 | 2024-03-08 | 北京三快在线科技有限公司 | Ghost detection method of point cloud map |
CN117274331A (en) * | 2023-09-19 | 2023-12-22 | 北京斯年智驾科技有限公司 | Positioning registration optimization method, system, device and storage medium |
CN118379356A (en) * | 2024-06-21 | 2024-07-23 | 南方电网调峰调频发电有限公司 | Object security determination method, device, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112241010A (en) | Positioning method, positioning device, computer equipment and storage medium | |
CN112907491B (en) | Laser point cloud loop detection method and system suitable for underground roadway | |
US20200159222A1 (en) | Object tracking using depth information | |
CN110047108B (en) | Unmanned aerial vehicle pose determination method and device, computer equipment and storage medium | |
CN110176032B (en) | Three-dimensional reconstruction method and device | |
CN112837352B (en) | Image-based data processing method, device and equipment, automobile and storage medium | |
CN114088081B (en) | Map construction method for accurate positioning based on multistage joint optimization | |
CN112327326A (en) | Two-dimensional map generation method, system and terminal with three-dimensional information of obstacles | |
CN108053445A (en) | The RGB-D camera motion methods of estimation of Fusion Features | |
CN108776991A (en) | Three-dimensional modeling method, device, storage medium and computer equipment | |
CN112986982B (en) | Environment map reference positioning method and device and mobile robot | |
CN114549286A (en) | Lane line generation method and device, computer-readable storage medium and terminal | |
Sun et al. | An improved binocular visual odometry algorithm based on the random sample consensus in visual navigation systems | |
CN115307641A (en) | Robot positioning method, device, robot and storage medium | |
CN117115414B (en) | GPS-free unmanned aerial vehicle positioning method and device based on deep learning | |
CN117664124A (en) | Inertial guidance and visual information fusion AGV navigation system and method based on ROS | |
CN117333846A (en) | Detection method and system based on sensor fusion and incremental learning in severe weather | |
CN117036484A (en) | Visual positioning and mapping method, system, equipment and medium based on geometry and semantics | |
Zhong et al. | A factor graph optimization mapping based on normaldistributions transform | |
CN112802095B (en) | Positioning method, device and equipment, and automatic driving positioning system | |
CN115984321A (en) | Speed measuring method, device, equipment and storage medium | |
CN115239902A (en) | Method, device and equipment for establishing surrounding map of mobile equipment and storage medium | |
Li et al. | Pose estimation of metal workpieces based on RPM-Net for robot grasping from point cloud | |
CN111951304A (en) | Target tracking method, device and equipment based on mutual supervision twin network | |
Shilin et al. | Application of a Depth Camera for Constructing Complex Three-Dimensional Models in Multiple Scanning Complexes |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 100176 floor 10, building 1, zone 2, yard 9, Taihe 3rd Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing. Applicant after: Beijing National New Energy Vehicle Technology Innovation Center Co.,Ltd. Address before: 100176 1705, block a, building 1, No. 10, Ronghua Middle Road, economic and Technological Development Zone, Daxing District, Beijing. Applicant before: BEIJING NEW ENERGY VEHICLE TECHNOLOGY INNOVATION CENTER Co.,Ltd. |