
CN114325634B - A highly robust method for extracting traversable areas in wild environments based on LiDAR - Google Patents


Info

Publication number
CN114325634B
CN114325634B
Authority
CN
China
Prior art keywords
points
point cloud
laser radar
plane
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111592871.3A
Other languages
Chinese (zh)
Other versions
CN114325634A (en)
Inventor
黄凯
林洪权
单云霄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202111592871.3A priority Critical patent/CN114325634B/en
Publication of CN114325634A publication Critical patent/CN114325634A/en
Application granted granted Critical
Publication of CN114325634B publication Critical patent/CN114325634B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a highly robust LiDAR-based method for extracting passable areas in field environments, comprising the following steps: S1, point cloud level correction; S2, point cloud coordinate projection; S3, point cloud semantic segmentation; S4, multi-segment plane fitting; S5, non-planar point clustering; S6, positive and negative obstacle label correction; S7, missed obstacle completion. The method addresses the difficulty of perception at night caused by low visibility and the complex, unstructured environment. It not only provides essential information for unmanned vehicles and autonomous robots to avoid obstacles in field environments, but also provides the necessary conditions for subsequent path planning and autonomous exploration. The method solves the problem that jolting of the unmanned vehicle platform makes the LiDAR point cloud change sharply across consecutive frames during data acquisition, and effectively improves the accuracy and robustness of ground passable-area extraction.

Description

Highly robust LiDAR-based method for extracting passable areas in field environments
Technical Field
The invention relates to the technical field of intelligent driving for unmanned vehicles, and in particular to a highly robust LiDAR-based method for extracting passable areas in field environments.
Background
In the prior art, passable areas are extracted with camera sensors, which are usable only under suitable environmental conditions such as adequate illumination. Camera sensors impose harsh environmental requirements and are sensitive to illumination changes, while field environments are complex and changeable, making recognition at night difficult. Extraction based on markers such as lane lines or boundary contour lines can identify and analyze passable regions only in fixed, structured environments; the absence of regularly placed lane lines or boundary contours in the field degrades the accuracy of passable-area extraction. LiDAR has been used to extract ground passable areas or to perform path planning, but field scenes are more complex and irregular than flat urban roads, and extraction algorithms based on regular road markings can fail there. In addition, noise caused by the jolting of the vehicle body on rough roads also degrades algorithm performance; camera-based algorithms suffer in particular, because jolting blurs the camera image.
Disclosure of Invention
The invention provides a highly robust LiDAR-based method for extracting passable areas in field environments, which overcomes the defects of the prior art and effectively improves the accuracy and robustness of ground passable-area extraction.
To solve the above technical problems, the invention adopts the following technical scheme: a highly robust LiDAR-based method for extracting passable areas in field environments, comprising the following steps:
S1, point cloud level correction: use an inertial measurement unit to horizontally correct the acquired LiDAR point cloud data, reducing the influence of LiDAR and vehicle body tilt on the point cloud data;
S2, point cloud coordinate projection: project the corrected three-dimensional LiDAR point cloud into a two-dimensional depth map space while retaining the three-dimensional point cloud;
S3, point cloud semantic segmentation: perform semantic segmentation with a convolutional neural network modeled on 3D-MiniNet, extracting potential passable areas, positive obstacles, negative obstacles, and non-regions of interest from the complex large-scale environment;
S4, multi-segment plane fitting: process the potential plane point cloud of interest obtained in step S3; for the potential plane points, perform multi-segment plane fitting with the RANSAC method to obtain more accurate plane points;
S5, non-planar point clustering: obtain the non-planar points within the driving range on the basis of the plane points extracted in step S4, and apply Euclidean clustering to them;
S6, positive and negative obstacle label correction: process the potential positive and negative obstacles of interest obtained in steps S3 and S5; for potential positive and negative obstacle points, correct misclassifications produced by the convolutional neural network using the plane model obtained in step S4;
S7, missed obstacle completion: track the obtained positive and negative obstacles with a Kalman filter, and judge whether an obstacle has been missed according to the Kalman prediction from the previous frame.
Aiming at the problem of extracting the ground passable area when an autonomous robot drives or explores autonomously in a field environment, the invention provides an extraction method based on a LiDAR sensor. It addresses the difficulty of perception caused by low visibility at night in the field and by the complex, unstructured environment, realizes a high-precision passable-area extraction algorithm for field environments, and ensures the algorithm's high robustness.
In one embodiment, the inertial measurement unit is mounted directly below the LiDAR, and the local coordinate system of the inertial measurement unit is rotationally aligned with the coordinate system of the LiDAR point cloud; the LiDAR is rigidly connected to the inertial measurement unit.
In one embodiment, in the step S1, for each LiDAR point p(x, y, z, i), where i is the reflection intensity, its x, y, z components are taken to form the coordinate vector [x, y, z]; the obtained rotation angle data are combined into per-axis rotation matrices:
R_x(\theta)=\begin{bmatrix}1&0&0\\0&\cos\theta&-\sin\theta\\0&\sin\theta&\cos\theta\end{bmatrix},\quad R_y(\varphi)=\begin{bmatrix}\cos\varphi&0&\sin\varphi\\0&1&0\\-\sin\varphi&0&\cos\varphi\end{bmatrix},\quad R_z(\psi)=\begin{bmatrix}\cos\psi&-\sin\psi&0\\\sin\psi&\cos\psi&0\\0&0&1\end{bmatrix}
and the corresponding total rotation matrix is obtained:
R = R_z(\psi)\,R_y(\varphi)\,R_x(\theta)
where θ, φ, ψ are the rotation angles around the x, y and z axes respectively;
multiplying the coordinate vector by the rotation matrix yields the corrected LiDAR point cloud data.
In one embodiment, in the step S2, the three-dimensional point cloud data are converted into a two-dimensional depth map through spherical projection, specifically: a point (x₁, y₁, z₁) in the three-dimensional point cloud data is projected onto a 2D image to obtain a new 2D coordinate representation; first, the x axis is assumed to be the front view of the LiDAR, and the line from the point (x₁, y₁, z₁) to the origin forms an angle with the xy plane, namely the pitch angle; the projection of the point (x₁, y₁, z₁) on the xy plane forms a yaw angle with the center line x (the xz plane); the yaw angle and pitch angle of each point in the three-dimensional point cloud data are calculated as:
pitch = sin⁻¹(z₁ / r) = arcsin(z₁ / r)
yaw = tan⁻¹(y₁ / x₁) = arctan(y₁ / x₁)
Where r is the distance from the three-dimensional point to the origin.
In one embodiment, the 3D-MiniNet network first learns a 2D representation from the raw points through a novel projection, extracting local and global information from the 3D data; this representation is then input to a 2D fully convolutional neural network to produce a 2D semantic segmentation; the resulting 2D semantic labels are then re-projected back into 3D space and refined by a post-processing module.
In one embodiment, the step S3 specifically comprises:
S1, training data for the point cloud network: label the point cloud data, dividing it into four object classes: non-region of interest, passable region, positive obstacle, negative obstacle;
S2, data balancing: use a multi-class weighted cross-entropy loss as regularization to mitigate the effect of the extreme class imbalance problem, computing the respective weights for class-imbalance regularization from training set statistics; the cross-entropy loss function L is expressed as:
L = -\frac{1}{M}\sum_{m=1}^{M}\sum_{c=1}^{C} w_c\, y_{c,m} \log(p_{c,m}), \qquad w_c = f_t / f_c
where M is the number of points, C is the number of classes, y_{c,m} is a binary indicator of whether point m belongs to class c, p_{c,m} is the probability predicted by the CNN that point m belongs to class c, f_t is the median of the occurrence frequencies of all class points, and f_c is the occurrence frequency of points belonging to class c;
S3, fully convolutional neural network: first compute a W/4 × H/4 × C6 tensor through the projection learning module, then obtain the semantic segmentation result by running an efficient two-dimensional semantic segmentation CNN; the neural network module uses a fully convolutional network based on the MiniNetV2 architecture; the encoder of the network performs L1 depthwise separable convolutions and L2 multi-dilation depthwise separable convolutions; for the decoder, the network performs L3 separable convolutions at W/4 × H/4 resolution and L4 separable convolutions at W/2 × H/2 resolution, using bilinear interpolation as the upsampling layer; finally, the two-dimensional semantic segmentation prediction is obtained at W × H resolution;
S4, two-dimensional to three-dimensional back-projection: the predicted 2D semantic segmentation labels are re-projected into three-dimensional space; points that were never projected into the 2D image have no semantic labels, and for these points the label of a neighboring 2D coordinate is assigned.
In one embodiment, the RANSAC method specifically comprises:
randomly sampling K points within the defined data range; constructing a plane model for the K points with a fitting method; calculating the distances from the remaining points to the fitted model, taking points below a set threshold as inlier points and counting them; constructing a plane model again from the selected inliers with the fitting method, then recounting the inliers, and repeating until the growth of the inlier count converges, recording the inliers and the model as one iteration; randomly sampling K points again and starting a new iteration; after several iterations, selecting the model with the largest number of inliers from the existing models; once the model is found, using it to search for inliers in the region of interest, and if inliers carrying semantic non-planar labels exist, correcting them to planar labels.
In one embodiment, in the step S5, the clustering method used is DBSCAN, which specifically comprises the following steps:
S51, detect an unchecked point p in the data; if p has not been processed, i.e., neither assigned to a cluster nor marked as noise, check its neighboring points within the neighborhood radius; if at least minP points are included, establish a new cluster T and add all of them to the candidate set N;
S52, for every unprocessed point q in the candidate set N, check its neighboring points within the neighborhood radius, and if at least minP points are included, add them to N; if q does not yet belong to any cluster, add q to T;
S53, repeat step S52, continuing to check the unprocessed points in N, until the current candidate set N is empty;
S54, repeat steps S51-S53 until every point belongs to some cluster or is marked as noise.
In one embodiment, the Kalman filtering used in the step S7 specifically comprises the following steps:
S71, prediction:
\hat{x}_k = A \hat{x}_{k-1} + B \mu_k
P_k = A P_{k-1} A^T + Q
where \hat{x}_k is the estimate of the physical quantity x at time k, \mu_k is the noise at time k, A is the transition matrix, B is the noise matrix, P_k is the prediction error at time k, and Q is the covariance matrix;
S72, update:
G_k = P_k H^T (H P_k H^T + R_2)^{-1}
\hat{x}_k = \hat{x}_k + G_k (z_k - H \hat{x}_k)
P_k = (I - G_k H) P_k
where G_k denotes the Kalman gain, H the scaling coefficient matrix, z_k the measurement at time k, and R_2 the measurement noise covariance matrix.
In one embodiment, center matching is performed between all established Kalman filter predicted centers (x, y) and the clusters obtained in the step S5; if an unprocessed cluster exists, its labels are updated, the updated category being positive obstacle above the plane and negative obstacle below the plane according to the cluster's position relative to the nearby plane; after matching and processing, the Kalman filter is updated with all obstacle tracking results.
Compared with the prior art, the beneficial effects are:
1. Compared with other passable-area extraction methods that rely on sensors such as cameras, the proposed method places fewer demands on the environment. Cameras are sensitive to illumination conditions and have harsh environmental requirements, and vehicle body shake strongly disturbs them;
2. The proposed method widens the range of usable scenes. Most existing methods apply only to relatively regular environments such as urban roads with lane lines, whereas the experimental scene of the invention is a field environment with a complex background, which greatly broadens the range of application. In addition, the invention can handle obstacles on the passable area, which differs from conventional road segmentation alone and better suits the detection of unpredictable obstacles in field environments;
3. The proposed ground passable-area extraction method offers higher accuracy and robustness. Collecting data with LiDAR in field environments also has limitations, such as obstacles that are too small for the radar itself and the constantly varying tilt and sweep angle of the vehicle body. The invention corrects the horizontality of the LiDAR through an inertial measurement unit (IMU), greatly improving the accuracy of passable-area extraction in the field environment;
4. The invention addresses the problem that, owing to the jolting of the unmanned vehicle platform, the LiDAR point cloud changes sharply across consecutive frames during data acquisition. An unmanned vehicle driving in the field jolts to varying degrees, so at some moment the LiDAR mounted on it may scan too few points on certain obstacles, or scan obstacles whose shapes are irregular compared with the previous moment; the semantic neural network and the obstacle clustering may then miss these obstacles, and the severe variation across consecutive frames seriously harms the stability and accuracy of the algorithm. Continuous tracking of obstacles with the Kalman filter effectively reduces such missed detections.
Drawings
FIG. 1 is a schematic overall flow chart of the present invention.
FIG. 2 is a schematic diagram of the conversion of a three-dimensional point cloud to a range image in the present invention.
FIG. 3 is the 3D-MiniNet-based neural network model of the present invention.
FIG. 4 is a schematic diagram of the correction of plane point labels in the present invention.
Detailed Description
The following describes the embodiments of the invention clearly and completely with reference to the accompanying drawings; evidently, the described embodiments are only some, not all, of the embodiments of the invention. The invention is described through one of its examples in the following detailed description. The drawings are for illustration only and are schematic rather than physical, and should not be construed as limiting this patent; to better illustrate the embodiments, certain elements of the drawings may be omitted, enlarged, or reduced, and they do not represent the size of the actual product; those skilled in the art will appreciate that certain well-known structures and their descriptions may be omitted from the drawings.
In the description of the invention, it should be understood that orientation or positional terms such as "upper", "lower", "left", and "right", where used, are based on the orientations or positional relationships shown in the drawings; they serve only to simplify the description and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation, so such terms are illustrative and must not be construed as limiting this patent; the specific meanings of these terms can be understood by those skilled in the art according to the circumstances. Likewise, descriptions such as "first" and "second", where present, are used for description only and must not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features concerned; a feature qualified by "first" or "second" may thus explicitly or implicitly include at least one such feature. In addition, "and/or" throughout the text covers three parallel schemes; for example, "A and/or B" includes scheme A, scheme B, or schemes A and B satisfied simultaneously.
Example 1:
As shown in FIG. 1, the highly robust LiDAR-based method for extracting passable areas in field environments comprises the following steps:
Step 1: horizontally correct the acquired LiDAR point cloud data using the inertial measurement unit, reducing the influence of LiDAR and vehicle body tilt on the point cloud data.
An inertial measurement unit measures the three-axis attitude angles (or angular rates) and accelerations of an object and is widely used in the field of autonomous driving. In this application, the IMU is mounted directly below the LiDAR, with the local coordinate system of the IMU rotationally aligned with the coordinate system of the LiDAR point cloud; this arrangement avoids a coordinate system conversion calibration between the LiDAR and the IMU. At the same time, a rigid connection between the LiDAR and the IMU must be guaranteed, otherwise the attitude transformation angles obtained from the IMU can be applied only after a certain correction.
Once these conditions are guaranteed, the values measured by the IMU are read out through the corresponding serial port software, and only the three-axis attitude angles are used. Since the IMU coordinate system rotation is guaranteed to be consistent with the LiDAR coordinate system, the read rotation angles only need to be applied to the laser point cloud. The application method is as follows: for each LiDAR point p(x, y, z, i), where i is the reflection intensity, its x, y, z components are taken to form the coordinate vector [x, y, z]. The obtained rotation angle (Euler angle) data are combined into per-axis rotation matrices:
R_x(\theta)=\begin{bmatrix}1&0&0\\0&\cos\theta&-\sin\theta\\0&\sin\theta&\cos\theta\end{bmatrix},\quad R_y(\varphi)=\begin{bmatrix}\cos\varphi&0&\sin\varphi\\0&1&0\\-\sin\varphi&0&\cos\varphi\end{bmatrix},\quad R_z(\psi)=\begin{bmatrix}\cos\psi&-\sin\psi&0\\\sin\psi&\cos\psi&0\\0&0&1\end{bmatrix}
and the corresponding total rotation matrix is obtained:
R = R_z(\psi)\,R_y(\varphi)\,R_x(\theta)
where θ, φ, ψ are the rotation angles around the x, y and z axes respectively.
Multiplying the coordinate vector by the rotation matrix yields the corrected LiDAR point cloud data.
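For illustration, this correction step can be sketched in a few lines of NumPy; the Z-Y-X composition order of the total rotation matrix and the radian units of the angles are assumptions that depend on the IMU convention actually used:

```python
import numpy as np

def level_correct(points, theta, phi, psi):
    """Rotate raw LiDAR points into a level frame using IMU Euler angles
    (radians) about the x, y, z axes; points is an (N, 4) array x, y, z, i."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(theta), -np.sin(theta)],
                   [0, np.sin(theta),  np.cos(theta)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(psi), -np.sin(psi), 0],
                   [np.sin(psi),  np.cos(psi), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx                      # total rotation matrix
    out = points.copy()
    out[:, :3] = points[:, :3] @ R.T      # rotate every coordinate vector
    return out
```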
Step 2: project the corrected three-dimensional LiDAR point cloud into a two-dimensional depth map space while retaining the three-dimensional point cloud.
Because three-dimensional point cloud data are bulky, sparse, and irregular, processing the raw three-dimensional data directly is often challenging and time-consuming, and a conventional two-dimensional CNN cannot process them directly; the three-dimensional point cloud must therefore be converted into a CNN-friendly data structure, and spherical projection can convert the point cloud into a range image.
A schematic of the specific conversion is shown in FIG. 2.
For the point (x₁, y₁, z₁) on the left of FIG. 2, it must be projected onto a 2D image to obtain a new 2D coordinate representation. First, in the left figure, the x axis is assumed to be the front view of the LiDAR, i.e., the direction of travel of the vehicle; the line from the point to the origin forms an angle with the xy plane (understood as the ground), the pitch angle, and the line from the point's projection on the xy plane to the origin forms a yaw angle with the center line x (the xz plane). Using trigonometric functions, the pitch and yaw values of each point can be obtained. Since the origin lies at the center of the image, these yaw and pitch values constitute the pixel locations of the projected image; calculating the yaw and pitch angles of every point therefore forms the complete projected image. The pitch angle and yaw angle can be expressed as follows:
pitch = sin⁻¹(z₁ / r) = arcsin(z₁ / r)
yaw = tan⁻¹(y₁ / x₁) = arctan(y₁ / x₁)
Where r is the distance from the three-dimensional point to the origin.
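A minimal sketch of this spherical projection in NumPy follows; the image size and the vertical field-of-view bounds are illustrative assumptions that depend on the actual sensor:

```python
import numpy as np

def to_range_image(points, H=64, W=1024,
                   fov_up=np.radians(15.0), fov_down=np.radians(-25.0)):
    """Project (N, 3) LiDAR points into an H x W depth (range) image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-8    # distance to the origin
    pitch = np.arcsin(z / r)                     # angle above the xy plane
    yaw = np.arctan2(y, x)                       # angle around the z axis
    u = 0.5 * (1.0 - yaw / np.pi)                        # [0, 1] column
    v = 1.0 - (pitch - fov_down) / (fov_up - fov_down)   # [0, 1] row
    col = np.clip((u * W).astype(int), 0, W - 1)
    row = np.clip((v * H).astype(int), 0, H - 1)
    img = np.zeros((H, W), dtype=np.float32)
    img[row, col] = r            # pixel value = range; later points overwrite
    return img, row, col         # row/col let 2D labels go back to 3D points
```

The returned row/col indices are what the later two-dimensional to three-dimensional back-projection step can rely on.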
Step 3: perform semantic segmentation with a convolutional neural network modeled on 3D-MiniNet, extracting the potential passable areas, positive obstacles, negative obstacles, and non-regions of interest from the complex large-scale environment.
As shown in FIG. 3, the 3D-MiniNet network is a recent approach to LiDAR semantic segmentation that combines 3D and 2D learning layers. It first learns a 2D representation from the raw points through a novel projection, extracting local and global information from the 3D data. This representation is input to a 2D fully convolutional neural network (FCNN) that produces a 2D semantic segmentation. The resulting 2D semantic labels are then re-projected back into 3D space and refined by a post-processing module.
The step S3 specifically comprises:
S1, training data for the point cloud network: label the point cloud data, dividing it into four object classes: non-region of interest, passable region, positive obstacle, negative obstacle;
S2, data balancing: use a multi-class weighted cross-entropy loss as regularization to mitigate the effect of the extreme class imbalance problem, computing the respective weights for class-imbalance regularization from training set statistics; the cross-entropy loss function L is expressed as:
L = -\frac{1}{M}\sum_{m=1}^{M}\sum_{c=1}^{C} w_c\, y_{c,m} \log(p_{c,m}), \qquad w_c = f_t / f_c
where M is the number of points, C is the number of classes, y_{c,m} is a binary indicator of whether point m belongs to class c, p_{c,m} is the probability predicted by the CNN that point m belongs to class c, f_t is the median of the occurrence frequencies of all class points, and f_c is the occurrence frequency of points belonging to class c (a code sketch of this weighting is given after this list);
S3, fully convolutional neural network: first compute a W/4 × H/4 × C6 tensor through the projection learning module, then obtain the semantic segmentation result by running an efficient two-dimensional semantic segmentation CNN; the neural network module uses a fully convolutional network based on the MiniNetV2 architecture; the encoder of the network performs L1 depthwise separable convolutions and L2 multi-dilation depthwise separable convolutions; for the decoder, the network performs L3 separable convolutions at W/4 × H/4 resolution and L4 separable convolutions at W/2 × H/2 resolution, using bilinear interpolation as the upsampling layer; finally, the two-dimensional semantic segmentation prediction is obtained at W × H resolution;
S4, two-dimensional to three-dimensional back-projection: the predicted 2D semantic segmentation labels are re-projected into three-dimensional space; points that were never projected into the 2D image have no semantic labels, and for these points the label of a neighboring 2D coordinate is assigned.
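The class weighting of step S2 above can be sketched as follows; this is a NumPy illustration of median-frequency balancing under the four-class setup, with all names and shapes being assumptions:

```python
import numpy as np

def class_weights(labels, num_classes=4):
    """w_c = f_t / f_c, where f_c is the frequency of class c over the
    training points and f_t is the median of the non-zero frequencies."""
    freq = np.bincount(labels, minlength=num_classes) / float(len(labels))
    f_t = np.median(freq[freq > 0])
    w = np.zeros(num_classes)
    w[freq > 0] = f_t / freq[freq > 0]
    return w

def weighted_cross_entropy(probs, labels, weights):
    """L = -(1/M) * sum_m w_{y_m} * log p_{y_m, m}; probs is an (M, C)
    matrix of predicted class probabilities, labels an (M,) int vector."""
    idx = np.arange(len(labels))
    return float(-np.mean(weights[labels] * np.log(probs[idx, labels] + 1e-12)))
```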
Step 4: as shown in FIG. 4, process the potential plane point cloud of interest obtained in step 3; for the potential plane points, perform multi-segment plane fitting with the RANSAC method to obtain more accurate plane points.
The RANSAC method basically assumes that the samples contain correct data (inliers, data that the model can describe) as well as abnormal data (outliers, data that deviate far from the normal range and fit no mathematical model), i.e., the data set contains noise. The abnormal data may come from erroneous measurements, erroneous assumptions, erroneous calculations, and so on. RANSAC also assumes that, given a correct set of data, there is a way to compute model parameters that fit these data. In the scenario of the invention, the outliers are non-planar points, the inliers are planar points, the model describing the planar data is ax + by + cz + d = 0, and RANSAC is applied to find, among the plane points of interest extracted by the network, the inliers that fit this model as the corrected plane points.
The RANSAC method specifically comprises the following steps:
Randomly sample K points (the minimum number of points required to fit the plane model) within the defined data range; construct a plane model for the K points with a fitting method (e.g., least squares); calculate the distances from the remaining points to the fitted model, take points below a set threshold as inlier points, and count them; construct a plane model again from the selected inliers with the fitting method, then recount the inliers, and repeat until the growth of the inlier count converges, recording the inliers and the model as one iteration; randomly sample K points again and start a new iteration; after several iterations, select the model with the largest number of inliers from the existing models; once the model is found, use it to search for inliers among the points of the region of interest, and if inliers carrying semantic non-planar labels exist, correct them to planar labels. In the scenario of the invention, the plane points of interest are divided into several segments (at least 2) along the driving direction of the vehicle; the RANSAC procedure is then run on each segment of plane points to obtain its own model, after which inliers are searched on each model and corrected.
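A condensed sketch of this segmented RANSAC procedure is given below, using least-squares plane fitting via SVD; the number of segments, the inlier threshold, and the iteration count are tuning assumptions:

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane ax + by + cz + d = 0 through pts via SVD."""
    c = pts.mean(axis=0)
    n = np.linalg.svd(pts - c)[2][-1]  # normal = smallest right singular vector
    return n, -n @ c

def ransac_plane(points, k=3, thresh=0.1, iters=100):
    """RANSAC with the inlier-refit loop described above."""
    best_model, best = None, np.array([], dtype=int)
    for _ in range(iters):
        sample = points[np.random.choice(len(points), k, replace=False)]
        n, d = fit_plane(sample)
        prev = -1
        while True:                      # refit until inlier growth converges
            inliers = np.where(np.abs(points @ n + d) < thresh)[0]
            if len(inliers) <= max(prev, k - 1):
                break
            prev = len(inliers)
            n, d = fit_plane(points[inliers])
        if len(inliers) > len(best):
            best_model, best = (n, d), inliers
    return best_model, best

def multi_segment_planes(points, n_segments=2):
    """Split candidate plane points along the driving direction (x) and fit
    one plane per segment; returned inlier indices are segment-relative."""
    order = np.argsort(points[:, 0])
    return [ransac_plane(points[idx]) for idx in np.array_split(order, n_segments)]
```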
Step 5: obtain the non-planar points within the driving range on the basis of the plane points extracted in step 4, and apply Euclidean clustering to them.
Step 6: process the potential positive and negative obstacles of interest obtained in steps 3 and 5; for potential positive and negative obstacle points, correct misclassifications produced by the convolutional neural network using the plane model obtained in step 4.
Every cluster obtained in step 5 carries the labels assigned during network segmentation. Some of these labels sit at the junction of the ground and obstacles, which the neural network cannot distinguish; once the plane model separates out the non-planar points and they are Euclidean-clustered together, these junction points end up connected to points carrying obstacle labels. All clusters can then be traversed and the ground labels they contain changed to the labels of the corresponding obstacle points. Conversely, the obstacle labels contained in a cluster can also be changed, in order to filter out mistakenly clustered boundary points of the passable area (such points adjoin grass, which enlarges distances and makes them hard to distinguish in the neural network) and to prevent over-processing.
Step 7: track the obtained positive and negative obstacles with a Kalman filter, and judge whether an obstacle has been missed according to the Kalman prediction from the previous frame. Because field LiDAR data acquisition is accompanied by changing scan angles and vehicle body jolting, an object or pit that was clearly visible in the previous frame may lack a part, or even appear incomplete, in the current frame, making such point cloud clusters hard for the deep learning network and the subsequent refinement to detect. Based on the principle that the surrounding objects are essentially stationary and only the observer moves significantly, existing targets can be tracked with a target tracking algorithm; the Kalman filter is one such algorithm.
This embodiment provides a highly robust LiDAR-based method for extracting passable areas in field environments, applicable not only during normally visible daytime periods but also at night and under poor visibility. The method first acquires point cloud data in the field environment with a LiDAR; before the data enter the network, the point cloud is horizontally corrected using the attitude change data obtained from an inertial measurement unit (IMU), reducing the influence of LiDAR and vehicle body tilt on the point cloud data. The corrected three-dimensional LiDAR point cloud is projected into a two-dimensional depth image space, and a 3D-MiniNet convolutional neural network performs semantic segmentation on the LiDAR point cloud from the two-dimensional depth image together with the corrected three-dimensional point cloud. The deep learning network can initially extract the points of the passable area of interest, including positive and negative obstacles, against a relatively complex background, and the extracted points are then refined. The main refinement comprises post-optimization of the plane point cloud and of the positive and negative obstacle point clouds. The initially extracted plane point cloud is fitted segment by segment with a RANSAC algorithm based on a plane model, yielding plane points on the different segments and non-planar points above the plane; plane points carrying other labels can then be corrected. Conversely, the non-planar points are clustered by Euclidean distance, and the existing labels on each cluster are revised. The stored targets in the Kalman filter are then checked and their labels corrected; all clustered targets are fed into the Kalman filter and its parameters are updated for use with the next LiDAR point cloud frame, improving the recognition rate of positive and negative obstacles in the surrounding field environment.
Example 2
The other steps in this example are the same as those in Example 1, except that in step 5 described above the DBSCAN clustering method is used. The core metric of Euclidean distance clustering is the straight-line distance in three-dimensional space, computed as the two-norm of the difference of the xyz coordinates. The clustering method used by the invention is DBSCAN, with the following specific steps:
S51, detect an unchecked point p in the data; if p has not been processed, i.e., neither assigned to a cluster nor marked as noise, check its neighboring points within the neighborhood radius; if at least minP points are included, establish a new cluster T and add all of them to the candidate set N;
S52, for every unprocessed point q in the candidate set N, check its neighboring points within the neighborhood radius, and if at least minP points are included, add them to N; if q does not yet belong to any cluster, add q to T;
S53, repeat step S52, continuing to check the unprocessed points in N, until the current candidate set N is empty;
S54, repeat steps S51-S53 until every point belongs to some cluster or is marked as noise.
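For illustration, the same clustering can be obtained with the DBSCAN implementation in scikit-learn; the neighborhood radius eps and the minP value are tuning assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_nonplanar(points, eps=0.5, min_pts=10):
    """Cluster (N, 3) non-planar points; DBSCAN labels noise points as -1."""
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points)
    clusters = [np.where(labels == c)[0] for c in np.unique(labels) if c != -1]
    return clusters, labels    # per-cluster point indices plus the raw labels
```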
Example 3
The other steps of this embodiment are the same as those of Embodiment 1, except that this embodiment discloses the detailed procedure of the Kalman filtering. The corresponding flow of the Kalman filtering is as follows:
1. An initialization stage, initializing state estimation and uncertainty of the estimation.
2. And calculating a state transition matrix, and transferring the initial state from the previous moment to the current moment.
3. Covariance of the prior state is calculated.
4. The kalman gain is calculated.
5. The a priori state is updated when the measurement is received.
6. After correcting the state estimate, the a priori error covariance is updated.
7. State estimation and measurement update at the next moment is entered.
In this embodiment, the Kalman filter used in step 7 specifically comprises the following steps:
S71, prediction:
\hat{x}_k = A \hat{x}_{k-1} + B \mu_k
P_k = A P_{k-1} A^T + Q
where \hat{x}_k is the estimate of the physical quantity x at time k, \mu_k is the noise at time k, A is the transition matrix, B is the noise matrix, P_k is the prediction error at time k, and Q is the covariance matrix;
S72, update:
G_k = P_k H^T (H P_k H^T + R_2)^{-1}
\hat{x}_k = \hat{x}_k + G_k (z_k - H \hat{x}_k)
P_k = (I - G_k H) P_k
where G_k denotes the Kalman gain, H the scaling coefficient matrix, z_k the measurement at time k, and R_2 the measurement noise covariance matrix.
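A minimal sketch of such a tracker over an obstacle center is shown below, assuming a constant-velocity model with state [x, y, vx, vy]; the time step and the noise covariances are illustrative assumptions:

```python
import numpy as np

class ObstacleKF:
    """Constant-velocity Kalman filter over an obstacle center (x, y);
    the matrices follow the S71/S72 equations above."""
    def __init__(self, center, dt=0.1):
        self.x = np.array([center[0], center[1], 0.0, 0.0])   # state estimate
        self.P = np.eye(4)                                    # prediction error
        self.A = np.eye(4); self.A[0, 2] = self.A[1, 3] = dt  # transition matrix
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = 0.01 * np.eye(4)    # process noise covariance
        self.R2 = 0.1 * np.eye(2)    # measurement noise covariance

    def predict(self):
        self.x = self.A @ self.x                       # x_k = A x_{k-1}
        self.P = self.A @ self.P @ self.A.T + self.Q   # P_k = A P A^T + Q
        return self.x[:2]                              # predicted center

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R2
        G = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + G @ (z - self.H @ self.x)    # measurement update
        self.P = (np.eye(4) - G @ self.H) @ self.P
```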
While the algorithm runs, center matching is performed between all established Kalman filter predicted centers (x, y) and the clusters obtained in step S5; if an unprocessed cluster exists, its labels are updated, the updated category being positive obstacle above the plane and negative obstacle below the plane according to the cluster's position relative to the nearby plane; after matching and processing, the Kalman filter is updated with all obstacle tracking results.
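The center matching itself can be sketched as a greedy nearest-neighbor association between the filter predictions and the cluster centers from step S5, reusing the ObstacleKF sketch above; the distance gate is an assumption:

```python
import numpy as np

def match_tracks(filters, centers, max_dist=1.0):
    """Greedily match each Kalman prediction to the nearest unused cluster
    center; matched tracks are updated, unmatched tracks flag obstacles
    that the current frame may have missed."""
    used, missed_tracks = set(), []
    for kf in filters:
        pred = kf.predict()
        cand = [(np.linalg.norm(pred - np.asarray(c)), j)
                for j, c in enumerate(centers) if j not in used]
        if cand and min(cand)[0] < max_dist:
            _, j = min(cand)
            kf.update(np.asarray(centers[j]))
            used.add(j)
        else:
            missed_tracks.append(kf)   # predicted obstacle with no cluster
    unmatched = [j for j in range(len(centers)) if j not in used]
    return missed_tracks, unmatched    # unmatched clusters may start new tracks
```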
It should be understood that the above examples of the invention are given only to illustrate it clearly, not to limit its embodiments. Other variations or modifications of the above description will be apparent to those of ordinary skill in the art; it is neither necessary nor possible to list all embodiments exhaustively here. Any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall be covered by the protection scope of the claims of the invention.

Claims (7)

1. A highly robust LiDAR-based method for extracting passable areas in field environments, characterized by comprising the following steps:
S1, point cloud level correction: use an inertial measurement unit to horizontally correct the acquired LiDAR point cloud data, reducing the influence of LiDAR and vehicle body tilt on the point cloud data;
S2, point cloud coordinate projection: project the corrected three-dimensional LiDAR point cloud into a two-dimensional depth map space while retaining the three-dimensional point cloud;
S3, point cloud semantic segmentation: perform semantic segmentation with a convolutional neural network modeled on 3D-MiniNet, extracting potential passable areas, positive obstacles, negative obstacles, and non-regions of interest from the complex large-scale environment;
S4, multi-segment plane fitting: process the potential plane point cloud of interest obtained in step S3; for the potential plane points, perform multi-segment plane fitting with the RANSAC method to obtain more accurate plane points; the RANSAC method specifically comprises:
randomly sampling K points within the defined data range; constructing a plane model for the K points with a fitting method; calculating the distances from the remaining points to the fitted model, taking points below a set threshold as inlier points and counting them; constructing a plane model again from the selected inliers with the fitting method, then recounting the inliers, and repeating until the growth of the inlier count converges, recording the inliers and the model as one iteration; randomly sampling K points again and starting a new iteration; after several iterations, selecting the model with the largest number of inliers from the existing models; once the model is found, using it to search for inliers among all points of the region of interest, and if inliers carrying semantic non-planar labels exist, correcting them to planar labels;
S5, non-planar point clustering: obtain the non-planar points within the driving range on the basis of the plane points extracted in step S4, and apply Euclidean clustering to them;
S6, positive and negative obstacle label correction: process the potential positive and negative obstacles of interest obtained in steps S3 and S5; for potential positive and negative obstacle points, correct misclassifications produced by the convolutional neural network using the plane model obtained in step S4;
S7, missed obstacle completion: track the obtained positive and negative obstacles with a Kalman filter, and judge whether an obstacle has been missed according to the Kalman prediction from the previous frame; the Kalman filtering used in step S7 specifically comprises the following steps:
S71, prediction:
\hat{x}_k = A \hat{x}_{k-1} + B \mu_k
P_k = A P_{k-1} A^T + Q
where \hat{x}_k is the estimate of the physical quantity x at time k, \mu_k is the noise at time k, A is the transition matrix, B is the noise matrix, P_k is the prediction error at time k, and Q is the covariance matrix;
S72, update:
G_k = P_k H^T (H P_k H^T + R_2)^{-1}
\hat{x}_k = \hat{x}_k + G_k (z_k - H \hat{x}_k)
P_k = (I - G_k H) P_k
where G_k denotes the Kalman gain, H the scaling coefficient matrix, z_k the measurement at time k, and R_2 the measurement noise covariance matrix;
performing center matching between all established Kalman filter predicted centers (x, y) and the clusters obtained in step S5; if an unprocessed cluster exists, updating its labels, the updated category being positive obstacle above the plane and negative obstacle below the plane according to the cluster's position relative to the nearby plane; after matching and processing, updating the Kalman filter with all obstacle results.
2. The highly robust LiDAR-based method for extracting passable areas in field environments according to claim 1, characterized in that the inertial measurement unit is mounted directly below the LiDAR, and the local coordinate system of the inertial measurement unit is rotationally aligned with the coordinate system of the LiDAR point cloud; the LiDAR is rigidly connected to the inertial measurement unit.
3. The highly robust LiDAR-based method for extracting passable areas in field environments according to claim 1, characterized in that in the step S1, for each LiDAR point p(x, y, z, i), where i is the reflection intensity, its x, y, z components are taken to form the coordinate vector [x, y, z]; the obtained rotation angle data are combined into per-axis rotation matrices:
R_x(\theta)=\begin{bmatrix}1&0&0\\0&\cos\theta&-\sin\theta\\0&\sin\theta&\cos\theta\end{bmatrix},\quad R_y(\varphi)=\begin{bmatrix}\cos\varphi&0&\sin\varphi\\0&1&0\\-\sin\varphi&0&\cos\varphi\end{bmatrix},\quad R_z(\psi)=\begin{bmatrix}\cos\psi&-\sin\psi&0\\\sin\psi&\cos\psi&0\\0&0&1\end{bmatrix}
and the corresponding total rotation matrix is obtained:
R = R_z(\psi)\,R_y(\varphi)\,R_x(\theta)
where θ, φ, ψ are the rotation angles around the x, y and z axes respectively;
multiplying the coordinate vector by the rotation matrix yields the corrected LiDAR point cloud data.
4. The highly robust LiDAR-based method for extracting passable areas in field environments according to claim 3, characterized in that in the step S2, the three-dimensional point cloud data are converted into a two-dimensional depth map through spherical projection, specifically: a point (x₁, y₁, z₁) in the three-dimensional point cloud data is projected onto a 2D image to obtain a new 2D coordinate representation; first, the x axis is assumed to be the front view of the LiDAR, and the line from the point (x₁, y₁, z₁) to the origin forms an angle with the xy plane, namely the pitch angle; the projection of the point (x₁, y₁, z₁) on the xy plane forms a yaw angle with the center line x (the xz plane); the yaw angle and pitch angle of each point in the three-dimensional point cloud data are calculated as:
pitch = sin⁻¹(z₁ / r) = arcsin(z₁ / r)
yaw = tan⁻¹(y₁ / x₁) = arctan(y₁ / x₁)
Where r is the distance from the three-dimensional point to the origin.
5. The highly robust LiDAR-based method for extracting passable areas in field environments according to claim 4, characterized in that the 3D-MiniNet network first learns a 2D representation from the raw points through a novel projection, extracting local and global information from the 3D data; this representation is then input to a 2D fully convolutional neural network to produce a 2D semantic segmentation; the resulting 2D semantic labels are then re-projected back into 3D space and refined by a post-processing module.
6. The highly robust LiDAR-based method for extracting passable areas in field environments according to claim 5, characterized in that the step S3 specifically comprises:
S1, training data for the point cloud network: label the point cloud data, dividing it into four object classes: non-region of interest, passable region, positive obstacle, negative obstacle;
S2, data balancing: use a multi-class weighted cross-entropy loss as regularization to mitigate the effect of the extreme class imbalance problem, computing the respective weights for class-imbalance regularization from training set statistics; the cross-entropy loss function L is expressed as:
L = -\frac{1}{M}\sum_{m=1}^{M}\sum_{c=1}^{C} w_c\, y_{c,m} \log(p_{c,m}), \qquad w_c = f_t / f_c
where M is the number of points, C is the number of classes, y_{c,m} is a binary indicator of whether point m belongs to class c, p_{c,m} is the probability predicted by the CNN that point m belongs to class c, f_t is the median of the occurrence frequencies of all class points, and f_c is the occurrence frequency of points belonging to class c;
S3, fully convolutional neural network: first compute a W/4 × H/4 × C6 tensor through the projection learning module, then obtain the semantic segmentation result by running an efficient two-dimensional semantic segmentation CNN; the neural network module uses a fully convolutional network based on the MiniNetV2 architecture; the encoder of the network performs L1 depthwise separable convolutions and L2 multi-dilation depthwise separable convolutions; for the decoder, the network performs L3 separable convolutions at W/4 × H/4 resolution and L4 separable convolutions at W/2 × H/2 resolution, using bilinear interpolation as the upsampling layer; finally, the two-dimensional semantic segmentation prediction is obtained at W × H resolution;
S4, two-dimensional to three-dimensional back-projection: the predicted 2D semantic segmentation labels are re-projected into three-dimensional space; points that were never projected into the 2D image have no semantic labels, and for these points the label of a neighboring 2D coordinate is assigned.
7. The highly robust LiDAR-based method for extracting passable areas in field environments according to claim 1, characterized in that in the step S5 the clustering method DBSCAN is used, specifically comprising the following steps:
S51, detect an unchecked point p in the data; if p has not been processed, i.e., neither assigned to a cluster nor marked as noise, check its neighboring points within the neighborhood radius; if at least minP points are included, establish a new cluster T and add all of them to the candidate set N;
S52, for every unprocessed point q in the candidate set N, check its neighboring points within the neighborhood radius, and if at least minP points are included, add them to N; if q does not yet belong to any cluster, add q to T;
S53, repeat step S52, continuing to check the unprocessed points in N, until the current candidate set N is empty;
S54, repeat steps S51-S53 until every point belongs to some cluster or is marked as noise.
CN202111592871.3A 2021-12-23 2021-12-23 A highly robust method for extracting traversable areas in wild environments based on LiDAR Active CN114325634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111592871.3A CN114325634B (en) 2021-12-23 2021-12-23 A highly robust method for extracting traversable areas in wild environments based on LiDAR

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111592871.3A CN114325634B (en) 2021-12-23 2021-12-23 A highly robust method for extracting traversable areas in wild environments based on LiDAR

Publications (2)

Publication Number Publication Date
CN114325634A (en) 2022-04-12
CN114325634B (en) 2024-11-26

Family

ID=81055436

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111592871.3A Active CN114325634B (en) 2021-12-23 2021-12-23 A highly robust method for extracting traversable areas in wild environments based on LiDAR

Country Status (1)

Country Link
CN (1) CN114325634B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115019891B (en) * 2022-06-08 2023-07-07 郑州大学 A method for predicting individual driver genes based on semi-supervised graph neural network
CN115063777B (en) * 2022-06-27 2024-06-04 厦门大学 Unmanned vehicle obstacle recognition method in field environment
CN114983302B (en) * 2022-06-28 2023-08-08 追觅创新科技(苏州)有限公司 Gesture determining method and device, cleaning equipment, storage medium and electronic device
CN114862957B (en) * 2022-07-08 2022-09-27 西南交通大学 A 3D Lidar-based Subway Vehicle Bottom Positioning Method
CN116533998B (en) * 2023-07-04 2023-09-29 深圳海星智驾科技有限公司 Automatic driving method, device, equipment, storage medium and vehicle of vehicle
CN118424320A (en) * 2024-07-05 2024-08-02 石家庄铁道大学 A method for road condition recognition and vehicle path planning on unstructured roads

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110244321A (en) * 2019-04-22 2019-09-17 武汉理工大学 A detection method of road passable area based on 3D lidar
CN111880197A (en) * 2020-07-30 2020-11-03 南京超达信息科技有限公司 Road boundary detection method based on four-line laser radar

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188696B (en) * 2019-05-31 2023-04-18 华南理工大学 Multi-source sensing method and system for unmanned surface equipment
CN112184736B (en) * 2020-10-10 2022-11-11 南开大学 Multi-plane extraction method based on European clustering

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110244321A (en) * 2019-04-22 2019-09-17 武汉理工大学 A detection method of road passable area based on 3D lidar
CN111880197A (en) * 2020-07-30 2020-11-03 南京超达信息科技有限公司 Road boundary detection method based on four-line laser radar

Also Published As

Publication number Publication date
CN114325634A (en) 2022-04-12

Similar Documents

Publication Publication Date Title
CN114325634B (en) A highly robust method for extracting traversable areas in wild environments based on LiDAR
US20220026232A1 (en) System and method for precision localization and mapping
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
CN114842438B (en) Terrain detection method, system and readable storage medium for automatic driving automobile
Siegemund et al. A temporal filter approach for detection and reconstruction of curbs and road surfaces based on conditional random fields
Wang et al. Bionic vision inspired on-road obstacle detection and tracking using radar and visual information
CN112740225B (en) A kind of pavement element determination method and device
CN115049700A (en) Target detection method and device
CN112464812B (en) A vehicle-based sunken obstacle detection method
CN103697883B (en) A kind of aircraft horizontal attitude defining method based on skyline imaging
CN113506318A (en) A 3D object perception method in vehicle edge scene
CN111862673A (en) Parking lot vehicle self-positioning and map construction method based on top view
Liao et al. SE-calib: Semantic edge-based LiDAR–camera boresight online calibration in urban scenes
CN115861968A (en) Dynamic obstacle removing method based on real-time point cloud data
CN110992424B (en) Positioning method and system based on binocular vision
Song et al. End-to-end learning for inter-vehicle distance and relative velocity estimation in ADAS with a monocular camera
Zhou et al. Lane information extraction for high definition maps using crowdsourced data
CN110864670B (en) Method and system for acquiring position of target obstacle
CN116643291A (en) SLAM method for removing dynamic targets by combining vision and laser radar
Gökçe et al. Recognition of dynamic objects from UGVs using interconnected neuralnetwork-based computer vision system
Huang et al. A coarse-to-fine LiDAR-based SLAM with dynamic object removal in dense urban areas
CN115994934B (en) Data time alignment method and device and domain controller
CN117516560A (en) An unstructured environment map construction method and system based on semantic information
Li et al. RF-LOAM: Robust and fast LiDAR odometry and mapping in urban dynamic environment
CN116045965A (en) Multi-sensor-integrated environment map construction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant