
CN111414848B - Full-class 3D obstacle detection method, system and medium - Google Patents


Info

Publication number
CN111414848B
Authority
CN
China
Prior art keywords
point cloud
road surface
spatial point
dimensional
road
Prior art date: 2020-03-19
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010194582.7A
Other languages
Chinese (zh)
Other versions
CN111414848A (en)
Inventor
段志祥 (Duan Zhixiang)
杨奎元 (Yang Kuiyuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Filing date: 2020-03-19
Publication date: 2023-04-07
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202010194582.7A
Publication of CN111414848A
Application granted
Publication of CN111414848B
Legal status: Active


Abstract

A full-category 3D obstacle detection method, system, and medium, comprising: S100, performing 2D obstacle detection on a road image, and obtaining the pixel coordinates of road-surface points in the image through a road-surface segmentation model; S200, acquiring spatial point cloud data of the road; S300, converting the spatial point cloud data into a spatial point cloud referenced to a road-surface equation, and gridding the spatial point cloud; S400, placing and screening three-dimensional detection boxes in the gridded spatial point cloud; and S500, classifying and regressing the screened three-dimensional boxes, thereby detecting the 3D obstacles on the road surface. By segmenting the road plane from the picture and estimating a parameterized road-plane equation, the method improves the quality of obstacle detection, allows different sensor inputs to be selected according to accuracy requirements and cost, is not limited by the type of obstacle detected, and can output 3D detection results for all obstacle categories.

Description

Full-class 3D obstacle detection method, system and medium
Technical Field
The present invention relates to the field of autonomous driving, and more particularly to an obstacle detection method and system for autonomous driving, and to a readable storage medium.
Background
With the development of autonomous driving technology, vehicle safety technology is receiving increasing attention, and sensor-based obstacle detection plays a crucial role in the field.
Various sensor devices can be used to obtain real-time traffic information. Commonly used vehicle-mounted sensors include visual sensors such as infrared, monocular, and binocular cameras, and ranging sensors such as ultrasonic sensors, millimeter-wave radar, and lidar. Each sensor has its own characteristics and a limited range of applicability. For example, lidar offers a wide field of view, accurate depth information, long range, and night-time capability for target recognition; however, the lidar point cloud becomes increasingly sparse with distance from the scan center, which makes it difficult for lidar alone to classify specific objects. A camera provides high-resolution images and is small, inexpensive, and low-power, but it is strongly affected by lighting changes such as illumination and shadow, lacks the 3D position of objects, and three-dimensional information is difficult to recover from camera images alone. In recent years, therefore, multiple sensors have been used together, with multi-sensor fusion avoiding the problems that arise with any single sensor. In existing multi-sensor fusion, however, the sensor data streams are applied in isolation rather than effectively fused, and if one sensor fails the obstacle detection system cannot adapt well, so the robustness of the whole system is poor.
In addition, conventional obstacle detection methods mainly include:
1. Methods using geometric constraints, i.e., estimating the parameters of an obstacle's 3D box from the prior that, when projected into the image, the 3D box must match the 2D detection box. However, geometric-constraint methods can only detect a limited set of regularly shaped obstacles and cannot detect irregularly shaped ones.
2. Roughly estimating vehicle distance from the ratio of a vehicle's height in the image to its real-world height, combined with the camera model. This estimate is inaccurate, and it degrades further when actual object heights vary widely.
Disclosure of Invention
In order to detect obstacles of all categories, better fuse the data of multiple sensors, and achieve stronger system robustness, the invention provides an obstacle detection method, system, and medium.
The aim of the invention is to provide a full-category 3D obstacle detection method, system, and storage medium.
In a first aspect, an embodiment of the present invention provides a full-category 3D obstacle detection method, including:
S100, performing 2D obstacle detection on a road image, and obtaining pixel coordinates of road-surface points in the road image through a road-surface segmentation model;
S200, acquiring spatial point cloud data of the road;
S300, converting the spatial point cloud data into a spatial point cloud referenced to a road-surface equation, and gridding the spatial point cloud;
S400, placing and screening three-dimensional detection boxes in the gridded spatial point cloud;
S500, classifying and regressing the screened three-dimensional boxes, thereby detecting the 3D obstacles on the road surface.
Wherein S200, acquiring the spatial point cloud data of the road, comprises obtaining the spatial point cloud data by combining a lidar with a vision sensor.
Wherein S300, converting the spatial point cloud data into a spatial point cloud referenced to the road-surface equation and gridding the spatial point cloud, comprises:
S310, fitting a road-surface equation;
S320, converting the spatial point cloud into a spatial point cloud referenced to the road-surface equation;
S330, gridding the road-surface-referenced spatial point cloud.
Wherein S310, fitting the road-surface equation, comprises:
fitting a road-surface equation of the road from the spatial three-dimensional coordinates of the ground point cloud data: ax + by + cz + d = 0,
where a, b, c, and d are the parameters of the road-surface equation obtained from the spatial three-dimensional coordinates of the ground point cloud data;
the spatial three-dimensional coordinates of the ground point cloud data are calculated from the pixel coordinates of the road-surface points and the spatial point cloud data of the road.
Wherein S100 further comprises acquiring a lane line in the road image through a lane line detection model;
and S310 further comprises matching the lane line with a lane line in a high-precision map, calculating the position of the vehicle in the high-precision map, screening the points close to the road surface through the road-surface equation in the high-precision map, and optimizing the fitted road-surface equation according to the screened points.
Wherein S320, converting the spatial point cloud into a spatial point cloud referenced to the road-surface equation, comprises projecting the spatial point cloud data obtained in step S200 onto the road surface to obtain spatial point cloud data in the vehicle-body coordinate system.
Wherein S330, gridding the spatial point cloud referenced to the road-surface equation, comprises:
selecting the spatial point cloud data within a certain coordinate range in the vehicle-body coordinate system and dividing it into three-dimensional grid cells; for each cell, if it contains exactly one spatial point, the coordinates of that point are used as the coordinates of the cell; if it contains no spatial point, its coordinates are marked 0 and the cell is discarded; and if it contains several spatial points, the highest point or the average of the points is taken as the coordinates of the cell.
Wherein S400, placing and screening three-dimensional detection boxes in the gridded spatial point cloud, comprises placing one cuboid detection box for each three-dimensional grid cell;
or S400 comprises placing one cuboid detection box every several three-dimensional grid cells.
Wherein S100 further comprises marking each detected 2D obstacle with a rectangular box;
and S400 further comprises:
S4210, projecting each three-dimensional box into the road image;
S4220, comparing the projection of each three-dimensional box in the road image with the corresponding rectangular box marking an obstacle: if the overlap area of the two is greater than or equal to a threshold, the box is determined to contain an obstacle; if the overlap area is less than the threshold, it is determined not to contain an obstacle.
Wherein S400 further comprises detecting the spatial positions of obstacles with a millimeter-wave radar and/or an ultrasonic radar, calculating the distance from each detected obstacle to the center of each cuboid three-dimensional box, and then screening the cuboid boxes by setting a threshold.
In a second aspect, an embodiment of the present invention provides a full-category 3D obstacle detection system, including a memory and a processor, where the memory stores instructions; wherein the processor is configured to perform the above method according to instructions stored in the memory.
In a third aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method described above.
Compared with existing obstacle detection methods, the invention segments the road plane from the picture and estimates a parameterized road-plane equation, which improves the quality of obstacle detection. The method is highly extensible: different sensor inputs can be selected according to accuracy requirements and cost, it is not limited by the type of obstacle detected, and it can output 3D detection results for all obstacle categories.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the obstacle detection method of the present invention.
FIG. 2 is a flow chart of the invention for spatial point cloud meshing based on a road surface equation.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. The described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
According to one aspect of the invention, a full-category 3D obstacle detection method is provided. The core idea is to segment the road plane from the picture, estimate a parameterized road-plane equation, and perform obstacle detection by clustering points relative to the fitted road plane.
Referring to fig. 1, a flow chart of a full-category 3D obstacle detection method according to an embodiment of the present application is shown. The embodiment is explained mainly with the method running on a server with data processing capability, a terminal device, or an independent computer system. The application provides a full-category 3D obstacle detection method comprising the following steps:
s100, 2D obstacle detection is carried out on the road image, each detected 2D obstacle is marked by a rectangular frame, pixel coordinates of points of the road surface in the road image are obtained through a road surface segmentation model, and a lane line in the road image is obtained through a lane line detection model;
s200, acquiring spatial point cloud data of a road;
s300, converting the spatial point cloud data into a spatial point cloud based on a road surface equation, and meshing the spatial point cloud;
s400, placing and screening a three-dimensional frame for detection in the gridded spatial point cloud;
and S500, classifying and regressing the screened three-dimensional frames, thereby detecting the 3D obstacles on the road surface.
As shown in fig. 1, in step S100, 2D obstacle detection is performed on a road image, each detected 2D obstacle is marked with a rectangular box, the pixel coordinates of road-surface points in the image are obtained through a road-surface segmentation model, and a lane line in the image is acquired through a lane line detection model.
In this embodiment, a sensor such as a binocular or monocular camera may be used to acquire image data of the road. The camera is mounted on a mobile device, such as an unmanned vehicle.
It should be noted that the sensor in this embodiment may be installed at the end facing the direction of travel of the mobile device, and/or at the opposite end, and/or on both sides. When the mobile device is a vehicle, the sensor may be mounted at the front or rear of the vehicle, or rotatably mounted on its roof. The sensor includes a binocular or monocular camera, which can capture images in front of, behind, on both sides of, and around the vehicle, including above it.
In another embodiment of the present invention, the sensors may also include ranging or visual devices such as lidar, millimeter-wave radar, ultrasonic radar, and/or fisheye cameras, optionally supplemented by a high-precision map.
After the image data of the road is acquired by a vision sensor such as a binocular or monocular camera, the obstacles in the image data are first identified by an obstacle detection model, and each detected obstacle is marked with a rectangular box.
In this embodiment, when a monocular camera is used, the image data can be fed directly to the obstacle detection model, which identifies the obstacles and marks each with a rectangular box. The detection and identification means are well known in the art and are not specifically limited herein.
When a binocular camera is used, its two cameras can operate jointly or independently. To save computing resources, the data of one camera is fed to a deep-learning algorithm that identifies the obstacles in the image and marks them with rectangular boxes, while the data of the other camera is used for binocular matching to obtain a disparity map, from which depth information can be calculated.
An obstacle detection model generated by machine learning may be used to detect obstacles in the road image data. Methods for obtaining such a model are well known in the art and are not expanded on here.
Optionally, in this embodiment, the road surface in the road image may also be segmented by a road-surface segmentation model, yielding the pixel coordinates of the road-surface points in the image. For example, a segmentation model generated by machine learning may be used; methods for obtaining such a model are likewise well known and not expanded on here.
Optionally, the lane line in the road image may also be detected by a lane line detection model and represented as a straight line or a curve. Specifically, a machine-learning method may be employed: a lane-line recognition model is trained on sample data to obtain its parameters, the acquired road image is fed to the model at inference time, and the recognition result is output.
As shown in fig. 1, in step S200, spatial point cloud data of a road is acquired.
In this embodiment, the spatial point cloud data of the road may be acquired in various ways.
When a monocular camera is used, the spatial point cloud data of the road can be obtained by monocular depth estimation. When a binocular camera is used, the disparity of the picture can be obtained by stereo matching, and the spatial point cloud corresponding to the image pixels is then calculated from the camera model. When a lidar is used, it scans the surroundings to generate a three-dimensional point cloud of the environment.
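By way of illustration only (this code is not part of the original disclosure), the binocular case might be realized as the following sketch, which back-projects a rectified disparity map through a pinhole camera model. The intrinsics fx, fy, cx, cy, the baseline, and the axis conventions are assumptions, not values fixed by the patent.

```python
import numpy as np

def disparity_to_points(disparity, fx, fy, cx, cy, baseline):
    """Back-project a rectified disparity map into camera-frame 3D points.

    fx, fy, cx, cy (pixels) and baseline (meters) are placeholder
    intrinsics; an X-right, Y-down, Z-forward convention is assumed.
    """
    v, u = np.indices(disparity.shape)       # pixel row (v) / column (u)
    valid = disparity > 0                    # zero disparity = no match
    z = fx * baseline / disparity[valid]     # depth from disparity
    x = (u[valid] - cx) * z / fx             # lateral offset
    y = (v[valid] - cy) * z / fy             # vertical offset
    return np.stack([x, y, z], axis=1)       # (N, 3) point cloud
```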
Optionally, the spatial point cloud data can be obtained by combining a lidar with a vision sensor; fusing lidar with visual depth yields more accurate and denser point cloud data. This is not specifically limited herein.
In an alternative embodiment, an onboard monocular or binocular camera and/or an onboard laser scanning system may be employed to acquire the point cloud data. Besides the lidar, the vehicle-mounted laser scanning system may include an INS (Inertial Navigation System) and an IMU (Inertial Measurement Unit). The INS can acquire GPS (Global Positioning System) data of the scanned position as well as vehicle travel data such as speed and acceleration, and the IMU can acquire the vehicle's roll, pitch, and yaw.
Of course, an image data acquisition step may be further included before step S100 to acquire two-dimensional image data and three-dimensional laser point cloud data.
As shown in fig. 1, in step S300, a road-surface equation is fitted, and the spatial point cloud is converted to reference that equation and then gridded.
To convert the spatial point cloud into a road-surface-referenced point cloud, the road-surface equation must first be obtained.
As shown in fig. 2, in step S310, a road surface equation is fitted.
The spatial three-dimensional coordinates of the ground point cloud data, that is, the spatial coordinates of the road-surface points in the camera coordinate system, can be calculated from the pixel coordinates of the road-surface points obtained by segmentation in step S100 and the spatial point cloud data of the road obtained in step S200. In this embodiment, the coordinate system is oriented with the X axis pointing to the right of the vehicle, the Y axis perpendicular to the road surface and pointing down, and the Z axis pointing to the front of the vehicle.
In an embodiment, a road-surface equation of the road may be fitted from the spatial three-dimensional coordinates of the ground point cloud data:
ax + by + cz + d = 0,
where a, b, c, and d are the parameters of the road-plane equation obtained from the spatial three-dimensional coordinates of the ground point cloud data.
A specific fitting method is, for example, random sample consensus. First, points are randomly sampled from the ground point cloud data to compute candidate parameters a, b, c, and d, giving a candidate ground equation. The candidate equation is then scored against all the point cloud data, using the distance from each point to the candidate plane as the matching criterion, with an added ground-height constraint; points consistent with the candidate equation are treated as ground points. If enough points are consistent, the candidate ground equation is considered reasonable; otherwise a new candidate is estimated from a fresh random sample of ground points, and the procedure iterates until a specified number of iterations is reached, yielding the road-plane parameters a, b, c, and d.
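The fitting loop described above is essentially RANSAC. A minimal sketch follows, assuming pre-segmented ground points and illustrative iteration and inlier-threshold values that the patent does not specify; the additional ground-height constraint is omitted for brevity.

```python
import numpy as np

def fit_road_plane(ground_pts, iters=200, dist_thresh=0.05):
    """Fit ax + by + cz + d = 0 to road points by random sample consensus.

    ground_pts: (N, 3) camera-frame points pre-filtered by the road
    segmentation mask; iters and dist_thresh are illustrative values.
    """
    best_plane, best_inliers = None, 0
    rng = np.random.default_rng(0)
    for _ in range(iters):
        sample = ground_pts[rng.choice(len(ground_pts), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(ground_pts @ normal + d)   # point-to-plane distances
        inliers = np.count_nonzero(dist < dist_thresh)
        if inliers > best_inliers:               # keep the best consensus
            best_plane, best_inliers = (*normal, d), inliers
    return best_plane                            # (a, b, c, d), unit normal
```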
Alternatively, in combination with a high-precision map, the lane lines detected in step S100 may be matched against the lane lines in the map to compute the vehicle's position in the map; points close to the road surface are then screened with the road-surface equation stored in the map, and the equation is re-estimated from the screened points. This reduces the influence of airborne noise points on the fitted road-surface equation.
In addition, the fitted road-surface equation in this embodiment assumes the road surface is a plane. In a real working environment the road may not be flat, in which case the road surface can be subdivided into several small regions and a plane equation estimated for each region.
After the road-surface equation is fitted in step S310, the spatial point cloud data obtained in step S200 is converted to reference that equation.
As shown in fig. 2, in step S320, the spatial point cloud is converted into a spatial point cloud referenced to the road-surface equation.
The point cloud generated from the image in step S200 lies in the camera coordinate system; once the road-surface equation is known, it can be converted into the vehicle-body coordinate system, whose origin lies on the road surface. Because most obstacles sit on the road, the road-surface equation provides a prior for the subsequent detection: the anchor boxes placed later can all be placed on the road surface.
In the embodiment of the present invention, after the road-surface equation is fitted in step S310, the spatial point cloud data obtained in step S200 may be projected onto the road surface to obtain spatial point cloud data in the vehicle-body coordinate system. For example, the distance from each spatial point to the road surface can be calculated and used to update the point's y coordinate. The specific coordinate conversion methods are well known and not specifically limited herein.
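A minimal sketch of this y-coordinate update is given below; it assumes the unit plane normal from the previous step and the X-right, Y-down, Z-forward convention stated earlier, and is an illustration rather than the patent's exact conversion.

```python
import numpy as np

def to_body_frame(points, plane):
    """Re-reference camera-frame points to the fitted road plane.

    plane = (a, b, c, d) with unit normal. The signed point-to-plane
    distance replaces each point's height coordinate, which places the
    coordinate origin on the road surface (axis conventions assumed).
    """
    normal = np.asarray(plane[:3])
    d = plane[3]
    height = points @ normal + d      # signed distance to the road plane
    body = points.copy()
    body[:, 1] = height               # Y axis: perpendicular to the road
    return body
```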
After the spatial point cloud has been converted in step S320, it can be gridded to reduce the amount of computation.
As shown in fig. 2, step S330 grids the spatial point cloud.
In the embodiment of the invention, starting from the point cloud updated by projection onto the road surface, only the spatial point cloud data of a region of interest within a certain coordinate range in the vehicle-body coordinate system need be selected; for example, the points within 100 meters ahead, 80 meters to each side, and 4 meters above and below are extracted, taking the vehicle body as the center. The point cloud within this range is then divided into a three-dimensional grid resembling a Rubik's cube, in which each cell may measure, for example, 0.05 m in length and width and 0.1 m in height.
For each cell of the grid, if several spatial points fall inside, the coordinates of only one point are kept as the coordinates of the cell; the kept point may be, for example, the highest point or the average of the points. If no spatial point falls inside a cell, the cell's coordinates are marked 0 and the cell is rejected. The gridded point cloud, that is, the cloud remaining after the empty cells are removed, completes the voxelization of the spatial point cloud. Gridding reduces the number of points and thus the subsequent computation for obstacle detection.
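One possible realization of this gridding step is sketched below, using the example region of interest and cell size from the text and the Y-down convention; the tie-breaking details are assumptions.

```python
import numpy as np

def voxelize(points, cell=(0.05, 0.05, 0.1),
             x_range=(-80.0, 80.0), y_range=(-4.0, 4.0), z_range=(0.0, 100.0)):
    """Keep one representative point per occupied three-dimensional grid cell.

    Ranges and cell size follow the example values in the text; empty
    cells are simply dropped, which is the rejection described above.
    """
    lo = np.array([x_range[0], y_range[0], z_range[0]])
    hi = np.array([x_range[1], y_range[1], z_range[1]])
    pts = points[np.all((points >= lo) & (points < hi), axis=1)]
    idx = np.floor((pts - lo) / np.array(cell)).astype(np.int64)
    # Visit points from highest to lowest (Y points down, so ascending y);
    # the first point seen for each cell is kept as its representative.
    keys = {}
    for i in np.argsort(pts[:, 1]):
        keys.setdefault(tuple(idx[i]), i)
    kept = np.array(sorted(keys.values()), dtype=int)
    return pts[kept]
```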
As shown in fig. 1, in step S400, three-dimensional detection boxes are placed and screened.
After the spatial point cloud has been voxelized by the preceding steps, obstacle screening is performed on it.
First, in step S410, the three-dimensional detection boxes are placed.
After the gridded point cloud is generated, a cuboid detection box can be placed on each grid cell. Each cuboid box represents a potential obstacle; for a vehicle, for example, a box of approximately 4.1 m × 1.6 m × 1.56 m (length × width × height) is placed.
A box is placed by positioning its center point, independently of the sizes of the grid cell and of the cuboid box itself. The size of each detection box is not fixed but varies with the obstacle sought: for example, 4.1 m × 1.6 m × 1.56 m for a vehicle and 0.4 m × 0.6 m × 1.8 m for a pedestrian. The boxes (for example, their center points) may be placed on every cell or on every few cells; this trades off the speed and quality of the algorithm and may be set according to actual needs, so the present invention does not specifically limit it.
Optionally, to reduce the amount of computation, fewer cuboid boxes may be placed, for example one detection box every several grid cells; the number of skipped cells may be, for example, 1, 2, 3, or more.
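Anchor placement over the occupied cells might then look like the following sketch; the per-class box sizes are the examples given in the text, while the stride and the data layout are assumptions.

```python
import numpy as np

# Example anchor sizes (length, width, height) taken from the text.
ANCHOR_SIZES = {"vehicle": (4.1, 1.6, 1.56), "pedestrian": (0.4, 0.6, 1.8)}

def place_anchors(cell_points, stride=2):
    """Center one anchor box of each class on every stride-th occupied cell.

    Returns (M, 7) rows of [cx, cy, cz, l, w, h, class_id]; stride > 1
    thins the anchors to cut computation, as described above.
    """
    anchors = []
    for cls_id, size in enumerate(ANCHOR_SIZES.values()):
        for pt in cell_points[::stride]:
            anchors.append([*pt, *size, cls_id])
    return np.asarray(anchors)
```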
After the boxes are placed, those that contain an obstacle can be screened out.
In step S420, the three-dimensional boxes in which an obstacle exists are screened out.
Optionally, the screening may be performed by comparison with the obstacles detected in step S100, specifically:
S4210, projecting each three-dimensional box into the road image of step S100;
S4220, comparing the projection of each three-dimensional box in the road image with the corresponding rectangular box marking an obstacle obtained in step S100: if the overlap area of the two is greater than or equal to a threshold, the box is determined to contain an obstacle; if the overlap area is less than the threshold, it is determined not to contain an obstacle, as sketched below.
Alternatively, the boxes may be screened with the help of sensors such as millimeter-wave radar and ultrasonic radar. These sensors can measure the spatial position of obstacles; for each potential obstacle they detect, the distance to the center of every cuboid box is computed, and a threshold is then set to effectively screen out the cuboid boxes that may contain an obstacle, as sketched below.
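The radar-based screen can be sketched in the same spirit: keep a box if some radar or ultrasonic return lies within a distance threshold of its center. The threshold value and the assumption of 3D radar returns are illustrative.

```python
import numpy as np

def radar_screen(box_centers, radar_points, dist_thresh=2.0):
    """Keep boxes whose center lies near some radar/ultrasonic detection.

    radar_points: (M, 3) detected obstacle positions in the body frame;
    dist_thresh (meters) is an illustrative assumption.
    """
    diff = box_centers[:, None, :] - radar_points[None, :, :]
    dists = np.linalg.norm(diff, axis=2)       # (num_boxes, num_returns)
    return dists.min(axis=1) <= dist_thresh    # boolean keep-mask per box
```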
As shown in fig. 1, in step S500, the three-dimensional boxes are classified and regressed to detect the obstacles on the road surface.
After the boxes containing obstacles are screened out in step S400, they can be classified and regressed, thereby identifying the obstacles on the road surface and their categories.
Features of the gridded point cloud can be extracted with a network model, and the features of the points inside each three-dimensional box are computed. The boxes can then be classified and regressed from these features. For objects with regular shapes, such as pedestrians and vehicles, a single three-dimensional box can be output with, for example, center coordinates, length, width, height, and orientation angle. For irregular obstacles such as branches and soil piles, several three-dimensional boxes can be merged and output in the form of a polygonal three-dimensional box.
The specific classification and regression method may, for example, feed the boxes obtained above into a Softmax layer of a deep neural network, obtaining an accurate position and category for each box, and then filter all the labeled boxes with non-maximum suppression to obtain the final result. The classification and regression method is of course not limited to this, and the present application does not specifically limit it.
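The non-maximum suppression pass mentioned above might look like the following sketch over axis-aligned bird's-eye-view rectangles; the patent does not fix the box parameterization, so this layout is an assumption.

```python
import numpy as np

def nms_bev(boxes, scores, iou_thresh=0.5):
    """Greedy NMS over bird's-eye-view boxes [x1, z1, x2, z2].

    Keeps the highest-scoring box, drops boxes overlapping it by more
    than iou_thresh, and repeats (parameterization assumed).
    """
    order = np.argsort(scores)[::-1]           # indices by descending score
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        z1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        z2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(z2 - z1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou < iou_thresh]          # discard heavy overlaps
    return keep
```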
Compared with existing 3D obstacle detection methods, the obstacle detection method of the invention is highly extensible, can accept different sensor inputs according to accuracy requirements and cost, is not limited by the category of obstacle detected, and can output full-category 3D obstacle detections.
The present application further relates to a system for obstacle detection comprising a memory and a processor, the memory storing instructions; the processor is configured to execute the method of the above embodiments according to instructions stored in the memory.
The application also relates to a computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method of the above-mentioned embodiments.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the modules and the instructions described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (12)

1. A full-category 3D obstacle detection method, comprising:
S100, performing 2D obstacle detection on a road image, and obtaining pixel coordinates of road-surface points in the road image through a road-surface segmentation model;
S200, acquiring spatial point cloud data of the road;
S300, converting the spatial point cloud data into a spatial point cloud referenced to a road-surface equation, and gridding the spatial point cloud;
S400, placing and screening three-dimensional detection boxes in the gridded spatial point cloud;
S500, classifying and regressing the screened three-dimensional boxes, thereby detecting the 3D obstacles on the road surface.
2. The method of claim 1, wherein S200, acquiring the spatial point cloud data of the road, comprises obtaining the spatial point cloud data by combining a lidar with a vision sensor.
3. The method of claim 1 or 2, wherein S300, converting the spatial point cloud data into a spatial point cloud referenced to the road-surface equation and gridding the spatial point cloud, comprises:
S310, fitting a road-surface equation;
S320, converting the spatial point cloud into a spatial point cloud referenced to the road-surface equation;
S330, gridding the road-surface-referenced spatial point cloud.
4. The method of claim 3, wherein S310, fitting the road-surface equation, comprises:
fitting a road-surface equation of the road from the spatial three-dimensional coordinates of the ground point cloud data: ax + by + cz + d = 0,
where a, b, c, and d are the parameters of the road-surface equation obtained from the spatial three-dimensional coordinates of the ground point cloud data;
the spatial three-dimensional coordinates of the ground point cloud data are calculated from the pixel coordinates of the road-surface points and the spatial point cloud data of the road.
5. The method according to claim 3 or 4, wherein S100 further comprises acquiring a lane line in the road image through a lane line detection model;
and S310 further comprises matching the lane line with a lane line in a high-precision map, calculating the position of the vehicle in the high-precision map, screening the points close to the road surface through the road-surface equation in the high-precision map, and optimizing the fitted road-surface equation according to the screened points.
6. The method according to any one of claims 3-5, wherein S320, converting the spatial point cloud into a spatial point cloud referenced to the road-surface equation, comprises projecting the spatial point cloud data obtained in step S200 onto the road surface to obtain spatial point cloud data in the vehicle-body coordinate system.
7. The method of claim 6, wherein S330, gridding the spatial point cloud referenced to the road-surface equation, comprises:
selecting the spatial point cloud data within a certain coordinate range in the vehicle-body coordinate system and dividing it into three-dimensional grid cells; for each cell, if it contains exactly one spatial point, the coordinates of that point are used as the coordinates of the cell; if it contains no spatial point, its coordinates are marked 0 and the cell is discarded; and if it contains several spatial points, the highest point or the average of the points is taken as the coordinates of the cell.
8. The method according to any one of claims 1-7, wherein S400, placing and screening three-dimensional detection boxes in the gridded spatial point cloud, comprises placing one cuboid detection box for each three-dimensional grid cell; or
S400 comprises placing one cuboid detection box every several three-dimensional grid cells.
9. The method of claim 8, wherein S100 further comprises marking each detected 2D obstacle with a rectangular box;
and S400 further comprises:
S4210, projecting each three-dimensional box into the road image;
S4220, comparing the projection of each three-dimensional box in the road image with the corresponding rectangular box marking an obstacle: if the overlap area of the two is greater than or equal to a threshold, the box is determined to contain an obstacle; if the overlap area is less than the threshold, it is determined not to contain an obstacle.
10. The method according to claim 8 or 9, wherein S400 further comprises detecting the spatial positions of obstacles with a millimeter-wave radar and/or an ultrasonic radar, calculating the distance from each detected obstacle to the center of the corresponding cuboid three-dimensional box, and then screening the cuboid boxes by setting a threshold.
11. A full-category 3D obstacle detection system comprising a memory and a processor, the memory storing instructions; wherein the processor is configured to perform the method of any of claims 1-10 according to the instructions stored in the memory.
12. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 10.
CN202010194582.7A, filed 2020-03-19: Full-class 3D obstacle detection method, system and medium. Granted as CN111414848B. Status: Active.

Priority Applications (1)

CN202010194582.7A (granted as CN111414848B), filed 2020-03-19: Full-class 3D obstacle detection method, system and medium


Publications (2)

Publication Number: CN111414848A, published 2020-07-14
Publication Number: CN111414848B (granted patent), published 2023-04-07


Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103411536A (en) * 2013-08-23 2013-11-27 西安应用光学研究所 Auxiliary driving obstacle detection method based on binocular stereoscopic vision
CN103745018A (en) * 2014-02-11 2014-04-23 天津市星际空间地理信息工程有限公司 Multi-platform point cloud data fusion method
CN104931977A (en) * 2015-06-11 2015-09-23 同济大学 Obstacle identification method for smart vehicles
CN105354875A (en) * 2015-09-25 2016-02-24 厦门大学 Construction method and system for two-dimensional and three-dimensional joint model of indoor environment
CN105627938A (en) * 2016-01-07 2016-06-01 厦门大学 Pavement asphalt thickness detection method based on vehicle-mounted laser scanning spot cloud
CN106005383A (en) * 2016-06-02 2016-10-12 中国矿业大学(北京) Underground roadway high-precision three-dimensional model scanning device and method
CN106951847A (en) * 2017-03-13 2017-07-14 百度在线网络技术(北京)有限公司 Obstacle detection method, device, equipment and storage medium
CN107161141A (en) * 2017-03-08 2017-09-15 深圳市速腾聚创科技有限公司 Pilotless automobile system and automobile
CN107464223A (en) * 2017-07-19 2017-12-12 西安理工大学 A kind of dot cloud hole method for repairing and mending based on section
CN107610061A (en) * 2017-08-30 2018-01-19 西安理工大学 A kind of guarantor's edge point cloud hole repair method based on two-dimensional projection
CN108828621A (en) * 2018-04-20 2018-11-16 武汉理工大学 Obstacle detection and road surface partitioning algorithm based on three-dimensional laser radar
CN110244322A (en) * 2019-06-28 2019-09-17 东南大学 Pavement construction robot environment sensory perceptual system and method based on Multiple Source Sensor
CN110632608A (en) * 2018-06-21 2019-12-31 北京京东尚科信息技术有限公司 Target detection method and device based on laser point cloud
CN110675392A (en) * 2019-09-29 2020-01-10 山东科技大学 Rut fine three-dimensional feature extraction method based on pavement continuous laser point cloud
CN110751090A (en) * 2019-10-18 2020-02-04 宁波博登智能科技有限责任公司 Three-dimensional point cloud labeling method and device and electronic equipment
CN110826499A (en) * 2019-11-08 2020-02-21 上海眼控科技股份有限公司 Object space parameter detection method and device, electronic equipment and storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
冉险生; 林立; 黄泽好. Application of close-range photogrammetry in three-dimensional data acquisition. Modern Manufacturing Engineering, No. 3, pp. 106-110. *
冯登超; 秦焕禹; 曾湧; 王春喜. Research on aeronautical charting of urban low-altitude airspace based on a three-dimensional visualized air-corridor system. Journal of Electronic Measurement and Instrumentation, No. 4, pp. 19-21. *
孔金玲; 杨笑天; 吴哲超; 邵永军; 袁雷; 赵绍兵; 先涛. Modeling and application of highway landslide bodies based on three-dimensional laser scanning. Highway Traffic Science and Technology (Applied Technology Edition), No. 12, pp. 63-69. *
黄荣娟; 仲思东; 屠礼芬. Fully automatic three-dimensional modeling based on four-camera stereo vision. Computer Engineering and Design, No. 2, pp. 159-163. *

Similar Documents

Publication Publication Date Title
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
CN110032949B (en) Target detection and positioning method based on lightweight convolutional neural network
WO2021223368A1 (en) Target detection method based on vision, laser radar, and millimeter-wave radar
Shinzato et al. Road terrain detection: Avoiding common obstacle detection assumptions using sensor fusion
CN111192295B (en) Target detection and tracking method, apparatus, and computer-readable storage medium
US8180100B2 (en) Plane detector and detecting method
JP6442834B2 (en) Road surface height shape estimation method and system
Chen et al. Next generation map making: Geo-referenced ground-level LIDAR point clouds for automatic retro-reflective road feature extraction
CN110648389A (en) 3D reconstruction method and system for city street view based on cooperation of unmanned aerial vehicle and edge vehicle
CN110197173B (en) Road edge detection method based on binocular vision
CN115761550A (en) Water surface target detection method based on laser radar point cloud and camera image fusion
JPH10143659A (en) Object detector
CN108645375B (en) Rapid vehicle distance measurement optimization method for vehicle-mounted binocular system
CN113205604A (en) Feasible region detection method based on camera and laser radar
CN112464812A (en) Vehicle-based sunken obstacle detection method
CN112749584A (en) Vehicle positioning method based on image detection and vehicle-mounted terminal
CN111814602A (en) Intelligent vehicle environment dynamic target detection method based on vision
Giosan et al. Superpixel-based obstacle segmentation from dense stereo urban traffic scenarios using intensity, depth and optical flow information
CN111414848B (en) Full-class 3D obstacle detection method, system and medium
WO2021004813A1 (en) Method and mobile entity for detecting feature points in an image
CN111414848A (en) Full-class 3D obstacle detection method, system and medium
CN116189138A (en) Visual field blind area pedestrian detection algorithm based on vehicle-road cooperation
Huang et al. Rear obstacle warning for reverse driving using stereo vision techniques
CN112020722B (en) Three-dimensional sensor data-based road shoulder identification
WO2022133986A1 (en) Accuracy estimation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 2022-08-04

Address after: Room 618, 6 / F, building 5, courtyard 15, Kechuang 10th Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing 100176

Applicant after: Xiaomi Automobile Technology Co.,Ltd.

Address before: 100080 soho1219, Zhongguancun, 8 Haidian North 2nd Street, Haidian District, Beijing

Applicant before: SHENDONG TECHNOLOGY (BEIJING) Co.,Ltd.

GR01 Patent grant