
CN112330702A - Point cloud completion method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112330702A
CN112330702A
Authority
CN
China
Prior art keywords
point cloud
cloud set
frame point
target object
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011204932.XA
Other languages
Chinese (zh)
Inventor
童柏琛
朱磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mushroom Car Union Information Technology Co Ltd
Original Assignee
Mushroom Car Union Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mushroom Car Union Information Technology Co Ltd filed Critical Mushroom Car Union Information Technology Co Ltd
Priority to CN202011204932.XA priority Critical patent/CN112330702A/en
Publication of CN112330702A publication Critical patent/CN112330702A/en
Pending legal-status Critical Current

Classifications

    • G06T7/13 Edge detection
    • G06T7/181 Segmentation; Edge detection involving edge growing; involving edge linking
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Abstract

The embodiment of the invention provides a point cloud completion method, a point cloud completion device, electronic equipment and a storage medium. The method includes: determining an Nth frame point cloud set and an (N+1)th frame point cloud set of a target object, where N is an integer not less than 2; tracking and registering the Nth frame point cloud set and the (N+1)th frame point cloud set in sequence, and determining the position corresponding relation between them; and completing the (N+1)th frame point cloud set based on the position corresponding relation and the Nth frame point cloud set. The scheme provided by the embodiment of the invention realizes automatic completion of the point cloud set, the completed point cloud set can clearly and accurately reflect the contour of the target object, and the efficiency of point cloud completion is improved.

Description

Point cloud completion method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a point cloud completion method and device, electronic equipment and a storage medium.
Background
With the continuous development of artificial intelligence, intelligent driving technology has emerged. Intelligent driving is a technique in which a machine assists or replaces a human driver. Detecting and perceiving objects on the driving road is the basis for the decision-making and control of an intelligent driving system.
An intelligent driving system detects objects on the road with a laser detector. Because the laser may be occluded while detecting an object, the point cloud data obtained by the laser detector cannot cover the whole contour of the detected object. In addition, as the distance between the laser detector and the measured object increases, the accuracy of its contour estimate also deteriorates.
In the prior art, the acquired point cloud of the measured object is completed manually, so point cloud completion is inefficient and inaccurate.
Disclosure of Invention
The embodiment of the invention provides a point cloud completion method and device, electronic equipment and a storage medium, which are used for solving the problems of low point cloud completion efficiency and poor accuracy in the prior art.
In a first aspect, an embodiment of the present invention provides a point cloud completion method, including:
determining an Nth frame point cloud set and an (N+1)th frame point cloud set of a target object, wherein N is an integer not less than 2;
tracking and registering the Nth frame point cloud set and the (N+1)th frame point cloud set in sequence, and determining the position corresponding relation between the Nth frame point cloud set and the (N+1)th frame point cloud set; and
completing the (N+1)th frame point cloud set based on the position corresponding relation and the Nth frame point cloud set.
Optionally, the sequentially tracking and registering the Nth frame point cloud set and the (N+1)th frame point cloud set, and determining the position corresponding relation between the Nth frame point cloud set and the (N+1)th frame point cloud set, specifically includes:
tracking the target object in the (N+1)th frame point cloud set based on the motion direction of the target object and the Nth frame point cloud set to obtain a plurality of shape point pairs of the target object, and/or tracking the target object in the (N+1)th frame point cloud set based on the shape features of the target object and the Nth frame point cloud set to obtain a plurality of shape point pairs of the target object; and
registering the Nth frame point cloud set and the (N+1)th frame point cloud set based on the plurality of shape point pairs of the target object to obtain the position corresponding relation between the Nth frame point cloud set and the (N+1)th frame point cloud set.
Optionally, the tracking of the target object in the (N+1)th frame point cloud set based on the motion direction and/or the shape features of the target object together with the Nth frame point cloud set, to obtain a plurality of shape point pairs of the target object, specifically includes:
determining a weight coefficient corresponding to each point in the Nth frame point cloud set based on the motion direction and/or the shape features of the target object, wherein the weight coefficient represents the feature recognizability of each point in the Nth frame point cloud set; and
tracking the target object in the (N+1)th frame point cloud set based on the weight coefficient corresponding to each point to obtain a plurality of shape point pairs of the target object.
Optionally, the position corresponding relation includes a translation vector and a rotation matrix characterizing the position change of the target object; and
the registering of the Nth frame point cloud set and the (N+1)th frame point cloud set based on the plurality of shape point pairs of the target object to obtain the position corresponding relation between the two sets includes:
registering the Nth frame point cloud set and the (N+1)th frame point cloud set based on the plurality of shape point pairs of the target object to obtain a translation vector and a rotation matrix for each shape point pair; and
counting the number of shape point pairs corresponding to each translation vector and rotation matrix, and taking the translation vector and rotation matrix with the largest number of corresponding shape point pairs as the position corresponding relation.
Optionally, the completing of the (N+1)th frame point cloud set based on the position corresponding relation and the Nth frame point cloud set further includes:
determining the estimated position, in the (N+1)th frame point cloud set, of each point in the Nth frame point cloud set based on the position corresponding relation and the Nth frame point cloud set;
determining, for each shape point pair, the position deviation between the estimated position of the point from the Nth frame point cloud set and the actual position of the point from the (N+1)th frame point cloud set; and
if the position deviation of any shape point pair is larger than a preset threshold, removing the points of that shape point pair from the Nth frame point cloud set and the (N+1)th frame point cloud set respectively.
Optionally, the completing of the (N+1)th frame point cloud set based on the position corresponding relation and the Nth frame point cloud set includes:
projecting the Nth frame point cloud set into the (N+1)th frame point cloud set based on the position corresponding relation to obtain a projection point cloud set of the target object; and
integrating the projection point cloud set and the (N+1)th frame point cloud set into a complete point cloud set of the target object.
Optionally, the integrating of the projection point cloud set and the (N+1)th frame point cloud set into a complete point cloud set of the target object includes:
retaining, in the (N+1)th frame point cloud set, the points of the projection point cloud set that have no corresponding points in the (N+1)th frame point cloud set.
In a second aspect, an embodiment of the present invention provides a point cloud completion device, including:
a point cloud determining unit, configured to determine an Nth frame point cloud set and an (N+1)th frame point cloud set of a target object whose point cloud is to be completed, wherein N is an integer not less than 2;
a tracking and registering unit, configured to track and register the Nth frame point cloud set and the (N+1)th frame point cloud set, and determine the position corresponding relation between the Nth frame point cloud set and the (N+1)th frame point cloud set; and
a point cloud completion unit, configured to complete the (N+1)th frame point cloud set based on the position corresponding relation and the Nth frame point cloud set.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory and a bus, wherein the processor and the communication interface communicate with each other through the bus, and the processor can call logic instructions in the memory to execute the steps of the point cloud completion method provided in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the steps of the point cloud completion method provided in the first aspect.
According to the point cloud completion method and device, the electronic equipment and the storage medium, the position corresponding relation between the Nth frame point cloud set and the (N+1)th frame point cloud set of the target object is determined by tracking and registering the two sets, and the (N+1)th frame point cloud set is completed based on the position corresponding relation and the Nth frame point cloud set. Because completion is performed automatically between every two adjacent frames, the resulting point cloud set continuously tracks the change of the moving position of the target object. Automatic completion of the point cloud set is thus realized, the completed point cloud set can clearly and accurately reflect the contour of the target object, and the efficiency of point cloud completion is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a point cloud completion method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a point cloud completion apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The specific application scene of the embodiment of the invention can be an automatic driving scene, or another target object detection field such as robot obstacle avoidance or aircraft tracking. The following description takes automatic driving as an example.
In an automatic driving scene, the vehicle-mounted intelligent driving system detects a target object on the road through a laser detector. The laser detector emits ranging laser and receives the laser reflected by the target object, obtaining a point cloud set containing the surface shape information of the target object. The laser detector emits and receives ranging laser at a fixed time interval to collect point clouds of the target object. The acquisition time interval of the laser detector can be set according to actual requirements, and the embodiment of the invention does not specifically limit it.
Fig. 1 is a schematic flow chart of a point cloud completion method provided in an embodiment of the present invention, as shown in fig. 1, the method includes:
step 110, determining an N +1 th frame point cloud set of a target object to be complemented by point clouds and an N +1 th frame point cloud set of the N +1 th frame point cloud set, wherein N is an integer not less than 2.
In particular, objects can be divided into rigid objects and non-rigid objects. A rigid object is one whose shape does not change with time or motion state, for example a car running on a road. A non-rigid object is one whose shape changes with time or motion state, for example a pedestrian or an animal walking on a road. The target object in the embodiment of the invention is a rigid object that is located on the driving road of the vehicle and may affect driving safety.
The Nth frame point cloud set and the (N+1)th frame point cloud set of the target object are obtained in order of acquisition time.
Here, the point cloud set of the target object is a point cloud set containing only the target object, obtained by performing background filtering on the original point cloud set acquired by the laser detector.
Step 120, tracking and registering the Nth frame point cloud set and the (N+1)th frame point cloud set, and determining the position corresponding relation between the Nth frame point cloud set and the (N+1)th frame point cloud set.
Specifically, the Nth frame point cloud set and the (N+1)th frame point cloud set are two adjacent frames of point cloud sets, which together reflect the position change of the target object from the acquisition time of the Nth frame point cloud set to the acquisition time of the (N+1)th frame point cloud set.
The position change of the target object across the two frames is tracked, and the point clouds in the two adjacent frames are registered according to the tracked position change to obtain the position corresponding relation between the Nth frame point cloud set and the (N+1)th frame point cloud set. The position corresponding relation represents the coordinate conversion relation between a point describing the target object in the Nth frame point cloud set and the point describing the same position of the target object in the (N+1)th frame point cloud set.
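This coordinate conversion relation can be pictured as p' = R·p + t. A minimal sketch, in 2D for brevity (real LiDAR points are 3D, and the helper name is an illustrative assumption, not from the patent):

```python
import math

def apply_correspondence(points, theta, t):
    """Map points from frame N coordinates into frame N+1 coordinates
    using a rotation angle theta (radians) and a translation t = (tx, ty):
    p' = R(theta) @ p + t.  2D stand-in for the patent's 3D relation."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + t[0], s * x + c * y + t[1]) for x, y in points]

# A point on the target in frame N, seen again after the object
# rotated 90 degrees and moved 2 m along x between the two frames.
moved = apply_correspondence([(1.0, 0.0)], math.pi / 2, (2.0, 0.0))
print(moved)  # [(2.0, 1.0)] up to floating-point rounding
```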
Step 130, completing the (N+1)th frame point cloud set based on the position corresponding relation and the Nth frame point cloud set.
Specifically, according to the position corresponding relation between the Nth frame point cloud set and the (N+1)th frame point cloud set, the points in the Nth frame point cloud set can be coordinate-converted and projected into the (N+1)th frame point cloud set to obtain the completed point cloud set of the target object. From the completed point cloud set, a clear contour of the target object can be obtained, which helps the intelligent driving system identify the target object quickly.
In a preferred embodiment, because a rigid target object rotates or translates as a whole over the time sequence, every point on it rotates or translates about the object centre with the same displacement, so the motion law of each point is identical. Even if some points do not move between frames, or the object turns or is partially occluded, its contour points still exist.
The point cloud completion process runs throughout the period from the moment the target object appears in the view of the vehicle to the moment it disappears. The Nth frame point cloud set may itself be a point cloud set completed from the (N-1)th frame point cloud set. By analogy, the (N+1)th frame point cloud set of the target object is always completed from the Nth frame point cloud set, and the process continues frame by frame.
The embodiment of the invention thus provides a point cloud completion method: the Nth frame point cloud set and the (N+1)th frame point cloud set of the target object are tracked and registered, and the position corresponding relation between them is determined; the (N+1)th frame point cloud set is then completed based on the position corresponding relation and the Nth frame point cloud set. Because completion is performed automatically between every two adjacent frames, the resulting point cloud set continuously tracks the change of the moving position of the target object; automatic completion of the point cloud set is realized, the completed point cloud set can clearly and accurately reflect the contour of the target object, and the efficiency of point cloud completion is improved.
In one embodiment, step 120 specifically includes:
tracking the target object in the (N+1)th frame point cloud set based on the motion direction of the target object and the Nth frame point cloud set to obtain shape point pairs of the target object, and/or tracking the target object in the (N+1)th frame point cloud set based on the shape features of the target object and the Nth frame point cloud set to obtain shape point pairs of the target object; and
registering the Nth frame point cloud set and the (N+1)th frame point cloud set based on the shape point pairs of the target object to obtain the position corresponding relation between the Nth frame point cloud set and the (N+1)th frame point cloud set.
In specific implementation, information such as the motion direction and the shape features of the target object can be obtained from the points in the Nth frame point cloud set and the (N+1)th frame point cloud set. For example, the point cloud centres of the two sets can be obtained from the distribution of their point clouds, and the motion direction of the target object can be derived from the movement of the point cloud centre. For another example, shape features such as the length, width and height of the target object can be obtained from the distribution of the points in the point cloud set.
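A minimal sketch of how the motion direction and coarse shape features might be derived from the point distributions, assuming simple centroid and axis-aligned-bounding-box estimates (the function names are illustrative, not from the patent):

```python
def centroid(points):
    """Centre of a point cloud set: the mean of each coordinate."""
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def motion_direction(frame_n, frame_n1):
    """Approximate the object's motion direction as the shift of the
    point cloud centre between two consecutive frames."""
    cn, cn1 = centroid(frame_n), centroid(frame_n1)
    return tuple(b - a for a, b in zip(cn, cn1))

def bbox_extent(points):
    """Length/width-style shape features from the axis-aligned bounds."""
    return tuple(max(c) - min(c) for c in zip(*points))

frame_n  = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]
frame_n1 = [(1.0, 0.0), (5.0, 0.0), (5.0, 2.0), (1.0, 2.0)]
print(motion_direction(frame_n, frame_n1))  # (1.0, 0.0): moving along +x
print(bbox_extent(frame_n))                 # (4.0, 2.0): length 4, width 2
```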
The target object can be tracked according to its motion direction, its shape features, or both, to obtain a plurality of shape point pairs. A shape point pair consists of a point describing the target object in the Nth frame point cloud set and the point describing the same position of the target object in the (N+1)th frame point cloud set.
The Nth frame point cloud set and the (N+1)th frame point cloud set are then registered according to the plurality of shape point pairs of the target object, so that the spatial position difference between the two adjacent frames is minimized, and the position corresponding relation between the Nth frame point cloud set and the (N+1)th frame point cloud set is obtained.
In one embodiment, the tracking of the target object in the (N+1)th frame point cloud set based on the motion direction and/or the shape features of the target object together with the Nth frame point cloud set, to obtain a plurality of shape point pairs of the target object, specifically includes:
determining a weight coefficient corresponding to each point in the Nth frame point cloud set based on the motion direction and/or the shape features of the target object, wherein the weight coefficient represents the feature recognizability of each point in the Nth frame point cloud set; and
tracking the target object in the (N+1)th frame point cloud set based on the weight coefficient corresponding to each point to obtain a plurality of shape point pairs of the target object.
Specifically, the feature recognizability measures how easily each point in the point cloud set can be recognized on the surface shape of the target object. For example, if the target object is an automobile, the points representing the wheels and the protrusions of the vehicle body are the easiest to recognize, so their feature recognizability is high. Likewise, on a moving automobile the head is easier to recognize than the tail, so the points of the head have higher feature recognizability.
Therefore, a different weight coefficient can be set for each point in the Nth frame point cloud set according to the motion direction and/or the shape features of the target object, representing the feature recognizability of that point. The higher the feature recognizability, the higher the weight.
The target object is then tracked in the (N+1)th frame point cloud set according to the weight coefficient of each point in the Nth frame point cloud set, matching points that describe the same position of the target object, so as to obtain a plurality of shape point pairs of the target object.
The tracking algorithm for the shape point pairs may adopt the Hungarian algorithm or the Kalman filter; the embodiment of the present invention does not specifically limit the choice of tracking algorithm.
In one specific embodiment, the position corresponding relation includes a translation vector and a rotation matrix characterizing the position change of the target object. Registering the Nth frame point cloud set and the (N+1)th frame point cloud set based on the shape point pairs of the target object to obtain the position corresponding relation specifically includes: registering the two sets based on the shape point pairs to obtain a translation vector and a rotation matrix for each shape point pair; and counting the number of shape point pairs corresponding to each translation vector and rotation matrix, and taking the translation vector and rotation matrix with the largest number of corresponding shape point pairs as the position corresponding relation.
Specifically, the position corresponding relation may use a translation vector and a rotation matrix to represent the coordinate transformation between a point describing the target object in the Nth frame point cloud set and the point describing the same position of the target object in the (N+1)th frame point cloud set.
The Nth frame point cloud set and the (N+1)th frame point cloud set are registered according to the plurality of tracked shape point pairs of the target object to obtain a translation vector and a rotation matrix for each shape point pair.
The translation vector describes the conversion between the point cloud space origin of the Nth frame point cloud set and that of the (N+1)th frame point cloud set. The rotation matrix describes the conversion between the point cloud space coordinate axes of the two sets. Through the translation vector and rotation matrix of each shape point pair, a point describing the target object in the Nth frame point cloud set is converted into the point describing the same position of the target object in the (N+1)th frame point cloud set.
The algorithm for registering the Nth frame point cloud set and the (N+1)th frame point cloud set may adopt the ICP (Iterative Closest Point) algorithm; the embodiment of the present invention does not specifically limit the choice of registration algorithm.
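One way to picture the inner solve of such a registration step is the closed-form least-squares alignment of paired points, shown here in 2D; a 3D implementation would use an SVD/Kabsch solve, so this is an illustrative sketch rather than the patent's implementation:

```python
import math

def fit_rigid_2d(src, dst):
    """Closed-form least-squares rotation angle and translation aligning
    paired 2D points src -> dst (the inner solve of one ICP iteration)."""
    n = len(src)
    cs = tuple(sum(c) / n for c in zip(*src))   # source centroid
    cd = tuple(sum(c) / n for c in zip(*dst))   # destination centroid
    num = den = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x, y, u, v = x - cs[0], y - cs[1], u - cd[0], v - cd[1]
        num += x * v - y * u                    # sum of cross products
        den += x * u + y * v                    # sum of dot products
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    return theta, (cd[0] - (c * cs[0] - s * cs[1]),
                   cd[1] - (s * cs[0] + c * cs[1]))

# Recover a known motion: rotate by 0.5 rad, translate by (1, 2).
src = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]
dst = [(math.cos(0.5) * x - math.sin(0.5) * y + 1.0,
        math.sin(0.5) * x + math.cos(0.5) * y + 2.0) for x, y in src]
theta, t = fit_rigid_2d(src, dst)
print(round(theta, 6), tuple(round(v, 6) for v in t))  # 0.5 (1.0, 2.0)
```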
After registration, all shape point pairs are traversed, the number of shape point pairs corresponding to each translation vector and rotation matrix is counted, and the translation vector and rotation matrix with the largest number of corresponding shape point pairs are taken as the position corresponding relation. In this way, the translation vector and rotation matrix that finally represent the position corresponding relation between the Nth frame point cloud set and the (N+1)th frame point cloud set fit as many shape point pairs as possible while satisfying the registration requirement, which improves the registration accuracy.
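The voting idea can be sketched as follows; for brevity only candidate translation vectors are voted on (the patent votes on translation/rotation pairs), quantised onto a grid so that nearly identical candidates pool their votes. The grid size and helper name are assumptions:

```python
from collections import Counter

def vote_translation(pairs, grid=0.1):
    """Each shape point pair proposes a candidate translation; quantise
    the candidates onto a grid and keep the one most pairs agree on."""
    votes = Counter()
    for (x, y), (u, v) in pairs:
        key = (round((u - x) / grid) * grid, round((v - y) / grid) * grid)
        votes[key] += 1
    return votes.most_common(1)[0]          # (translation, vote count)

pairs = [((0.0, 0.0), (1.0, 0.0)),          # three pairs agree on (1, 0)
         ((2.0, 0.0), (3.0, 0.0)),
         ((0.0, 1.0), (1.0, 1.0)),
         ((5.0, 5.0), (9.0, 9.0))]          # a mismatched pair votes alone
print(vote_translation(pairs))              # ((1.0, 0.0), 3)
```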
In a specific embodiment, completing the (N+1)th frame point cloud set based on the position corresponding relation and the Nth frame point cloud set further includes: determining the estimated position, in the (N+1)th frame point cloud set, of each point in the Nth frame point cloud set based on the position corresponding relation; determining, for each shape point pair, the position deviation between the estimated position of the point from the Nth frame point cloud set and the actual position of the point from the (N+1)th frame point cloud set; and if the position deviation of any shape point pair is larger than a preset threshold, removing the points of that shape point pair from the Nth frame point cloud set and the (N+1)th frame point cloud set respectively.
Specifically, coordinate conversion is performed on each point in the Nth frame point cloud set to obtain its estimated position in the (N+1)th frame point cloud set.
For each shape point pair, the estimated position of the point from the Nth frame point cloud set is compared with the actual position of the point from the (N+1)th frame point cloud set to obtain the position deviation. A preset threshold can be set to measure the distance between the two points of each shape point pair.
If the position deviation of any shape point pair is larger than the preset threshold, the points in that pair are considered not to lie on the target object, and they are removed from the Nth frame point cloud set and the (N+1)th frame point cloud set respectively.
The preset threshold can be set according to the moving distance of the target object between the Nth frame point cloud set and the (N+1)th frame point cloud set: the larger the moving distance, the larger the threshold, and vice versa.
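A sketch of this deviation-based filtering, again in 2D with an angle/translation pair standing in for the patent's rotation matrix and translation vector; the function name and threshold value are illustrative assumptions:

```python
import math

def reject_outlier_pairs(pairs, theta, t, threshold):
    """Keep only shape point pairs whose frame-N point, mapped by the
    estimated position corresponding relation, lands within `threshold`
    of its paired frame-N+1 point; the rest are treated as points that
    are not on the target object."""
    c, s = math.cos(theta), math.sin(theta)
    kept = []
    for (x, y), actual in pairs:
        est = (c * x - s * y + t[0], s * x + c * y + t[1])  # estimated position
        if math.dist(est, actual) <= threshold:             # position deviation
            kept.append(((x, y), actual))
    return kept

pairs = [((0.0, 0.0), (1.0, 0.0)),
         ((2.0, 0.0), (3.1, 0.0)),
         ((5.0, 5.0), (9.0, 9.0))]          # deviates far: removed
print(len(reject_outlier_pairs(pairs, 0.0, (1.0, 0.0), 0.5)))  # 2
```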
In one embodiment, step 130 specifically includes:
based on the position correspondence, projecting the Nth frame point cloud set into the (N + 1)th frame point cloud set to obtain a projection point cloud set of the target object; and integrating the projection point cloud set and the (N + 1)th frame point cloud set into a completed point cloud set of the target object.
Specifically, according to the position correspondence between the Nth frame point cloud set and the (N + 1)th frame point cloud set, coordinate transformation may be performed on the points in the Nth frame point cloud set to project them into the (N + 1)th frame point cloud set, yielding the projection point cloud set of the target object. The projection point cloud set and the (N + 1)th frame point cloud set are then integrated, for example by overlaying the two point clouds, to obtain the completed point cloud set of the target object.
Further, the points of any shape point pair whose position deviation exceeds the preset threshold may first be removed from the Nth frame point cloud set and the (N + 1)th frame point cloud set; coordinate transformation is then performed on the remaining points of the Nth frame point cloud set according to the position correspondence, and they are projected into the (N + 1)th frame point cloud set to obtain the completed point cloud set of the target object. Because points that do not lie on the target object have been removed, the resulting completed point cloud set is more accurate, and the obtained contour of the target object is more precise.
The final point cloud set of the target object may be obtained by sampling the points in the completed point cloud set. Uniform sampling, geometric sampling, random sampling, lattice sampling and the like may be adopted; the embodiment of the present invention does not specifically limit the sampling method.
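The projection, overlay, and sampling steps above can be sketched as follows. This is an illustration only; the function name `complete_point_cloud`, the `(R, t)` form of the position correspondence, and the choice of random sampling (one of the options the embodiment allows) are assumptions of the sketch:

```python
import numpy as np

def complete_point_cloud(pts_n, pts_n1, R, t, sample_size=None, seed=0):
    """Project frame-N points into frame N+1 and merge the two clouds.

    The projection applies the position correspondence (R, t); the merged
    set may optionally be downsampled (random sampling is shown here;
    uniform, geometric or lattice sampling could be substituted).
    """
    projected = pts_n @ R.T + t                 # projection point cloud set
    completed = np.vstack([projected, pts_n1])  # overlay the two clouds
    if sample_size is not None and sample_size < len(completed):
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(completed), size=sample_size, replace=False)
        completed = completed[idx]
    return completed
```

The overlay step simply concatenates the two clouds; any duplicate handling (such as the correspondence-based retention described below) would be layered on top of this.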
In one embodiment, integrating the projection point cloud set and the N +1 th frame point cloud set into a complete point cloud set of the target object includes:
and on the basis of the (N + 1) th frame point cloud set, retaining points which do not have corresponding relation in the projection point cloud set and the (N + 1) th frame point cloud set.
Specifically, after the points of the shape point pairs whose position deviation exceeds the preset threshold are removed from the Nth frame point cloud set and the (N + 1)th frame point cloud set, the remaining points of the Nth frame point cloud set are projected into the (N + 1)th frame point cloud set according to the position correspondence to obtain the projection point cloud set of the target object, and the projection point cloud set and the (N + 1)th frame point cloud set are integrated into the completed point cloud set of the target object. For the points that have no correspondence between the projection point cloud set and the (N + 1)th frame point cloud set:
if the number of the points in the projection point cloud set is more than that of the points in the (N + 1) th frame point cloud set, reserving redundant points in the projection point cloud set; if the number of the points in the projection point cloud set is less than that of the points in the (N + 1) th frame point cloud set, redundant points in the (N + 1) th frame point cloud set are reserved.
According to the point cloud completion method provided by the embodiment of the present invention, retaining the points that have no correspondence between the projection point cloud set and the (N + 1)th frame point cloud set preserves, as far as possible, the points that characterize the motion change of the target object, so that the shape features of the target object are represented more richly.
Based on any of the above embodiments, fig. 2 is a schematic structural diagram of a point cloud complementing device provided by an embodiment of the present invention, as shown in fig. 2, the device includes:
a point cloud determining unit 210, configured to determine an Nth frame point cloud set and an (N + 1)th frame point cloud set of a target object whose point cloud is to be completed, where N is an integer not less than 2;
a tracking and registering unit 220, configured to track and register the nth frame point cloud set and the (N + 1) th frame point cloud set, and determine a position correspondence between the nth frame point cloud set and the (N + 1) th frame point cloud set;
and a point cloud complementing unit 230, configured to complement the N +1 th frame point cloud set based on the position correspondence and the nth frame point cloud set.
Specifically, the point cloud determining unit 210 transmits and receives ranging laser pulses at fixed time intervals through a laser detector, so as to collect point clouds of the target object and obtain the Nth frame point cloud set and the (N + 1)th frame point cloud set of the target object. The tracking and registering unit 220 tracks the position change of the target object across the two frames, and registers the point clouds of the two adjacent frame point cloud sets according to the tracked position change, so as to obtain the position correspondence between the Nth frame point cloud set and the (N + 1)th frame point cloud set. The point cloud complementing unit 230 may perform coordinate transformation on the points in the Nth frame point cloud set according to this position correspondence and project them into the (N + 1)th frame point cloud set, thereby obtaining the completed point cloud set of the target object. From the completed point cloud set, a clear contour of the target object can be obtained, which helps an intelligent driving system identify the target object quickly.
The embodiment of the present invention provides a point cloud complementing device, which determines the position correspondence between the Nth frame point cloud set and the (N + 1)th frame point cloud set of a target object by tracking and registering the two sets, and completes the (N + 1)th frame point cloud set based on the position correspondence and the Nth frame point cloud set. Since the point cloud completion is performed automatically between two adjacent frames, the obtained point cloud set can continuously track the change in the moving position of the target object, automatic completion of the point cloud set is realized, the completed point cloud set clearly and accurately reflects the contour of the target object, and the point cloud completion efficiency is improved.
In one embodiment, the tracking registration unit 220 includes:
the tracking subunit is configured to track the target object in the (N + 1) th frame point cloud set based on the motion direction of the target object and the (N) th frame point cloud set to obtain a plurality of shape point pairs of the target object, and/or track the target object in the (N + 1) th frame point cloud set based on the shape feature of the target object and the (N) th frame point cloud set to obtain a plurality of shape point pairs of the target object;
and the registration subunit is used for registering the nth frame point cloud set and the (N + 1) th frame point cloud set based on the plurality of shape point pairs of the target object to obtain a position corresponding relation between the nth frame point cloud set and the (N + 1) th frame point cloud set.
In one embodiment, the tracking subunit is specifically configured to:
determining a weight coefficient corresponding to each point in the point cloud set of the Nth frame based on the motion direction of the target object, and/or determining a weight coefficient corresponding to each point in the point cloud set of the Nth frame based on the shape characteristics of the target object, wherein the weight coefficient comprises the characteristic identification degree of each point in the point cloud set of the Nth frame; and tracking the target object in the (N + 1) th frame point cloud set based on the weight coefficient corresponding to each point to obtain a plurality of shape point pairs of the target object.
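One plausible realisation of such weight coefficients can be sketched as follows. The embodiment does not specify the weighting rule; weighting points by their extent along the motion direction (so that, for example, the leading edge of the target scores highest), and the function name `motion_direction_weights`, are assumptions of this sketch:

```python
import numpy as np

def motion_direction_weights(points, motion_dir):
    """Assign each frame-N point a weight coefficient derived from the
    motion direction of the target object.

    Points further along the direction of travel receive larger weights,
    as a crude stand-in for the 'feature identification degree' the
    embodiment describes; weights are normalised to [0, 1].
    """
    motion_dir = np.asarray(motion_dir, dtype=float)
    motion_dir = motion_dir / np.linalg.norm(motion_dir)
    extent = points @ motion_dir            # extent along motion direction
    span = extent.max() - extent.min()
    if span == 0:                           # degenerate: all points level
        return np.ones(len(points))
    return (extent - extent.min()) / span
```

A shape-feature-based weighting (the "and/or" branch above) would follow the same pattern, scoring each point by a local descriptor instead of its extent along the motion direction.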
In one specific embodiment, the position correspondence includes a translation vector and a rotation matrix characterizing the position change of the target object;
the registration subunit is specifically configured to:
registering the Nth frame point cloud set and the (N + 1)th frame point cloud set based on the plurality of shape point pairs of the target object to obtain a translation vector and a rotation matrix for each shape point pair; and counting the number of shape point pairs corresponding to each translation vector and each rotation matrix, and taking the translation vector and rotation matrix with the largest number of corresponding shape point pairs as the position correspondence.
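The voting step can be sketched as follows. This is an illustration under stated assumptions: the embodiment votes over translation vectors and rotation matrices jointly, while only the translation vote is shown here (assuming rotation is handled separately), and the rounding used to bin nearby votes, as well as the function name `vote_position_correspondence`, are assumptions of the sketch:

```python
import numpy as np
from collections import Counter

def vote_position_correspondence(pairs_n, pairs_n1, decimals=1):
    """Choose the translation supported by the most shape point pairs.

    Each pair votes with its frame-N -> frame-N+1 offset; offsets are
    rounded so that nearby votes fall into the same bin, and the bin
    with the most votes wins.
    """
    offsets = pairs_n1 - pairs_n
    votes = Counter(tuple(np.round(o, decimals)) for o in offsets)
    winner, count = votes.most_common(1)[0]
    return np.array(winner), count
```

The majority vote makes the estimate robust to a minority of mismatched pairs, which complements the deviation-threshold removal described earlier.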
In one embodiment, the apparatus further comprises:
the abnormal removing unit is used for determining the estimated position of each point in the N & ltth & gt frame point cloud set in the (N + 1) th frame point cloud set based on the position corresponding relation and the N & ltth & gt frame point cloud set; determining the position deviation between the estimated position of the point cloud set of the Nth frame and the actual position of the point cloud set of the (N + 1) th frame in each shape point pair; and if the position deviation of any shape point pair is larger than a preset threshold value, removing the points in any shape point pair from the N frame point cloud set and the N +1 frame point cloud set respectively.
In an embodiment, the point cloud complementing unit 230 specifically includes:
a projection subunit, configured to project the Nth frame point cloud set into the (N + 1)th frame point cloud set based on the position correspondence, to obtain a projection point cloud set of the target object;
and a completion subunit, configured to integrate the projection point cloud set and the (N + 1)th frame point cloud set into a completed point cloud set of the target object.
In one embodiment, the completion subunit is specifically configured to:
retain, on the basis of the (N + 1)th frame point cloud set, the points that have no correspondence between the projection point cloud set and the (N + 1)th frame point cloud set.
Based on any of the above embodiments, fig. 3 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention. As shown in fig. 3, the electronic device may include: a Processor (Processor) 310, a communication Interface (Communications Interface) 320, a Memory (Memory) 330, and a communication Bus (Communications Bus) 340, wherein the processor 310, the communication interface 320, and the memory 330 communicate with each other via the communication bus 340. The processor 310 may call logic instructions in the memory 330 to perform the following method:
determining an Nth frame point cloud set and an N +1 th frame point cloud set of a target object, wherein N is an integer not less than 2; tracking and registering the Nth frame point cloud set and the (N + 1) th frame point cloud set in sequence, and determining a position corresponding relation between the Nth frame point cloud set and the (N + 1) th frame point cloud set; and completing the (N + 1) th frame point cloud set based on the position corresponding relation and the Nth frame point cloud set.
In addition, the logic instructions in the memory 330 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes a number of instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Embodiments of the present invention further provide a non-transitory computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, performs the method provided by the foregoing embodiments, the method including:
determining an Nth frame point cloud set and an N +1 th frame point cloud set of a target object, wherein N is an integer not less than 2; tracking and registering the Nth frame point cloud set and the (N + 1) th frame point cloud set in sequence, and determining a position corresponding relation between the Nth frame point cloud set and the (N + 1) th frame point cloud set; and completing the (N + 1) th frame point cloud set based on the position corresponding relation and the Nth frame point cloud set.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes commands for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A point cloud complementing method is characterized by comprising the following steps:
determining an Nth frame point cloud set and an N +1 th frame point cloud set of a target object, wherein N is an integer not less than 2;
tracking and registering the Nth frame point cloud set and the (N + 1) th frame point cloud set in sequence, and determining the position corresponding relation between the Nth frame point cloud set and the (N + 1) th frame point cloud set;
and completing the (N + 1) th frame point cloud set based on the position corresponding relation and the Nth frame point cloud set.
2. The method according to claim 1, wherein the tracking and registering the nth frame point cloud set and the N +1 th frame point cloud set sequentially, and the determining a position correspondence between the nth frame point cloud set and the N +1 th frame point cloud set comprises:
tracking the target object in the (N + 1) th frame point cloud set based on the motion direction of the target object and the (N) th frame point cloud set to obtain a plurality of shape point pairs of the target object, and/or tracking the target object in the (N + 1) th frame point cloud set based on the shape feature of the target object and the (N) th frame point cloud set to obtain a plurality of shape point pairs of the target object;
and registering the Nth frame point cloud set and the (N + 1) th frame point cloud set based on a plurality of shape point pairs of the target object to obtain a position corresponding relation between the Nth frame point cloud set and the (N + 1) th frame point cloud set.
3. The method of claim 2, wherein the tracking the target object in the N +1 frame point cloud set based on the motion direction of the target object and the N frame point cloud set to obtain a plurality of shape point pairs of the target object, and/or tracking the target object in the N +1 frame point cloud set based on the shape feature of the target object and the N frame point cloud set to obtain a plurality of shape point pairs of the target object comprises:
determining a weight coefficient corresponding to each point in the N frame point cloud set based on the motion direction of the target object, and/or determining a weight coefficient corresponding to each point in the N frame point cloud set based on the shape feature of the target object, wherein the weight coefficient comprises the feature identification degree of each point in the N frame point cloud set;
and tracking the target object in the (N + 1) th frame point cloud set based on the weight coefficient corresponding to each point to obtain a plurality of shape point pairs of the target object.
4. The method according to claim 2, wherein the positional correspondence includes a translation vector and a rotation matrix characterizing a change in position of the target object;
registering the nth frame point cloud set and the (N + 1) th frame point cloud set based on the plurality of shape point pairs of the target object to obtain a position corresponding relation between the nth frame point cloud set and the (N + 1) th frame point cloud set, wherein the registering comprises the following steps of:
registering the N frame point cloud set and the (N + 1) frame point cloud set based on a plurality of shape point pairs of the target object to obtain a translation vector and a rotation matrix of each shape point pair;
and counting the number of shape point pairs respectively corresponding to each translation vector and each rotation matrix, and taking the translation vector and the rotation matrix with the largest number of corresponding shape point pairs as the position corresponding relation.
5. The method according to any one of claims 2 to 4, wherein the complementing the N +1 frame point cloud set based on the position correspondence and the N frame point cloud set further comprises:
determining the estimated position of each point in the N frame point cloud set in the (N + 1) th frame point cloud set based on the position corresponding relation and the N frame point cloud set;
determining the position deviation between the estimated position of the point cloud set of the Nth frame and the actual position of the point cloud set of the (N + 1) th frame in each shape point pair;
if the position deviation of any shape point pair is larger than a preset threshold value, points in any shape point pair are removed from the N frame point cloud set and the (N + 1) frame point cloud set respectively.
6. The method of any of claims 1 to 4, wherein the complementing the N +1 frame point cloud set based on the location correspondence and the N frame point cloud set comprises:
based on the position corresponding relation, projecting the Nth frame point cloud set to the (N + 1) th frame point cloud set to obtain a projection point cloud set of the target object;
and integrating the projection point cloud set and the (N + 1) th frame point cloud set into a complete point cloud set of the target object.
7. The method of claim 6, wherein the integrating the set of projection point clouds and the set of N +1 frame point clouds into a complementary set of point clouds for the target object comprises:
and retaining, on the basis of the (N + 1)th frame point cloud set, the points that have no correspondence between the projection point cloud set and the (N + 1)th frame point cloud set.
8. A point cloud complementing device, comprising:
the point cloud determining unit is used for determining an Nth frame point cloud set and an (N + 1)th frame point cloud set of a target object whose point cloud is to be completed, wherein N is an integer not less than 2;
a tracking and registering unit, configured to track and register the nth frame point cloud set and the (N + 1) th frame point cloud set, and determine a position correspondence between the nth frame point cloud set and the (N + 1) th frame point cloud set;
and the point cloud completion unit is used for completing the (N + 1) th frame point cloud set based on the position corresponding relation and the Nth frame point cloud set.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the point cloud complementing method according to any one of claims 1 to 7 when executing the computer program.
10. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the point cloud complementing method according to any one of claims 1 to 7.
CN202011204932.XA 2020-11-02 2020-11-02 Point cloud completion method and device, electronic equipment and storage medium Pending CN112330702A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011204932.XA CN112330702A (en) 2020-11-02 2020-11-02 Point cloud completion method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112330702A true CN112330702A (en) 2021-02-05


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113609632A (en) * 2021-10-08 2021-11-05 天津云圣智能科技有限责任公司 Method and device for determining power line compensation point and server
CN114127785A (en) * 2021-04-15 2022-03-01 商汤国际私人有限公司 Point cloud completion method, network training method, device, equipment and storage medium
CN114529652A (en) * 2022-04-24 2022-05-24 深圳思谋信息科技有限公司 Point cloud compensation method, device, equipment, storage medium and computer program product
CN115876098A (en) * 2022-12-12 2023-03-31 苏州思卡信息系统有限公司 Vehicle size measuring method of multi-beam laser radar
TWI806481B (en) * 2021-03-12 2023-06-21 大陸商騰訊科技(深圳)有限公司 Method and device for selecting neighboring points in a point cloud, encoding device, decoding device and computer device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102467753A (en) * 2010-11-04 2012-05-23 中国科学院深圳先进技术研究院 Time-varying point cloud reconstruction method and system based on skeleton registration
CN104778688A (en) * 2015-03-27 2015-07-15 华为技术有限公司 Method and device for registering point cloud data
CN109285220A (en) * 2018-08-30 2019-01-29 百度在线网络技术(北京)有限公司 A kind of generation method, device, equipment and the storage medium of three-dimensional scenic map
CN109633688A (en) * 2018-12-14 2019-04-16 北京百度网讯科技有限公司 A kind of laser radar obstacle recognition method and device
EP3716103A2 (en) * 2019-03-29 2020-09-30 Ricoh Company, Ltd. Method and apparatus for determining transformation matrix, and non-transitory computer-readable recording medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination