
CN117475170B - FPP-based high-precision point cloud registration method guided by local-global structure - Google Patents


Info

Publication number
CN117475170B
CN117475170B (application CN202311776827.7A)
Authority
CN
China
Prior art keywords
point cloud
local
point
target
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311776827.7A
Other languages
Chinese (zh)
Other versions
CN117475170A (en)
Inventor
柏连发
杨乐
王兴国
陈霄宇
韩静
张毅
郑东亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202311776827.7A priority Critical patent/CN117475170B/en
Publication of CN117475170A publication Critical patent/CN117475170A/en
Application granted granted Critical
Publication of CN117475170B publication Critical patent/CN117475170B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/11 Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Mathematical Optimization (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Algebra (AREA)
  • Operations Research (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an FPP-based high-precision point cloud registration method guided by a local-to-global structure, comprising the following steps: obtain multi-modal data features of a source point cloud and a target point cloud using fringe projection profilometry; cluster the point clouds using the multi-modal data features to obtain cluster blocks of the source and target point clouds, and match corresponding node pairs by feature similarity to obtain the correspondence between local point cloud blocks of the source and target point clouds; perform feature interaction on the corresponding local point cloud blocks using an overlap-attention network model; and perform feature-correspondence analysis on the cluster point cloud pairs with a feature matching module based on corresponding-cluster voting, selecting a high-confidence pose matrix. The invention provides a point cloud registration framework adapted to FPP data characteristics; through training and inference of a network model constrained by a clustering structural prior, it achieves point cloud registration with higher precision and robustness.

Description

FPP-based high-precision point cloud registration method guided by local-global structure
Technical Field
The invention relates to an FPP-based high-precision point cloud registration method guided by a local-to-global structure, and belongs to the technical field of weld detection.
Background
Developments in fields such as indoor mapping and intelligent manufacturing have created an urgent demand for large-scene point clouds. Fringe projection profilometry, a mature point cloud reconstruction technique, has great application prospects, but to date little work has addressed point cloud registration based on FPP. Existing point cloud registration algorithms cannot mine FPP's multi-modal data features, and their global feature-retrieval mode limits precision and robustness.
Fringe projection profilometry (FPP) is an important point cloud reconstruction method that, thanks to its non-contact, high-precision and high-resolution characteristics, is widely applied in industrial measurement, intelligent manufacturing, reverse engineering and other fields. In recent years, with developments in intelligent manufacturing and indoor mapping, high-precision, high-quality large-scene point clouds have become an urgent demand. However, the measurement field of view of FPP is limited, so point cloud data of a large scene cannot be obtained directly, which greatly restricts FPP's applications. The high accuracy of FPP point clouds in turn imposes high requirements on registration accuracy. Although many current point cloud registration algorithms achieve good results on common datasets, their registration features depend only on point coordinates, which limits registration performance. FPP can provide aligned multi-dimensional data such as target images, point clouds and phase maps, which could further improve registration performance, yet little related work has appeared so far; how to achieve large-scene, high-precision point cloud registration based on FPP is therefore a question worth investigating.
To obtain a complete scene point cloud, the point cloud data captured from each measurement view must be unified into one coordinate system. Existing point cloud registration schemes obtain corresponding point pairs by analysing point cloud features and finally fit the transformation matrices between the views. Classical algorithms such as ICP are widely applied to point cloud registration tasks, but their limited feature-extraction capability makes the registration result prone to falling into a local optimum. In contrast, deep learning has strong feature-analysis capability and can fully mine the high-dimensional features of the source and target point clouds for similarity comparison. PointNetLK fuses PointNet and LK (Lucas & Kanade) into an end-to-end neural network that computes the transformation matrix iteratively; 3DMatch establishes local correspondences between the source and target point clouds by learning descriptors of local spatial patches. The feature-mining capability of deep learning gives these methods a large performance gain over traditional algorithms. However, these models do not focus on the overlapping area of the source and target point clouds, so their performance is easily disturbed by points outside the overlap.
FIRE-Net proposes a feature interaction mechanism that performs local and global feature interaction between the source and target point clouds; Predator establishes an overlap-attention module between the two point clouds for early information exchange, thereby predicting their overlapping region. Overlap-region information can focus the model on the common features of the two point clouds and improve registration performance. However, FIRE-Net and Predator locate the overlapping region and apply similarity constraints within a global feature analysis, so on the one hand large amounts of training data are needed for the model to acquire an implicit perception of the overlap, and on the other hand redundant feature points can interfere with the global similarity constraint and degrade registration accuracy.
Besides the overlapping area of the source and target point clouds, the retrieval interval of the feature points is another important factor affecting algorithm performance.
One line of work divides the point cloud into two different semantic classes according to smoothness and introduces this semantic information into the registration process, narrowing the search range of feature points from the whole cloud to points of the same semantics and thereby improving registration robustness. This avoids, to some extent, the performance degradation caused by the loss of point cloud detail, but the semantic classification is coarse and the number of classes is small, so the performance gain is limited. Later work uses PointNet as a source of semantic labels and then searches for 3D points within the same semantic class. Compared with the earlier work, PointNet further enriches the point cloud semantic information, and the more accurate, finer retrieval interval clearly improves registration performance; however, introducing semantic labels increases dataset complexity, and a semantically annotated dataset of high-precision FPP point clouds is difficult to obtain. Moreover, the above works are all semantically assisted classical algorithms; no related research has addressed semantics in deep-learning-based point cloud registration. How to partition the search interval of feature points and guide the feature constraints of a deep model using unlabeled modal information is the research focus of this work.
For large-scene, high-precision point cloud registration based on an FPP system, current registration algorithms are limited to analysing point cloud features and ignore the image and phase features. Qian et al. first exploited FPP's multi-modal data: they obtain SIFT feature points and their correspondences by analysing the source and target images, map the SIFT feature points onto the point cloud, compute a pose matrix as the coarse registration result, and finally refine the match with ICP. They achieve a high-precision registration effect by exploiting the data characteristics of the FPP system, but relying solely on image SIFT analysis leaves the registration algorithm without robustness, and it can only be applied to a high-speed fringe projection system.
In summary, the keys to point cloud registration are locating the overlapping area of the point clouds to be registered and the retrieval precision of the feature point pairs. Previous methods that locate the overlap from global features need more data samples to perceive the common features of the point clouds, while methods that guide registration with semantic information increase the difficulty of building the dataset.
Therefore, a new FPP-based high-precision point cloud registration method guided by a local-to-global structure is needed to solve the above problems.
Disclosure of Invention
The invention aims to provide an FPP-based high-precision point cloud registration method guided by a local-to-global structure, so as to solve the problems described in the background art.
An FPP-based high-precision point cloud registration method guided by a local-to-global structure comprises the following steps:
1. obtain multi-modal data features of a source point cloud and a target point cloud using fringe projection profilometry;
2. perform point cloud clustering using the multi-modal data features obtained in step one to obtain cluster blocks of the source and target point clouds, and match corresponding node pairs in the two clouds by the feature similarity of the cluster blocks to obtain the correspondence between local point cloud blocks of the source and target point clouds;
3. perform feature interaction on the corresponding local point cloud blocks of the source and target point clouds using an overlap-attention network model;
4. perform feature-correspondence analysis on the cluster point cloud pairs with the feature matching module based on corresponding-cluster voting, and select high-confidence corresponding clusters to compute the pose matrix.
Furthermore, in step one, the multi-modal data features include fringe images, absolute phase information and three-dimensional point cloud data, from which RGB features, phase features and geometric features are obtained respectively.
Furthermore, in step one, the multi-modal data features of the source and target point clouds are obtained by fringe projection profilometry using a three-dimensional reconstruction system comprising a projector and a camera, both facing the target to be measured, in the following steps:
1.1. project sinusoidal fringes onto the target with the projector, and photograph the target with the camera to obtain the corresponding fringe images;
1.2. obtain the absolute phase information of the target through a phase unwrapping algorithm;
1.3. compute the three-dimensional point cloud data of the target by combining the calibration data of the hardware system with the absolute phase information obtained in step 1.2.
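Steps 1.1 and 1.2 follow the standard N-step phase-shifting scheme. As a sketch (not the patent's exact implementation), the wrapped phase can be recovered from N sinusoidal fringe images as follows; the fringe offset, amplitude and the simple synthetic phase map are illustrative assumptions:

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase shifting: images[n] = A + B * cos(phi + 2*pi*n/N).
    Returns the wrapped phase phi in (-pi, pi]."""
    N = len(images)
    n = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(images * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(images * np.cos(2 * np.pi * n / N), axis=0)
    # the phase-shift sums reduce to -(N*B/2)*sin(phi) and (N*B/2)*cos(phi)
    return -np.arctan2(num, den)

# Synthetic 4-step fringe images over a known phase map
H, W = 8, 16
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, W) * np.ones((H, 1))
imgs = np.stack([128 + 100 * np.cos(phi_true + 2 * np.pi * n / 4) for n in range(4)])
phi = wrapped_phase(imgs)
print(np.allclose(phi, phi_true, atol=1e-6))  # True
```

The wrapped phase would then be unwrapped (step 1.2, e.g. with multi-frequency temporal unwrapping) and converted to 3D coordinates using the projector-camera calibration (step 1.3).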
Furthermore, in step two, point cloud clustering is performed using the multi-modal data features obtained in step one; points with similar features are grouped into different cluster blocks by minimizing the following energy:

$$\min_{g}\ \sum_{i \in C} \lVert g_i - f_i \rVert^2 \;+\; \rho \sum_{(i,j) \in E} w_{ij}\,[\,g_i \neq g_j\,]$$

where $[\cdot]$ is the Iverson bracket, $w_{ij}$ is a weight parameter proportional to the length of the edge, $\rho$ determines the coarseness of the point cloud clusters, $i$ ranges over the points of the point cloud $C$, $f_i$ denotes the geometric, RGB and phase features of the $i$-th point (the same energy is minimized separately for the source and the target point cloud), $g_i$ is the cluster assignment of the $i$-th point, and $E$ is the set of adjacent point pairs $(i,j)$.

The geometric part of $f_i$ is built from the local neighborhood: with $X$ the points in the neighborhood, $x_i$ the $i$-th point in the neighborhood, $k$ the number of points in the neighborhood, and $\lambda_1 \geq \lambda_2 \geq \lambda_3$ the three eigenvalues of the covariance matrix, the linearity $L_\lambda$, flatness $P_\lambda$ and divergence $S_\lambda$ of the local neighborhood are

$$L_\lambda = \frac{\lambda_1 - \lambda_2}{\lambda_1},\qquad P_\lambda = \frac{\lambda_2 - \lambda_3}{\lambda_1},\qquad S_\lambda = \frac{\lambda_3}{\lambda_1}.$$
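The neighborhood eigenvalue features can be sketched as follows; the ratios used here are the standard covariance-eigenvalue definitions of linearity, planarity (flatness) and scattering (divergence), which match the quantities named above but whose exact normalization in the patent is an assumption:

```python
import numpy as np

def dimensionality_features(neighborhood):
    """Linearity, flatness and divergence of a local neighborhood from the
    eigenvalues of its covariance matrix (lambda1 >= lambda2 >= lambda3)."""
    pts = neighborhood - neighborhood.mean(axis=0)
    cov = pts.T @ pts / len(pts)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

# Points sampled along a line should be almost purely "linear"
t = np.linspace(0, 1, 50)
line = np.stack([t, 2 * t, -t], axis=1) + 1e-6 * np.random.default_rng(0).normal(size=(50, 3))
L, P, S = dimensionality_features(line)
print(L > 0.99, P < 0.01, S < 0.01)  # True True True
```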
Further, in step two, the feature similarity of the cluster blocks is analysed to match corresponding node pairs in the source and target point clouds, giving the correspondence between local point cloud blocks of the two clouds. Each node (cluster block) is described by the feature triple

$$F(S_i) = \big(n_i,\; c_i,\; d_i\big)$$

where $n_i$ is the number of points in the node, $c_i$ is the color feature with the largest point count in the node, and $d_i$ collects the distances from this node to the other nodes. The feature similarity $s_{ij}$ between the $i$-th node $S_i^X$ of the source point cloud ($i = 1, \dots, N_X$) and the $j$-th node $S_j^Y$ of the target point cloud ($j = 1, \dots, N_Y$) is computed from the point counts, the dominant color features, and the difference $\Delta d_{ij}$ between the distances of $S_i^X$ and $S_j^Y$ to their corresponding nodes, where the corresponding nodes are found by retrieving, in the neighborhoods of $S_i^X$ and $S_j^Y$, nodes with similar features; $\alpha$, $\beta$ and $\gamma$ are adjustable weighting parameters, $\tau_d$ is a distance threshold, and $N_X$, $N_Y$ are the numbers of nodes in the source and target point clouds respectively.
Furthermore, the correspondence between local point cloud blocks obtained in step two serves as a physical prior.
Further, in step three, feature interaction is performed on the corresponding local point cloud blocks of the source and target point clouds using the overlap-attention network model, in the following steps:
3.1. mine the geometric and color features of the point clouds with the FCGF method to obtain the corresponding high-dimensional features $F^S$ and $F^T$;
3.2. exchange information with a multi-head attention mechanism: each feature vector $f_i^S$ in the source point cloud is scored against the feature vectors $f_j^T$ in the target point cloud to obtain an attention score matrix, which aggregates the target features into a message $m_i$; the information output is

$$\tilde f_i^S = \mathrm{MLP}\big(\mathrm{cat}(f_i^S, m_i)\big)$$

where MLP is a fully connected layer for feature dimension reduction, $S$ denotes one color point cloud and $T$ the other; the module is applied in both directions so that $S$ and $T$ interact symmetrically.
Further, a Local loss supervises the degree of matching between the local features of $S$ and $T$: for any corresponding cluster pair of $S$ and $T$, the matched point pairs $K_m$ are obtained from the computed feature similarity matrix and the true matches $\hat K_m$ from the ground-truth aligned point clouds; with $M$ the number of corresponding point cloud block pairs and $N_m$ the number of points in the local point cloud block of $S$, the Local loss averages the matching error over the $M$ corresponding pairs.
Further, a circle loss supervises the description of features within a local point cloud block. The point clouds are aligned with the GT transformation matrix; for a point $p_i$, the positive set $\varepsilon_p$ is defined as the points of the corresponding set in the other cloud whose distance to $p_i$ is less than $r_p$, and the negative set $\varepsilon_n$ as the points whose distance is greater than $r_n$. With $d_i^j$ the feature distance, margins $\Delta_p$ and $\Delta_n$, hyperparameters determining the weights $\beta_p^j$ and $\beta_n^k$, $M$ the number of corresponding point cloud block pairs and $N$ the number of points in the local point cloud of $S$, the circle loss takes the standard form

$$L_{\mathrm{circle}} = \frac{1}{N}\sum_{i=1}^{N}\log\Big[1 + \sum_{j \in \varepsilon_p} e^{\beta_p^j (d_i^j - \Delta_p)} \cdot \sum_{k \in \varepsilon_n} e^{\beta_n^k (\Delta_n - d_i^k)}\Big].$$
Further, an overlap loss supervises the feature similarity of the point pairs in the overlapping region. From the computed feature similarity, the point corresponding to $p_i$ is obtained; the GT overlap label $\bar o_i$ of $p_i$ is then defined as

$$\bar o_i = \begin{cases} 1, & \big\lVert T(p_i) - \mathrm{NN}\big(T(p_i), Q\big) \big\rVert < r_o \\ 0, & \text{otherwise} \end{cases}$$

where $T$ is the point cloud alignment transformation, $\mathrm{NN}(\cdot, Q)$ is the nearest-neighbour retrieval in the point cloud $Q$, and $r_o$ is an overlap radius.
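The GT overlap label construction can be sketched as follows, assuming the label is 1 when the GT-aligned source point has a target nearest neighbour within a radius (the radius value is an illustrative assumption):

```python
import numpy as np

def gt_overlap_labels(src, tgt, T, radius):
    """GT overlap label per source point: 1 if its nearest neighbour in the
    target lies within `radius` after applying the GT alignment T (4x4)."""
    src_h = np.hstack([src, np.ones((len(src), 1))])
    aligned = (T @ src_h.T).T[:, :3]
    d = np.linalg.norm(aligned[:, None, :] - tgt[None, :, :], axis=2).min(axis=1)
    return (d < radius).astype(int)

rng = np.random.default_rng(1)
tgt = rng.uniform(size=(100, 3))
T = np.eye(4); T[:3, 3] = [0.5, 0.0, 0.0]           # pure-translation GT pose
src = tgt[:40] - [0.5, 0.0, 0.0]                    # first 40 points overlap
far = rng.uniform(low=5.0, high=6.0, size=(10, 3))  # 10 points with no counterpart
labels = gt_overlap_labels(np.vstack([src, far]), tgt, T, radius=1e-6)
print(labels.sum())  # 40
```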
Further, in step four, the feature matching module based on corresponding-cluster voting performs feature-correspondence analysis on the cluster point cloud pairs and selects high-confidence corresponding clusters to compute the pose matrix, in the following steps:
4.1. extract features from each corresponding cluster pair $(P_k^S, P_k^T)$ with the model, obtain the transformation matrix $T_k$ and the corresponding point pairs from the features of each pair, align $P_k^S$ and $P_k^T$ with $T_k$, and evaluate the confidence of the cluster pair as the ratio of the number of overlapping points $N_o$ after alignment (points whose nearest neighbour, computed with $\mathrm{NN}$ in the other block, lies within a given radius) to the number of points $N_k$ in the block:

$$c_k = \frac{N_o}{N_k}, \qquad k = 1, \dots, M;$$

4.2. sort the cluster pairs by confidence and keep the preset number of pairs, which fixes the confidence threshold $\tau_c$;
4.3. merge the local point pairs of the cluster pairs whose confidence exceeds $\tau_c$ into a global point-pair set;
4.4. finally, fit the features of the global point pairs with RANSAC to obtain the output pose matrix $T_{\mathrm{out}}$ and complete the point cloud alignment.
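The per-cluster confidence evaluation of step 4.1 can be sketched as the fraction of block points that overlap after applying the estimated per-cluster pose; treating confidence as this overlap ratio is a simplifying assumption:

```python
import numpy as np

def cluster_confidence(src_blk, tgt_blk, T, radius):
    """Confidence of one cluster pair: fraction of source-block points that
    overlap the target block after applying the per-cluster pose T (4x4)."""
    aligned = (T[:3, :3] @ src_blk.T).T + T[:3, 3]
    d = np.linalg.norm(aligned[:, None, :] - tgt_blk[None, :, :], axis=2).min(axis=1)
    return (d < radius).mean()

rng = np.random.default_rng(2)
blk = rng.uniform(size=(50, 3))
T_good = np.eye(4); T_good[:3, 3] = [0.1, 0.2, 0.3]   # correct pose
T_bad = np.eye(4); T_bad[:3, 3] = [3.0, 0.0, 0.0]     # wrong pose
tgt_blk = blk + [0.1, 0.2, 0.3]
good = cluster_confidence(blk, tgt_blk, T_good, 1e-3)
bad = cluster_confidence(blk, tgt_blk, T_bad, 1e-3)
print(good == 1.0, bad == 0.0)  # True True
```

Cluster pairs would then be sorted by this score, and the point pairs of the high-confidence pairs pooled for the final RANSAC fit (steps 4.2 to 4.4).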
Principle of the invention: the correspondence between local point cloud blocks of the source and target point clouds serves as a physical prior. This prior establishes the block-level correspondence between the two point clouds, reducing the distribution range of corresponding point pairs from the whole cloud to each pair of corresponding blocks. The difficulty of the registration task is thus reduced, the feature constraints on corresponding point pairs converge faster, and the model trains more efficiently.
Beneficial effects: the FPP-based high-precision point cloud registration method guided by a local-to-global structure provides a registration framework adapted to FPP data characteristics. Through training and inference of a network model constrained by a clustering structural prior, it achieves registration with higher precision and robustness; the physical prior establishes the block-level correspondence between the source and target point clouds, reducing the distribution range of corresponding point pairs from the whole cloud to each pair of corresponding blocks, which lowers the difficulty of the registration task, speeds up the feature constraints on corresponding point pairs, and improves training efficiency.
Drawings
FIG. 1 is a flow chart of a FPP-based local-to-global structure guided high-precision point cloud registration method;
FIG. 2 is a flow chart of FPP multi-modal data processing;
FIG. 3 is a schematic flow chart of a point cloud consistency cluster analysis;
FIG. 4 is a schematic flow diagram of an overlapping attention model based on cluster correspondence;
FIG. 5 is a schematic flow diagram of feature matching based on corresponding clustered votes;
FIG. 6 is a graph showing the training efficiency improvement of six models according to the present invention.
Description of the embodiments
The present invention is further illustrated by the accompanying drawings and the detailed description below, which should be understood as merely illustrative of the invention and not limiting its scope; after reading the invention, various equivalent modifications by those skilled in the art fall within the scope defined by the appended claims.
The invention provides a new point cloud registration framework adapted to the data characteristics of an FPP system: a structured prior over the clustered features of the overlapping region is obtained in an unsupervised manner; without increasing dataset complexity, corresponding point cloud blocks containing overlap-region information are screened out to narrow the retrieval interval of feature points, and the model is guided in feature interaction and pose-matrix inference. Specifically,
the invention focuses on the mining and utilization of overlapping area information of source point cloud and target point cloud by using a physical mechanism, provides a consistency clustering module for positioning feature clusters of the overlapping area, and carries out consistency analysis on the cluster features to obtain the corresponding relation among the cluster features; the CloaNet is provided for carrying out characteristic constraint on the corresponding clusters of the overlapped areas; and the feature matching module based on the corresponding clustering voting performs feature corresponding analysis on the clustering point cloud pairs, and selects a corresponding clustering calculation pose matrix with high confidence.
The method comprises four parts: FPP multi-modal data analysis, point cloud consistency cluster analysis, an overlap-attention model based on cluster correspondence, and feature matching based on corresponding-cluster voting. The logic flow is shown in fig. 1.
Fringe projection profilometry (FPP) is a high-precision, high-resolution active three-dimensional measurement method that can obtain the three-dimensional profile of a target without damaging it and is widely applied in intelligent manufacturing and other fields; the point cloud reconstruction process is shown in fig. 2.
First, the projector projects sinusoidal fringes onto the target and triggers the camera to capture the corresponding fringe images; the absolute phase information is then obtained through a phase unwrapping algorithm; finally, the three-dimensional point cloud of the target is computed by combining the calibration data of the hardware system with the absolute phase information.
To realize high-precision large-scene point cloud registration, the invention describes the physical geometric information of the measured object with the point cloud data; the phase data directly describes the variation of the measured surface profile; and the RGB image describes the color information of the measured object. Using the correspondence between image pixels and points in the FPP system, the RGB and phase information of each frame is merged into the corresponding point cloud, so that every point carries not only position information but also color and phase information.
FPP has limited reconstruction quality on targets with severe surface-profile variations and is prone to dislocated mixed points. To ensure data stability, the method filters out points with large phase gradients to avoid the influence of dislocated mixed points; in addition, discrete outlier points are removed by statistical filtering to further improve point cloud quality.
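The statistical filtering mentioned above is commonly implemented by thresholding each point's mean distance to its k nearest neighbours; a brute-force sketch follows (k and the deviation ratio are assumed values, not the patent's):

```python
import numpy as np

def statistical_outlier_mask(points, k=10, std_ratio=2.0):
    """Keep points whose mean distance to their k nearest neighbours is within
    std_ratio standard deviations of the global mean (the usual statistical
    filter heuristic; brute force, fine for small clouds)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    knn = np.sort(d, axis=1)[:, 1:k + 1]    # skip the zero self-distance
    mean_d = knn.mean(axis=1)
    return mean_d < mean_d.mean() + std_ratio * mean_d.std()

rng = np.random.default_rng(3)
cloud = rng.normal(scale=0.01, size=(200, 3))    # dense surface patch
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])    # one discrete (stray) point
mask = statistical_outlier_mask(cloud)
print(mask.sum(), mask[-1])  # 200 False
```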
First, the multi-modal information features are analysed to cluster the point cloud; then a graph matching algorithm establishes correspondences between the cluster blocks, yielding the local correspondence between source and target point cloud features.
The physical geometric characteristics of each point in the point cloud are jointly described by the points in its local neighborhood, and the invention computes the corresponding geometric attributes of each point from its local features [30,31,32]. Specifically, for each point in the point cloud, its linearity, flatness and divergence can be described by the eigenvalues of the covariance matrix of its local neighborhood. With $X$ the points in the neighborhood, $k$ the number of points in the neighborhood, and $\lambda_1 \geq \lambda_2 \geq \lambda_3$ the three eigenvalues of the covariance matrix, the linearity $L_\lambda$, flatness $P_\lambda$ and divergence $S_\lambda$ of the local neighborhood are

$$L_\lambda = \frac{\lambda_1 - \lambda_2}{\lambda_1},\qquad P_\lambda = \frac{\lambda_2 - \lambda_3}{\lambda_1},\qquad S_\lambda = \frac{\lambda_3}{\lambda_1}.$$
analysis of multi-mode point cloud characteristics by using baud energy modelAnd clustering the point clouds. Feature f of each point i e C in the point cloud i ∈R 3 Including geometric features, RGB features, and phase features, points with similar features are clustered separately into different regions by minimizing the following equation. Wherein [.]For Iverson breeset Eiffen bracket,>is a weight parameter proportional to the length of the edge, < ->And determining the roughness degree of the point cloud clusters.
To obtain the local feature correspondence between the source and target point clouds, the invention performs feature analysis and matching of the cluster blocks with a graph matching algorithm [34]. The clustering represents the point cloud as a graph, where each cluster block $S_i$ is a node of the graph and the distances between blocks define the edges.
For any cluster block in the source point cloud, its node feature can be represented as

$$F(S_i) = \big(n_i,\; c_i,\; d_i\big)$$

where $n_i$ is the number of points in the node, $c_i$ is the color feature with the largest point count in the node, $d_i$ collects the distances from this node to the other nodes, and $N_X$ is the number of nodes in the source point cloud.

With the description features of the cluster blocks defined, for any two nodes $S_i^X$ of the source point cloud and $S_j^Y$ of the target point cloud, the feature similarity $s_{ij}$ can be described by node size, color feature and neighborhood node distribution: the point counts $n_i, n_j$ and dominant colors $c_i, c_j$ are compared directly, nodes with similar features are retrieved in the neighborhoods of $S_i^X$ and $S_j^Y$ to serve as corresponding nodes, and the difference $\Delta d_{ij}$ between the distances to these corresponding nodes enters the similarity; $\alpha$, $\beta$ and $\gamma$ are adjustable weighting parameters and $N_X$, $N_Y$ are the numbers of nodes in the source and target point clouds respectively.
The feature similarity of the cluster blocks is analysed to match corresponding node pairs in the source and target point clouds, giving the correspondence between local point cloud blocks of the two clouds. The invention uses this correspondence as the physical prior guiding CloaNet training and inference.
An overlap-attention model based on cluster correspondence is proposed, as shown in fig. 4. Unlike previous methods based on global point cloud feature analysis, CloaNet performs feature interaction and analysis on the local point cloud blocks of the source and target point clouds, guided by the physical prior of local feature correspondence.
Consider two color point clouds S and T. First, the invention uses Fully Convolutional Geometric Features (FCGF) [17] to mine the geometric and color characteristics of the point clouds, obtaining the corresponding high-dimensional features F_S and F_T. However, independent feature extraction cannot locate the overlapping region, so the invention introduces a cross-attention module for feature interaction, using the multi-head attention mechanism of the Transformer [35] for information interchange [18]. Each feature vector in the source point cloud is scored against the feature vectors in the target point cloud to obtain an attention score matrix, and the attention-weighted sum of the target features gives the information output. The interacted feature is then obtained by concatenation, where the MLP is a fully connected layer used for feature dimension reduction.
To achieve bidirectional feature interaction between S and T, the cross-attention module is applied in both directions.
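The attention step above can be sketched as a single-head scaled dot-product cross-attention in NumPy; this is a simplified stand-in, as the actual module uses learned multi-head Q/K/V projections and an MLP after concatenation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(f_src, f_tgt):
    """Each source feature vector queries the target features; the
    attention-weighted sum of target features is returned.

    f_src: (n, d) source features, f_tgt: (m, d) target features.
    """
    d = f_src.shape[1]
    scores = f_src @ f_tgt.T / np.sqrt(d)  # (n, m) attention score matrix
    weights = softmax(scores, axis=1)      # each row sums to 1
    return weights @ f_tgt                 # (n, d) interacted features
```

Applying the same function with the arguments swapped gives the interaction in the other direction.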
To improve model training efficiency and registration accuracy, a local correspondence loss function is designed according to the physical prior to constrain model training: the search range of corresponding points is reduced from the global point cloud to each pair of corresponding local point cloud blocks, and global constraint of the point cloud is achieved by constraining the features of each corresponding block pair.
First, the degree of matching between the local features of S and T is supervised by the Local loss. Correctly matched points must fall within corresponding block pairs: for any corresponding cluster pair of S and T, the matched point pairs are obtained from the computed feature similarity matrix, and the true number of matching points is obtained from the ground-truth aligned point clouds. The Local loss compares the two, averaged over the N_b corresponding block pairs, where n_i is the number of points in the local point cloud block of S.
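The bookkeeping behind the Local loss might be sketched as below; the exact published form is given by the patent's formula image, so this ratio-based reading (fraction of ground-truth matches recovered per block pair, averaged) is an assumption:

```python
import numpy as np

def local_loss(matched_counts, true_counts):
    """Per-block comparison of feature-matched point pairs against
    ground-truth matches, averaged over all corresponding block pairs.

    matched_counts[i]: matches found in block pair i from the similarity matrix.
    true_counts[i]: matches in block pair i under the ground-truth alignment.
    """
    matched = np.asarray(matched_counts, dtype=float)
    true = np.asarray(true_counts, dtype=float)
    ratios = matched / np.maximum(true, 1.0)  # recovered fraction per block pair
    return 1.0 - ratios.mean()                # 0 when every true match is found
```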
The invention uses the circle loss [36] to supervise the feature descriptions within the local point cloud blocks. First, the point clouds S and T are aligned with the GT transformation matrix; for a point p in S, its positive set ε_p is defined as the points of T whose distance to p is smaller than a radius r_p, and its negative set ε_n as the points of T whose distance to p is greater than r_n. The circle loss is then computed over the N_b corresponding block pairs, where n_i is the number of points in the local point cloud block of S, d denotes the feature distance, Δ_p and Δ_n are the positive and negative margins, and the hyperparameter γ determines the weights β_p and β_n.
The loss on T is computed symmetrically in the same way.
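A circle-loss-style term for one anchor point can be sketched as follows. This is a simplified, hedged reading of the standard circle loss (as popularized for point cloud descriptors), not the patent's exact formula, whose image is omitted here; the margin values are assumptions:

```python
import numpy as np

def circle_loss(d_pos, d_neg, margin_pos=0.1, margin_neg=1.4, gamma=10.0):
    """Circle-loss-style supervision for one anchor.

    d_pos: feature distances from the anchor to its positive set,
    d_neg: feature distances to its negative set. The weights beta grow with
    how far each pair currently violates its margin.
    """
    d_pos = np.asarray(d_pos, dtype=float)
    d_neg = np.asarray(d_neg, dtype=float)
    beta_p = gamma * np.maximum(d_pos - margin_pos, 0.0)  # push positives under the margin
    beta_n = gamma * np.maximum(margin_neg - d_neg, 0.0)  # push negatives past the margin
    pos_term = np.sum(np.exp(beta_p * (d_pos - margin_pos)))
    neg_term = np.sum(np.exp(beta_n * (margin_neg - d_neg)))
    return np.log1p(pos_term * neg_term)
```

The loss is larger when positives have large feature distances or negatives have small ones, which is the behavior the supervision requires.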
In addition, the invention uses the overlap loss from Predator to supervise the feature similarity of point pairs in the overlapping region. First, the feature similarity is computed to obtain the set of corresponding points; the GT label o_i is then defined as 1 if, after the point cloud alignment transformation, the distance of the transformed point to its nearest neighbor in the other cloud falls below a threshold, and 0 otherwise.
the invention aims at the characteristic analysis of the corresponding local point cloud block, so that the matching accuracy of the point cloud block directly influences the accuracy of the corresponding point pair, thereby finally influencing the accuracy of the conversion matrix. In addition, the number of corresponding point cloud blocks is also of great importance, and too few corresponding point cloud block pairs can lead to the point pairs being limited to local areas of the point cloud, so that a high-precision conversion matrix cannot be obtained. In order to improve the reasoning precision of the model, the point cloud blocks with high confidence are selected through voting on the premise of guaranteeing the number of the point cloud blocks. And (5) gathering the corresponding characteristic points of the point cloud blocks to obtain global corresponding point pairs, and finally fitting a gesture matrix, as shown in fig. 5.
First, the model extracts the features of S and T; then RANSAC analyzes the features of each group of point cloud block pairs to obtain a transformation matrix T_i and the corresponding point pairs. The confidence of a block pair is evaluated from the overlap of the two blocks after alignment with T_i: c = n_o / n_1, where n_o is the number of overlapping points and n_1 is the number of points in the block; the confidence in the opposite direction is computed in the same way.
The point cloud block pairs are sorted in descending order of confidence; blocks beyond the preset number are screened by a confidence threshold, and the local point pairs of the retained key block pairs are aggregated into global point pairs. Finally, RANSAC fits the output matrix from the features of the global point pairs to complete the point cloud alignment. RANSAC is described in: Fischler M A, Bolles R C. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography [J]. Communications of the ACM, 1981, 24(6): 381-395.
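The confidence evaluation and the preset-number-plus-threshold screening can be sketched as below; the function names and the particular selection policy (always keep the top `keep` pairs, admit further pairs only above `threshold`) are assumptions consistent with the description:

```python
import numpy as np

def overlap_confidence(src_pts, tgt_pts, R, t, radius=0.01):
    """Confidence of one cluster pair: fraction of source points that land
    within `radius` of the target block after the per-pair RANSAC transform."""
    aligned = src_pts @ R.T + t
    d2 = ((aligned[:, None, :] - tgt_pts[None, :, :]) ** 2).sum(-1)
    overlap = int((d2.min(axis=1) <= radius ** 2).sum())  # n_o overlapping points
    return overlap / len(src_pts)                         # c = n_o / n_1

def select_block_pairs(confidences, keep=3, threshold=0.8):
    """Keep the `keep` most confident block pairs unconditionally, and any
    further pairs only if their confidence passes `threshold`."""
    order = np.argsort(confidences)[::-1]  # indices in descending confidence
    return [int(i) for k, i in enumerate(order)
            if k < keep or confidences[i] >= threshold]
```

The selected pairs' local point correspondences would then be concatenated and passed to a global RANSAC fit (e.g. as provided by Open3D or a custom implementation).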
The invention describes the production of an FPP-based point cloud dataset and the evaluation of registration algorithm performance. The performance evaluation is divided into three parts: (1) testing the improvement in point cloud registration accuracy and robustness brought by the feature matching module based on corresponding cluster voting; (2) comparing the accuracy and robustness of the algorithm of the invention with other point cloud registration algorithms; (3) testing the improvement of the framework of the invention on deep learning models.
The multi-view point cloud registration dataset of the FPP system is acquired through cooperation of the FPP system and an industrial robot. First, the FPP system is fixed at the end of the industrial robot, and hand-eye calibration is performed to obtain the transformation between the robot end coordinate system and the camera coordinate system; the robot then moves the FPP system to capture point cloud data from different viewing angles together with the robot pose information. The coordinates of the point clouds from different viewing angles in the robot base coordinate system are obtained from the robot pose information, the hand-eye calibration, and the point cloud coordinates; the transformation matrices between the point clouds are then calculated, and a more accurate registration matrix is optimized with the ICP algorithm. The dataset covers daily items of an indoor scene (tables, computers, cups, backpacks, etc.), and the figure shows some examples. The ground-truth point cloud transformation matrices are the results after ICP optimization. The experimental setup includes a projector (DLP 4500), an industrial camera (acA1920-40gm), a lens with a focal length of 8 mm, and an industrial robot (ERER-MA 02010-A00-C).
CloaNet is implemented based on PyTorch and the Minkowski Engine and evaluated on an NVIDIA TITAN RTX GPU, with a batch size of 1 and a learning rate decaying exponentially from 0.05 over the epochs.
In the registration accuracy test, the invention uses the Relative Rotation Error (RRE), the error between the predicted rotation matrix and the true rotation matrix, and the Relative Translation Error (RTE), the Euclidean distance between the predicted translation matrix and the true translation matrix [40]. The measure of point cloud registration robustness is the Registration Recall (RR), the proportion of point cloud pairs whose transformation error is below given thresholds (RRE < 0.8, RTE < 0.03). In addition, the Feature Matching Recall (FMR), the proportion of points in the overlapping region whose feature distance after GT transformation is below 1 cm, is used as a measure.
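The three metrics just defined can be computed as follows; this is a standard rendering (RRE as the geodesic rotation angle, RTE as a Euclidean norm) rather than code from the invention:

```python
import numpy as np

def relative_rotation_error(R_pred, R_gt):
    """RRE in degrees: geodesic angle between predicted and GT rotations."""
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def relative_translation_error(t_pred, t_gt):
    """RTE: Euclidean distance between predicted and GT translations."""
    return float(np.linalg.norm(np.asarray(t_pred, dtype=float) - np.asarray(t_gt, dtype=float)))

def registration_recall(rres, rtes, rre_th=0.8, rte_th=0.03):
    """RR: fraction of pairs whose errors fall under both thresholds."""
    ok = [(a < rre_th) and (b < rte_th) for a, b in zip(rres, rtes)]
    return sum(ok) / len(ok)
```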
In the inference of the transformation matrix, the confidence and the number of corresponding point cloud blocks influence the matching accuracy and spatial distribution of the point pairs, and ultimately the fitting accuracy of the transformation matrix. To evaluate the effect of the feature matching module based on corresponding cluster voting, the invention analyzes the influence of the number of blocks and of the confidence threshold on the transformation matrix error and the registration recall, as shown in table 1. The results show that, on the premise of at least three block pairs (Num >= 3), filtering the blocks with a confidence threshold of 0.8 yields a higher-precision transformation matrix. When the preset number of blocks is too small, screening with a high confidence threshold reduces the number of blocks, confining the point pairs to small regions of the point cloud so that a high-precision transformation matrix cannot be fitted; a low confidence threshold, however, fails to screen out mismatched block pairs, introducing mismatched point pairs and reducing the fitting accuracy of the transformation matrix. An excessive preset number of blocks likewise directly introduces low-confidence blocks and reduces the accuracy of the transformation matrix. In summary, with a reasonable preset block number and confidence threshold, the feature matching module based on corresponding cluster voting can screen out a sufficient number of high-confidence key blocks, which helps to improve the inference accuracy and recall of the model.
TABLE 1
To evaluate the algorithm performance of the invention, it is compared with the classical point cloud registration algorithms Go-ICP, SAC-IA and FGR, and with the recent deep-learning-based point cloud registration algorithms FCGF, D3Feat and Predator. The classical algorithms come from the released versions of Open3D and PCL, and the deep-learning-based algorithms are all trained on the same dataset. The invention compares matching accuracy and robustness through RRE, RTE and RR, and analyzes the feature extraction capability of the deep-learning-based algorithms through FMR, as shown in table 2.
On this dataset, the registration accuracy and robustness of the deep-learning-based methods are clearly superior to those of the classical algorithms. Compared with the classical point cloud registration algorithms, the RRE and RTE of the proposed algorithm decrease significantly, and the registration recall improves markedly. Compared with the global feature constraint schemes of FCGF, D3Feat and Predator, the physical-prior constraint on point features within block pairs reduces the influence of global interference point features and improves the FMR. Moreover, while FCGF, D3Feat and Predator search for corresponding points among global features, the invention narrows the feature search range through the physical prior, which favors retrieving similar feature point pairs and so improves matching accuracy and robustness. The relative rotation error and relative translation error of the invention are lower, and the matching recall is higher, than those of the other deep-learning-based algorithms. In conclusion, the local feature processing scheme of CloaNet proposed by the invention is more conducive to the constraint and retrieval of similar feature points and can significantly improve the accuracy and robustness of point cloud registration.
TABLE 2
To further evaluate the method of the invention, ablation experiments were performed in which the feature extraction and feature interaction modules in the framework were replaced with the backbones of FCGF, D3Feat and Predator, respectively, as shown in table 3. Consistent with the previous group of experiments, the strategy of physically constraining the point features within block pairs improves the FMR of these algorithms; the narrowed retrieval range of similar feature points clearly reduces the RRE and RTE, and the registration recall also improves significantly. In summary, the framework of the invention can optimize the feature extraction capability of other point cloud registration algorithms and improve the accuracy and robustness of point cloud registration.
TABLE 3
In addition, the framework of the invention also improves the training efficiency of other models, as shown in fig. 6. During training, the loss of each improved model converges faster than with the original method, showing that the feature constraint on local point cloud blocks based on the physical prior is more efficient than global feature constraint. The physical prior establishes the correspondence between blocks of the source and target point clouds, reducing the distribution range of corresponding point pairs from the whole cloud to each group of block pairs. This lowers the difficulty of the registration task, so the feature constraint on corresponding point pairs converges faster and the model trains more efficiently.
In summary, the invention provides a point cloud registration framework based on FPP data characteristics that, for the first time, uses a clustering-structured prior to constrain network model training and inference, achieving point cloud registration with higher accuracy and robustness. The framework also adapts to other deep learning models and improves their training efficiency and registration effect. It provides a new way to acquire large-scene FPP point clouds, which helps promote new FPP applications in fields such as indoor mapping and intelligent manufacturing.

Claims (10)

1. The FPP-based high-precision point cloud registration method guided by a local-global structure is characterized by comprising the following steps of:
1. obtaining multi-mode data characteristics of a source point cloud and a target point cloud by using stripe projection profilometry;
2. performing point cloud clustering by utilizing the multi-mode data characteristics obtained in the step one to obtain point cloud clustering cut blocks of source point clouds and target point clouds, and matching out corresponding node pairs in the source point clouds and the target point clouds by the characteristic similarity of the point cloud clustering cut blocks to obtain the corresponding relation between the source point clouds and local point cloud blocks in the target point clouds;
3. performing feature interaction on local point cloud blocks with corresponding relations in the source point cloud and the target point cloud by using an overlapped attention network model;
4. and carrying out feature correspondence analysis on the cluster point cloud pairs based on the feature matching module of the corresponding cluster voting, and selecting a corresponding cluster calculation pose matrix with high confidence.
2. The FPP-based local-to-global structure guided high-precision point cloud registration method of claim 1, wherein: in the first step, the multi-mode data features comprise stripe images, absolute phase information and three-dimensional point cloud data, and the stripe images, the absolute phase information and the three-dimensional point cloud data are used for respectively obtaining RGB features, phase features and geometric features.
3. The FPP-based local-to-global structure guided high-precision point cloud registration method of claim 1, wherein:
in the first step, the multi-mode data features of the source point cloud and the target point cloud are obtained by stripe projection profilometry using a three-dimensional reconstruction system comprising a projector and a camera that both face the target to be measured, and the method comprises the following steps:
1.1, projecting sinusoidal stripes to a target to be detected by using a projector, and photographing the target to be detected by using a camera to obtain a corresponding stripe image;
1.2, obtaining absolute phase information of the target to be detected through a phase unwrapping algorithm;
and 1.3, calculating the three-dimensional point cloud data of the target to be measured by combining the calibration data of the hardware system and the absolute phase information obtained in the step 1.2.
4. The FPP-based local-to-global structure guided high-precision point cloud registration method of claim 1, wherein: in the second step, point cloud clustering is carried out by utilizing the multi-mode data features obtained in the first step, and points with similar features are clustered to different point cloud clustering and cutting blocks respectively through minimizing the following formula:
wherein [·] is the Iverson bracket, w is a weight parameter proportional to the length of the edge, μ determines the coarseness of the point cloud clusters, x_i represents each point in the point cloud, f_i represents the geometric, RGB and phase features of the i-th point, f_j represents those of the j-th point, E represents the set of adjacent clusters in X, and (i, j) represents two points in adjacent clusters, respectively. In the feature computation, p_j is a point in the neighborhood, p_i is the i-th point in the neighborhood, and k is the number of points in the neighborhood; λ1, λ2 and λ3 are the three eigenvalues of the covariance matrix; L_λ, P_λ and S_λ are the linearity, flatness and divergence of the local neighborhood, respectively.
5. The FPP-based local-to-global structure guided high-precision point cloud registration method of claim 1, wherein: in the second step, the corresponding relation between the source point cloud and the local point cloud block in the target point cloud is obtained by analyzing the characteristic similarity matching of the point cloud clustering cut blocks and matching the corresponding node pairs in the source point cloud and the target point cloud, and the method is completed by using the following formula:
in the method, in the process of the invention,representing the feature similarity of the ith node in the source point cloud and the jth node in the target point cloud,/for>Representing the i-th node in the source point cloud, < >>Representing the j-th node in the target point cloud, < >>,/>,/>For parameter adjustment, add>The number of nodes in the source point cloud and the target point cloud respectively,/->Representing the distance threshold value(s),representation->And->Distance difference from the corresponding node, wherein +.>And->The corresponding node is AND->And->Neighborhood retrieval feature of->Similar node->Representing the characteristics of a node, wherein ∈>The number of points in the node; />The color characteristic with the largest point number in the node is obtained;is the distance of the node from other nodes.
6. The FPP-based local-to-global structure guided high-precision point cloud registration method of claim 1, wherein: and thirdly, performing characteristic interaction on the local point cloud blocks with corresponding relations in the source point cloud and the target point cloud by using an overlapped attention network model, wherein the method comprises the following steps of:
3.1, mining the geometric and color characteristics of the point clouds with the FCGF method to obtain the corresponding high-dimensional features F_S and F_T;
3.2, performing information interchange with a multi-head attention mechanism: each feature vector in the source point cloud is scored against the feature vectors in the target point cloud to obtain an attention score matrix, and the attention-weighted sum gives the information output, wherein the MLP is a fully connected layer for feature dimension reduction, and S and T represent the two color point clouds.
7. The FPP-based local-to-global structure guided high-precision point cloud registration method of claim 6, wherein: the degree of matching between the local features of S and T is supervised by the Local loss, which is expressed by the following formula:
wherein N_b is the number of corresponding point cloud block pairs, n_i is the number of points in the local point cloud block of S, and for any corresponding cluster pair of S and T the matched point pairs are obtained according to the computed feature similarity matrix, the true number of matching points being obtained from the ground-truth aligned point clouds.
8. The FPP-based local-to-global structure guided high-precision point cloud registration method of claim 6, wherein: the feature descriptions within the local point cloud blocks are supervised using the circle loss, which is calculated as follows,
wherein N_b is the number of corresponding point cloud block pairs, n_i is the number of points in the local point cloud block of S, d represents the feature distance, Δ_p and Δ_n are the positive and negative margins with Δ_p = 0.1 and Δ_n = 1.4, and the hyperparameter γ determines the weights β_p and β_n; the GT transformation matrix is used to align the point cloud S and the point cloud T, and for a point p the positive set ε_p is defined as the points whose distance to p is smaller than r_p and the negative set ε_n as the points whose distance to p is greater than r_n.
9. The FPP-based local-to-global structure guided high-precision point cloud registration method of claim 6, wherein: feature similarity of point pairs in the overlapping region is supervised using an overlap loss, which is expressed by:
in the method, the feature similarity is calculated to obtain the set of corresponding points, and the GT label o_i is then defined as follows:
wherein o_i is 1 if, after the point cloud alignment transformation, the distance of the transformed point to its nearest neighbor in the other point cloud is below a threshold, and 0 otherwise, the alignment transformation and the point cloud nearest-neighbor calculation being as defined above.
10. The FPP-based local-to-global structure guided high-precision point cloud registration method of claim 1, wherein: in the fourth step, feature correspondence analysis is carried out on the cluster point cloud pairs based on the feature matching module of the corresponding cluster voting, and a corresponding cluster calculation pose matrix with high confidence is selected, and the method comprises the following steps:
4.1, extracting the features of S and T with the model, analyzing the features of each group of point cloud block pairs with RANSAC to obtain a transformation matrix T_i and the corresponding point pairs, and evaluating the confidence c of the point cloud block pair from the overlap of the two blocks after alignment with the transformation matrix T_i:
wherein n_o is the number of overlapping points, obtained through the point cloud alignment transformation and the point cloud nearest-neighbor calculation, n_1 is the number of points in the block, and c = n_o / n_1; the confidence in the opposite direction is calculated in the same way;
4.2, sorting the point cloud block pairs in descending order of confidence, and screening the block pairs beyond the preset number with a confidence threshold;
4.3, aggregating the local point pairs of the high-confidence point cloud block pairs to obtain global point pairs;
4.4, finally, fitting the output pose matrix by analyzing the features of the global point pairs with RANSAC to complete point cloud alignment.
CN202311776827.7A 2023-12-22 2023-12-22 FPP-based high-precision point cloud registration method guided by local-global structure Active CN117475170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311776827.7A CN117475170B (en) 2023-12-22 2023-12-22 FPP-based high-precision point cloud registration method guided by local-global structure

Publications (2)

Publication Number Publication Date
CN117475170A CN117475170A (en) 2024-01-30
CN117475170B true CN117475170B (en) 2024-03-22

Family

ID=89633238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311776827.7A Active CN117475170B (en) 2023-12-22 2023-12-22 FPP-based high-precision point cloud registration method guided by local-global structure

Country Status (1)

Country Link
CN (1) CN117475170B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117726673B (en) * 2024-02-07 2024-05-24 法奥意威(苏州)机器人系统有限公司 Weld joint position obtaining method and device and electronic equipment
CN117830676B (en) * 2024-03-06 2024-06-04 国网湖北省电力有限公司 Unmanned aerial vehicle-based power transmission line construction risk identification method and system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976353A (en) * 2016-04-14 2016-09-28 南京理工大学 Spatial non-cooperative target pose estimation method based on model and point cloud global matching
CN113379818A (en) * 2021-05-24 2021-09-10 四川大学 Phase analysis method based on multi-scale attention mechanism network
CN113888748A (en) * 2021-09-27 2022-01-04 北京经纬恒润科技股份有限公司 Point cloud data processing method and device
CN114543787A (en) * 2022-04-21 2022-05-27 南京理工大学 Millimeter-scale indoor map positioning method based on fringe projection profilometry
CN115578408A (en) * 2022-07-28 2023-01-06 四川大学 Point cloud registration blade profile optical detection method, system, equipment and terminal
CN115816471A (en) * 2023-02-23 2023-03-21 无锡维度机器视觉产业技术研究院有限公司 Disordered grabbing method and equipment for multi-view 3D vision-guided robot and medium
CN115841517A (en) * 2022-10-25 2023-03-24 北京计算机技术及应用研究所 Structural light calibration method and device based on DIC double-circle cross ratio
CN116071570A (en) * 2023-02-16 2023-05-05 河海大学 3D target detection method under indoor scene
CN116433841A (en) * 2023-04-12 2023-07-14 南京理工大学 Real-time model reconstruction method based on global optimization
CN116468764A (en) * 2023-06-20 2023-07-21 南京理工大学 Multi-view industrial point cloud high-precision registration system based on super-point space guidance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"面向点云配准的局部-全局动态图更新框架" (A local-global dynamic graph update framework for point cloud registration); Shi Jiatong et al.; Journal of China University of Metrology; 2023-06; Vol. 34, No. 2: pp. 292-302 *

Similar Documents

Publication Publication Date Title
CN117475170B (en) FPP-based high-precision point cloud registration method guided by local-global structure
Wang et al. Vision-assisted BIM reconstruction from 3D LiDAR point clouds for MEP scenes
Cui et al. 3D semantic map construction using improved ORB-SLAM2 for mobile robot in edge computing environment
CN108171133B (en) Dynamic gesture recognition method based on characteristic covariance matrix
SG192768A1 (en) System for detection of non-uniformities in web-based materials
Xu et al. GraspCNN: Real-time grasp detection using a new oriented diameter circle representation
CN114881955A (en) Slice-based annular point cloud defect extraction method and device and equipment storage medium
Chen et al. A local tangent plane distance-based approach to 3D point cloud segmentation via clustering
CN114723764A (en) Parameterized edge curve extraction method for point cloud object
CN109741358A (en) Superpixel segmentation method based on the study of adaptive hypergraph
Hou et al. Multi-modal feature fusion for 3D object detection in the production workshop
Huang et al. Overview of LiDAR point cloud target detection methods based on deep learning
Liang et al. DIG-SLAM: an accurate RGB-D SLAM based on instance segmentation and geometric clustering for dynamic indoor scenes
Xin et al. Accurate and complete line segment extraction for large-scale point clouds
Li et al. Rethinking scene representation: A saliency-driven hierarchical multi-scale resampling for RGB-D scene point cloud in robotic applications
Xie et al. A method of small face detection based on CNN
Wu et al. A Systematic Point Cloud Edge Detection Framework for Automatic Aircraft Skin Milling
Jain et al. 3D object recognition: Representation and matching
Wang et al. Energy-based automatic recognition of multiple spheres in three-dimensional point cloud
CN116952154A (en) Method and system for measuring depth of mouth of detonating tube assembly based on machine vision
CN115719363B (en) Environment sensing method and system capable of performing two-dimensional dynamic detection and three-dimensional reconstruction
Zhao et al. A review of visual SLAM for dynamic objects
Long et al. A triple-stage robust ellipse fitting algorithm based on outlier removal
CN114964206A (en) Monocular vision odometer target pose detection method
Qi et al. Research progress of part point cloud detection based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant