CN112801945A - Depth Gaussian mixture model skull registration method based on dual attention mechanism feature extraction - Google Patents
- Publication number: CN112801945A (application CN202110032732.9A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/344 — Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods involving models
- G06N3/02 — Computing arrangements based on biological models; neural networks
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/0014 — Biomedical image inspection using an image reference approach
- G06T2207/30008 — Indexing scheme for image analysis or enhancement; biomedical image processing; bone
Abstract
The invention discloses a deep Gaussian mixture model skull registration method based on dual attention mechanism feature extraction, comprising the following steps: step 1, acquiring a three-dimensional point cloud model with a three-dimensional scanner; step 2, processing the three-dimensional point cloud model into a skull model containing only the vertex information of 1700 points; step 3, inputting the point cloud model into a convolutional neural network to extract features; step 4, computing a correspondence matrix between the features and the Gaussian mixture model parameters to obtain matching parameters; step 5, recovering the optimal transformation from the matching parameters. The invention addresses the failure of existing local registration methods to match large transformations when no good initialization is available, addresses the low speed and efficiency of existing global registration methods, and effectively establishes the data association between points and models to achieve efficient point cloud registration.
Description
Technical Field
The invention belongs to the technical field of three-dimensional point cloud model registration, and relates to a deep Gaussian mixture model skull registration method based on dual attention mechanism feature extraction.
Background
With the rapid development of three-dimensional acquisition technology, acquired three-dimensional point cloud data reproduces the shape of real objects from a geometric standpoint and is now widely used in practical applications such as reverse engineering, computer vision, and autonomous driving. Three-dimensional point cloud registration is a key step for subsequent restoration; its goal is to transform point clouds in different coordinate systems into a common coordinate system by estimating an optimal transformation matrix.
Because acquired point clouds are unordered and irregularly structured, existing methods convert them into regular voxel grids for easier processing, but this can lose important geometric information. Deep learning has attracted broad attention in recent years; processing the point cloud directly with deep networks preserves the key information of the original points.
Attention mechanisms are commonly used in two-dimensional image segmentation, classification, and other applications. In two-dimensional image segmentation, introducing an attention mechanism captures visual feature associations across dimensions; attention focuses on the salient information in the input that is relevant to the current output, thereby improving output quality. Although widely applied to two-dimensional images, attention has seen little research in the feature extraction stage preceding three-dimensional point cloud registration.
Deep Gaussian mixture model registration formulates the point cloud registration problem as minimizing the KL divergence between the probability distributions of two Gaussian mixture models. The main idea is to extract a correspondence matrix between the feature points and the Gaussian mixture model parameters, where each matrix element is the probability that a point belongs to a mixture component (the higher the probability, the stronger the association between the point and that component); the matching parameters are obtained from this matrix, and the optimal transformation is recovered from the matching parameters.
Disclosure of Invention
To address the shortcomings of the prior art, the invention aims to provide a deep Gaussian mixture model skull registration method based on dual attention mechanism feature extraction that learns finer features, reduces the amount of computation, improves matching accuracy, increases the running speed of the algorithm, and makes skull feature identification faster, more accurate, and more effective.
To achieve this purpose, the invention adopts the following technical scheme:
The deep Gaussian mixture model skull registration method based on dual attention mechanism feature extraction is characterized by comprising the following steps:
Step 1, acquiring a three-dimensional point cloud model with a three-dimensional scanner;
Step 2, processing the three-dimensional point cloud model into a skull model containing only the vertex information of 1700 points;
Step 3, inputting the point cloud model into the convolutional neural network PointNet; a residual block combined with a dual attention mechanism assigns different weights to the input point cloud in the encoding and decoding stages to extract key features while reducing model storage and computation overhead;
the residual network is formed by connecting several residual blocks in series, with the input information skip-connected directly to the output; the attention mechanism is decomposed along the channel and spatial directions, and the two parts are combined into a dual attention mechanism;
the channel attention mechanism fuses partial feature information using average pooling and max pooling, producing an average-pooled feature $F_{avg}^{c}$ and a max-pooled feature $F_{max}^{c}$;
the two features are propagated through a multi-layer perceptron with a single hidden layer to generate the channel feature map $M_c \in \mathbb{R}^{C \times 1 \times 1}$, and the fused feature is obtained by element-wise summation, as in equation (1):
$M_c(F) = \sigma\left(W_1\left(W_0\left(F_{avg}^{c}\right)\right) + W_1\left(W_0\left(F_{max}^{c}\right)\right)\right)$ (1)
in equation (1), $\sigma$ is the sigmoid activation function, $W_0$ and $W_1$ are the multi-layer perceptron weights, $W_0 \in \mathbb{R}^{C/r \times C}$, $W_1 \in \mathbb{R}^{C \times C/r}$, and $r$ is the reduction ratio;
the spatial attention mechanism applies average pooling and max pooling along the channel axis and fuses the channel maps output by the previous layer, producing an average-pooled feature $F_{avg}^{s}$ and a max-pooled feature $F_{max}^{s}$;
the two features are concatenated and convolved by a standard convolution layer, and the salient feature descriptor $M_s(F) \in \mathbb{R}^{H \times W}$ is generated through the sigmoid activation function, as in equation (2):
$M_s(F) = \sigma\left(f^{7 \times 7}\left(\left[F_{avg}^{s}; F_{max}^{s}\right]\right)\right)$ (2)
in equation (2), $\sigma$ is the sigmoid activation function and $f^{7 \times 7}$ is a convolution kernel of size 7×7;
Step 4, computing the correspondence matrix between the features and the Gaussian mixture model parameters to obtain the matching parameters; the Gaussian mixture model parameters, comprising the weights, means, and variances, are computed by equations (3), (4), and (5):
$\alpha_j = \frac{1}{N} \sum_{i=1}^{N} r_{i,j}$ (3)
in equation (3), $\alpha_j$ is the weight scalar of the j-th Gaussian component, $N$ is the total number of points, and $r_{i,j}$ is the association of the i-th point with the j-th component of the Gaussian mixture model;
$\mu_j = \frac{\sum_{i=1}^{N} r_{i,j}\, p_i}{\sum_{i=1}^{N} r_{i,j}}$ (4) $\qquad \Sigma_j = \frac{\sum_{i=1}^{N} r_{i,j}\, (p_i - \mu_j)(p_i - \mu_j)^{T}}{\sum_{i=1}^{N} r_{i,j}}$ (5)
in equations (4) and (5), $\mu_j$ is a mean vector of size 3×1 and $p_i$ is the coordinate vector of the i-th point, whose probability of belonging to each Gaussian component is given by $r_{i,j}$;
Step 5, recovering the optimal transformation from the matching parameters by equation (6):
$T^{*} = \arg\min_{T} \sum_{j} \alpha_j \left\| T\left(\mu_j^{s}\right) - \mu_j^{t} \right\|^{2}$ (6)
where $\mu_j^{s}$ and $\mu_j^{t}$ are the matched component means of the source and target Gaussian mixture models; the minimization is solved in closed form by singular value decomposition.
Further, in step 2, the point cloud model of each three-dimensional skull is processed into a model containing only the vertex information of 1700 points.
The invention has the beneficial effects that:
(1) The registration method of the invention performs point cloud registration by acquiring the data association between points and model parameters: a dual attention mechanism is added to a permutation-invariant network to strengthen feature extraction and obtain the correspondence matrix, and the optimal pose transformation between the two Gaussian mixture models is recovered through two model parameter units. This effectively establishes the data association between points and model parameters, improves registration accuracy, and solves the failure of existing local registration methods to match large transformations when no good initialization is available;
(2) The registration method of the invention, a deep Gaussian mixture model skull registration method based on dual attention mechanism feature extraction, effectively establishes the data association between points and models to achieve efficient point cloud registration, overcoming the low speed and low efficiency of existing global registration methods.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a model obtained from a three-dimensional scan;
FIG. 3(a) is the source point cloud of the processed three-dimensional skull model containing only the vertex information of 1700 points;
FIG. 3(b) is the target point cloud of the processed three-dimensional skull model containing only the vertex information of 1700 points;
FIG. 4 is the computed correspondence matrix;
FIG. 5 is a diagram of the optimal transformation process recovered from the matching parameters;
FIG. 6(a) is the initial pose of the two point clouds;
FIG. 6(b) is the experimental registration result.
Detailed Description
The present invention is described in further detail below with reference to specific examples, but the present invention is not limited thereto.
The invention provides a point cloud registration method that acquires the data association between points and model parameters: a dual attention mechanism added to a permutation-invariant network strengthens feature extraction to obtain the correspondence matrix, and the optimal pose transformation between the two Gaussian mixture models is recovered through two model parameter units, effectively establishing the data association between points and model parameters and improving the efficiency of point cloud model registration.
Example 1
This embodiment provides a deep Gaussian mixture model skull registration method based on dual attention mechanism feature extraction, as shown in FIG. 1, comprising the following steps:
Step 1, this embodiment scans the object with a Handyscan3D hand-held three-dimensional laser scanner to obtain the three-dimensional model information, as shown in FIG. 2.
Step 2, the three-dimensional point cloud model is processed into a skull model containing only the vertex information of 1700 points, as shown in FIG. 3, where FIG. 3(a) is the source point cloud and FIG. 3(b) is the target point cloud.
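As an illustration of step 2, a fixed-size vertex model can be obtained by sampling the scanned vertices down to 1700 points. The sketch below uses simple random sampling; the patent fixes only the target point count, not the sampling method, so the `downsample` helper and its random strategy are assumptions:

```python
import numpy as np

def downsample(points, n_points=1700, seed=0):
    """Reduce a scanned vertex set to a fixed-size point cloud model.

    points   : (N, 3) array of scanned vertex coordinates, N >= n_points
    n_points : target model size (1700 in this method)
    Returns an (n_points, 3) subset of the input vertices.
    """
    rng = np.random.default_rng(seed)
    # sample without replacement so no vertex is duplicated
    idx = rng.choice(len(points), size=n_points, replace=False)
    return points[idx]
```

In practice a coverage-preserving scheme such as farthest point sampling is often preferred; random sampling is shown only for brevity.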
Step 3, the point cloud model is input into the convolutional neural network PointNet; a residual block combined with a dual attention mechanism assigns different weights to the input point cloud in the encoding and decoding stages to extract key features while reducing model storage and computation overhead:
the residual network is formed by connecting several residual blocks in series, with the input information skip-connected directly to the output; the attention mechanism is decomposed along the channel and spatial directions, and the two parts are combined into a dual attention mechanism;
the channel attention mechanism fuses partial feature information using average pooling and max pooling, producing an average-pooled feature $F_{avg}^{c}$ and a max-pooled feature $F_{max}^{c}$;
the two features are propagated through a multi-layer perceptron with a single hidden layer to generate the channel feature map $M_c \in \mathbb{R}^{C \times 1 \times 1}$, and the fused feature is obtained by element-wise summation, as in equation (1):
$M_c(F) = \sigma\left(W_1\left(W_0\left(F_{avg}^{c}\right)\right) + W_1\left(W_0\left(F_{max}^{c}\right)\right)\right)$ (1)
in equation (1), $\sigma$ is the sigmoid activation function, $W_0$ and $W_1$ are the multi-layer perceptron weights, $W_0 \in \mathbb{R}^{C/r \times C}$, $W_1 \in \mathbb{R}^{C \times C/r}$, and $r$ is the reduction ratio;
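The channel attention computation of equation (1) can be sketched as follows. This is a minimal NumPy illustration: the weight shapes follow the text above, while the ReLU hidden activation is an assumption carried over from the attention module design the formula matches, not something the patent states:

```python
import numpy as np

def channel_attention(F, W0, W1):
    """Channel attention map in the style of equation (1).

    F  : feature tensor of shape (C, H, W)
    W0 : (C//r, C) hidden-layer weights of the shared MLP
    W1 : (C, C//r) output-layer weights of the shared MLP
    Returns M_c of shape (C, 1, 1) with values in (0, 1).
    """
    C = F.shape[0]
    f_avg = F.mean(axis=(1, 2))   # average-pooled channel descriptor, (C,)
    f_max = F.max(axis=(1, 2))    # max-pooled channel descriptor, (C,)
    relu = lambda x: np.maximum(x, 0.0)
    # shared one-hidden-layer MLP applied to both descriptors,
    # then element-wise summation and sigmoid
    s = W1 @ relu(W0 @ f_avg) + W1 @ relu(W0 @ f_max)
    Mc = 1.0 / (1.0 + np.exp(-s))
    return Mc.reshape(C, 1, 1)
```

The attended feature is then obtained by broadcasting `Mc * F` over the spatial dimensions.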
the spatial attention mechanism applies average pooling and max pooling along the channel axis and fuses the channel maps output by the previous layer, producing an average-pooled feature $F_{avg}^{s}$ and a max-pooled feature $F_{max}^{s}$;
the two features are concatenated and convolved by a standard convolution layer, and the salient feature descriptor $M_s(F) \in \mathbb{R}^{H \times W}$ is generated through the sigmoid activation function, as in equation (2):
$M_s(F) = \sigma\left(f^{7 \times 7}\left(\left[F_{avg}^{s}; F_{max}^{s}\right]\right)\right)$ (2)
in equation (2), $\sigma$ is the sigmoid activation function and $f^{7 \times 7}$ is a convolution kernel of size 7×7;
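Equation (2) can likewise be sketched. The explicit convolution loop below is for clarity only, and a single 7×7 kernel over the two pooled maps with zero padding and no bias is an assumption consistent with, but not dictated by, the text:

```python
import numpy as np

def spatial_attention(F, kernel):
    """Spatial attention map in the style of equation (2).

    F      : feature tensor of shape (C, H, W)
    kernel : (2, k, k) convolution weights applied to the stacked
             [avg-pool; max-pool] maps, k odd (7 in the text)
    Returns M_s of shape (H, W) with values in (0, 1).
    """
    avg_map = F.mean(axis=0)                  # pool along the channel axis
    max_map = F.max(axis=0)
    stacked = np.stack([avg_map, max_map])    # (2, H, W)
    k = kernel.shape[-1]
    pad = k // 2                              # 'same' zero padding
    padded = np.pad(stacked, ((0, 0), (pad, pad), (pad, pad)))
    H, W = avg_map.shape
    out = np.zeros((H, W))
    for i in range(H):                        # direct 2-D convolution
        for j in range(W):
            out[i, j] = np.sum(padded[:, i:i + k, j:j + k] * kernel)
    return 1.0 / (1.0 + np.exp(-out))         # sigmoid
```

The attended feature is obtained by broadcasting `Ms * F` over the channel dimension.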
Step 4, the correspondence matrix between the features and the Gaussian mixture model parameters is computed to obtain the matching parameters, as shown in FIG. 4; the Gaussian mixture model parameters, comprising the weights, means, and variances, are computed by equations (3), (4), and (5):
$\alpha_j = \frac{1}{N} \sum_{i=1}^{N} r_{i,j}$ (3)
in equation (3), $\alpha_j$ is the weight scalar of the j-th Gaussian component, $N$ is the total number of points, and $r_{i,j}$ is the association of the i-th point with the j-th component of the Gaussian mixture model;
$\mu_j = \frac{\sum_{i=1}^{N} r_{i,j}\, p_i}{\sum_{i=1}^{N} r_{i,j}}$ (4) $\qquad \Sigma_j = \frac{\sum_{i=1}^{N} r_{i,j}\, (p_i - \mu_j)(p_i - \mu_j)^{T}}{\sum_{i=1}^{N} r_{i,j}}$ (5)
in equations (4) and (5), $\mu_j$ is a mean vector of size 3×1 and $p_i$ is the coordinate vector of the i-th point, whose probability of belonging to each Gaussian component is given by $r_{i,j}$;
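Equations (3)–(5) reduce to weighted averages over the correspondence matrix. The sketch below assumes each row of the matrix is a normalized point-to-component probability distribution; the function name and shapes are illustrative, not the patent's implementation:

```python
import numpy as np

def gmm_parameters(points, R):
    """Estimate GMM weights, means, and covariances from a
    correspondence matrix, in the style of equations (3)-(5).

    points : (N, 3) point coordinates p_i
    R      : (N, J) correspondence matrix, R[i, j] = r_{i,j} >= 0,
             each row summing to 1
    Returns (alpha (J,), mu (J, 3), Sigma (J, 3, 3)).
    """
    N, J = R.shape
    alpha = R.sum(axis=0) / N                 # equation (3): component weights
    denom = R.sum(axis=0)[:, None]            # (J, 1) normalizers
    mu = (R.T @ points) / denom               # equation (4): weighted means
    Sigma = np.zeros((J, 3, 3))
    for j in range(J):                        # equation (5): weighted covariances
        d = points - mu[j]                    # (N, 3) deviations from the mean
        Sigma[j] = (R[:, j][:, None] * d).T @ d / denom[j, 0]
    return alpha, mu, Sigma
```

Because each row of `R` sums to 1, the weights `alpha` automatically sum to 1.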
Step 5, the optimal transformation is recovered from the matching parameters by equation (6), as shown in FIG. 5:
$T^{*} = \arg\min_{T} \sum_{j} \alpha_j \left\| T\left(\mu_j^{s}\right) - \mu_j^{t} \right\|^{2}$ (6)
where $\mu_j^{s}$ and $\mu_j^{t}$ are the matched component means of the source and target Gaussian mixture models; the minimization is solved in closed form by singular value decomposition.
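Step 5 can be illustrated by a closed-form weighted Procrustes solution between matched component means. The SVD-based solver below is a common way to recover a rigid transformation from weighted correspondences and is shown as an assumption, not the patent's exact formula:

```python
import numpy as np

def recover_transform(mu_src, mu_tgt, alpha):
    """Closed-form rigid transform aligning source GMM means to target
    means, weighted by the component weights (weighted Procrustes).

    mu_src, mu_tgt : (J, 3) matched component means
    alpha          : (J,) component weights
    Returns (R (3,3), t (3,)) such that mu_tgt ~= mu_src @ R.T + t.
    """
    w = alpha / alpha.sum()
    c_src = w @ mu_src                 # weighted centroids
    c_tgt = w @ mu_tgt
    X = mu_src - c_src                 # centered means
    Y = mu_tgt - c_tgt
    H = (X * w[:, None]).T @ Y         # weighted cross-covariance (3, 3)
    U, _, Vt = np.linalg.svd(H)
    # sign correction keeps the result a proper rotation (det = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_tgt - R @ c_src
    return R, t
```

Applying `R` and `t` to the source point cloud then produces the registered result shown in FIG. 6(b).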
The invention is described in detail with reference to the above embodiments. Those skilled in the art will understand that modifications and equivalents may be made to the embodiments without departing from the spirit and scope of the invention, which is defined by the claims.
Claims (2)
1. A deep Gaussian mixture model skull registration method based on dual attention mechanism feature extraction, characterized by comprising the following steps:
Step 1, acquiring a three-dimensional point cloud skull model with a three-dimensional scanner;
Step 2, processing the three-dimensional point cloud model into a skull model containing only the vertex information of 1700 points;
Step 3, inputting the point cloud model into the convolutional neural network PointNet; a residual block combined with a dual attention mechanism assigns different weights to the input point cloud in the encoding and decoding stages to extract key features while reducing model storage and computation overhead;
the residual network is formed by connecting several residual blocks in series, with the input information skip-connected directly to the output; the attention mechanism is decomposed along the channel and spatial directions, and the two parts are combined into a dual attention mechanism;
the channel attention mechanism fuses partial feature information using average pooling and max pooling, producing an average-pooled feature $F_{avg}^{c}$ and a max-pooled feature $F_{max}^{c}$;
the two features are propagated through a multi-layer perceptron with a single hidden layer to generate the channel feature map $M_c \in \mathbb{R}^{C \times 1 \times 1}$, and the fused feature is obtained by element-wise summation, as in equation (1):
$M_c(F) = \sigma\left(W_1\left(W_0\left(F_{avg}^{c}\right)\right) + W_1\left(W_0\left(F_{max}^{c}\right)\right)\right)$ (1)
in equation (1), $\sigma$ is the sigmoid activation function, $W_0$ and $W_1$ are the multi-layer perceptron weights, $W_0 \in \mathbb{R}^{C/r \times C}$, $W_1 \in \mathbb{R}^{C \times C/r}$, and $r$ is the reduction ratio;
the spatial attention mechanism applies average pooling and max pooling along the channel axis and fuses the channel maps output by the previous layer, producing an average-pooled feature $F_{avg}^{s}$ and a max-pooled feature $F_{max}^{s}$;
the two features are concatenated and convolved by a standard convolution layer, and the salient feature descriptor $M_s(F) \in \mathbb{R}^{H \times W}$ is generated through the sigmoid activation function, as in equation (2):
$M_s(F) = \sigma\left(f^{7 \times 7}\left(\left[F_{avg}^{s}; F_{max}^{s}\right]\right)\right)$ (2)
in equation (2), $\sigma$ is the sigmoid activation function and $f^{7 \times 7}$ is a convolution kernel of size 7×7;
Step 4, computing the correspondence matrix between the features and the Gaussian mixture model parameters to obtain the matching parameters; the Gaussian mixture model parameters, comprising the weights, means, and variances, are computed by equations (3), (4), and (5):
$\alpha_j = \frac{1}{N} \sum_{i=1}^{N} r_{i,j}$ (3)
in equation (3), $\alpha_j$ is the weight scalar of the j-th Gaussian component, $N$ is the total number of points, and $r_{i,j}$ is the association of the i-th point with the j-th component of the Gaussian mixture model;
$\mu_j = \frac{\sum_{i=1}^{N} r_{i,j}\, p_i}{\sum_{i=1}^{N} r_{i,j}}$ (4) $\qquad \Sigma_j = \frac{\sum_{i=1}^{N} r_{i,j}\, (p_i - \mu_j)(p_i - \mu_j)^{T}}{\sum_{i=1}^{N} r_{i,j}}$ (5)
in equations (4) and (5), $\mu_j$ is a mean vector of size 3×1 and $p_i$ is the coordinate vector of the i-th point, whose probability of belonging to each Gaussian component is given by $r_{i,j}$;
Step 5, recovering the optimal transformation from the matching parameters by equation (6):
$T^{*} = \arg\min_{T} \sum_{j} \alpha_j \left\| T\left(\mu_j^{s}\right) - \mu_j^{t} \right\|^{2}$ (6)
where $\mu_j^{s}$ and $\mu_j^{t}$ are the matched component means of the source and target Gaussian mixture models.
2. The deep Gaussian mixture model skull registration method based on dual attention mechanism feature extraction of claim 1, wherein the point cloud model of each three-dimensional skull in step 2 is processed into a model containing only the vertex information of 1700 points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110032732.9A CN112801945A (en) | 2021-01-11 | 2021-01-11 | Depth Gaussian mixture model skull registration method based on dual attention mechanism feature extraction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110032732.9A CN112801945A (en) | 2021-01-11 | 2021-01-11 | Depth Gaussian mixture model skull registration method based on dual attention mechanism feature extraction |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112801945A true CN112801945A (en) | 2021-05-14 |
Family
ID=75809860
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110032732.9A Pending CN112801945A (en) | 2021-01-11 | 2021-01-11 | Depth Gaussian mixture model skull registration method based on dual attention mechanism feature extraction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112801945A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190050999A1 (en) * | 2017-08-14 | 2019-02-14 | Siemens Healthcare Gmbh | Dilated Fully Convolutional Network for Multi-Agent 2D/3D Medical Image Registration |
WO2019153908A1 (en) * | 2018-02-11 | 2019-08-15 | 北京达佳互联信息技术有限公司 | Image recognition method and system based on attention model |
CN111192200A (en) * | 2020-01-02 | 2020-05-22 | 南京邮电大学 | Image super-resolution reconstruction method based on fusion attention mechanism residual error network |
CN111292259A (en) * | 2020-01-14 | 2020-06-16 | 西安交通大学 | Deep learning image denoising method integrating multi-scale and attention mechanism |
US20200311914A1 (en) * | 2017-04-25 | 2020-10-01 | The Board Of Trustees Of Leland Stanford University | Dose reduction for medical imaging using deep convolutional neural networks |
CN112200843A (en) * | 2020-10-09 | 2021-01-08 | 福州大学 | CBCT and laser scanning point cloud data tooth registration method based on hyper-voxels |
- 2021-01-11: application CN202110032732.9A filed (CN); published as CN112801945A; status Pending
Non-Patent Citations (5)
Title |
---|
Sanghyun Woo et al.: "CBAM: Convolutional Block Attention Module", arXiv *
Wentao Yuan et al.: "DeepGMR: Learning Latent Gaussian Mixture Models for Registration", arXiv *
Zhao Fuqun et al.: "Local feature registration method for skull point cloud models", Journal of Image and Graphics *
Zhao Xin et al.: "White matter lesion segmentation method based on 3D fully convolutional deep neural networks", Computer and Modernization *
Gao Jian et al.: "Research on expression recognition based on hybrid attention mechanism", Information Technology and Network Security *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113538654A (en) * | 2021-06-11 | 2021-10-22 | 五邑大学 | Method, device and computer readable storage medium for generating image of cranial implant |
CN113538654B (en) * | 2021-06-11 | 2024-04-02 | 五邑大学 | Skull implant image generation method, device and computer readable storage medium |
CN113658236A (en) * | 2021-08-11 | 2021-11-16 | 浙江大学计算机创新技术研究院 | Incomplete point cloud registration method based on graph attention machine system |
CN113658236B (en) * | 2021-08-11 | 2023-10-24 | 浙江大学计算机创新技术研究院 | Incomplete point cloud registration method based on graph attention mechanism |
CN113989340A (en) * | 2021-10-29 | 2022-01-28 | 天津大学 | Point cloud registration method based on distribution |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210514 |