A Robust Multi-Local to Global with Outlier Filtering for Point Cloud Registration
Figure 1. The input point clouds are downsampled into dense and sparse points. High-quality sparse point correspondences are obtained after LG-Net and the attention mechanism. The sparse point correspondences are then propagated to the dense points. Finally, the transformation matrix is computed with the multi-local-to-global registration strategy.
Figure 2. Registration recall of previous methods on 3DMatch (top) and 3DLoMatch (bottom). The numbers 250, 500, 1000, 2500, and 5000 indicate the number of correspondences. As the number of correspondences decreases, the registration recall of these methods drops dramatically. A stable registration method should be robust to the number of samples, which is the main direction of our research.
Figure 3. Local geometric network (LG-Net) structure diagram. We add LG-Net to the original self-attention and cross-attention mechanisms to learn local geometric features and generate discriminative feature descriptors, thus producing robust correspondences.
Figure 4. Multi-local-to-global registration strategy. Circles and triangles of the same color represent point pairs between which a correspondence exists. We evaluate each correspondence under different transformation schemes: correspondences evaluated as inlier matches in multiple schemes receive higher scores (e.g., the last row at the bottom), whereas correspondences with low scores are filtered (e.g., the penultimate row).
Figure 5. Registration visualization on 3DMatch. The input (a) shows the original point cloud poses; to make the two point clouds easier to distinguish, we have pulled them apart by a distance. Ground truth (b) denotes the ground-truth pose. Output (c) shows the point cloud poses after alignment.
Figure 6. Visualization results of GeoTransformer and MLGR on the 3DMatch dataset. RMSE (m) includes two values: the left is the error for this sample and the right is the average error over the dataset. Point corr indicates the number of point correspondences. Overlap represents the overlap rate between the two point clouds. Patch corr denotes the number of patch correspondences; the colors of different patches vary. Compared with GeoTransformer, our average RMSE is smaller. (a) input; (b) ground truth; (c) GeoTransformer pose; (d) MLGR pose; (e) GeoTransformer patch correspondences; (f) MLGR patch correspondences.
Figure 7. Ablation study of pose refinement on 3DMatch (top) and 3DLoMatch (bottom). Pose refinement continuously improves the results and saturates after 7 iterations.
Abstract
1. Introduction
- We design and add a local feature aggregation module (LG-Net) based on the geometric transformer. Our design is simple and efficient, and it improves overall performance with little overhead. While the attention mechanism exchanges global information, local features are aggregated to increase feature diversity and make the generated feature descriptors more distinguishable. This helps obtain more accurate correspondences, reducing the probability of outlier matches.
- We design a multi-local-to-global registration (MLGR) strategy to filter outlier matches. In LGR, the single evaluated correspondence scheme with the most inlier matches is used directly to compute the global transformation matrix. However, in this process an outlier correspondence is sometimes incorrectly evaluated as an inlier match. For this reason, we propose MLGR, which picks the top-k correspondence schemes with the most inlier matches. Correspondences evaluated as inlier matches in all k schemes keep their scores, those evaluated as inlier matches in only a few schemes have their scores lowered, and the rest are masked as outlier matches. Reliable correspondences are thus retained, while unreliable ones are down-weighted or filtered. In this way, we effectively filter outlier matches and improve the stability of registration.
- Our method is quite robust under different sample numbers and outperforms the state of the art on KITTI and 3DMatch with the highest registration accuracy. It improves the inlier ratio by 3.62% and 4.36% on 3DMatch and 3DLoMatch, respectively. As the number of point correspondences decreases, the results of other methods either become unacceptable or drop dramatically, while our method maintains good results, reflecting its robustness.
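The MLGR voting idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `mlgr_scores`, the inlier threshold `tau`, and the score/mask convention are assumptions for exposition.

```python
import numpy as np

def mlgr_scores(src, tgt, transforms, k=3, tau=0.1):
    """Score correspondences by how many of the top-k candidate
    transformations treat them as inlier matches (hypothetical
    sketch; tau is an assumed inlier distance threshold in meters)."""
    # src, tgt: (N, 3) putative corresponding points;
    # transforms: candidate (R, t) schemes, best-first.
    votes = np.zeros(len(src))
    for R, t in transforms[:k]:
        residual = np.linalg.norm(src @ R.T + t - tgt, axis=1)
        votes += (residual < tau)          # inlier under this scheme?
    scores = votes / k                     # 1.0 -> inlier in every scheme
    keep = scores > 0                      # zero-vote pairs are masked as outliers
    return scores, keep
```

Correspondences with high scores dominate the subsequent global transformation estimate, while zero-vote pairs are dropped entirely.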
2. Related Work
3. Methods
3.1. Sparse Point Matching with LG-Net
3.1.1. Local Geometric Network
3.1.2. Self-Attention
3.1.3. Cross-Attention
3.2. Dense Point Matching with MLGR
Multi-Local-to-Global Registration
Algorithm 1 Multi-local-to-global registration
4. Experiments
4.1. Experimental Settings
4.2. Evaluation Metric
4.2.1. Evaluation Metric on KITTI
4.2.2. Evaluation Metric on 3DMatch
4.2.3. Evaluation Metric on ModelNet40
4.3. Evaluation on KITTI
4.4. Evaluation on ModelNet40
4.5. Evaluation on 3DMatch and 3DLoMatch
4.6. Ablation Study
5. Conclusions and Limitations
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zheng, Y.; Li, Y.; Yang, S.; Lu, H. Global-PBNet: A novel point cloud registration for autonomous driving. IEEE Trans. Intell. Transp. Syst. 2022, 23, 22312–22319. [Google Scholar] [CrossRef]
- Bash, E.A.; Wecker, L.; Rahman, M.M.; Dow, C.F.; McDermid, G.; Samavati, F.F.; Whitehead, K.; Moorman, B.J.; Medrzycka, D.; Copland, L. A Multi-Resolution Approach to Point Cloud Registration without Control Points. Remote Sens. 2023, 15, 1161. [Google Scholar] [CrossRef]
- Song, Y.; Shen, W.; Peng, K. A novel partial point cloud registration method based on graph attention network. Vis. Comput. 2023, 39, 1109–1120. [Google Scholar] [CrossRef]
- Dang, Z.; Wang, L.; Guo, Y.; Salzmann, M. Learning-based point cloud registration for 6d object pose estimation in the real world. In Proceedings of the Computer Vision–ECCV 2022: 17th European Conference, (Proceedings, Part I), Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 19–37. [Google Scholar]
- Mei, G.; Poiesi, F.; Saltori, C.; Zhang, J.; Ricci, E.; Sebe, N. Overlap-guided gaussian mixture models for point cloud registration. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2–7 January 2023; pp. 4511–4520. [Google Scholar]
- Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Proceedings of the Sensor fusion IV: Control Paradigms and Data Structures, Boston, MA, USA, 12–15 November 1991; SPIE: Bellingham, WA, USA, 1992; Volume 1611, pp. 586–606. [Google Scholar]
- Pottmann, H.; Huang, Q.X.; Yang, Y.L.; Hu, S.M. Geometry and convergence analysis of algorithms for registration of 3D shapes. Int. J. Comput. Vis. 2006, 67, 277–296. [Google Scholar] [CrossRef]
- Bouaziz, S.; Tagliasacchi, A.; Pauly, M. Sparse iterative closest point. In Computer Graphics Forum; Blackwell Publishing Ltd.: Oxford, UK, 2013; Volume 32, pp. 113–123. [Google Scholar]
- Rusinkiewicz, S.; Levoy, M. Efficient variants of the ICP algorithm. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, Quebec City, QC, Canada, 28 May–1 June 2001; pp. 145–152. [Google Scholar]
- Pomerleau, F.; Colas, F.; Siegwart, R.; Magnenat, S. Comparing ICP variants on real-world data sets: Open-source library and experimental protocol. Auton. Robot. 2013, 34, 133–148. [Google Scholar] [CrossRef]
- Zhang, J.; Yao, Y.; Deng, B. Fast and robust iterative closest point. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3450–3466. [Google Scholar] [CrossRef]
- Yin, P.; Yuan, S.; Cao, H.; Ji, X.; Zhang, S.; Xie, L. Segregator: Global Point Cloud Registration with Semantic and Geometric Cues. arXiv 2023, arXiv:2301.07425. [Google Scholar]
- Zhang, Z.; Lyu, E.; Min, Z.; Zhang, A.; Yu, Y.; Meng, M.Q.H. Robust Semi-Supervised Point Cloud Registration via Latent GMM-Based Correspondence. Remote Sens. 2023, 15, 4493. [Google Scholar] [CrossRef]
- Cheng, X.; Yan, S.; Liu, Y.; Zhang, M.; Chen, C. R-PCR: Recurrent Point Cloud Registration Using High-Order Markov Decision. Remote Sens. 2023, 15, 1889. [Google Scholar] [CrossRef]
- Hu, E.; Sun, L. VODRAC: Efficient and robust correspondence-based point cloud registration with extreme outlier ratios. J. King Saud Univ. Comput. Inf. Sci. 2023, 35, 38–55. [Google Scholar] [CrossRef]
- Wei, P.; Yan, L.; Xie, H.; Huang, M. Automatic coarse registration of point clouds using plane contour shape descriptor and topological graph voting. Autom. Constr. 2022, 134, 104055. [Google Scholar] [CrossRef]
- Chen, Z.; Yang, F.; Tao, W. Detarnet: Decoupling translation and rotation by siamese network for point cloud registration. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 22 February–1 March 2022; Volume 36, pp. 401–409. [Google Scholar]
- Yuan, W.; Eckart, B.; Kim, K.; Jampani, V.; Fox, D.; Kautz, J. Deepgmr: Learning latent gaussian mixture models for registration. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, (Proceedings, Part V 16), Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 733–750. [Google Scholar]
- Yew, Z.J.; Lee, G.H. Rpm-net: Robust point matching using learned features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 11824–11833. [Google Scholar]
- Wang, J.; Wu, B.; Kang, J. Registration of 3D point clouds using a local descriptor based on grid point normal. Appl. Opt. 2021, 60, 8818–8828. [Google Scholar] [CrossRef] [PubMed]
- Gojcic, Z.; Zhou, C.; Wegner, J.D.; Wieser, A. The perfect match: 3D point cloud matching with smoothed densities. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5545–5554. [Google Scholar]
- Choy, C.; Park, J.; Koltun, V. Fully convolutional geometric features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8958–8966. [Google Scholar]
- Poiesi, F.; Boscaini, D. Learning general and distinctive 3D local deep descriptors for point cloud registration. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 3979–3985. [Google Scholar] [CrossRef] [PubMed]
- Deng, H.; Birdal, T.; Ilic, S. PPF-FoldNet: Unsupervised learning of rotation invariant 3d local descriptors. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 602–618. [Google Scholar]
- Bai, X.; Luo, Z.; Zhou, L.; Fu, H.; Quan, L.; Tai, C.L. D3feat: Joint learning of dense detection and description of 3D local features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 6359–6367. [Google Scholar]
- Huang, S.; Gojcic, Z.; Usvyatsov, M.; Wieser, A.; Schindler, K. Predator: Registration of 3D point clouds with low overlap. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4267–4276. [Google Scholar]
- Bai, X.; Luo, Z.; Zhou, L.; Chen, H.; Li, L.; Hu, Z.; Fu, H.; Tai, C.L. Pointdsc: Robust point cloud registration using deep spatial consistency. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 15859–15869. [Google Scholar]
- Choy, C.; Dong, W.; Koltun, V. Deep global registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 2514–2523. [Google Scholar]
- Qin, Z.; Yu, H.; Wang, C.; Guo, Y.; Peng, Y.; Xu, K. Geometric transformer for fast and robust point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 11143–11152. [Google Scholar]
- Yu, H.; Li, F.; Saleh, M.; Busam, B.; Ilic, S. Cofinet: Reliable coarse-to-fine correspondences for robust pointcloud registration. Adv. Neural Inf. Process. Syst. 2021, 34, 23872–23884. [Google Scholar]
- Rocco, I.; Cimpoi, M.; Arandjelović, R.; Torii, A.; Pajdla, T.; Sivic, J. Neighbourhood consensus networks. In Advances in Neural Information Processing Systems; Springer: Berlin/Heidelberg, Germany, 2018; Volume 31. [Google Scholar]
- Zhou, Q.; Sattler, T.; Leal-Taixe, L. Patch2pix: Epipolar-guided pixel-level correspondences. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4669–4678. [Google Scholar]
- Huang, X.; Mei, G.; Zhang, J.; Abbas, R. A comprehensive survey on point cloud registration. arXiv 2021, arXiv:2103.02690. [Google Scholar]
- Han, D.; Pan, X.; Han, Y.; Song, S.; Huang, G. Flatten transformer: Vision transformer using focused linear attention. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2023; pp. 5961–5971. [Google Scholar]
- Cheng, L.; Chen, S.; Liu, X.; Xu, H.; Wu, Y.; Li, M.; Chen, Y. Registration of laser scanning point clouds: A review. Sensors 2018, 18, 1641. [Google Scholar] [CrossRef] [PubMed]
- Li, B.; Guan, D.; Zheng, X.; Chen, Z.; Pan, L. SD-CapsNet: A Siamese Dense Capsule Network for SAR Image Registration with Complex Scenes. Remote Sens. 2023, 15, 1871. [Google Scholar] [CrossRef]
- Li, J.; Hu, Q.; Ai, M. Point cloud registration based on one-point ransac and scale-annealing biweight estimation. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9716–9729. [Google Scholar] [CrossRef]
- Brightman, N.; Fan, L.; Zhao, Y. Point cloud registration: A mini-review of current state, challenging issues and future directions. AIMS Geosci. 2023, 9, 68–85. [Google Scholar] [CrossRef]
- Wu, Y.; Yao, Q.; Fan, X.; Gong, M.; Ma, W.; Miao, Q. Panet: A point-attention based multi-scale feature fusion network for point cloud registration. IEEE Trans. Instrum. Meas. 2023, 72, 2512913. [Google Scholar] [CrossRef]
- Wang, Y.; Solomon, J.M. Prnet: Self-supervised learning for partial-to-partial registration. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2019; Volume 32. [Google Scholar]
- Li, J.; Zhang, C.; Xu, Z.; Zhou, H.; Zhang, C. Iterative distance-aware similarity matrix convolution with mutual-supervised point elimination for efficient point cloud registration. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, (Proceedings, Part XXIV 16), Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 378–394. [Google Scholar]
- Chen, S.; Nan, L.; Xia, R.; Zhao, J.; Wonka, P. PLADE: A plane-based descriptor for point cloud registration with small overlap. IEEE Trans. Geosci. Remote Sens. 2019, 58, 2530–2540. [Google Scholar] [CrossRef]
- Salti, S.; Tombari, F.; Di Stefano, L. SHOT: Unique signatures of histograms for surface and texture description. Comput. Vis. Image Underst. 2014, 125, 251–264. [Google Scholar] [CrossRef]
- Ao, S.; Hu, Q.; Yang, B.; Markham, A.; Guo, Y. Spinnet: Learning a general surface descriptor for 3D point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 11753–11762. [Google Scholar]
- Wang, H.; Liu, Y.; Dong, Z.; Wang, W. You only hypothesize once: Point cloud registration with rotation-equivariant descriptors. In Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, 10–14 October 2022; pp. 1630–1641. [Google Scholar]
- Wang, Y.; Solomon, J.M. Deep closest point: Learning representations for point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3523–3532. [Google Scholar]
- Jiang, H.; Xie, J.; Qian, J.; Yang, J. Planning with learned dynamic model for unsupervised point cloud registration. arXiv 2021, arXiv:2108.02613. [Google Scholar]
- Jiang, H.; Shen, Y.; Xie, J.; Li, J.; Qian, J.; Yang, J. Sampling network guided cross-entropy method for unsupervised point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 6128–6137. [Google Scholar]
- Shen, Y.; Hui, L.; Jiang, H.; Xie, J.; Yang, J. Reliable inlier evaluation for unsupervised point cloud registration. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 22 February–1 March 2022; Volume 36, pp. 2198–2206. [Google Scholar]
- Yew, Z.J.; Lee, G.H. Regtr: End-to-end point cloud correspondences with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 6677–6686. [Google Scholar]
- Liu, W.; Wang, C.; Bian, X.; Chen, S.; Li, W.; Lin, X.; Li, Y.; Weng, D.; Lai, S.H.; Li, J. AE-GAN-Net: Learning invariant feature descriptor to match ground camera images and a large-scale 3D image-based point cloud for outdoor augmented reality. Remote Sens. 2019, 11, 2243. [Google Scholar] [CrossRef]
- Fu, K.; Liu, S.; Luo, X.; Wang, M. Robust point cloud registration framework based on deep graph matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8893–8902. [Google Scholar]
- Pais, G.D.; Ramalingam, S.; Govindu, V.M.; Nascimento, J.C.; Chellappa, R.; Miraldo, P. 3DRegNet: A deep neural network for 3D point registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 7193–7203. [Google Scholar]
- Chen, Z.; Sun, K.; Yang, F.; Tao, W. Sc2-pcr: A second order spatial compatibility for efficient and robust point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 13221–13231. [Google Scholar]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Thomas, H.; Qi, C.R.; Deschaud, J.E.; Marcotegui, B.; Goulette, F.; Guibas, L.J. Kpconv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6411–6420. [Google Scholar]
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The kitti vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar]
- Yew, Z.J.; Lee, G.H. 3DFeat-Net: Weakly supervised local 3D features for point cloud registration. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 607–623. [Google Scholar]
- Huang, X.; Mei, G.; Zhang, J. Feature-metric registration: A fast semi-supervised approach for robust point cloud registration without correspondences. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11366–11374. [Google Scholar]
- Lu, F.; Chen, G.; Liu, Y.; Zhang, L.; Qu, S.; Liu, S.; Gu, R. Hregnet: A hierarchical network for large-scale outdoor lidar point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 16014–16023. [Google Scholar]
Model | RTE (cm) ↓ | RRE (°) ↓ | RR (%) ↑ |
---|---|---|---|
3DFeat-Net [58] | 25.9 | 0.25 | 96.0 |
FCGF [22] | 9.5 | 0.30 | 96.6 |
D3Feat [25] | 7.2 | 0.30 | 99.8 |
SpinNet [44] | 9.9 | 0.47 | 99.1 |
Predator [26] | 6.8 | 0.27 | 99.8 |
CoFiNet [30] | 8.2 | 0.41 | 99.8 |
GeoTransformer (RANSAC-50k) [29] | 7.4 | 0.27 | 99.8 |
Ours (RANSAC-50k) | 6.8 | 0.22 | 99.8 |
FMR [59] | ∼66 | 1.49 | 90.6 |
DGR [28] | ∼32 | 0.37 | 98.7 |
HRegNet [60] | ∼12 | 0.29 | 99.7 |
GeoTransformer (LGR) [29] | 6.8 | 0.24 | 99.8 |
Ours (MLGR) | 6.4 | 0.20 | 99.8 |
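The RTE and RRE columns above follow the standard KITTI definitions: translation error is the Euclidean distance between estimated and ground-truth translations, and rotation error is the geodesic angle between the rotation matrices. A minimal sketch (function name `registration_errors` is ours, not from the paper):

```python
import numpy as np

def registration_errors(R_est, t_est, R_gt, t_gt):
    """Relative rotation error (degrees) and relative translation error
    between an estimated and a ground-truth rigid transformation."""
    # Geodesic distance on SO(3): angle of the relative rotation R_est^T R_gt.
    cos_theta = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    rre = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    rte = np.linalg.norm(t_est - t_gt)
    return rre, rte
```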
Model | ModelNet RRE (°) ↓ | ModelNet RTE ↓ | ModelLoNet RRE (°) ↓ | ModelLoNet RTE ↓ |
---|---|---|---|---|
Small Rotation | | | | |
RPM-Net [19] | 2.357 | 0.028 | 8.123 | 0.086 |
RGM [52] | 4.548 | 0.049 | 14.806 | 0.139 |
Predator [26] | 2.064 | 0.023 | 5.022 | 0.091 |
CoFiNet [30] | 3.584 | 0.044 | 6.992 | 0.091 |
GeoTransformer (LGR) [29] | 2.160 | 0.024 | 3.638 | 0.064 |
Ours (MLGR) | 1.452 | 0.018 | 3.750 | 0.101 |
Large Rotation | ||||
RPM-Net [19] | 31.509 | 0.206 | 51.478 | 0.346 |
RGM [52] | 45.560 | 0.289 | 68.724 | 0.442 |
Predator [26] | 24.839 | 0.171 | 46.990 | 0.378 |
CoFiNet [30] | 10.496 | 0.084 | 32.578 | 0.226 |
GeoTransformer (LGR) [29] | 6.436 | 0.047 | 23.478 | 0.152 |
Ours (MLGR) | 6.763 | 0.044 | 21.597 | 0.174 |
Model | 3DMatch 250 | 3DMatch 500 | 3DMatch 1000 | 3DMatch 2500 | 3DMatch 5000 | 3DLoMatch 250 | 3DLoMatch 500 | 3DLoMatch 1000 | 3DLoMatch 2500 | 3DLoMatch 5000 |
---|---|---|---|---|---|---|---|---|---|---|
Feature Matching Recall (%) ↑ | | | | | | | | | | |
PerfectMatch [21] | 82.9 | 90.1 | 92.9 | 94.3 | 95.0 | 34.2 | 45.2 | 53.6 | 61.7 | 63.6 |
FCGF [22] | 96.6 | 96.7 | 97.0 | 97.3 | 97.4 | 67.3 | 71.7 | 74.2 | 75.4 | 76.6 |
D3Feat [25] | 93.1 | 94.1 | 94.5 | 95.4 | 95.6 | 66.5 | 66.7 | 67.0 | 66.7 | 67.3 |
SpinNet [44] | 94.3 | 95.5 | 96.8 | 97.2 | 97.6 | 63.6 | 70.0 | 72.5 | 74.9 | 75.3 |
Predator [26] | 96.5 | 96.3 | 96.5 | 96.6 | 96.6 | 75.3 | 75.7 | 76.3 | 77.4 | 78.6 |
YOHO [45] | 96.0 | 97.7 | 97.5 | 97.6 | 98.2 | 69.1 | 73.8 | 76.3 | 78.1 | 79.4 |
CoFiNet [30] | 98.3 | 98.2 | 98.1 | 98.3 | 98.1 | 82.6 | 83.1 | 83.3 | 83.5 | 83.1 |
GeoTransformer [29] | 97.6 | 97.9 | 97.9 | 97.9 | 97.9 | 88.3 | 88.6 | 88.8 | 88.6 | 88.3 |
Ours | 98.3 | 98.3 | 98.2 | 98.3 | 98.3 | 87.6 | 87.7 | 87.6 | 87.3 | 87.0 |
Inlier Ratio (%) ↑ | ||||||||||
PerfectMatch [21] | 16.4 | 21.5 | 26.4 | 32.5 | 36.0 | 4.8 | 6.4 | 8.0 | 10.1 | 11.4 |
FCGF [22] | 34.1 | 42.5 | 48.7 | 54.1 | 56.8 | 11.6 | 14.8 | 17.2 | 20.0 | 21.4 |
D3Feat [25] | 41.8 | 41.5 | 40.4 | 38.8 | 39.0 | 15.0 | 14.6 | 14.0 | 13.1 | 13.2 |
SpinNet [44] | 27.6 | 33.9 | 39.4 | 44.7 | 47.5 | 11.1 | 13.8 | 16.3 | 19.0 | 20.5 |
Predator [26] | 49.3 | 54.1 | 57.1 | 58.4 | 58.0 | 25.8 | 27.5 | 28.3 | 28.1 | 26.7 |
YOHO [45] | 41.2 | 46.4 | 55.7 | 60.7 | 64.4 | 15.0 | 18.2 | 22.6 | 23.3 | 25.9 |
CoFiNet [30] | 52.2 | 52.2 | 51.9 | 51.2 | 49.8 | 26.9 | 26.8 | 26.7 | 25.9 | 24.4 |
GeoTransformer [29] | 85.1 | 82.2 | 76.0 | 75.2 | 71.9 | 57.7 | 52.9 | 46.2 | 45.3 | 43.5 |
Ours | 86.8 | 85.6 | 83.8 | 79.3 | 73.0 | 60.0 | 58.2 | 55.6 | 49.4 | 44.2 |
Registration Recall (%) ↑ | ||||||||||
PerfectMatch [21] | 50.8 | 67.6 | 71.4 | 76.2 | 78.4 | 11.0 | 17.0 | 23.3 | 29.0 | 33.0 |
FCGF [22] | 71.4 | 81.6 | 83.3 | 84.7 | 85.1 | 26.8 | 35.4 | 38.2 | 41.7 | 40.1 |
D3Feat [25] | 77.9 | 82.4 | 83.4 | 84.5 | 81.6 | 39.1 | 43.8 | 46.9 | 42.7 | 37.2 |
SpinNet [44] | 70.2 | 83.5 | 85.5 | 86.6 | 88.6 | 26.8 | 39.8 | 48.3 | 54.9 | 59.8 |
Predator [26] | 86.6 | 88.5 | 90.6 | 89.9 | 89.0 | 58.1 | 60.8 | 62.4 | 61.2 | 59.8 |
YOHO [45] | 84.5 | 88.6 | 89.1 | 90.3 | 90.8 | 48.0 | 56.5 | 63.2 | 65.5 | 65.2 |
CoFiNet [30] | 87.0 | 87.4 | 88.4 | 88.9 | 89.3 | 61.0 | 63.1 | 64.2 | 66.2 | 67.5 |
GeoTransformer [29] | 91.2 | 91.4 | 91.8 | 91.8 | 92.0 | 73.5 | 74.1 | 74.2 | 74.8 | 75.0 |
Ours | 91.3 | 91.5 | 91.9 | 93.0 | 91.9 | 70.2 | 70.5 | 71.4 | 71.5 | 71.3 |
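The three metrics in the table above can be computed from putative correspondences and the ground-truth transform. The sketch below uses our own names (`inlier_ratio`, `registration_recall`) and the thresholds commonly used on 3DMatch (10 cm inlier distance, 0.2 m RMSE), which are assumptions rather than quotes from the paper:

```python
import numpy as np

def inlier_ratio(src, tgt, R_gt, t_gt, tau=0.1):
    """Fraction of putative correspondences whose residual under the
    ground-truth transform is below tau (assumed 10 cm on 3DMatch)."""
    residual = np.linalg.norm(src @ R_gt.T + t_gt - tgt, axis=1)
    return float(np.mean(residual < tau))

def registration_recall(rmse_list, thresh=0.2):
    """A pair counts as successfully registered when its correspondence
    RMSE is below thresh (assumed 0.2 m on 3DMatch); recall is the
    fraction of successful pairs over the whole test set."""
    return float(np.mean(np.asarray(rmse_list) < thresh))
```

Feature matching recall is defined analogously: the fraction of point cloud pairs whose inlier ratio exceeds a fixed threshold (typically 5%).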
Model | Estimator | Samples | 3DMatch RR (%) ↑ | 3DLoMatch RR (%) ↑ |
---|---|---|---|---|
FCGF [22] | RANSAC-50k | 5000 | 85.1 | 40.1 |
D3Feat [25] | RANSAC-50k | 5000 | 81.6 | 37.2 |
SpinNet [44] | RANSAC-50k | 5000 | 88.6 | 59.8 |
Predator [26] | RANSAC-50k | 5000 | 89.0 | 59.8 |
CoFiNet [30] | RANSAC-50k | 5000 | 89.3 | 67.5 |
GeoTransformer [29] | RANSAC-50k | 5000 | 92.0 | 75.0 |
Ours | RANSAC-50k | 5000 | 91.9 | 71.3 |
FCGF [22] | weighted SVD | 250 | 42.1 | 3.9 |
D3Feat [25] | weighted SVD | 250 | 37.4 | 2.8 |
SpinNet [44] | weighted SVD | 250 | 34.0 | 2.5 |
Predator [26] | weighted SVD | 250 | 50.0 | 6.4 |
CoFiNet [30] | weighted SVD | 250 | 64.6 | 21.6 |
GeoTransformer [29] | weighted SVD | 250 | 86.7 | 60.5 |
Ours | weighted SVD | 250 | 87.4 | 58.6 |
CoFiNet [30] | LGR | all | 87.6 | 64.8 |
GeoTransformer [29] | LGR | all | 91.5 | 74.0 |
Ours | MLGR | all | 91.8 | 72.0 |
Model | Estimator | Samples | 3DMatch FMR (%) ↑ | 3DMatch IR (%) ↑ | 3DMatch RR (%) ↑ |
---|---|---|---|---|---|
Baseline (without LG-Net) | RANSAC-50k | all | 97.8 | 70.9 | 92.0 |
Ours (with LG-Net) | RANSAC-50k | all | 97.8 | 71.4 | 92.2 |
Baseline (without LG-Net) | LGR | all | 97.7 | 70.3 | 91.5 |
Ours (with LG-Net) | LGR | all | 97.8 | 70.8 | 91.4 |
Baseline (without LG-Net) | MLGR (ours) | all | 98.1 | 71.4 | 91.8 |
Ours (with LG-Net) | MLGR (ours) | all | 98.3 | 71.9 | 91.8 |
Model | Estimator | Samples | KITTI RTE (cm) ↓ | KITTI RRE (°) ↓ | KITTI RR (%) ↑ |
---|---|---|---|---|---|
Baseline (without LG-Net) | RANSAC-50k | all | 7.4 | 0.27 | 99.8 |
Ours (with LG-Net) | RANSAC-50k | all | 6.8 | 0.22 | 99.8 |
Baseline (without LG-Net) | LGR | all | 6.8 | 0.24 | 99.8 |
Ours (with LG-Net) | LGR | all | 6.8 | 0.22 | 99.6 |
Baseline (without LG-Net) | MLGR (ours) | all | 6.6 | 0.23 | 99.8 |
Ours (with LG-Net) | MLGR (ours) | all | 6.4 | 0.20 | 99.8 |
Model | 3DMatch RR (%) ↑ | 3DLoMatch RR (%) ↑ |
---|---|---|
 | 90.7 | 71.2 |
 | 91.8 | 72.0 |
 | 90.2 | 70.3 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chen, Y.; Mei, Y.; Yu, B.; Xu, W.; Wu, Y.; Zhang, D.; Yan, X. A Robust Multi-Local to Global with Outlier Filtering for Point Cloud Registration. Remote Sens. 2023, 15, 5641. https://doi.org/10.3390/rs15245641