Detail Preserved Surface Reconstruction from Point Cloud
Figure 1. An example of the application of the proposed method in the field of cultural heritage digitalization: (a) point cloud; (b) mesh.
Figure 2. Pipeline of the proposed method in 2D. From left to right: the input point cloud, the Delaunay tetrahedra, the *s*-*t* graph, the energy minimization result, and the final surface mesh.
Figure 3. Typical visibility model and soft visibility model in 2D. (a) Typical visibility model; (b) soft visibility model. For a single line of sight *v*, the figure shows how weight (energy) is assigned to the tetrahedron containing the camera center, the end tetrahedron, and the facets intersected by (*c*, *p*) or its extension.
Figure 4. Our visibility model and end tetrahedra comparison in 2D. (a) In our visibility model, for a single line of sight *v*, the assignment of weight (energy) to the tetrahedron containing the camera center, the end tetrahedron, and the facets intersected by (*c*, *p*) is shown. (b) From left to right: a typical end tetrahedron of noise points, and that of true surface points on densely sampled surfaces.
Figure 5. Surface reconstruction without and with the likelihood energy. From left to right: (a) point cloud with heavy noise; (b) reconstructed mesh without the likelihood energy; (c) reconstructed mesh with the likelihood energy.
Figure 6. Free-space support analysis. The four graphs show the ratios of the numbers of outside and inside tetrahedra in different percentiles of all free-space support scores. The four datasets are: (a) Fountain-P11 [3]; (b) Herz-Jesu-P8 [3]; (c) Temple-P312 [1]; (d) scan23-P49 [2].
Figure 7. Free-space support threshold evaluation. The true positive rates and false positive rates on the same four datasets as in Figure 6 are evaluated under different free-space support thresholds.
Figure 8. Object edges in the reconstructed meshes, the original image, and the depth map. From left to right: (a) the object edges in surface meshes reconstructed by the method in [5] (top) and our method (bottom) without the dense visibility technique; (b) the original image and the corresponding depth map from a similar viewpoint; (c) the object edges in surface meshes reconstructed by the method in [5] (top) and our method (bottom) with the dense visibility technique.
Figure 9. The modified version of our visibility model in 2D. In this visibility model, for a single line of sight *v*, the assignment of weight (energy) to the tetrahedron containing the camera center, the end tetrahedron, and the facets intersected by (*c*, *p*) or its extension is shown.
Figure 10. Results of the five methods on the MVS dataset [2]. GT is the reference model; Tol is Tola et al. [27]; Fur is Furukawa and Ponce [29]; Cam is Campbell et al. [24]; Jan is Jancosek and Pajdla [5]; Our is the proposed method; ∗_Sur is the surface generated by the Poisson surface reconstruction method [30].
Figure 11. Quantitative evaluation (lower is better) of the methods on the MVS dataset [2]. The abbreviations of the methods are given in Figure 10. In addition, OMVS is OpenMVS; Our∗ is the proposed method without the dense visibility technique; ∗_Pts is the point cloud generated by method ∗.
Figure 12. Detailed views of three methods on the MVS dataset [2]. From left to right: (a) the point cloud generated by OpenMVS; (b) the mesh of Jancosek and Pajdla [5]; (c) the mesh of the proposed method without the dense visibility technique; (d) the mesh of the proposed method. The even rows give the evaluation results (lower is better) of the corresponding local models in the odd rows, computed with the method in [41]. The unit is mm for all numbers.
Figure 13. Results of the proposed method on four scenes of the training set of the Tanks and Temples dataset [40]. From left to right: (a) the input images; (b) the precision of the model generated by the proposed method; (c) the recall of the model generated by the proposed method; (d) the evaluation results (higher is better) of the models generated by Colmap [38] and the three other methods described in Figure 10.
Figure 14. Detailed views of three methods on four scenes of the training set of the Tanks and Temples dataset [40]. From left to right: (a) the point cloud generated by OpenMVS; (b) the mesh of Jancosek and Pajdla [5]; (c) the mesh of the proposed method without the dense visibility technique; (d) the mesh of the proposed method. The even rows give the evaluation results (lower is better) of the corresponding local models in the odd rows, computed with the method in [41]. The unit is m for all numbers.
Figure 15. Detailed views of three methods on the intermediate set of the Tanks and Temples dataset [40]. From top to bottom: Family, Francis, Horse, Lighthouse, M60, Panther, Playground, and Train. From left to right: (a) the point cloud generated by OpenMVS; (b) the mesh of Jancosek and Pajdla [5]; (c) the mesh of the proposed method without the dense visibility technique; (d) the mesh of the proposed method.
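The pipeline in Figure 2 (Delaunay tetrahedralization, *s*-*t* graph, energy minimization, surface extraction) can be sketched compactly. Below is a minimal 2D toy version, assuming SciPy's `Delaunay` and NetworkX's `minimum_cut`; the per-cell energies `e_in`/`e_out` are caller-supplied placeholders rather than the paper's visibility and likelihood terms, and the infinite cells outside the convex hull are ignored.

```python
import networkx as nx
from scipy.spatial import Delaunay

def extract_surface(points, e_out, e_in, lam=1.0):
    """Toy 2D version of the pipeline in Figure 2: triangulate the points,
    label every Delaunay cell inside/outside via an s-t minimum cut, and
    return the facets (edges, in 2D) separating the two labels."""
    tri = Delaunay(points)
    g = nx.DiGraph()
    for c in range(len(tri.simplices)):
        g.add_edge("s", c, capacity=float(e_in[c]))   # paid if cell c is labeled inside
        g.add_edge(c, "t", capacity=float(e_out[c]))  # paid if cell c is labeled outside
        for nb in tri.neighbors[c]:
            if nb != -1:                              # smoothness across the shared facet
                g.add_edge(c, int(nb), capacity=lam)
    _, (_outside, inside) = nx.minimum_cut(g, "s", "t")
    surface = []
    for c in range(len(tri.simplices)):
        if c not in inside:
            continue
        for i, nb in enumerate(tri.neighbors[c]):
            if nb != -1 and nb not in inside:
                # facet between an inside and an outside cell: in SciPy's
                # convention, the facet shared with neighbors[c][i] is the
                # one opposite vertex i of simplex c
                surface.append([v for j, v in enumerate(tri.simplices[c]) if j != i])
    return surface
```

With `e_out` large for cells swept by many lines of sight and `e_in` large for cells lying behind observed points, the cut settles on a closed contour through the point samples; choosing exactly these energies is what Sections 3 to 6 of the paper are about.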
Abstract
1. Introduction
2. Related Work
3. Visibility Models and Energies
3.1. Existing Visibility Models
3.2. Our Proposed Visibility Model
4. Likelihood Energy for Efficient Noise Filtering
4.1. Likelihood Energy
4.2. Implementation of the Likelihood Energy
5. Surface Reconstruction with Energy Minimization
6. Dense Visibility for Edge Preservation
7. Experimental Results
8. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
1. Seitz, S.M.; Curless, B.; Diebel, J.; Scharstein, D.; Szeliski, R. A comparison and evaluation of multi-view stereo reconstruction algorithms. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; pp. 519–528.
2. Jensen, R.; Dahl, A.; Vogiatzis, G.; Tola, E.; Aanæs, H. Large scale multi-view stereopsis evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 406–413.
3. Strecha, C.; von Hansen, W.; Van Gool, L.; Fua, P.; Thoennessen, U. On benchmarking camera calibration and multi-view stereo for high resolution imagery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
4. Sinha, S.N.; Mordohai, P.; Pollefeys, M. Multi-view stereo via graph cuts on the dual of an adaptive tetrahedral mesh. In Proceedings of the 11th IEEE International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
5. Jancosek, M.; Pajdla, T. Exploiting visibility information in surface reconstruction to preserve weakly supported surfaces. Int. Sch. Res. Not. 2014, 2014, 798595.
6. Häne, C.; Zach, C.; Cohen, A.; Pollefeys, M. Dense semantic 3D reconstruction. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1730–1743.
7. Vu, H.H.; Labatut, P.; Pons, J.P.; Keriven, R. High accuracy and visibility-consistent dense multiview stereo. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 889–901.
8. Zhou, Y.; Shen, S.; Hu, Z. A new visibility model for surface reconstruction. In Proceedings of the CCF Chinese Conference on Computer Vision, Tianjin, China, 11–14 October 2017; pp. 145–156.
9. Vogiatzis, G.; Torr, P.H.; Cipolla, R. Multi-view stereo via volumetric graph-cuts. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 391–398.
10. Tran, S.; Davis, L. 3D surface reconstruction using graph cuts with surface constraints. In Proceedings of the European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; pp. 219–231.
11. Lempitsky, V.; Boykov, Y.; Ivanov, D. Oriented visibility for multiview reconstruction. In Proceedings of the European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; pp. 226–238.
12. Labatut, P.; Pons, J.P.; Keriven, R. Efficient multi-view reconstruction of large-scale scenes using interest points, Delaunay triangulation and graph cuts. In Proceedings of the 11th IEEE International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
13. Labatut, P.; Pons, J.P.; Keriven, R. Robust and efficient surface reconstruction from range data. Comput. Graph. Forum 2009, 28, 2275–2290.
14. Laurentini, A. The visual hull concept for silhouette-based image understanding. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 150–162.
15. Esteban, C.H.; Schmitt, F. Silhouette and stereo fusion for 3D object modeling. Comput. Vis. Image Underst. 2004, 96, 367–392.
16. Starck, J.; Miller, G.; Hilton, A. Volumetric stereo with silhouette and feature constraints. In Proceedings of the British Machine Vision Conference, Edinburgh, UK, 4–7 September 2006; pp. 1189–1198.
17. Franco, J.S.; Boyer, E. Fusion of multiview silhouette cues using a space occupancy grid. In Proceedings of the 10th IEEE International Conference on Computer Vision, Beijing, China, 17–21 October 2005; pp. 1747–1753.
18. Guan, L.; Franco, J.S.; Pollefeys, M. 3D occlusion inference from silhouette cues. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8.
19. Kutulakos, K.N.; Seitz, S.M. A theory of shape by space carving. Int. J. Comput. Vis. 2000, 38, 199–218.
20. Broadhurst, A.; Drummond, T.W.; Cipolla, R. A probabilistic framework for space carving. In Proceedings of the 8th IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; pp. 388–393.
21. Yang, R.; Pollefeys, M.; Welch, G. Dealing with textureless regions and specular highlights: A progressive space carving scheme using a novel photo-consistency measure. In Proceedings of the 9th IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; pp. 576–584.
22. Jin, H.; Soatto, S.; Yezzi, A.J. Multi-view stereo reconstruction of dense shape and complex appearance. Int. J. Comput. Vis. 2005, 63, 175–189.
23. Pons, J.P.; Keriven, R.; Faugeras, O. Multi-view stereo reconstruction and scene flow estimation with a global image-based matching score. Int. J. Comput. Vis. 2007, 72, 179–193.
24. Campbell, N.D.; Vogiatzis, G.; Hernández, C.; Cipolla, R. Using multiple hypotheses to improve depth-maps for multi-view stereo. In Proceedings of the European Conference on Computer Vision, Marseille, France, 12–18 October 2008; pp. 766–779.
25. Goesele, M.; Curless, B.; Seitz, S.M. Multi-view stereo revisited. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; pp. 2402–2409.
26. Hiep, V.H.; Keriven, R.; Labatut, P.; Pons, J.P. Towards high-resolution large-scale multi-view stereo. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1430–1437.
27. Tola, E.; Strecha, C.; Fua, P. Efficient large-scale multi-view stereo for ultra high-resolution image sets. Mach. Vis. Appl. 2012, 23, 903–920.
28. Shen, S. Accurate multiple view 3D reconstruction using patch-based stereo for large-scale scenes. IEEE Trans. Image Process. 2013, 22, 1901–1914.
29. Furukawa, Y.; Ponce, J. Accurate, dense, and robust multiview stereopsis. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1362–1376.
30. Kazhdan, M.; Hoppe, H. Screened Poisson surface reconstruction. ACM Trans. Graph. 2013, 32, 29.
31. Bódis-Szomorú, A.; Riemenschneider, H.; Van Gool, L. Superpixel meshes for fast edge-preserving surface reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 2011–2020.
32. Jancosek, M.; Pajdla, T. Multi-view reconstruction preserving weakly-supported surfaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 3121–3128.
33. Häne, C.; Zach, C.; Cohen, A.; Angst, R.; Pollefeys, M. Joint 3D scene reconstruction and class segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 97–104.
34. Blaha, M.; Vogel, C.; Richard, A.; Wegner, J.D.; Pock, T.; Schindler, K. Large-scale semantic 3D reconstruction: An adaptive multi-resolution model for multi-class volumetric labeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 3176–3184.
35. Cherabier, I.; Häne, C.; Oswald, M.R.; Pollefeys, M. Multi-label semantic 3D reconstruction using voxel blocks. In Proceedings of the 4th International Conference on 3D Vision, Stanford, CA, USA, 25–28 October 2016; pp. 601–610.
36. Savinov, N.; Häne, C.; Ladicky, L.; Pollefeys, M. Semantic 3D reconstruction with continuous regularization and ray potentials using a visibility consistency constraint. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 5460–5469.
37. Ummenhofer, B.; Brox, T. Global, dense multiscale reconstruction for a billion points. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1341–1349.
38. Schönberger, J.L.; Frahm, J.M. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113.
39. Goldberg, A.V.; Hed, S.; Kaplan, H.; Tarjan, R.E.; Werneck, R.F. Maximum flows by incremental breadth-first search. In Proceedings of the European Symposium on Algorithms, Saarbrücken, Germany, 5–9 September 2011; pp. 457–468.
40. Knapitsch, A.; Park, J.; Zhou, Q.Y.; Koltun, V. Tanks and Temples: Benchmarking large-scale scene reconstruction. ACM Trans. Graph. 2017, 36, 78.
41. Jurjević, L.; Gašparović, M. 3D data acquisition based on OpenCV for close-range photogrammetry applications. In Proceedings of the ISPRS Hannover Workshop: HRIGI 17–CMRT 17–ISA 17–EuroCOW 17, Hannover, Germany, 6–9 June 2017.
42. Sapirstein, P. Accurate measurement with photogrammetry at large sites. J. Archaeol. Sci. 2016, 66, 137–145.
| Symbol | Meaning |
|---|---|
| *v* | line of sight |
| *c* | camera center (a 3D point) |
| *p* | 3D point |
| *T* | tetrahedron |
| – | label of tetrahedron *T* |
| – | unary energy of the label assignment of tetrahedron *T* |
| – | pair-wise energy of the label assignments of two adjacent tetrahedra |
| – | weight of a line of sight *v* |
| – | number of tetrahedra intersected by a line of sight *v* |
| *d* | distance between point *p* and the intersection point of a segment and a facet |
| – | scale factor |
| *r* | radius of the circumsphere of the end tetrahedron |
| – | energy of tetrahedron *T* being labeled as outside |
| – | energy of tetrahedron *T* being labeled as inside |
| – | free-space support of tetrahedron *T* |
| – | transfer constant |
| – | balance factor |
| – | energy |
| – | weight of a facet *f* |
| – | angle |
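The visibility terms in Figures 3, 4 and 9 all reduce to accumulating capacities along each line of sight *v* = (*c*, *p*). The sketch below shows that bookkeeping for one ray, assuming NetworkX for the graph and the soft-visibility transfer w(d) = 1 − exp(−d²/(2σ²)) of Labatut et al. [13] (the paper's own model in Section 3.2 changes how the end tetrahedron and the weights are chosen); the helper names and the ray-walking step that produces the crossed-facet list are illustrative, not from the paper.

```python
import math
import networkx as nx

def soft_weight(d, sigma):
    """Soft-visibility transfer of Labatut et al. [13]: rises from 0 toward 1
    with the distance d between point p and a facet intersection; sigma is
    the scale factor from the table above."""
    return 1.0 - math.exp(-(d * d) / (2.0 * sigma * sigma))

def bump(g, u, v, w):
    """Accumulate capacity w on the directed edge (u, v) of the s-t graph g."""
    if g.has_edge(u, v):
        g[u][v]["capacity"] += w
    else:
        g.add_edge(u, v, capacity=w)

def add_line_of_sight(g, w_v, cam_cell, crossed, end_cell, d_end, sigma):
    """One line of sight v = (c, p) with weight w_v (cf. Figure 3):
    - the cell containing the camera center gets outside (source) evidence,
    - each facet crossed by (c, p) gets a directed penalty against labeling
      the cell behind it inside while the cell in front is outside,
    - the end cell behind p gets inside (sink) evidence.
    'crossed' lists (cell_in_front, cell_behind, d) triples along the ray;
    producing it requires walking the ray through the tetrahedralization
    (not shown here)."""
    bump(g, "s", cam_cell, w_v)
    for front, behind, d in crossed:
        bump(g, front, behind, w_v * soft_weight(d, sigma))
    bump(g, end_cell, "t", w_v * soft_weight(d_end, sigma))

# Example: one ray crossing two facets before reaching its end tetrahedron.
g = nx.DiGraph()
add_line_of_sight(g, w_v=1.0, cam_cell=0,
                  crossed=[(0, 1, 0.9), (1, 2, 0.4)],
                  end_cell=3, d_end=0.5, sigma=0.5)
```

Note how the transfer makes facets near *p* almost free to cut while facets far from *p* pay nearly the full weight, which is what lets the soft model tolerate noisy point positions.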
SceneID | 1 | 2 | 3 | 4 | 5 | 6 | 9 | 10 | 15 | 21 | 23 | 24 | 29 | 36 | 44 | 61 | 110 | 114 | 118 | 122 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Tol_Pts | 1.0 | 1.1 | 0.9 | 0.7 | 0.9 | 1.0 | 1.0 | 0.7 | 1.0 | 1.0 | 1.1 | 0.8 | 0.7 | 1.1 | 0.9 | 0.7 | 0.7 | 1.2 | 1.0 | 0.9 |
Tol_Vtx | 2.1 | 2.2 | 2.1 | 1.8 | 1.9 | 2.3 | 2.6 | 1.7 | 2.3 | 2.1 | 2.3 | 2.4 | 1.1 | 2.0 | 1.5 | 1.6 | 1.6 | 2.1 | 2.1 | 1.8 |
Tol_Fcs | 4.2 | 4.4 | 4.2 | 3.5 | 3.7 | 4.6 | 5.2 | 3.3 | 4.7 | 4.3 | 4.5 | 4.8 | 2.1 | 4.1 | 3.1 | 3.2 | 3.2 | 4.2 | 4.2 | 3.5 |
Fur_Pts | 2.3 | 2.6 | 2.5 | 2.2 | 2.2 | 2.4 | 2.4 | 1.9 | 2.5 | 3.0 | 3.1 | 2.5 | 2.3 | 2.7 | 2.7 | 1.6 | 2.2 | 2.6 | 2.6 | 2.4 |
Fur_Vtx | 1.1 | 1.1 | 1.2 | 0.8 | 0.8 | 0.7 | 1.0 | 0.7 | 2.5 | 2.9 | 2.8 | 1.0 | 2.1 | 1.9 | 1.7 | 0.9 | 1.8 | 1.5 | 1.7 | 1.4 |
Fur_Fcs | 2.2 | 2.2 | 2.4 | 1.6 | 1.6 | 1.5 | 2.0 | 1.5 | 4.9 | 5.8 | 5.5 | 1.9 | 4.2 | 3.8 | 3.4 | 1.7 | 3.6 | 2.9 | 3.3 | 2.7 |
Cam_Pts | 23.6 | 29.6 | 22.2 | 20.8 | 20.2 | 23.6 | 19.8 | 13.0 | 22.0 | 24.0 | 29.5 | 20.2 | 16.5 | 29.5 | 20.2 | 7.6 | 19.9 | 26.1 | 30.2 | 21.7 |
Cam_Vtx | 4.2 | 4.6 | 8.1 | 4.8 | 6.8 | 6.7 | 16.0 | 2.6 | 12.0 | 8.9 | 4.1 | 2.6 | 3.3 | 3.2 | 5.1 | 3.7 | 6.3 | 5.1 | 31.2 | 6.1 |
Cam_Fcs | 8.5 | 9.2 | 16.3 | 9.5 | 13.5 | 13.4 | 32.0 | 5.1 | 24.0 | 17.8 | 8.2 | 5.2 | 6.6 | 6.3 | 10.2 | 7.3 | 12.5 | 10.2 | 62.4 | 12.1 |
OMVS_Pts | 11.8 | 11.0 | 12.2 | 10.1 | 11.8 | 10.9 | 9.1 | 8.3 | 9.2 | 10.1 | 12.2 | 9.0 | 7.8 | 11.3 | 9.8 | 8.9 | 8.0 | 13.1 | 8.8 | 8.4 |
Jan_Vtx | 0.6 | 0.6 | 0.7 | 0.5 | 0.6 | 0.6 | 0.6 | 0.5 | 0.6 | 0.8 | 0.7 | 0.6 | 0.6 | 0.7 | 0.7 | 0.5 | 0.5 | 0.7 | 0.6 | 0.6 |
Jan_Fcs | 1.3 | 1.2 | 1.4 | 1.1 | 1.2 | 1.2 | 1.2 | 1.0 | 1.3 | 1.6 | 1.4 | 1.2 | 1.2 | 1.5 | 1.5 | 0.9 | 1.0 | 1.3 | 1.2 | 1.1 |
Our*_Vtx | 1.2 | 1.1 | 1.1 | 1.0 | 0.9 | 1.0 | 1.0 | 0.9 | 1.1 | 1.3 | 1.3 | 1.2 | 1.0 | 1.3 | 1.1 | 0.7 | 1.0 | 1.1 | 0.9 | 1.0 |
Our*_Fcs | 2.5 | 2.3 | 2.2 | 2.0 | 1.8 | 2.0 | 2.0 | 1.9 | 2.2 | 2.6 | 2.6 | 2.4 | 2.0 | 2.6 | 2.3 | 1.5 | 1.9 | 2.2 | 1.8 | 1.9 |
Our_Vtx | 1.6 | 1.6 | 1.4 | 1.0 | 1.2 | 1.4 | 1.3 | 1.2 | 1.5 | 1.5 | 1.7 | 1.3 | 1.3 | 1.3 | 1.2 | 0.7 | 1.0 | 1.5 | 1.2 | 1.1 |
Our_Fcs | 3.3 | 3.2 | 2.9 | 2.0 | 2.5 | 2.9 | 2.6 | 2.4 | 3.0 | 3.1 | 3.4 | 2.6 | 2.7 | 2.6 | 2.5 | 1.4 | 2.0 | 3.1 | 2.4 | 2.3 |
Barn

| Type | Colmap | OMVS_Pts | Jan | Our* | Our |
|---|---|---|---|---|---|
| Pts | 6.2M | 35.4M | 6.3M | 5.8M | 6.7M |
| mean | 19.24 | 17.70 | 10.44 | 11.14 | 10.23 |
| 95.5%< | 59.93 | 34.75 | 28.17 | 29.42 | 26.04 |
| 99.7%< | 221.83 | 181.46 | 118.49 | 120.25 | 113.45 |
| Precision | 1:800 | 1:1000 | 1:1400 | 1:1400 | 1:1500 |

Ignatius

| Type | Colmap | OMVS_Pts | Jan | Our* | Our |
|---|---|---|---|---|---|
| Pts | 1.3M | 13.1M | 3.5M | 2.9M | 3.3M |
| mean | 2.66 | 3.55 | 2.51 | 2.90 | 2.14 |
| 95.5%< | 7.96 | 12.15 | 6.33 | 7.69 | 5.15 |
| 99.7%< | 39.18 | 35.91 | 38.55 | 38.49 | 34.38 |
| Precision | 1:500 | 1:600 | 1:800 | 1:800 | 1:900 |

Courthouse

| Type | Colmap | OMVS_Pts | Jan | Our* | Our |
|---|---|---|---|---|---|
| Pts | 17.3M | 63.4M | 14.4M | 13.7M | 15.0M |
| mean | 97.56 | 247.94 | 93.15 | 96.84 | 91.88 |
| 95.5%< | 315.51 | 597.98 | 302.36 | 311.29 | 294.31 |
| 99.7%< | 2261.96 | 5108.95 | 2086.73 | 2117.38 | 2029.45 |
| Precision | 1:300 | 1:200 | 1:400 | 1:400 | 1:400 |

Truck

| Type | Colmap | OMVS_Pts | Jan | Our* | Our |
|---|---|---|---|---|---|
| Pts | 3.8M | 22.5M | 3.7M | 3.0M | 3.5M |
| mean | 8.07 | 7.29 | 6.65 | 7.13 | 6.47 |
| 95.5%< | 24.67 | 23.62 | 21.48 | 23.87 | 21.46 |
| 99.7%< | 154.56 | 174.76 | 147.83 | 151.72 | 143.58 |
| Precision | 1:400 | 1:400 | 1:600 | 1:600 | 1:700 |
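The mean, 95.5%<, and 99.7%< rows read as the mean and the 95.5%/99.7% quantiles of per-point distances from each reconstruction to the ground truth (the two quantiles echo the 2σ and 3σ levels); that reading, and the sketch below using SciPy's `cKDTree`, are assumptions from the table layout rather than the paper's stated protocol.

```python
import numpy as np
from scipy.spatial import cKDTree

def accuracy_stats(recon_pts, gt_pts):
    """Nearest-neighbor distance from every reconstructed point (meshes are
    assumed to be sampled into points beforehand) to the ground-truth cloud,
    summarized like the rows above: mean, 95.5% and 99.7% quantiles."""
    d, _ = cKDTree(gt_pts).query(recon_pts, k=1)
    return {"mean": float(d.mean()),
            "95.5%<": float(np.percentile(d, 95.5)),
            "99.7%<": float(np.percentile(d, 99.7))}
```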
Method | Family | Francis | Horse | Lighthouse | M60 | Panther | Playground | Train | Mean |
---|---|---|---|---|---|---|---|---|---|
PMVSNet | 70.04 | 44.64 | 40.22 | 65.20 | 55.08 | 55.17 | 60.37 | 54.29 | 55.62 |
Altizure-HKUST | 74.60 | 61.30 | 38.48 | 61.48 | 54.93 | 53.32 | 56.21 | 49.47 | 56.22 |
ACMH | 69.99 | 49.45 | 45.12 | 59.04 | 52.64 | 52.37 | 58.34 | 51.61 | 54.82 |
Dense R-MVSNet | 73.01 | 54.46 | 43.42 | 43.88 | 46.80 | 46.69 | 50.87 | 45.25 | 50.55 |
R-MVSNet | 69.96 | 46.65 | 32.59 | 42.95 | 51.88 | 48.80 | 52.00 | 42.38 | 48.40 |
i23dMVS4 | 56.64 | 33.75 | 28.40 | 48.42 | 39.23 | 44.87 | 48.34 | 37.88 | 42.19 |
MVSNet | 55.99 | 28.55 | 25.07 | 50.79 | 53.96 | 50.86 | 47.90 | 34.69 | 43.48 |
COLMAP | 50.41 | 22.25 | 25.63 | 56.43 | 44.83 | 46.97 | 48.53 | 42.04 | 42.14 |
Pix4D | 64.45 | 31.91 | 26.43 | 54.41 | 50.58 | 35.37 | 47.78 | 34.96 | 43.24 |
i23dMVS_3 | 56.21 | 33.14 | 28.92 | 47.74 | 40.29 | 44.20 | 46.93 | 37.66 | 41.89 |
OpenMVG + OpenMVS | 58.86 | 32.59 | 26.25 | 43.12 | 44.73 | 46.85 | 45.97 | 35.27 | 41.71 |
OpenMVG + MVE | 49.91 | 28.19 | 20.75 | 43.35 | 44.51 | 44.76 | 36.58 | 35.95 | 38.00 |
OpenMVG + SMVS | 31.93 | 19.92 | 15.02 | 39.38 | 36.51 | 41.61 | 35.89 | 25.12 | 30.67 |
Theia-I + OpenMVS | 48.11 | 19.38 | 20.66 | 30.02 | 30.37 | 30.79 | 23.65 | 20.46 | 27.93 |
OpenMVG + PMVS | 41.03 | 17.70 | 12.83 | 36.68 | 35.93 | 33.20 | 31.78 | 28.10 | 29.66 |
Jan | 62.69 | 47.44 | 34.52 | 57.94 | 38.67 | 47.06 | 55.26 | 39.90 | 47.94 |
Our* | 62.46 | 46.68 | 32.61 | 57.66 | 33.66 | 44.25 | 52.40 | 38.25 | 46.00 |
Our | 65.21 | 49.41 | 35.41 | 59.04 | 37.57 | 47.85 | 56.77 | 41.28 | 49.07 |
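The per-scene numbers above are F-scores (in percent) at the benchmark's per-scene distance threshold τ, combining precision (share of reconstructed points within τ of the ground truth) and recall (share of ground-truth points within τ of the reconstruction) [40]. A minimal sketch of that metric, leaving out the official evaluation's resampling and alignment steps:

```python
from scipy.spatial import cKDTree

def f_score(recon_pts, gt_pts, tau):
    """Tanks and Temples style F-score [40]: harmonic mean of precision and
    recall at distance threshold tau, both expressed in percent."""
    precision = 100.0 * (cKDTree(gt_pts).query(recon_pts)[0] < tau).mean()
    recall = 100.0 * (cKDTree(recon_pts).query(gt_pts)[0] < tau).mean()
    return 2.0 * precision * recall / max(precision + recall, 1e-12)
```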
Scene | Family | Francis | Horse | Lighthouse | M60 | Panther | Playground | Train |
---|---|---|---|---|---|---|---|---|
OMVS_Pts | 12.0 | 17.8 | 9.0 | 28.4 | 24.8 | 26.0 | 28.6 | 31.9 |
Jan_Vtx | 2.0 | 1.9 | 1.5 | 2.8 | 5.1 | 4.1 | 5.7 | 4.8 |
Jan_Fcs | 4.1 | 3.7 | 3.0 | 5.6 | 10.1 | 8.3 | 11.3 | 9.6 |
Our∗_Vtx | 1.6 | 1.2 | 1.2 | 1.7 | 3.6 | 2.9 | 4.1 | 3.2 |
Our∗_Fcs | 3.2 | 2.4 | 2.3 | 3.5 | 7.2 | 5.8 | 8.2 | 6.5 |
Our_Vtx | 2.5 | 2.7 | 2.1 | 3.3 | 5.3 | 5.4 | 6.9 | 6.3 |
Our_Fcs | 5.1 | 5.4 | 4.2 | 6.6 | 10.6 | 10.8 | 13.9 | 12.5 |