Automated Multi-Sensor 3D Reconstruction for the Web
Figure 1. Test site: the Puhos shopping mall in Helsinki. The project area is marked with a red circle. Image courtesy of the City of Helsinki.
Figure 2. The prepared TLS-based reference point cloud, based on 43 scans and consisting of 260,046,266 points in total.
Figure 3. The 3D reconstruction process in RealityCapture.
Figure 4. The resulting automatically created web-compatible 3D models: (a) photogrammetry; (b) TLS; and (c) hybrid.
Figure 5. Visual comparison of details in the (a) photogrammetry, (b) TLS, and (c) hybrid approaches. The photogrammetry model suffers from blurred details, whereas the texture data of the TLS model suffers from clear overexposure. For visualization purposes, the models are shown here as colored vertices without textures.
Figure 6. Quality issues in the textured 3D models. The photogrammetry-based model (a) suffers from holes on shiny and textureless surfaces such as taped windows. In the TLS-based model (b), the lack of data underneath the scanning stations causes circular patterns in the texture; in addition, illumination differences in the scene cause abrupt transitions between textured areas. Many of these problems are fixed in the hybrid model (c).
Figure 7. Ground floor surface deviations of all modeling approaches vs. the reference: (a) the photogrammetry approach; (b) the terrestrial laser scanning approach; and (c) the hybrid approach. The color scale for the M3C2 distance values is ±2.5 cm.
Figure 8. Distance values of the compared modeling approaches vs. the reference: photogrammetry (green), TLS (red) and hybrid (blue).
Figure 9. A histogram analysis of all 8-bit pixel values of all texture atlases for the three modeling approaches: photogrammetry (green), TLS (red) and hybrid (blue). The significant peak in the hybrid model (pixel value 95) is caused by the grey-colored empty space between the texture islands on the texture atlases; it has no perceivable impact on the visual quality of the model.
Figure 10. Results of the expert evaluation of visual quality for the three modeling approaches: photogrammetry (green), TLS (red) and hybrid (blue).
Figure 11. Visual comparison of raw images from TLS (a) and photogrammetry (b). The raw TLS image (a) clearly suffers from overexposure. The quality of the image data is transferred directly into texture information in the content creation process.
Abstract
1. Introduction
2. Materials and Methods
2.1. Case: The Puhos Shopping Mall
2.2. Data Acquisition Campaign
2.3. Data Pre-Processing
2.4. Multi-Source Photorealistic 3D Modeling for the Web
2.5. Geometric and Texture Quality Evaluation
- An initial segmentation of the ground floor area was performed.
- A final alignment to the TLS-based reference was performed using point-pair picking and ICP tools.
- A final segmentation of all models was performed to achieve one-to-one correspondence between the compared models, mitigating the effects of data completeness and removing the need for any cut-off distances in the analysis.
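The alignment steps above (a coarse alignment from picked point pairs, refined with ICP) can be illustrated with a minimal point-to-point ICP in pure NumPy. This is a sketch only: it is not the tool used in the study, it uses brute-force nearest-neighbor search (fine for small demo clouds, far too slow for hundreds of millions of points), and the toy point clouds and iteration count are invented for illustration.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Closed-form (Kabsch/Horn) rigid transform mapping paired src -> dst points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=30):
    """Repeatedly pair each source point with its nearest target point
    and re-estimate the rigid transform (point-to-point ICP)."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbors: (n_src, n_dst) distance matrix
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        R, t = best_fit_transform(cur, dst[d.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

# toy demo: recover a small known rotation + translation
rng = np.random.default_rng(0)
dst = rng.uniform(-5.0, 5.0, (200, 3))
theta = 0.02
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.2, -0.1, 0.1])   # misaligned copy of dst

aligned = icp(src, dst)
print("max alignment error:", float(np.abs(aligned - dst).max()))
```

Because the initial misalignment here is small relative to the point spacing, the nearest-neighbor correspondences are mostly correct from the start and the alignment converges to near machine precision; in practice a coarse pre-alignment (such as picked point pairs) serves exactly this purpose.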
2.6. Expert Evaluation on Visual Quality
3. Results
3.1. Computing Times
3.2. Geometric Quality
3.3. Texture Quality
3.4. Expert Evaluation on Visual Quality
4. Discussion
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Appendix A
- 1a. From the perspective of photorealism and visual appeal, which model did you like the best? (multiple-choice question)
- 1b. Why? Explain briefly. (open question with a text box)
- 2a. Which model has the best geometric quality? (multiple-choice question)
- 2b. Why? Explain briefly. (open question with a text box)
- 3a. Which model has the best texturing quality? (multiple-choice question)
- 3b. Why? Explain briefly. (open question with a text box)
Software | Photogrammetric 3D Reconstruction | Point Cloud Based 3D Meshing | Texturing |
---|---|---|---|
3DF Zephyr | x | x | x |
CloudCompare | x | ||
COLMAP | x | ||
Meshroom | x | x | |
Metashape | x | x | |
Pix4D | x | x | |
RealityCapture | x | x | x |
SEQUOIA | | x | x |
Scanner Specifications | Faro Focus 3D S120/Trimble TX5 |
---|---|
Scan rate | 976,000 points/s |
Range | 0.6–120 m |
Ranging error | ±2 mm at 10 m (90% reflectivity) |
Ranging noise | 0.6 mm at 10 m (90% reflectivity) |
Total image resolution | Up to 70 Mpix |
Scan Parameters | |
Scanning resolution setting | 12 mm at 10 m (one with 6 mm at 10 m) |
Number of scan stations | 22/21 |
Camera Specifications | Nikon D800E |
---|---|
Image resolution | 7360 × 4912 (36 Mpix) |
Sensor size | Full frame (35.9 × 24 mm) |
Lens | Nikkor AF-S 14–24 mm f/2.8G, focus and zoom locked at 14 mm
Focal length (fixed) | 14 mm |
F-stop (fixed) | f/8 |
Number of images | 433 |
Image file format | NEF (Nikon Electronic File) |
Alignment | Photogrammetry | TLS | Hybrid |
---|---|---|---|
Total input data file size | 6.5 GB | 21.1 GB | 27.6 GB |
Number of automatically registered images | 306/433 | - | 363/433 |
Number of automatically registered laser scans | - | 43/43 | 43/43 |
Number of tie points | 1,234,116 | 1,514,454 | 2,628,226 |
Mean projection error (pixels) | 0.416 | Not applicable 1 | 0.429 |
Metric scale | No | Yes | Yes |
Reconstruction | | | |
Number of vertices | 193,590,937 | 159,837,170 | 347,794,658 |
Number of polygons | 386,145,064 | 318,875,950 | 693,603,980 |
Final web-compatible models | | | |
Number of vertices | 249,380 | 232,146 | 227,664 |
Number of polygons | 500,000 | 500,000 | 500,000 |
Number of 4k textures | 10 | 10 | 10 |
 | Photogrammetry | TLS | Hybrid |
---|---|---|---|
Meshing time | 7 h 07 min 50 s | 0 h 43 min 06 s | 19 h 51 min 23 s |
Texturing time | 0 h 22 min 15 s | 0 h 01 min 51 s | 0 h 34 min 15 s |
Total time | 7 h 30 min 05 s | 0 h 44 min 57 s | 20 h 25 min 38 s |
 | Photogrammetry | TLS | Hybrid |
---|---|---|---|
Mean distance (signed) | 0.41 mm | −0.15 mm | −0.05 mm |
Std. dev. | 6.20 mm | 2.72 mm | 3.18 mm |
Number of observations | 35,897 | 20,104 | 20,741 |
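The deviation statistics in the table above come from an M3C2 comparison against the TLS reference. As a rough illustration of how a signed cloud-to-reference deviation along the surface normal is computed, here is a simplified NumPy sketch: a nearest-point projection onto a single normal, not the full M3C2 algorithm (which estimates local normals and averages points within projection cylinders). The flat floor patch, noise level, and systematic offset are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic reference: a 1 m x 1 m horizontal floor patch at z = 0
ref = np.column_stack([rng.uniform(0.0, 1.0, (1000, 2)), np.zeros(1000)])
normal = np.array([0.0, 0.0, 1.0])   # one shared normal for a flat patch

# synthetic compared model: same floor with 3 mm noise and a 0.5 mm offset
cmp_pts = ref + normal * (0.0005 + rng.normal(0.0, 0.003, (1000, 1)))

# nearest reference point for each compared point (brute force)
d = np.linalg.norm(cmp_pts[:, None, :] - ref[None, :, :], axis=2)
nearest = ref[d.argmin(axis=1)]

# signed deviation = offset vector projected onto the surface normal
signed = (cmp_pts - nearest) @ normal
print(f"mean {signed.mean() * 1000:+.2f} mm, std {signed.std() * 1000:.2f} mm")
```

With a signed metric like this, the mean exposes systematic bias (the small positive mean of the photogrammetry model, for instance), while the standard deviation captures surface noise.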
 | Photogrammetry | TLS | Hybrid |
---|---|---|---|
Mean (8-bit) | 92 | 126 | 100 |
Std. dev. (8-bit) | 43 | 59 | 49 |
Mode (8-bit) | 79 | 254 1 | 95 |
Number of black pixels | 1533 | 1,768,527 | 4921 |
Number of white pixels | 1609 | 3,864,622 | 909,088 |
Percentage of black pixels | 0.00091% | 1.05% | 0.0029% |
Percentage of white pixels | 0.00096% | 2.30% | 0.54% |
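Histogram statistics of the kind reported in the table above (mean, standard deviation, mode, and the share of pure black and pure white pixels across all texture atlases) can be computed with a few lines of NumPy. The atlases below are random placeholders, so the resulting numbers are illustrative only; for real data one would load the actual texture images instead.

```python
import numpy as np

rng = np.random.default_rng(2)

# placeholder stack of three 256x256 8-bit texture "atlases"
atlases = rng.integers(0, 256, size=(3, 256, 256), dtype=np.uint8)

pixels = atlases.ravel()
hist = np.bincount(pixels, minlength=256)   # counts for each 8-bit value

stats = {
    "mean": float(pixels.mean()),
    "std": float(pixels.std()),
    "mode": int(hist.argmax()),
    # clipped pixels at the histogram extremes signal exposure problems
    "black_pct": 100.0 * hist[0] / pixels.size,
    "white_pct": 100.0 * hist[255] / pixels.size,
}
print(stats)
```

Large spikes at values 0 and 255 indicate clipped (under- or overexposed) texture data, which is exactly the pattern the TLS model exhibits in the table above.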
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Julin, A.; Jaalama, K.; Virtanen, J.-P.; Maksimainen, M.; Kurkela, M.; Hyyppä, J.; Hyyppä, H. Automated Multi-Sensor 3D Reconstruction for the Web. ISPRS Int. J. Geo-Inf. 2019, 8, 221. https://doi.org/10.3390/ijgi8050221