Article

A Critical Comparison of 3D Digitization Techniques for Heritage Objects †

by Efstathios Adamopoulos 1,*, Fulvio Rinaudo 2 and Liliana Ardissono 1

1 Computer Science Department, Università degli Studi di Torino, Corso Svizzera 185, 10149 Turin, Italy
2 Department of Architecture and Design, Politecnico di Torino, Viale Pier Andrea Mattioli 39, 10125 Turin, Italy
* Author to whom correspondence should be addressed.
This article is an extended and updated version of the paper published in proceedings of the 2019 IMEKO TC-4 MetroArchaeo, Florence, Italy, 4–6 December 2019.
ISPRS Int. J. Geo-Inf. 2021, 10(1), 10; https://doi.org/10.3390/ijgi10010010
Submission received: 12 November 2020 / Revised: 22 December 2020 / Accepted: 27 December 2020 / Published: 30 December 2020

Abstract

Techniques for the three-dimensional digitization of tangible heritage are continuously updated, as regards active and passive sensors, data acquisition approaches, implemented algorithms, and employed computational systems. These developments enable higher automation, faster processing, and increased accuracy and precision in digitizing heritage assets. For large-scale applications, such as investigations of ancient remains, heritage objects, or architectural details, scanning and image-based modeling approaches have prevailed, due to reduced costs and processing durations, fast acquisition, and the reproducibility of workflows. This paper presents an updated metric comparison of common heritage digitization approaches, providing a thorough examination of the sensors, capturing workflows, and processing parameters involved, and of the metric and radiometric results produced. A variety of photogrammetric software packages (both commercial and open-source) were evaluated, as well as photo-capturing equipment of various characteristics and prices, and scanners employing different technologies. The experiments were performed on case studies of different geometrical and surface characteristics to thoroughly assess the implemented three-dimensional modeling pipelines.

Graphical Abstract">

Graphical Abstract

1. Introduction

The importance attributed by the scientific community to documenting heritage objects is causally related to the needs for protection, conservation, and valorization. According to the international literature, its areas of application include conservation, digital restoration, digital archiving, augmented and virtual reality, the 3D printing of replicas, the real-time documentation of archaeological excavations, and monitoring [1].
Heritage case studies related to detailed digitization and visualization, such as surveys of utilitarian, decorative, and ritual objects, historical architectural details, paintings, murals, rock art, engravings, and the in situ archaeological documentation of fragmented objects, require 3D models of high visual fidelity and accuracy [2]. Continuous advancements in active and passive sensors for reality capture, data acquisition techniques, processing algorithms, and computational systems, together with constant updates to the hardware and software used for reconstructing complex geometries, make the high-resolution recording, processing, and visualization of detailed heritage data increasingly feasible. These developments have enabled greater automation, faster processing, and increased accuracy and precision. In particular, new instruments and digital tools, such as handheld scanners and automated or semi-automated photogrammetric software, provide powerful 3D digitization solutions for expert and inexperienced users alike [3,4,5,6].
Accurate and high-resolution heritage 3D modeling has been investigated with various passive and active sensors aiming to reconstruct surface characteristics faithfully. Triangulation-based laser scanning [7], structured-light scanning [8], modeling with range imaging (RGB-D) cameras [9], and image-based photogrammetric modeling [10,11,12] are some of the techniques explored for the 3D digitization of heritage objects. Triangulation scanning, structured-light scanning, and image-based modeling systems implementing structure-from-motion (SfM) and multi-view stereo (MVS) algorithms [13] have prevailed for large-scale heritage applications, mainly due to their combination of lower costs, faster data acquisition and processing, high precision, and ability to capture high-resolution texture [8,14,15,16]. Many contemporary workflows combine more than one technique to optimize the digital 3D results [17,18]. The advantages of the SfM–MVS combination are also evident for complex case studies, such as low-feature or featureless artifacts [19,20], and for metric modeling from beyond-visible-spectrum imagery [21].
Metric evaluations of heritage-object digitization have been carried out by Remondino et al. [22], Nabil and Saleh [23], Galizia et al. [24], and Bianconi et al. [25] for SfM software, by Evgenikou and Georgopoulos [26] and Menna et al. [27] on scanning versus SfM photogrammetric approaches, and by Kersten et al. [28,29] and Morena et al. [30] for portable/handheld scanners. Additionally, due to recent significant improvements in mobile phone camera technology, regarding sensor quality and camera software performance, smartphone cameras have been evaluated for the post-texturing of models from active sensors and for the direct image-based metric modeling of heritage objects [31,32,33]. Although assessments of the aforementioned digitization techniques appear frequently in the literature, they quickly become outdated due to rapid technological developments. Furthermore, considering that the productive documentation of heritage assets means not only digitization but also the ability to effectively communicate its results to heritage experts and the public alike, metric comparisons of digitization techniques need to be updated regularly. These assessments should not only evaluate cost- and time-effectiveness but also solidly define the heritage-related results and metadata, so that the limits of their use for dissemination can be well defined.
Metric heritage modeling depends on three factors: the object (size, geometry, surface characteristics, materials), data acquisition (sensors, conditions, referencing/scaling techniques, and workflows), and processing (hardware, software, algorithms, outputs). For metric assessments of any 3D digitization technique, two of these three factors must remain constant. In this context, the presented research aims to provide a comprehensive comparison of different SfM-based photogrammetric and scanning workflows for the digitization of small-sized heritage objects, by keeping constant most of the essential parameters of the data acquisition procedures and by altering either the passive or active sensors utilized or the software involved in the image-based modeling approaches. In addition, because of the specific interest in metric testing of the produced results, their comparability was taken into consideration when planning the acquisition of the datasets.
Therefore, the second section of this paper presents a thorough comparison between different commercial and free photogrammetric software implementations of the SfM image-based modeling approach, applied to imagery datasets collected with three cameras of different characteristics. The third section describes the tests performed with different phase-based laser and structured-light portable scanners. The fourth section refers to further evaluations performed on the final meshes, and the fifth to comparisons between the meshes produced by scanning and by image-based 3D modeling. The last section offers concluding remarks and aims for further research.
Furthermore, it should be highlighted that, in the presented research, significant importance is attributed not only to the metric comparison and visual assessment of the 3D reconstruction results, but also to the prices of the sensors and software and to the durations of the processing workflows, as these are crucial factors in the selection of instrumentation and software for heritage modeling, especially in cases of in situ, mass, or rapid digitization.
The test objects for this study (presented in Figure 1) were:
  • A copy of an Early Cycladic II Spedos-variety marble figurine, dimensions: 4 cm × 4 cm × 16 cm;
  • A Roman column capital replica, dimensions: 45 cm × 45 cm × 45 cm;
  • A bust of Francis Joseph I of Austria from the Accademia Carrara di Bergamo (Province of Bergamo, Lombardy, Italy), dimensions: 40 cm × 50 cm × 90 cm; and
  • A small 19th century religious stone sculpture of Christ Crucified from Castello di Casotto (Province of Cuneo, Piedmont, Italy), dimensions: 31 cm × 22 cm × 5 cm.

2. Image-Based 3D Modeling

2.1. Data Acquisition

The instrumentation used for the collection of imagery (specifications in Table 1) consisted of a full-frame Canon EOS 5DS R digital single-lens reflex (DSLR) camera with a Canon EF 24–105 mm f/4L IS USM lens (USD 4800), a Canon EOS 1200D DSLR camera (APS-C image sensor) with a Canon EF-S 18–55 mm f/3.5–5.6 IS II lens (USD 380), and a Huawei P30 smartphone (USD 485) with a 5.6 mm (27 mm equivalent) f/1.7 lens camera (Sony IMX 650 Exmor RS sensor; Leica optics).
Imagery for the figurine copy and the small sculpture was acquired using a turntable; artificial targets were placed around the objects to scale the scenes. A tripod was utilized to stabilize the cameras and prevent micro-blur. For the case studies of larger dimensions, images were acquired obliquely with large overlaps; an invar scale bar was also photographed in the scene for scaling. Despite the different resolutions of the imaging sensors, we attempted to maintain similar object sampling distances, by considering the distance from the object and the available focal lengths, in order to acquire comparable data. Depth of field (DoF) was calculated during acquisition, as sharpness was also considered. The characteristics of the datasets are summarized in Table 2. The last two datasets captured only the upper part of the small sculpture.
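As an illustration of the acquisition planning described above, the object sampling distance and depth of field follow directly from the sensor and lens parameters in Table 1 and Table 2. The Python sketch below reproduces this arithmetic under stated assumptions; the circle-of-confusion value and the example numbers (taken from dataset 1) are illustrative, and this is not the authors' actual planning tool:

```python
# Hedged sketch: acquisition planning from camera parameters.
# Example values follow dataset 1 (Canon EOS 5DS R, f = 24 mm, 0.25 m).

def ground_sampling_distance(pixel_size_um, focal_mm, distance_m):
    """Object sampling distance (mm/pixel) for a simple pinhole model."""
    return pixel_size_um * 1e-3 * (distance_m * 1e3) / focal_mm

def total_depth_of_field(focal_mm, f_stop, distance_m, coc_mm=0.03):
    """Total DoF (m) via the hyperfocal distance; coc_mm = 0.03 mm is a
    common full-frame circle-of-confusion assumption."""
    d = distance_m * 1e3
    h = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm  # hyperfocal (mm)
    near = d * (h - focal_mm) / (h + d - 2 * focal_mm)
    far = d * (h - focal_mm) / (h - d)                # valid while d < h
    return (far - near) * 1e-3

print(ground_sampling_distance(4.14, 24, 0.25))  # ~0.04 mm/pixel
print(total_depth_of_field(24, 7.1, 0.25))       # ~0.04 m of sharp depth
```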

2.2. Processing Software and Parameters

The photogrammetric software solutions employed for the SfM–MVS approach image-based modeling included:
  • Agisoft Metashape Professional 1.5.1 (USD 3499);
  • 3DFlow Zephyr Aerial 4.519 (USD 4329) [34];
  • Pix4Dmodel 4.5.3 (USD 49/month);
  • Autodesk ReCap Photo 19.3.1.4 (web-based; ReCap Pro USD 54/month);
  • Regard3D 1.0.0 (free and open-source), which employs a KGraph matching algorithm and implements the Multi-View Environment [35] for dense scene reconstruction;
  • A pipeline combining VisualSFM [36,37]—a GPU-based bundler for SfM, CMVS [38] for MVS dense scene reconstruction, and MeshLab for Screened Poisson Surface Reconstruction [39] and mesh color-texturing [40].
The image-based processing solutions are herein referred to with the abbreviations AMP, FZA, P4D, ARP, R3D and VCM, respectively. For the datasets’ photogrammetric processing, a customized laptop was used, with a 6-core Intel i7-8750H CPU at 2.2 GHz (Max 4.1 GHz), 32 GB RAM, and NVIDIA GeForce RTX 2070 GPU.
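To make the free VCM pipeline concrete, the following is a minimal, hedged sketch of how its stages can be chained from Python. VisualSFM's documented command line accepts sfm+pmvs to run sparse and dense reconstruction in sequence; the input folder, output paths, and the MeshLab filter script are hypothetical placeholders rather than the exact ones used in this study:

```python
# Hedged sketch of the VCM pipeline (VisualSFM -> CMVS/PMVS -> MeshLab).
# Assumes the VisualSFM and meshlabserver executables are on PATH.
import subprocess

images = "dataset_images/"  # folder of source photographs (placeholder)

# Sparse SfM followed by dense reconstruction with the CMVS/PMVS backend.
subprocess.run(["VisualSFM", "sfm+pmvs", images, "scene.nvm"], check=True)

# Mesh and texture the dense cloud in MeshLab via a saved filter script
# (e.g., Screened Poisson Surface Reconstruction plus color transfer).
subprocess.run(["meshlabserver",
                "-i", "scene.nvm.cmvs/00/models/option-0000.ply",  # typical PMVS output path
                "-o", "mesh.ply",
                "-s", "poisson_and_texture.mlx"],  # hypothetical filter script
               check=True)
```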
To effectively evaluate the performance of the implemented software and the effects of utilizing different imaging sensors, similar parameters, when applicable, were selected for all the image-based modeling workflows, as summarized in Table 3. Standard semi-automatic digitization approaches were implemented in all cases: reconstructing a sparse cloud, densifying it after estimating depth maps, creating a mesh with triangulation algorithms, and finally texturing the generated mesh with ortho-photo adaptive approaches. No manual noise removal was performed, apart from deleting scene elements and other components unconnected to the object, selected by component size.
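This sparse-to-texture chain can also be scripted; Metashape, for instance, exposes it through its Python API. The sketch below mirrors the Table 3 parameters, with keyword names approximating the 1.5-era API (several were renamed in later releases), so treat it as an assumption-laden illustration rather than the exact script used here:

```python
# Hedged sketch of the standard semi-automatic pipeline via Metashape's
# Python scripting API (keyword names approximate the 1.5-era API).
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["img_0001.jpg", "img_0002.jpg"])  # placeholder file list

# Sparse reconstruction: feature matching and bundle adjustment,
# with the 50K key/tie point limits of Table 3.
chunk.matchPhotos(accuracy=Metashape.HighAccuracy,
                  keypoint_limit=50000, tiepoint_limit=50000)
chunk.alignCameras()

# Densification via depth maps, with moderate depth filtering.
chunk.buildDepthMaps(quality=Metashape.HighQuality,
                     filter=Metashape.ModerateFiltering)
chunk.buildDenseCloud()

# Triangulated mesh (custom face cap) and ortho-photo adaptive texture.
chunk.buildModel(surface=Metashape.Arbitrary,
                 face_count=Metashape.CustomFaceCount,
                 face_count_custom=5_000_000)
chunk.buildUV(mapping=Metashape.GenericMapping)
chunk.buildTexture(blending=Metashape.MosaicBlending, size=8192)
doc.save("project.psx")
```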
Concerning the implemented algorithms, it should be highlighted that the free software VisualSFM utilizes scale-invariant feature transform (SIFT)-based detection and description [41], while Regard3D uses A-KAZE [42] for detection and Local Intensity Order Patterns (LIOP) for description [43]; in contrast, 3DFlow Zephyr uses a modified Difference-of-Gaussians (DoG) detector. Furthermore, both Metashape and Zephyr perform global bundle adjustment, whereas VisualSFM and R3D implement incremental SfM. Moreover, while all other software utilized Poisson Surface Reconstruction to generate the triangulated mesh, in FZA an edge-preserving algorithmic approach was selected to compare the results.
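For readers unfamiliar with these detector families, the contrast can be reproduced with OpenCV, which ships both a DoG/SIFT detector and A-KAZE. The sketch below is an independent illustration, not the internal code of any of the packages compared; the image file name is a placeholder, and the keypoint cap mirrors the 50K setting of Table 3:

```python
# Independent OpenCV illustration of the detector families discussed above:
# DoG-based SIFT (as in VisualSFM) versus A-KAZE (as in Regard3D).
import cv2

img = cv2.imread("capital_0001.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file

sift = cv2.SIFT_create(nfeatures=50000)  # cap mirrors the 50K value in Table 3
akaze = cv2.AKAZE_create()

kp_sift, desc_sift = sift.detectAndCompute(img, None)
kp_akaze, desc_akaze = akaze.detectAndCompute(img, None)
print(f"SIFT keypoints: {len(kp_sift)}, A-KAZE keypoints: {len(kp_akaze)}")

# The descriptors are then matched between image pairs to form tie points;
# SIFT uses float descriptors (L2 norm), A-KAZE binary ones (Hamming norm).
matcher_sift = cv2.BFMatcher(cv2.NORM_L2)
matcher_akaze = cv2.BFMatcher(cv2.NORM_HAMMING)
```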
The masking of the background on the images was performed in all commercial software, barring ARP, which does not allow the user to intervene in any processing step. In P4D, annotations can be applied only after an initial full scene reconstruction, provided the images are correctly aligned; for the capital case study, this meant an extra 1:03:56 for the EOS 5DS R dataset and an extra 0:37:41 for the EOS 1200D dataset, for the first dense cloud and textured mesh reconstruction, in addition to the times reported in the results (Section 4).
The dense point clouds deriving from free reconstruction pipelines—where there was no option for masking unwanted areas of the imagery—were cleaned automatically using Statistical Outlier Removal (SOR) to efficiently remove noise before mesh generation. A maximum octree depth of 13, limited surface interpolation, and specific limits for the number of triangular faces were selected in those solutions that allowed the customization of the parameters for the 3D mesh generation step. Mesh texturing was performed without color or exposure-balancing the imagery, and without averaging values from multiple images, creating single-image file textures.
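The cleaning and meshing steps just described can be approximated with an open-source library such as Open3D; the sketch below mirrors the reported parameters (SOR before meshing, octree depth 13, face-count capping), while the neighbor count, standard-deviation ratio, and normal-estimation radius are illustrative assumptions rather than the exact values used:

```python
# Hedged Open3D sketch of the cleaning/meshing steps described above (the
# study itself used MeshLab and the packages' built-in tools).
import open3d as o3d

pcd = o3d.io.read_point_cloud("dense_cloud.ply")  # placeholder input file

# Statistical Outlier Removal: drop points whose mean distance to their
# neighbors exceeds the global mean by std_ratio standard deviations.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Screened Poisson reconstruction requires consistently oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.005, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(15)

# Octree depth 13 matches the value reported in the text.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=13)

# Cap the face count, analogous to the 5M/10M limits used in the tests.
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=5_000_000)
o3d.io.write_triangle_mesh("mesh.ply", mesh)
```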

2.3. Results

For the figurine case study, only AMP and FZA were able to fully and correctly reconstruct the scene from datasets 1 (EOS 5DS R), 2 (EOS 1200D), and 3 (Exmor RS IMX650). FZA required significantly more processing time and produced generally sparser results. Similar re-projection errors were observed. P4D did not succeed in reconstructing the object from any dataset, and the ARP results included partial reconstructions with noise and fictitious surfaces (Figure 2). The VCM pipeline entirely reconstructed the scene from dataset 1 (figurine, EOS 5DS R), but a small amount of noise remained after triangulation, affecting the final mesh, and the texturing results were problematic. Furthermore, the VCM pipeline was not able to reconstruct the object from dataset 2 (figurine, EOS 1200D) and produced an incomplete point cloud with considerable noise for dataset 3 (figurine, Exmor RS IMX650; see Figure 2). R3D produced dense point clouds containing a large percentage of noise for datasets 1 (figurine, EOS 5DS R) and 3 (figurine, Exmor RS IMX650; see Figure 2), which could not be removed automatically (or manually); these clouds were therefore not used to construct 3D meshes. The results of the SfM–MVS photogrammetric processing are listed in Table 4.
For the capital case study, all software solutions fully and correctly reconstructed the scene (Table 5). Processing times were comparable among the commercial software packages. P4D produced the densest sparse cloud, and AMP the densest dense point cloud. Notably, the camera auto-calibration parameters extracted from the commercial software were interchangeable upon changing their format.
Similarly, the object in the obliquely acquired dataset 6 (bust, EOS 1200D) was reconstructed by all photogrammetric solutions. However, ARP created a few double surfaces and misaligned different planes of the object's surface. Processing times between the free software and AMP were comparable, while FZA required significantly more time to process this dataset. The meshes created with the open-source software had some holes at the lower and upper parts. R3D generated a very dense point cloud, but it contained a high number of duplicate points, resulting in a low-resolution mesh (Table 6). Texturing results were similar.
The object in dataset 7 (small sculpture, EOS 1200D, f = 18 mm) was fully digitized using AMP, while FZA produced sparser results, because not all the images of the scene were oriented, despite the significantly longer processing time required. Each software produced dense results from dataset 8 (small sculpture, EOS 1200D, f = 55 mm), fully reconstructing the scene; AMP generated the densest 3D point cloud with the lowest reconstruction error, and FZA required the most processing time. From dataset 9 (small sculpture, Exmor RS IMX650), we were able to retrieve the complete scene only by using VisualSFM. Both ARP and FZA generated partial models (Figure 3), while AMP produced a very noisy surface. Regard3D failed to generate any mesh from datasets 7–9. The results of the SfM–MVS photogrammetric processing of these datasets are listed in Table 7.

3. Scanning

3.1. Data Acquisition

For the objects’ scanning sessions, a FARO Focus3D X 330 was used along with two portable handheld near-infrared structured-light scanners: the FARO Freestyle3D and the STONEX F6 SR, both recently evaluated for the digitization of heritage objects [44,45]. The characteristics of the scanning instrumentation are summarized in Table 8.
Scans were performed under homogeneous light conditions in circular patterns, planning to cover the complete geometry of the objects and eliminate occlusions as much as possible. The scanning distances were approximately 0.4–1 m, translating to 0.2–0.4 mm resolution point cloud densities according to the manufacturers’ specifications for all scanners. The case study of the capital replica required eight separate scans to be performed to fully capture the object’s 3D surface with the Focus3D X 330 and to ascertain the registration of all partial scans in one scene.

3.2. Processing

Scanned point cloud manipulation was performed with the software provided or suggested by manufacturers. Registration, denoising, and decimation were performed with Autodesk ReCap Pro 5.0.4.17 for raw scans from FARO scanners and with Mantis Vision Echo 2.3.1 for raw scans from the STONEX scanner. The 3D meshing was performed in MeshLab with similar parameters as used for the photogrammetric point clouds.

3.3. Results

For the figurine copy case study, no model was constructed, since the Focus3D X 330 scanner did not provide results of sufficient density and the handheld scanners produced incorrectly registered point clouds with large amounts of noise, which could not be removed either manually or automatically. For the capital replica case study, the FARO Focus3D X 330 and the F6 SR produced dense results, with some holes remaining in the first case due to occlusions. The Freestyle3D produced a very noisy and sparse cloud. These scanning results are listed in detail in Table 9. Scanning-based models were also produced for the stone bust, after merging eight partial overlapping surface models captured with the F6 SR (14 million points), and for one side of the small sculpture (383 thousand points); all other digitization by scanning failed because the partially scanned scenes could not be registered.

4. Evaluation of the Results

The assessment of the quality of the produced meshes considered completeness, preservation of surface detail, noise, roughness, and, additionally for the photogrammetric models, the visual fidelity of the texture. The models from the scanners had no observable noise. However, the surface produced with the F6 SR was oversimplified, showing that Mantis Vision Echo eliminated some of the surface details, even though low noise-filtering values had been selected; distances ranged below 2 mm (Figure 4).
For photogrammetric datasets 1, 2, and 3, AMP and FZA produced very consistent results of similar detail and roughness. The models produced from dataset 1 (EOS 5DS R, figurine copy) were of remarkably high texture quality (Figure 5). The models from dataset 3 (Exmor RS IMX650, figurine copy) contained some noise (Figure 6). The calculated absolute distances between all models for these datasets were smaller than 0.5 mm (one standard deviation), which is roughly 0.3% of the object's size, with mean distances lower than 0.3 mm (Table 10).
For dataset 4 (capital replica, EOS 5DS R), the models from AMP and VCM had more holes, while the models from P4D and ARP were smoother. The other models contained a small amount of noise on the flatter surfaces, and P4D seemed to oversimplify the surface details. Furthermore, the free-software models had a small amount of remaining noise at the edges (Figure 7). All commercial software produced similar textures. The calculated absolute distances between the photogrammetric models for dataset 4 were smaller than 1.5 mm (one standard deviation), roughly 0.4% of the object's dimensions, except for the P4D model, for which the calculated distances to the other models were larger than 2 mm (one standard deviation), as displayed in Table 11. A mapping of the geometric differences is presented in Figure 8. Regarding dataset 5 (capital replica, EOS 1200D), all models contained low to medium levels of surface noise, with the VCM pipeline achieving the highest preservation of surface detail and the lowest roughness. The P4D and R3D meshes were the noisiest (Figure 9). Overall, the texture quality of the commercial software was mutually similar but better than that produced by the free reconstruction pipelines (Figure 10). The calculated absolute distances between the photogrammetric models (vertices of the final meshes) for dataset 5 were smaller than 2.5 mm (one standard deviation, σ), roughly 0.6% of the object's dimensions, except for the P4D model, for which higher values were observed (Table 12).
Concerning dataset 6 (stone bust, EOS 1200D), the FZA-, ARP-, and VCM-generated meshes were the most similar to the scanned mesh, and to one another, in terms of surface detail and roughness (Figure 11 and Figure 12). However, as mentioned above, the ARP mesh had duplicate surfaces, and the FZA-produced mesh had a few holes on the top of the head, where the overlap between images was smaller. The AMP-produced mesh was also similar to the scanned one on the flatter surfaces, but a significant amount of noise remained on edges and fabric folds. The textures were similar, differing only because of the small surface anomalies present in the meshes produced by the web-based and open-source software. The calculated surface differences are presented in Figure 13. The differences between the AMP-generated model and the other models were the smallest (Table 13), with the differences between the AMP and FZA models ranging below 1.4 mm (1 σ).
The only fully reconstructed mesh for dataset 7, from AMP, had smoothed-out surface features. All reconstructed models from dataset 8 had similar characteristics (Figure 14). Some double surfaces could again be observed on the surface produced with ARP. Surprisingly, the generated surface with the best-preserved surface features was the one produced from dataset 9 (small sculpture, Exmor RS IMX650) with the pipeline implementing VisualSFM, CMVS, and MeshLab (Figure 15). The differences observed between the meshes generated from dataset 7 ranged below 1 mm (1 σ), while the differences between the models from dataset 8 ranged below 0.7 mm (1 σ); both values are considerable, taking into account that the resolutions of these datasets were 0.09 and 0.02 mm, respectively. Significantly, the surface deviation between the dataset 9 VisualSFM model and the high-resolution Metashape model from dataset 8 was below 0.5 mm (1 σ). Some of the measured distances are presented in Table 14.

5. Further Metric Comparisons

For the capital replica case study, further geometric assessments could be performed by comparing the scanning meshes, assumed as ground truth, to the photogrammetric models. The distances were calculated in CloudCompare, with the cloud-to-cloud distance tool applied to the final meshes' vertices, after fine registration with the iterative closest point (ICP) algorithm. The Hausdorff distances between the scanning and photogrammetric meshes ranged below 3 mm (one standard deviation) for dataset 4, except for the P4D model, and below 3.5 mm (one standard deviation) for dataset 5, except for the R3D model, which contained a large amount of noise. The results are presented in detail in Table 15 and Table 16. The main differences between the models produced with photogrammetric and scanning techniques were observed at parts of the capital replica that were occluded due to its complicated geometry.
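The comparison procedure just described (fine ICP registration, then per-vertex cloud-to-cloud distances) can be approximated in code. The following hedged sketch uses Open3D in place of CloudCompare; the file names and the correspondence threshold are illustrative assumptions:

```python
# Hedged sketch of the evaluation step: fine ICP registration of a
# photogrammetric mesh to the scanning ground truth, then nearest-neighbor
# distances between vertex sets (CloudCompare was used in the study itself).
import numpy as np
import open3d as o3d

gt = o3d.io.read_triangle_mesh("scan_groundtruth.ply")   # placeholder files
test = o3d.io.read_triangle_mesh("photogrammetric.ply")

# Operate on mesh vertices, as with CloudCompare's cloud-to-cloud tool.
gt_pcd = o3d.geometry.PointCloud(gt.vertices)
test_pcd = o3d.geometry.PointCloud(test.vertices)

# Fine registration with point-to-point ICP (assumes rough pre-alignment;
# the 5 mm correspondence threshold is an illustrative choice).
icp = o3d.pipelines.registration.registration_icp(
    test_pcd, gt_pcd, 0.005, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
test_pcd.transform(icp.transformation)

# Distance from each test vertex to its nearest ground-truth vertex.
d = np.asarray(test_pcd.compute_point_cloud_distance(gt_pcd))
print(f"mean = {d.mean() * 1000:.2f} mm, 1 sigma = {d.std() * 1000:.2f} mm")
```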
For the stone bust, further geometric assessments could also be performed by comparing the photogrammetrically produced surfaces with the scanned model. Accuracies, considering the scanned F6 SR model as ground truth, are presented in Table 17 and visualized in Figure 16. The AMP and FZA models were the most metrically accurate, with surfaces deviating below 1.3 mm (1 σ) from the ground-truth model, representing only 3‰ of the object's smallest dimension. The mean distances and their standard deviations for the models produced with non-commercial software were approximately 1 mm.

6. Discussion and Conclusions

The presented research carried out a selective comparison of state-of-the-art SfM–MVS image-based modeling solutions of different costs, and of portable scanners, for the digitization of small heritage objects. As expected, challenges arise from the varied nature of heritage objects, with geometry, surface features, and texture playing important roles in the choice of acquisition and processing workflows.
Photogrammetric results are affected by the type of camera sensor, the sampling distance, and the coverage of the object in image space. For complex, featureless, or very small case studies, only the expensive commercial solutions appeared able to fully reconstruct the photogrammetric scene, suggesting that the more cost-effective solutions are better suited to static scenes, or to cases where the background is homogeneous and of a vastly different color than the object, so that it can be recognized as background by the software.
For the photogrammetric reconstruction of the capital replica and stone bust datasets, almost all workflows produced similar results. Some of the noise problems that occurred can be attributed to the oblique imagery and can thus be tackled by acquiring more rigorous, dense, and consistently lit imagery datasets.
Although AMP, FZA, and ARP proved to be the most efficient solutions, it should be noted that ARP offers no adjustable parameters and imposes a limit of 100 images per project, which is an important problem for real-case heritage digitization applications. Furthermore, AMP offers only a few adjustable parameters, with no details available on the algorithmic approaches it exploits. On the other hand, FZA allows every parameter of the digital reconstruction workflow to be customized; despite the default options producing noisier results, an expert can identify how to optimize its implementation for heritage purposes. In particular, the edge-preserving meshing algorithm in 3DFlow Zephyr, which also limits the interpolation of the dense point cloud, allows the generation of high-quality surfaces, preserving detail similarly to the handheld scanners. P4D, as a software mainly oriented towards smaller-scale applications and flatter geometries, did not provide sufficient results. The main problem with the free solutions was surface noise (due to the capturing conditions), which cannot be easily filtered in post-processing.
Occlusions caused by complex geometries were tackled by the image-based methods, but other problematic surfaces may require combinations of documentation techniques. The textures produced by the scanning techniques were not of adequate quality; meshes produced in this manner therefore need to be textured with other methods, ranging from simple image-to-mesh registration, to co-registration with photogrammetric models, to the integration of sensors for multiple data acquisition. The differences between the Focus3D X 330 and Stonex F6 SR results can be attributed to the fact that the former is oriented mainly towards architectural documentation and other construction applications.
The use of a mobile phone camera for photogrammetric purposes also seemed promising; it did not affect the metric properties of the results, but it had a visible effect on the levels of generated noise. However, despite the high resolution and quality of the mobile camera sensor used, the texture results were of lesser quality than the textures produced from the high-resolution DSLR camera datasets. Thus, more experiments need to be conducted in this direction to evaluate the radiometric capacity of smartphone cameras for the high-fidelity texturing of heritage models. To conclude, the combination of smartphone cameras and web-based solutions offers exciting potential for applications where metric quality is not the primary concern, such as rapid recording, dissemination for education, or the promotion of cultural heritage for touristic purposes.

Author Contributions

Conceptualization, Efstathios Adamopoulos; data curation, Efstathios Adamopoulos; methodology, Efstathios Adamopoulos; validation, Efstathios Adamopoulos and Fulvio Rinaudo; formal analysis, Efstathios Adamopoulos; investigation, Efstathios Adamopoulos; resources, Efstathios Adamopoulos and Fulvio Rinaudo; writing—original draft, Efstathios Adamopoulos; visualization, Efstathios Adamopoulos; writing—review and editing, Efstathios Adamopoulos and Fulvio Rinaudo; supervision, Fulvio Rinaudo and Liliana Ardissono; project administration, Fulvio Rinaudo and Liliana Ardissono; funding acquisition Fulvio Rinaudo and Liliana Ardissono. All authors have read and agreed to the published version of the manuscript.

Funding

This project, part of the ‘PhD Technology Driven Sciences: Technologies for Cultural Heritage (Tech4Culture)’, has received funding from the European Union’s Framework Programme for Research and Innovation Horizon 2020, under the H2020-Marie-Skłodowska Curie Actions-COFUND-2016 scheme (Grant Agreement N. 754511) and from the Compagnia di San Paolo.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because the instrumentation and the historical objects belong to third parties.

Acknowledgments

The authors would like to express their gratitude to all members of the Lab of Geomatics for Cultural Heritage (LabG4CH) at the Department of Architecture and Design (DAD), Politecnico di Torino, for their valuable help regarding the instrumentation. The authors acknowledge the Accademia Carrara di Bergamo and the Fondazione Centro Conservazione e Restauro dei Beni Culturali (CCR) ‘La Venaria Reale’ for the generous concession of permission to publish the results concerning Emperor Franz Joseph I of Austria’s bust. The authors acknowledge the Regione Piemonte and CCR ‘La Venaria Reale’ for the courteous concession of permission to publish results concerning the sculpture of Christ Crucified.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Pieraccini, M.; Guidi, G.; Atzeni, C. 3D digitizing of cultural heritage. J. Cult. Herit. 2001, 2, 63–70. [Google Scholar] [CrossRef]
  2. Adamopoulos, E.; Rinaudo, F. An Updated Comparison on Contemporary Approaches for Digitization of Heritage Objects. In Proceedings of the 2019 IMEKO TC-4 International Conference on Metrology for Archaeology and Cultural Heritage (2019 MetroArchaeo), Florence, Italy, 4–6 December 2019; Catelani, M., Daponte, P., Eds.; IMEKO: Florence, Italy, 2019; pp. 1–6. Available online: https://www.imeko.org/publications/tc4-Archaeo-2019/IMEKO-TC4-METROARCHAEO-2019-1.pdf (accessed on 11 November 2020).
  3. Georgopoulos, A.; Stathopoulou, E.K. Data Acquisition for 3D Geometric Recording: State of the Art and Recent Innovations. In Heritage and Archaeology in the Digital Age; Vincent, M.L., López-Menchero Bendicho, V.M., Ioannides, M., Levy, T.E., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 1–26. ISBN 978-3-319-65369-3. [Google Scholar]
  4. Hassani, F. Documentation of cultural heritage; techniques, potentials, and constraints. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-5/W7, 207–214. [Google Scholar] [CrossRef] [Green Version]
  5. Agosto, E.; Ardissone, P.; Bornaz, L.; Dago, F. 3D Documentation of Cultural Heritage: Design and Exploitation of 3D Metric Surveys. In Applying Innovative Technologies in Heritage Science; Pavlidis, G., Ed.; Advances in Religious and Cultural Studies; IGI Global: Hershey, PA, USA, 2020; pp. 1–15. ISBN 978-1-79982-871-6. [Google Scholar]
  6. Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Haghshenas, S.; Hosseininaveh, A.; Varshosaz, M. The Performance Evaluation of Multi-Image 3D Reconstruction Software with Different Sensors. Measurement 2018, 120, 1–10. [Google Scholar] [CrossRef]
  7. Arbace, L.; Sonnino, E.; Callieri, M.; Dellepiane, M.; Fabbri, M.; Iaccarino Idelson, A.; Scopigno, R. Innovative uses of 3D digital technologies to assist the restoration of a fragmented terracotta statue. J. Cult. Herit. 2013, 14, 332–345. [Google Scholar] [CrossRef]
  8. McPherron, S.P.; Gernat, T.; Hublin, J.-J. Structured light scanning for high-resolution documentation of in situ archaeological finds. J. Archaeol. Sci. 2009, 36, 19–24. [Google Scholar] [CrossRef]
  9. Lachat, E.; Macher, H.; Landes, T.; Grussenmeyer, P. Assessment of the Accuracy of 3D Models Obtained with DSLR Camera and Kinect v2. In SPIE Volume 9528, Videometrics, Range Imaging, and Applications XIII, Proceedings of the SPIE Optical Metrology Event, Munich, Germany, 21–25 June 2015; Remondino, F., Shortis, M.R., Eds.; SPIE: Bellingham, WA, USA, 2015; pp. 95280G-1–95280G-14. [Google Scholar]
  10. Ackermann, J.; Goesele, M. A Survey of Photometric Stereo Techniques. FNT Comput. Graph. Vis. 2015, 9, 149–254. [Google Scholar] [CrossRef]
  11. Remondino, F. Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning. Remote Sens. 2011, 3, 1104–1138. [Google Scholar] [CrossRef] [Green Version]
  12. Guidi, G.; Micoli, L.L.; Gonizzi, S.; Brennan, M.; Frischer, B. Image-based 3D capture of cultural heritage artifacts an experimental study about 3D data quality. In Proceedings of the 2015 Digital Heritage, Granada, Spain, 28 September–2 October 2015; IEEE: New York, NY, USA, 2015; pp. 321–324. [Google Scholar] [CrossRef] [Green Version]
  13. Micheletti, N.; Chandler, J.H.; Lane, S.N. Structure from Motion (SfM) Photogrammetry. In Geomorphological Techniques; Cook, S.J., Clarke, L.E., Nield, J.M., Eds.; British Society for Geomorphology: London, UK, 2015; Chapter 2, Section 2.2. [Google Scholar]
  14. Graciano, A.; Ortega, L.; Segura, R.J.; Feito, F.R. Digitization of Religious Artifacts with a Structured Light Scanner. Virtual Archaeol. Rev. 2017, 8, 49. [Google Scholar] [CrossRef]
  15. Morita, M.; Bilmes, G. Applications of Low-Cost 3D Imaging Techniques for the Documentation of Heritage Objects. Opt. Pura Apl. 2018, 51, 50026:1–50026:11. [Google Scholar] [CrossRef]
  16. Sapirstein, P. A High-Precision Photogrammetric Recording System for Small Artifacts. J. Cult. Herit. 2018, 31, 33–45. [Google Scholar] [CrossRef]
  17. Ioannidis, C.; Piniotis, G.; Soile, S.; Bourexis, F.; Boutsi, A.-M.; Chliverou, R.; Tsakiri, M. Laser and Multi-Image Reverse Engineering Systems for Accurate 3D Modelling of Complex Cultural Artefacts. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W11, 623–629. [Google Scholar] [CrossRef] [Green Version]
  18. Serna, C.G.; Pillay, R.; Trémeau, A. Data Fusion of Objects Using Techniques Such as Laser Scanning, Structured Light and Photogrammetry for Cultural Heritage Applications. In Computational Color Imaging; Trémeau, A., Schettini, R., Tominaga, S., Eds.; Springer International Publishing: Cham, Switzerland, 2015; Volume 9016, pp. 208–224. ISBN 978-3-319-15978-2. [Google Scholar]
  19. Koutsoudis, A.; Vidmar, B.; Arnaoutoglou, F. Performance Evaluation of a Multi-Image 3D Reconstruction Software on a Low-Feature Artefact. J. Archaeol. Sci. 2013, 40, 4450–4456. [Google Scholar] [CrossRef]
  20. Nicolae, C.; Nocerino, E.; Menna, F.; Remondino, F. Photogrammetry Applied to Problematic Artefacts. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL–5, 451–456. [Google Scholar] [CrossRef] [Green Version]
  21. Adamopoulos, E.; Bovero, A.; Rinaudo, F. Image-Based Metric Heritage Modeling in the Near-Infrared Spectrum. Herit. Sci. 2020, 8, 53. [Google Scholar] [CrossRef]
  22. Remondino, F.; Spera, M.G.; Nocerino, E.; Menna, F.; Nex, F. State of the art in high density image matching. Photogram. Rec. 2014, 29, 144–166. [Google Scholar] [CrossRef] [Green Version]
  23. Nabil, M.; Saleh, F. 3D Reconstruction from Images for Museum Artefacts: A Comparative Study. In Proceedings of the 2014 International Conference on Virtual Systems & Multimedia (VSMM), Hong Kong, China, 9–12 December 2014; IEEE: New York, NY, USA, 2014; pp. 257–260. [Google Scholar]
  24. Galizia, M.; Inzerillo, L.; Santagati, C. Heritage and technology: Novel approaches to 3D documentation and communication of architectural heritage. In Proceedings of the Le Vie dei Mercanti XIII International Forum, Capri, Italy, 11–13 June 2015; pp. 686–695. [Google Scholar]
  25. Bianconi, F.; Catalucci, S.; Filippucci, M.; Marsili, R.; Moretti, M.; Rossi, G.; Speranzini, E. Comparison between two non-contact techniques for art digitalization. J. Phys. Conf. Ser. 2017, 882, 012005. [Google Scholar] [CrossRef] [Green Version]
  26. Evgenikou, V.; Georgopoulos, A. Investigating 3D Reconstruction Methods for Small Artifacts. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-5/W4, 101–108. [Google Scholar] [CrossRef] [Green Version]
  27. Menna, F.; Nocerino, E.; Remondino, F.; Dellepiane, M.; Callieri, M.; Scopigno, R. 3D Digitization of an Heritage Masterpiece—A Critical Analysis on Quality Assessment. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B5, 675–683. [Google Scholar] [CrossRef]
  28. Kersten, T.P.; Przybilla, H.-J.; Lindstaedt, M. Investigations of the Geometrical Accuracy of Handheld 3D Scanning Systems. Photogramm. Fernerkund. Geoinf. 2016, 2016, 271–283. [Google Scholar] [CrossRef] [Green Version]
  29. Kersten, T.P.; Lindstaedt, M.; Starosta, D. Comparative Geometrical Accuracy Investigations of Hand-Held 3D Scanning Systems—An Update. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, XLII-2, 487–494. [Google Scholar] [CrossRef] [Green Version]
  30. Morena, S.; Barba, S.; Álvaro-Tordesillas, A. Shining 3D EinScan-Pro, Application and Validation in the Field of Cultural Heritage, from the Chillida-Leku Museum to the Archaeological Museum of Sarno. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W18, 135–142. [Google Scholar] [CrossRef] [Green Version]
  31. Santagati, C.; Lo Turco, M.; Bocconcino, M.M.; Donato, V.; Galizia, M. 3D Models for all: Low-cost acquisition through mobile devices in comparison with image based techniques. Potentialities and weaknesses in cultural heritage domain. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W8, 221–228. [Google Scholar] [CrossRef] [Green Version]
  32. Boboc, R.G.; Gîrbacia, F.; Postelnicu, C.C.; Gîrbacia, T. Evaluation of Using Mobile Devices for 3D Reconstruction of Cultural Heritage Artifacts. In VR Technologies in Cultural Heritage; Duguleană, M., Carrozzino, M., Gams, M., Tanea, I., Eds.; Springer International Publishing: Cham, Switzerland, 2019; Volume 904, pp. 46–59. ISBN 978-3-030-05818-0. [Google Scholar]
  33. Gaiani, M.; Apollonio, F.I.; Fantini, F. Evaluating smartphones color fidelity and metric accuracy for the 3D documentation of small artifacts. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W11, 539–547. [Google Scholar] [CrossRef] [Green Version]
  34. Gherardi, R.; Farenzena, M.; Fusiello, A. Improving the Efficiency of Hierarchical Structure-and-Motion. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; IEEE: New York, NY, USA, 2010; pp. 1594–1600. [Google Scholar]
  35. Fuhrmann, S.; Langguth, F.; Moehrle, N.; Waechter, M.; Goesele, M. MVE—An image-based reconstruction environment. Comput. Graph. 2015, 53, 44–53. [Google Scholar] [CrossRef]
  36. Wu, C. Towards linear-time incremental structure from motion. In Proceedings of the 2013 International Conference on 3D Vision (3DV 2013), Seattle, WA, USA, 29 June–1 July 2013; IEEE Computer Society: New York, NY, USA, 2013; pp. 127–136. [Google Scholar]
  37. Wu, C.; Agarwal, S.; Curless, B.; Seitz, S.M. Multicore bundle adjustment. In Proceedings of the 2011 CVPR, Providence, RI, USA, 20–25 June 2011; pp. 3057–3064. [Google Scholar]
  38. Furukawa, Y.; Curless, B.; Seitz, S.M.; Szeliski, R. Towards Internet-scale multi-view stereo. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; IEEE: New York, NY, USA, 2010; pp. 1434–1441. [Google Scholar]
  39. Kazhdan, M.; Hoppe, H. Screened poisson surface reconstruction. ACM Trans. Graph. 2013, 32, 1–13. [Google Scholar] [CrossRef] [Green Version]
  40. Callieri, M.; Ranzuglia, G.; Dellepiane, M.; Cignoni, P.; Scopigno, R. Meshlab as a complete open tool for the integration of photos and colour with high-resolution 3D geometry data. In Proceedings of the 2008 Eurographics Italian Chapter Conference, Salerno, Italy, 2–4 July 2008; pp. 129–136. [Google Scholar]
  41. Wu, C. SiftGPU: A GPU Implementation of Scale Invariant Feature Transform (SIFT). 2007. Available online: http://www.cs.unc.edu/~ccwu/siftgpu/lowesift (accessed on 29 January 2013).
  42. Alcantarilla, P.; Nuevo, J.; Bartoli, A. Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces. In Proceedings of the British Machine Vision Conference 2013, Bristol, UK, 9–13 September 2013; British Machine Vision Association: Bristol, UK, 2013; pp. 13.1–13.11. [Google Scholar]
  43. Wang, Z.; Fan, B.; Wu, F. Local Intensity Order Pattern for Feature Description. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; IEEE: New York, NY, USA, 2011; pp. 603–610. [Google Scholar]
  44. De Luca, D.; Del Giudice, M.; Grasso, N.; Matrone, F.; Osello, A.; Piras, M. Handheld Volumetric Scanner for 3D Printed Integrations of historical Elements: Comparisons and Results. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W15, 381–388. [Google Scholar] [CrossRef] [Green Version]
  45. Patrucco, G.; Rinaudo, F.; Spreafico, A. A New Handheld Scanner for 3D Survey of Small Artifacts: The STONEX F6. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W15, 895–901. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Case studies (from left to right): Cycladic figurine copy, Roman capital replica, stone bust of Francis Joseph I of Austria, and small sculpture of Christ Crucified.
Figure 2. Examples of partial and noise-containing reconstructions (from left to right): dataset 1 ARP, dataset 1 Regard3D 1.0.0 (R3D), dataset 3 VCM, dataset 3 R3D.
Figure 3. Partial meshes generated with ARP (left) and FZA (right) from dataset 9.
Figure 4. Scanning results. Untextured Stonex F6 SR mesh (left), untextured FARO Focus 3D X 330 mesh (center) and scalar field mapping of Hausdorff distances; maximum visualized distance: 1 cm.
Figure 5. Textured photogrammetric meshes of the figurine copy, (from left to right) dataset 1 AMP, dataset 1 FZA, dataset 2 AMP, dataset 2 FZA, dataset 3 AMP, dataset 3 FZA.
Figure 6. Untextured photogrammetric meshes of the figurine copy, (from left to right) dataset 1 AMP, dataset 1 FZA, dataset 2 AMP, dataset 2 FZA, dataset 3 AMP, dataset 3 FZA.
Figure 7. Untextured photogrammetric meshes from dataset 4, (from left to right) AMP, FZA, P4D, ARP, VCM.
Figure 8. Scalar field mapping of Hausdorff distances for dataset 4 photogrammetric results. Deviation between the ARP mesh and the AMP mesh (left), deviation between the ARP mesh and the FZA mesh (center), deviation between the ARP and the P4D mesh (right); maximum visualized distance: 1 cm.
Figure 9. Textured photogrammetric meshes of the capital replica from dataset 5 (from left to right): AMP, VCM, R3D.
Figure 10. Untextured photogrammetric meshes of the capital replica from dataset 5, (from left to right, and from top to bottom): AMP, FZA, P4D, ARP, VCM, R3D.
Figure 11. Untextured meshes of the stone bust from dataset 6 (from left to right, and from top to bottom): F6 SR, AMP, FZA, ARP, VCM, R3D.
Figure 12. Detail from the untextured photogrammetric meshes of the stone bust from dataset 6 (from left to right): F6 SR, FZA, ARP.
Figure 13. Scalar field mapping of Hausdorff distances for dataset 6 photogrammetric results. Deviation between the AMP mesh and the FZA mesh (left), deviation between the AMP mesh and the ARP mesh (center), deviation between the AMP and the VCM mesh (right); maximum visualized distance: 1 cm.
Figure 14. Untextured meshes of the small sculpture, (from left to right, and from top to bottom): F6 SR, AMP–dataset 7, AMP–dataset 8, FZA–dataset 7, FZA–dataset 8, and ARP–dataset 8.
Figure 15. VCM-produced mesh from dataset 9 (smartphone camera).
Figure 16. Scalar field mapping of Hausdorff distances between dataset 6 photogrammetric results and scanning results. Deviation between the F6 SR mesh and the AMP mesh (upper left), deviation between the F6 SR mesh and the FZA mesh (upper right), deviation between the F6 SR mesh and the ARP mesh (lower left), deviation between the F6 SR and the VCM mesh (lower right); maximum visualized distance: 1 cm.
Table 1. Specifications of the employed imaging sensors.

| Type | Mid-Size DSLR | Compact DSLR | Huawei P30 Phone Camera |
| Brand | Canon | Canon | Sony |
| Model | EOS 5DS R | EOS 1200D | IMX 650 |
| Resolution | 52 MP | 18 MP | 40 MP |
| Sensor Size | full frame | APS-C | 1/1.7″ |
| Pixel Size | 4.14 µm | 4.31 µm | 0.93 µm |
| Sensor Type | CMOS | CMOS | CMOS BSI |
| Lens used | Canon EF 24–105 mm f/4L IS USM | Canon EF-S 18–55 mm IS II | 5.6 mm (integrated) |
Table 2. Characteristics of imagery datasets.

| Dataset | Object | Camera Model | Megapixels | f (mm) | Distance (m) | No. of Images | f-Stop | Exposure (s) | ISO |
| 1 | Figurine copy | EOS 5DS R | 52 | 24 | 0.25 | 50 | f/7.1 | 1/20 | 200 |
| 2 | Figurine copy | EOS 1200D | 18 | 18 | 0.20 | 50 | f/8 | 1/20 | 200 |
| 3 | Figurine copy | Exmor RS IMX650 | 40 | 5.6 | 0.25 | 50 | f/8 | 1/20 | 200 |
| 4 | Capital replica | EOS 5DS R | 52 | 35 | 0.88 * | 50 | f/7.1 | 1/40 | 200 |
| 5 | Capital replica | EOS 1200D | 18 | 18 | 0.69 * | 50 | f/8 | 1/40 | 200 |
| 6 | Stone bust | EOS 1200D | 18 | 18 | 0.90 * | 50 | f/16 | 1/60 | 100 |
| 7 | Small sculpture | EOS 1200D | 18 | 18 | 0.38 | 142 | f/16 | 1/15 | 100 |
| 8 | Small sculpture | EOS 1200D | 18 | 55 | 0.27 | 60 | f/16 | 1/15 | 100 |
| 9 | Small sculpture | Exmor RS IMX650 | 40 | 5.6 | 0.12 | 60 | f/1.8 | 1/50 | 100 |

Note: * signifies average values.
Table 3. Processing parameters of image-based photogrammetric modeling.

| Reconstruction Step | Parameter | Value |
| Feature detection and matching/alignment | Key point density | High (50K) |
| | Tie point density | High (50K) |
| | Pair preselection | Higher matches |
| | Camera model fitting | Refine |
| Dense matching | Point density | High |
| | Depth filtering | Moderate |
| Mesh generation | Max number of faces | 5M (10M for capital replica) |
| | Surface interpolation | Limited |
| Texture generation | Texture size | 8192 × 8192 pixels |
| | Color balancing | Disabled |
Table 4. Photogrammetric results, datasets 1–3.

| | | Dataset 1 AMP | Dataset 1 FZA | Dataset 2 AMP | Dataset 2 FZA | Dataset 3 AMP | Dataset 3 FZA |
| Sparse cloud | Images aligned | 50 | 50 | 50 | 50 | 50 | 42 |
| | Matching time (hh:mm:ss) | 00:00:40 | 00:02:48 | 00:00:18 | 00:01:40 | 00:00:41 | 00:05:34 |
| | Alignment time (hh:mm:ss) | 00:00:19 | 00:01:11 | 00:00:06 | 00:00:20 | 00:00:10 | 00:00:34 |
| | Tie points (1000 points) | 98 | 24 | 29 | 19 | 77 | 27 |
| | Projections (1000 points) | 321 | 136 | 92 | 91 | 212 | 118 |
| | Adjustment error (pixels) | 0.49 | 0.79 | 0.54 | 0.46 | 0.65 | 0.72 |
| | Resolution (mm/pixel) | 0.05 | 0.05 | 0.06 | 0.06 | 0.04 | 0.04 |
| Dense cloud | Processing time (hh:mm:ss) | 00:10:31 | 01:16:39 | 00:04:31 | 00:24:03 | 00:09:09 | 00:46:40 |
| | Point count (1000 points) | 18,325 | 911 | 1693 | 702 | 4143 | 920 |
| Triangle mesh | Processing time (hh:mm:ss) | 00:00:21 | 00:00:08 | 00:00:16 | 00:00:47 | 00:00:30 | 00:00:10 |
| | Faces (1000 triangles) | 4482 | 1168 | 2846 | 737 | 5000 | 1551 |
| | Vertices (1000 points) | 2246 | 589 | 1427 | 369 | 2514 | 783 |
| Texture | Processing time (hh:mm:ss) | 00:04:07 | 00:04:01 | 00:02:46 | 00:01:25 | 00:05:49 | 00:02:32 |
| Total time (hh:mm:ss) | | 00:15:58 | 01:24:47 | 00:07:57 | 00:28:15 | 00:16:19 | 00:55:30 |
Table 5. Photogrammetric results, datasets 4 and 5.

| | | Dataset 4 AMP | Dataset 4 FZA | Dataset 4 P4D | Dataset 5 AMP | Dataset 5 FZA | Dataset 5 P4D |
| Sparse cloud | Images aligned | 50 | 50 | 50 | 50 | 50 | 50 |
| | Matching time (hh:mm:ss) | 00:01:05 | 00:10:14 | 00:00:51 | 00:01:04 | 00:09:23 | 00:00:49 |
| | Alignment time (hh:mm:ss) | 00:00:33 | 00:01:02 | 00:02:53 | 00:00:21 | 00:00:27 | 00:01:50 |
| | Tie points (1000 points) | 197 | 78 | 1262 | 102 | 52 | 547 |
| | Projections (1000 points) | 535 | 361 | 2697 | 258 | 247 | 1126 |
| | Adjustment error (pixels) | 0.98 | 1.44 | 0.17 | 0.69 | 0.94 | 0.11 |
| | Resolution (mm/pixel) | 0.08 | 0.09 | 0.08 | 0.16 | 0.16 | 0.16 |
| Dense cloud | Processing time (hh:mm:ss) | 00:23:15 | 01:44:35 | 00:11:35 | 00:07:51 | 00:31:01 | 00:03:15 |
| | Point count (1000 points) | 43,611 | 2168 | 12,032 | 10,941 | 1811 | 3742 |
| | Manual denoising | no | no | no | no | no | no |
| Triangle mesh | Processing time (hh:mm:ss) | 00:36:40 | 00:00:27 | 00:07:20 | 00:03:44 | 00:00:21 | 00:00:44 |
| | Faces (1000 triangles) | 10,000 | 4245 | 10,000 | 9995 | 3587 | 10,000 |
| | Vertices (1000 points) | 7739 | 2935 | 7445 | 5507 | 2293 | 6773 |
| Texture | Processing time (hh:mm:ss) | 00:36:16 | 00:07:00 | 00:35:40 | 00:11:35 | 00:04:36 | 00:10:02 |
| Total time (hh:mm:ss) | | 01:37:49 | 02:03:18 | 00:58:19 | 00:24:35 | 00:45:48 | 00:16:40 |
Table 6. Photogrammetric results, dataset 6.

| Stage | Metric | VCM | R3D | ARP | AMP | FZA |
|---|---|---|---|---|---|---|
| Sparse cloud | Images aligned | 50 | 48 | 50 | 50 | 50 |
| | Matching time (hh:mm:ss) | 00:02:19 | 00:03:36 | — | 00:00:36 | 00:00:59 |
| | Alignment time (hh:mm:ss) | 00:01:03 | 00:00:30 | — | 00:00:13 | 00:17:34 |
| | Tie points (1000 points) | 23 | 143 | — | 59 | 48 |
| | Projections (1000 points) | 75 | 498 | — | 156 | 205 |
| | Adjustment error (pixels) | 1.30 | 0.17 | — | 0.52 | 0.60 |
| Dense cloud | Processing time (hh:mm:ss) | 00:11:39 | 00:23:01 | — | 00:05:37 | 00:22:22 |
| | Point count (1000 points) | 1582 | 11,786 | — | 9880 | 2666 |
| Triangle mesh | Processing time (hh:mm:ss) | 00:06:05 | 00:01:02 | — | 00:06:31 | 00:02:01 |
| | Faces (1000 triangles) | 1451 | 252 | 1003 | 5000 | 3737 |
| | Vertices (1000 points) | 726 | 127 | 1848 | 2500 | 1873 |
| Texture | Processing time (hh:mm:ss) | 00:01:52 | 00:00:48 | — | 00:03:10 | 00:03:54 |
| Total | Total time (hh:mm:ss) | 00:22:58 | 00:28:57 | — | 00:16:07 | 00:46:50 |

Note: — denotes values not reported.
Table 7. Photogrammetric results, datasets 7–9.

| Stage | Metric | Dataset 7 AMP | Dataset 7 FZA | Dataset 8 AMP | Dataset 8 FZA | Dataset 8 VCM | Dataset 9 AMP | Dataset 9 FZA |
|---|---|---|---|---|---|---|---|---|
| Sparse cloud | Images aligned | 142 | 69 | 60 | 60 | 60 | 60 | 23 |
| | Matching time (hh:mm:ss) | 00:01:05 | 00:03:40 | 00:01:55 | 00:01:39 | 00:01:22 | 00:01:38 | 00:02:58 |
| | Alignment time (hh:mm:ss) | 00:00:24 | 00:09:05 | 00:01:19 | 00:46:48 | 00:01:28 | 00:00:55 | 00:28:02 |
| | Tie points (1000 points) | 89 | 36 | 420 | 132 | 54 | 88 | 34 |
| | Projections (1000 points) | 273 | 154 | 1270 | 803 | 253 | 242 | 127 |
| | Adjustment error (pixels) | 0.52 | 0.52 | 0.35 | 0.47 | 1.02 | 1.15 | 1.43 |
| | Resolution (mm/pixel) | 0.09 | 0.09 | 0.02 | 0.02 | 0.02 | 0.02 | 0.2 |
| Dense cloud | Processing time (hh:mm:ss) | 00:23:10 | 00:55:20 | 00:14:27 | 00:42:16 | 00:15:43 | 00:25:30 | 00:25:06 |
| | Point count (1000 points) | 2058 | 4211 | 9980 | 3958 | 1764 | 11,325 | 1720 |
| Triangle mesh | Processing time (hh:mm:ss) | 00:01:23 | 00:02:30 | 00:02:58 | 00:03:34 | 00:06:03 | 00:05:32 | 00:01:13 |
| | Faces (1000 triangles) | 4846 | 4605 | 5000 | 4839 | 3864 | 5000 | 2061 |
| | Vertices (1000 points) | 2424 | 2312 | 2563 | 2500 | 1935 | 2504 | 1055 |
| Texture | Processing time (hh:mm:ss) | 00:09:25 | 00:18:25 | 00:03:59 | 00:09:50 | 00:09:47 | 00:04:09 | 00:02:51 |
| Total | Total time (hh:mm:ss) | 00:35:27 | 01:29:00 | 00:24:38 | 01:44:07 | 00:34:23 | 00:37:44 | 01:00:10 |
Table 8. Specifications of the employed scanners.

| Type | Phase-Based Laser Scanner | Handheld Structured Light Scanner | Handheld Structured Light Scanner |
|---|---|---|---|
| Brand | FARO | FARO | STONEX |
| Model | Focus 3D X 330 | Freestyle3D | F6 SR |
| Accuracy | 2 mm | 1.5 mm | 0.09 mm |
| Point density | 0.2 mm | 0.2 mm | 0.4 mm |
| Depth of field | 0.6–130 m | 0.3–0.8 m | 0.25–0.5 m |
| Acquisition speed | up to 976,000 points/s | up to 88,000 points/s | up to 640,000 points/s |
| Noise level | 0.3 mm | 0.7 mm | 0.5 mm |
| Approx. price | EUR 25,000 | EUR 13,000 | EUR 10,000 |
Table 9. Scanning results, capital replica.

| | STONEX F6 SR | FARO Focus3D X 330 | FARO Freestyle |
|---|---|---|---|
| Acquisition duration (mm:ss) | 02:16 | 90:56 | 10:40 |
| Registration duration (mm:ss) | 05:08 | 14:35 | — |
| Denoising duration (mm:ss) | 24:15 | 02:26 | 00:02 |
| Meshing duration (mm:ss) | 01:23 | 04:01 | 01:27 |
| Cloud points (1000 points) | 20,928 | 1289 | 435 |
| Mesh triangles (1000 triangles) | 6350 | 6488 | 1951 |
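Before any of the cross-comparisons reported below, each pair of models must share a common reference frame and scale; residual misalignment would otherwise inflate every distance statistic. A minimal sketch of such a fine-registration step using the open-source Open3D library (the file names and the 5 mm correspondence threshold are assumptions for illustration; the paper's alignments were not necessarily produced this way):

```python
import numpy as np
import open3d as o3d

# Load two digitizations of the same object (hypothetical file names, units: metres).
source = o3d.io.read_point_cloud("photogrammetric_model.ply")
target = o3d.io.read_point_cloud("scanner_model.ply")

# Point-to-plane ICP needs target normals.
target.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.005, max_nn=30))

# Refine a rough manual or feature-based alignment with ICP so that the
# subsequent Hausdorff distances measure geometry, not pose error.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.005,  # 5 mm search radius (assumption)
    init=np.eye(4),                     # replace with the coarse alignment
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
source.transform(result.transformation)
```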
Table 10. Hausdorff distances between the photogrammetric models for the figurine copy case study—datasets 1–3 (distances in mm; each cell reports mean abs. / std. dev.).

| | Dataset 2 AMP | Dataset 3 AMP | Dataset 1 FZA | Dataset 2 FZA | Dataset 3 FZA |
|---|---|---|---|---|---|
| Dataset 1 AMP | 0.15 / 0.11 | 0.17 / 0.14 | 0.21 / 0.29 | 0.16 / 0.15 | 0.19 / 0.18 |
| Dataset 2 AMP | | 0.19 / 0.16 | 0.23 / 0.28 | 0.21 / 0.18 | 0.18 / 0.14 |
| Dataset 3 AMP | | | 0.18 / 0.17 | 0.17 / 0.14 | 0.15 / 0.10 |
| Dataset 1 FZA | | | | 0.20 / 0.20 | 0.17 / 0.16 |
| Dataset 2 FZA | | | | | 0.16 / 0.10 |
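The cells in Table 10, and in Tables 11–17 below, report the mean absolute value and the standard deviation of sampled point-to-surface (Hausdorff-type) distances for each model pair. A rough equivalent of such statistics can be computed with Open3D by sampling both meshes and taking nearest-neighbour distances; the sketch below approximates dedicated Hausdorff filters such as MeshLab's, and the file names are placeholders:

```python
import numpy as np
import open3d as o3d

def distance_stats(path_a: str, path_b: str, n: int = 500_000):
    """Mean abs. and std. dev. of one-directional mesh-to-mesh distances."""
    mesh_a = o3d.io.read_triangle_mesh(path_a)
    mesh_b = o3d.io.read_triangle_mesh(path_b)
    # Sample dense, uniform point sets on both surfaces.
    pts_a = mesh_a.sample_points_uniformly(number_of_points=n)
    pts_b = mesh_b.sample_points_uniformly(number_of_points=n)
    # Nearest-neighbour distance from every sample of A to the samples of B.
    d = np.asarray(pts_a.compute_point_cloud_distance(pts_b))
    return d.mean(), d.std()

# Hypothetical exports of two co-registered models (units: metres).
mean_abs, std_dev = distance_stats("amp_dataset1.ply", "fza_dataset1.ply")
print(f"mean abs. = {1000 * mean_abs:.2f} mm, std. dev. = {1000 * std_dev:.2f} mm")
```

Averaging the two directions (A to B and B to A) yields a symmetric figure; taking the maximum over both directions yields the classical Hausdorff distance.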
Table 11. Hausdorff distances between the photogrammetric models for the capital replica case study—dataset 4 (distances in mm; each cell reports mean abs. / std. dev.).

| | FZA | P4D | ARP | VCM | R3D |
|---|---|---|---|---|---|
| AMP | 0.66 / 0.45 | 0.75 / 1.30 | 0.76 / 0.59 | 0.69 / 0.54 | 0.80 / 0.57 |
| FZA | | 0.80 / 1.50 | 0.72 / 0.78 | 0.72 / 0.71 | 0.79 / 0.68 |
| P4D | | | 0.95 / 2.14 | 0.94 / 1.06 | 0.96 / 1.07 |
| ARP | | | | 0.80 / 0.67 | 0.82 / 0.64 |
| VCM | | | | | 0.60 / 0.59 |
Table 12. Hausdorff distances between the photogrammetric models for the capital replica case study—dataset 5 (distances in mm; each cell reports mean abs. / std. dev.).

| | FZA | P4D | ARP | VCM | R3D |
|---|---|---|---|---|---|
| AMP | 0.60 / 0.45 | 0.68 / 0.81 | 0.72 / 0.99 | 0.50 / 0.50 | 5.45 / 3.33 |
| FZA | | 0.94 / 1.06 | 1.04 / 1.23 | 0.73 / 0.68 | 5.37 / 3.23 |
| P4D | | | 1.07 / 1.51 | 0.90 / 0.95 | 5.48 / 3.22 |
| ARP | | | | 1.05 / 1.51 | 5.55 / 3.23 |
| VCM | | | | | 5.37 / 3.26 |
Table 13. Hausdorff distances between the photogrammetric models for the stone bust case study—dataset 6 (distances in mm; each cell reports mean abs. / std. dev.).

| | FZA | ARP | VCM | R3D |
|---|---|---|---|---|
| AMP | 0.82 / 0.58 | 1.28 / 0.89 | 0.69 / 0.82 | 1.03 / 1.31 |
| FZA | | 1.21 / 1.31 | 1.00 / 1.15 | 1.11 / 1.37 |
| ARP | | | 1.44 / 1.19 | 1.68 / 1.50 |
| VCM | | | | 1.21 / 1.36 |
Table 14. Hausdorff distances between photogrammetric models for the small sculpture case study—datasets 7–9 (distances in mm; each cell reports mean abs. / std. dev.).

| | AMP–Dataset 7 | FZA–Dataset 7 | AMP–Dataset 8 | VCM–Dataset 9 |
|---|---|---|---|---|
| AMP–Dataset 8 | 0.70 / 1.45 | 0.81 / 1.53 | 0.24 / 0.48 | 0.28 / 0.86 |
Table 15. Hausdorff distances between the 3D scanning and photogrammetric models from dataset 4 (distances in mm; each cell reports mean abs. / std. dev.).

| | AMP | FZA | P4D | ARP | VCM | R3D |
|---|---|---|---|---|---|---|
| 3D X 330 | 0.84 / 2.18 | 1.01 / 1.57 | 1.32 / 2.14 | 1.17 / 1.69 | 1.21 / 1.79 | 1.01 / 1.04 |
| F6 SR | 1.72 / 1.75 | 1.23 / 1.40 | 1.23 / 1.43 | 1.48 / 1.61 | 1.21 / 1.39 | 1.17 / 1.35 |
Table 16. Hausdorff distances between 3D scanning and photogrammetric models from dataset 5 (distances in mm; each cell reports mean abs. / std. dev.).

| | AMP | FZA | P4D | ARP | VCM | R3D |
|---|---|---|---|---|---|---|
| 3D X 330 | 1.46 / 1.96 | 1.29 / 1.90 | 1.37 / 2.16 | 1.42 / 2.04 | 1.25 / 2.09 | 5.75 / 3.30 |
| F6 SR | 1.53 / 2.16 | 1.54 / 1.41 | 1.22 / 1.43 | 1.51 / 1.69 | 1.31 / 1.44 | 6.22 / 3.33 |
Table 17. Hausdorff distances between the 3D scanning and photogrammetric models from dataset 6 (distances in mm; each cell reports mean abs. / std. dev.).

| | AMP | FZA | ARP | VCM | R3D |
|---|---|---|---|---|---|
| F6 SR | 0.63 / 0.73 | 0.66 / 0.65 | 1.22 / 1.00 | 0.75 / 0.92 | 1.11 / 1.38 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Adamopoulos, E.; Rinaudo, F.; Ardissono, L. A Critical Comparison of 3D Digitization Techniques for Heritage Objects. ISPRS Int. J. Geo-Inf. 2021, 10, 10. https://doi.org/10.3390/ijgi10010010

AMA Style

Adamopoulos E, Rinaudo F, Ardissono L. A Critical Comparison of 3D Digitization Techniques for Heritage Objects. ISPRS International Journal of Geo-Information. 2021; 10(1):10. https://doi.org/10.3390/ijgi10010010

Chicago/Turabian Style

Adamopoulos, Efstathios, Fulvio Rinaudo, and Liliana Ardissono. 2021. "A Critical Comparison of 3D Digitization Techniques for Heritage Objects" ISPRS International Journal of Geo-Information 10, no. 1: 10. https://doi.org/10.3390/ijgi10010010

APA Style

Adamopoulos, E., Rinaudo, F., & Ardissono, L. (2021). A Critical Comparison of 3D Digitization Techniques for Heritage Objects. ISPRS International Journal of Geo-Information, 10(1), 10. https://doi.org/10.3390/ijgi10010010

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop