Article

Color Models in the Process of 3D Digitization of an Artwork for Presentation in a VR Environment of an Art Gallery †

Irena Drofova * and Milan Adamek
Faculty of Applied Informatics, Tomas Bata University in Zlín, 760 05 Zlín, Czech Republic
*
Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Analysis of Natural Lighting Condition for the Digitization of Artwork in an Art Gallery Interior. In Proceedings of the WSCG 2024 International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, Computer Science Research Notes, Prague, Czech Republic, 3–6 June 2024; pp. 391–394. https://doi.org/10.24132/CSRN.3401.43.
Electronics 2024, 13(22), 4431; https://doi.org/10.3390/electronics13224431
Submission received: 2 October 2024 / Revised: 1 November 2024 / Accepted: 5 November 2024 / Published: 12 November 2024
(This article belongs to the Section Electronic Multimedia)

Abstract

This study deals with the color reproduction of a work of art during its digitization into a realistic 3D model. The experiment aims to digitize a work of art for application in a virtual reality environment with an emphasis on faithful color reproduction. Photogrammetry and scanning with a LiDAR sensor are used to compare the two methods and their handling of color during the reconstruction of the 3D model. A modern tablet with a camera and LiDAR sensor is used for both methods. At the same time, current findings from the fields of color vision and colorimetry are applied to the 3D reconstruction. The experiment focuses on working with the RGB and L*a*b* color models and, simultaneously, on the sRGB, CIE XYZ, and Rec.2020 (HDR) color spaces for transforming colors into a virtual environment. For this purpose, the color is defined in the Hex Color Value format. This experiment is a starting point for further research on color reproduction in the digital environment and represents a partial contribution to the much-discussed area of forgeries of works of art in current forensic practice.

1. Introduction

Currently, digitization processes are reflected in all areas of human activity. Digital technologies are used across commercial, scientific, and artistic fields. Especially in the field of art, emphasis is often placed on the highly realistic quality of digital reproduction. Digital technologies are already a standard part of capturing and processing images. Moreover, digital technology and image processing have become an integral part of artistic creation itself [1]. Nevertheless, the digital reproduction of works of art remains an excellent area for applying new procedures, such as machine learning, especially in connection with new trends such as 3D visualization and virtual presentations in the online environment [2,3].
Realistic 3D digital reproduction of a work of art also brings many challenges and unsolved issues in image processing, which in turn depend on the underlying 2D digitization [4]. One of them is the high quality of the reproduction of the object in connection with its texture and color, which often change under the influence of light and weather conditions. This problem is noted especially in exterior exhibitions and architecture [5]. The technology and methodology chosen for the 3D digitization process also greatly influence the quality of the digitization of the object [6]. The method and process of digitization depend mainly on the intended final output of the digitized object. This can be 2D or 3D printing, a 3D online presentation, or the use of the object in a virtual and augmented reality (VR/AR) environment in interaction with the user [7,8]. For a realistic digital 3D reproduction of a work of art, great emphasis is placed on the quality of the digital image and its realism. Image reproduction aims to come as close as possible to the original [9]. A fundamental attribute for evaluating reproduction quality is color, which is directly related to light and human vision. This issue is the subject of colorimetry, the science of color and human vision [10,11,12].
This text responds to current trends in the digitization of art and the issue of realistic digital reproduction using the photogrammetry method, and it significantly expands the findings of a short paper presented at an international conference in Pilsen, Czech Republic [13,14]. The work also considers the use of LiDAR (Light Detection And Ranging) sensors for rapid image capture using an application on a mobile device [15,16]. The presented experiment aims to determine the extent to which ambient lighting conditions can influence the 3D digitization of a work of art in connection with the chosen modeling method. Previous research in the field of image digitization used sensing devices such as compact and DSLR (Digital Single-Lens Reflex) cameras, mobile device cameras, laser scanners, and 360° cameras. A single sensing device is used in this experiment [17,18,19,20].
The following chapters describe the digitization of the artwork in an interior using the SfM (Structure from Motion) photogrammetry method and scanning with a LiDAR sensor, with image capture performed indoors in daylight. The influence of lighting conditions on the color reproduction of the 3D model is analyzed in 3D point cloud models for one precisely defined color, #758605 (Hex Color Value), corresponding to the RGB color model (117, 134, 5) and CIELAB/L*a*b* (54.30, −8.46, 3.83). The findings from this experiment will subsequently be used to analyze the color visualization of realistic 3D digital reproductions of artworks in a VR environment. Current studies on color reproduction in artistic paintings do not focus on direct, realistic color reproduction based on a precisely defined color or a set of colors measured on an object [21,22,23].
Therefore, working with colors and color models is a suitable approach to the realistic reproduction of works of art in digital and virtual environments. Accurate color reproduction and work with the color light spectrum can be a suitable complementary process for determining the original color in a digital environment. The market for duplicates, not only in the field of art but also in the clothing and industrial sectors, represents a large area for the trade in counterfeits [24,25]. Forensic science is also applied here, mainly in digital forensic art, whose subject includes the realistic 2D and 3D reproduction of objects [26,27]. In the digital reproduction of works of art, the color and texture of the material play important roles, and both attributes are affected by the type and nature of the lighting. This experiment represents first basic research in the field of realistic color reproduction in a virtual environment, with the aim of application in criminology and forensic investigation, complementing the methods currently applied in the forensic sciences.

2. Materials and Methods

With the development of digital technologies and sensing devices, which gradually replaced analog image processing, digitization processes and graphic software for image processing have also been developed and improved according to the type and purpose of the final output. This includes digitization for print, digital media, and 3D printing, as well as 2D and 3D online presentations [25,28]. Currently, virtual and augmented reality (VR/AR) technologies are also being perfected and are now available to the general public, especially in the gaming industry [16,29,30,31]. The following section describes the digitization of the artwork. Section 2.3 and Section 2.4 describe image digitization using the SfM (Structure from Motion) photogrammetry method and LiDAR sensor scanning to create a realistic 3D digital model.

2.1. Digitization of Art: Art Painting with Acrylic Paints on Canvas

The artistic object for 3D digital reproduction was painted with acrylic paints on canvas. The artwork is dominated by green and brown acrylic paints, and the tones of these two colors are created by mixing them, as seen in Figure 1a. The base color space is marked in red in this figure. At this spot, the color value was measured with a Colorcatch NANO colorimeter from the Swiss company Colorix SA (Neuchâtel, Switzerland), giving a direct green color of RGB = 117, 134, 123 and L*a*b* = 54.30, −8.46, 3.83 [32]. These values proved sufficient to convert the color value into the Hex = #758605 format for subsequent color segmentation in a realistic 3D point cloud model of the object. The CIE Lab/L*a*b* color space model is applied in digital imaging across capture and display devices.
Figure 1a shows a photograph of the digitized object with the marked area in which the value of the direct green color RGB = 117, 134, 123 was measured by the Colorcatch NANO colorimeter. Figure 1b shows a detailed matrix of the color structure; the marked matrix element lies in the yellow range of 5.563 µm × 4.171 µm. Figure 1c shows the detail of the material structure of the object for the marked element of this matrix. This basic visual inspection of the color of the object at the level of its material structure (acrylic paint on canvas) aimed to determine the possible influence of the material structure on the generation of unwanted points during the creation of a realistic 3D model using the photogrammetry method. For the LiDAR sensor, a slight deformation of the 3D model structure was likewise assumed. A Keyence VK-X3000 3D laser scanning microscope (Keyence International NV/SA, Mechelen, Belgium) was used for the visual inspection of the material structure [33]. This device provides scanning adaptable enough to identify minor surface features of a material.
The object was scanned in daylight in the natural interior environment of an art gallery, with the aim of capturing the object’s color as it is perceived by human vision in this environment. The lighting conditions in the interior met the D65 standard (Standard Illuminant). Furthermore, the art object was photographed at dusk to evaluate whether it was significantly affected by the change in lighting conditions in the art gallery.

2.2. Digital Image Capture

Digital devices have gradually replaced analog sensing devices. Digital compact cameras and Digital Single-Lens Reflex (DSLR) cameras have gradually been supplemented by 360° cameras and scanners, RGB-D cameras, Light Detection And Ranging (LiDAR) sensor technology, and other types of digital devices. Currently, smart mobile devices, such as mobile phones and tablets, are commonly used for these purposes, and emphasis is placed on low-cost methods and procedures [15,17,19,20]. A modern mobile device with LiDAR technology was used in this experiment: an Apple iPad Pro 11″ smart tablet was used to capture and digitize the artwork. This smart device has a high-resolution camera and a LiDAR sensor [34] and was used for the 3D reconstruction of the work of art using terrestrial image photogrammetry. At the same time, the free Scaniverse application from Niantic Labs, intended directly for the 3D digitization of objects and spaces using the LiDAR sensor, was used to compare the quality of the 3D model [35]. Both methods are described in the following sections. The professional 3D modeling software Agisoft Metashape Professional (online: agisoft.cz, 2021) from Agisoft (St. Petersburg, Russia) was used for the 3D reconstruction of the artwork and the analysis of its color 3D reproduction [36].

2.3. 3D Reconstruction by Photogrammetry Method

The SfM (Structure from Motion) photogrammetry method calculates the location of an object in 3D space from information extracted from individual images taken from multiple angles. In the case of the specific object and the 3D reconstruction described below, 250 photos were involved; an algorithm based on the principle of triangulation finds common feature points in the individual photos and calculates the position of the camera around the object for each image. By subsequently calculating the Dense Cloud (dense point cloud), each point obtains its own x, y, and z coordinates and thus defines basic information about the position, size, and geometry of the object in space. Figure 2 shows the principle of the photogrammetry method [14,37].
$$I' + \frac{l_1 X + l_2 Y + l_3 Z + l_4}{l_9 X + l_{10} Y + l_{11} Z + 1} = 0, \qquad J' + \frac{l_5 X + l_6 Y + l_7 Z + l_8}{l_9 X + l_{10} Y + l_{11} Z + 1} = 0$$
where $I' = I - I_0$ and $J' = J - J_0$ are the image coordinates reduced to the principal point, and $l_1, \ldots, l_{11}$ are the Direct Linear Transformation Parameters (DLTP). The coefficients $l_1$ to $l_{11}$ are functions of the exterior and interior orientation elements; initial values of these elements are not needed in the calculation. The DLT equation can therefore be used in photogrammetry with consumer-class digital cameras [19,37]. The basic generated point cloud of the digitized object is shown in Figure 3. In this case, the basic 3D point cloud was created from 24 images, from which 13,828 points were generated.
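For illustration, the DLT parameters can be estimated by linear least squares: each reference point with known object coordinates (X, Y, Z) and reduced image coordinates (I', J') contributes the two equations above, so at least six points are needed for the eleven unknowns. The following Python sketch is our own illustrative helper, not part of the cited method; the function name and array layout are assumptions.

```python
import numpy as np

def solve_dlt(object_pts, image_pts):
    """Estimate the DLT parameters l1..l11 by linear least squares.

    object_pts: (N, 3) array of object coordinates X, Y, Z (N >= 6)
    image_pts:  (N, 2) array of reduced image coordinates I', J'
    """
    A, b = [], []
    for (X, Y, Z), (i, j) in zip(object_pts, image_pts):
        # l1 X + l2 Y + l3 Z + l4 + I' * (l9 X + l10 Y + l11 Z) = -I'
        A.append([X, Y, Z, 1, 0, 0, 0, 0, i * X, i * Y, i * Z])
        b.append(-i)
        # l5 X + l6 Y + l7 Z + l8 + J' * (l9 X + l10 Y + l11 Z) = -J'
        A.append([0, 0, 0, 0, X, Y, Z, 1, j * X, j * Y, j * Z])
        b.append(-j)
    l, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return l  # l[0]..l[10] correspond to l1..l11
```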
Figure 3a shows a 2D view of the artwork, and Figure 3b shows the 3D point cloud generated from the 24 photographs to which the SfM method was applied; the principle of this method is shown in Figure 2. The generated points provide information about the reconstructed object’s position, geometry, and color. From this basic information in the primary point cloud, further calculation adds the points that form the Dense Cloud. This higher number of points renders the shape of the 3D object and its position in space more precisely, as shown in Figure 3c. This creates a complete point model, from which it can be determined where the calculation did not define points and where points need to be added. The resulting 3D model corresponds to the shape and structure of the physical object in real space, as shown in Figure 3d. The generated Dense Cloud 3D model contains 413,688 individual points. Agisoft Metashape Professional software was used for the 3D modeling. The following section describes and visualizes in more detail the method of 3D reconstruction using a smart device with a LiDAR sensor.
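The principle in Figure 2 can also be illustrated outside a dedicated photogrammetry package. The following minimal two-view Python sketch with OpenCV matches features, recovers the relative camera pose, and triangulates a sparse point cloud; it illustrates the SfM principle only, not the Agisoft Metashape pipeline, and the file names and intrinsic matrix K are placeholder assumptions.

```python
import cv2
import numpy as np

img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[3000.0, 0.0, 1500.0],      # assumed camera intrinsics
              [0.0, 3000.0, 1000.0],
              [0.0, 0.0, 1.0]])

# Detect and match SIFT features between the two views (ratio test)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = [m for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2)
           if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Relative pose of the two cameras from the essential matrix
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate inlier correspondences into a sparse 3D point cloud
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
inl = mask.ravel() > 0
pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
cloud = (pts4d[:3] / pts4d[3]).T  # (N, 3) points in space
```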

2.4. 3D Reconstruction by a LiDAR Sensor

When a LiDAR sensor and an image processing application are used, photographs are not the source of the primary reconstruction of the point cloud. The finished 3D digital model of the artwork was scanned by the sensor directly in the free Scaniverse application and then transferred to the Agisoft Metashape Professional 3D software. A point cloud was generated from this 3D model for subsequent image analysis. This method was chosen to compare the quality of the digital reproduction of a work of art captured by the same capture device, the Apple iPad Pro 11″ tablet.
Figure 4 shows the 3D reconstruction of the object using the LiDAR sensor. The Scaniverse application was used to create the 3D model, which is shown in Figure 4b. This textured 3D model was transferred to the Agisoft software to generate a point cloud. Figure 4c shows the 3D model imported into the Agisoft 3D software. In this case, a Dense Cloud of only 23 points was generated. Figure 4d shows the detail of the generated 3D points, and Figure 5 shows the detail of the Dense Cloud.
Figure 5 shows the details of the Dense Cloud structure of the digitized artwork. Figure 5a shows the detail of the Dense Cloud and the individual points generated from the photographs and the primary point cloud produced by the SfM photogrammetry method; in total, 413,688 individual points were generated in this 3D model. Figure 5b shows the detail of the 23 points generated from the 3D texture model made by the LiDAR sensor. The individual points carry the color information that ultimately identifies the points with the color value #758605.
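Assuming the generated Dense Cloud is exported from the modeling software to a standard point cloud format such as PLY (Agisoft Metashape supports point cloud export), the per-point color information can be inspected programmatically, for example with the Open3D library; the file name below is a placeholder.

```python
import numpy as np
import open3d as o3d

# Load a dense cloud exported from the 3D modeling software
pcd = o3d.io.read_point_cloud("dense_cloud_sfm.ply")
points = np.asarray(pcd.points)  # (N, 3) x, y, z coordinates
colors = np.asarray(pcd.colors)  # (N, 3) RGB values in the range 0.0-1.0
print(f"{len(points)} points, each carrying a color value")
```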

3. Colorimetry and Color Analysis

Colorimetry is the science of color and light. The field deals with the color interpretation of objects and the environment, human vision, and color reproduction. Light is electromagnetic radiation whose spectrum includes visible light (400–700 nm), the range perceptible to the human eye. The perceived color of light is based on the wavelength λ and its subjective perception by the human eye, and color image processing is directly related to this. For these purposes, color models and gamuts define the possible color display of individual tones and their maximum range. To unify color reproduction, the international standard CIE 1931 was adopted in 1931, and it still underpins modern technologies and procedures across manufacturing and scientific fields.

3.1. Color Model and Gamut

In this experiment, in which a real object is transformed into a digital form, the RGB (red, green, blue) color model and the sRGB color space (gamut) are used. Figure 6 shows their graphical representation.
Figure 6a shows the RGB (Red, Green, Blue) color model, the basic color model of primary colors. The RGB color model operates with components of light and primarily targets digital imaging and imaging devices (DSLR cameras, displays). The secondary colors created by mixing the primary RGB colors are CMY (Cyan, Magenta, Yellow), and mixing all three primary RGB components creates white light (W). Figure 6b shows the CIE 1931 trichromatic triangle, which has standardized work with colors since 1931.
The sRGB gamut (color space) is drawn inside the trichromatic triangle in Figure 6b. This space represents the maximum color range in which a digital sensing or imaging device can operate, and most current digital imaging and sensing devices display the color spectrum within the sRGB gamut. Therefore, the RGB model was chosen as the default color model for this experiment. The values of the individual color components of the direct color #758605 (Hex Color Value) are as follows: R = 117; G = 134; B = 5. Following on from this section, Section 3.3 pays attention to the L*a*b* color space and gamut, which is suitable for subsequent work with color reproduction and display in the VR environment.
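A Hex Color Value is simply the three 8-bit RGB components written in hexadecimal, so #758605 decomposes into exactly the components listed above; a trivial Python illustration:

```python
hex_color = "#758605"
# Each pair of hex digits is one 8-bit channel: 0x75=117, 0x86=134, 0x05=5
r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
print(r, g, b)  # 117 134 5
```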

3.2. Color Value #758605 Segmentation

The Agisoft 3D modeling software environment can work with the color information of individual points or groups of points in the Dense Cloud in the RGB and HSV color models. This can be performed with a precisely defined Hex Color Value, as shown in Figure 7.
Figure 7 shows the segmentation of points carrying the color value #758605. Figure 7a shows a 3D Dense Cloud model of individual points generated from photographs by the SfM photogrammetry method. Figure 7b visualizes the Agisoft software environment for working with the Dense Cloud and the color value carried by each point. The 3D software works with the RGB and HSV (Hue, Saturation, Value) color models in this case, and values can be defined numerically in Hex format. However, it should be noted that the RGB color model does not contain tonal values such as the Hue, Saturation, and Value of the HSV color model. Figure 7c shows an example of the final segmentation of points with the color value #758605 out of the total number of points in the 3D model cloud.
Figure 8 shows the process of segmenting points using the color information #758605 for the LiDAR model. Figure 8a shows the 3D texture model captured by the LiDAR sensor in the Agisoft 3D software. A point cloud containing 17 points was generated from this texture model, as shown in Figure 8b. The segmentation process for points with the color information #758605 is identical to that of the SfM method shown in Figure 7b. As shown in Figure 8c, not a single point with the color value #758605 was found among the total number of points. For this reason, the segmentation of points in the 3D models from the LiDAR sensor is not shown in Section 4.1.
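The same segmentation can be reproduced outside the Agisoft environment on an exported point cloud. The following Python sketch is our own helper built on arrays such as those loaded in Section 2.4; the optional per-channel tolerance is our own addition to absorb 8-bit rounding during export, while tol=0 demands the exact match used in Figure 7.

```python
import numpy as np

def segment_by_hex(colors, hex_value, tol=0):
    """Return indices of points whose RGB color matches a Hex Color Value.

    colors: (N, 3) float array of per-point RGB in 0.0-1.0
    tol:    per-channel tolerance in 8-bit steps (0 = exact match)
    """
    target = np.array([int(hex_value[i:i + 2], 16) for i in (1, 3, 5)])
    rgb8 = np.rint(colors * 255).astype(int)
    return np.flatnonzero(np.all(np.abs(rgb8 - target) <= tol, axis=1))

# idx = segment_by_hex(colors, "#758605")     # exact match, as in Figure 7
# idx = segment_by_hex(colors, "#758605", 2)  # tolerate small quantization error
```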

3.3. Color Model and Gamut in Virtual Reality

The previous sections presented work with the RGB (Red, Green, Blue) color model and the sRGB color space, in which the most common sensing and display devices operate. However, the type and technical parameters of the VR headset must be considered when viewing a VR environment. In particular, the type and resolution of the VR headset fundamentally determine the quality of display of realistic scenes or models. It should be noted that the types of display devices for VR and their technical parameters differ significantly, and the color space or gamut needs to be precisely defined; these attributes have not yet been standardized for VR technology. When creating realistic 3D models and scenes, it is therefore necessary to know in advance the type and parameters of the target VR headset and the final output presentation, and to adapt the image processing to these factors. An Oculus Quest 2 VR headset was used in this study [38].
Almost all standard 2D display devices operate in the sRGB color space [39,40]. However, VR imaging techniques do not use the standard color models and gamuts, and the difference between classic displays and VR head-mounted displays (HMD) must be considered. Different HMDs use different color spaces and specifications; the colors will therefore differ from those of a standard display, and different HMDs interpret colors visually in different ways. It is thus necessary to take the type of VR headset into account at the beginning of the virtual presentation process. Older Oculus Quest/Rift headsets interpreted the image in the Rec.709 color space, the most common color space for Internet content. However, this color space does not support HDR (High Dynamic Range) technology, which enables the extended reproduction of detail in the dark and light parts of captured scenes.
Therefore, the Oculus Quest 2 VR headset was used in this study to visualize the artwork in a VR environment. This VR display device operates in the Rec.2020 (HDR) default gamut and includes the standardized D65 white point corresponding to daylight; this is a different color range from that of standard display devices. Given the different color spaces of the sensing and display devices in this experiment, it is appropriate to use the L*a*b* gamut (CIE 1976). The L*a*b* color space is derived from the first standardized CIE XYZ color space (CIE 1931) and is independent of any particular sensing or display device. This color space covers the full color range of the trichromatic triangle, as shown in Figure 6 and Figure 9. Also related to the CIE XYZ color space is the basic standardized ColorChecker Classic (X-Rite) color scale, used in color sensing and for the color calibration of display devices according to international standards [41]. It is evident from the above that it is necessary to transform the individual color models and gamuts among themselves [42].
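The transformation chain from sRGB through CIE XYZ to L*a*b* follows standardized formulas (the sRGB linearization curve, the sRGB-to-XYZ matrix, and the D65 white point). The following Python sketch implements them and, as a sanity check, reproduces the L*a*b* values measured in Section 2.1 from the measured RGB triple.

```python
import numpy as np

M = np.array([[0.4124564, 0.3575761, 0.1804375],   # sRGB -> XYZ (D65)
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])
WHITE_D65 = np.array([0.95047, 1.00000, 1.08883])  # D65 reference white

def srgb_to_lab(rgb8):
    c = np.asarray(rgb8, float) / 255.0
    # Undo the sRGB transfer curve to obtain linear light
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    xyz = M @ lin / WHITE_D65
    # CIE L*a*b* with the standard cube-root / linear split
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])

print(srgb_to_lab((117, 134, 123)))  # approx. (54.3, -8.5, 3.8), as measured
```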
As shown in Figure 9, the choice of color models and color spaces must be considered with respect to the desired final output for realistic color reproduction. Figure 9a shows the standard ColorChecker (X-Rite) color scale used in image capture and device calibration. The positions of the individual colors in the chromaticity diagram are shown in Figure 9b. Figure 9c shows the L*a*b* color model. The individual color ranges of the Rec.2020 (HDR), sRGB, and L*a*b* gamuts are shown in the CIE 1931 chromaticity diagram in Figure 9d. It follows from the above that a realistic 3D reconstruction of works of art and their visualization in a VR environment must pay attention to colors, color models, and gamuts in each step of the entire procedure.

4. Results

As mentioned in the previous sections, the artwork was scanned using an Apple iPad Pro 11″ smart device with a LiDAR sensor. The artwork was shot indoors in natural daylight and then in natural twilight using the tablet camera. For the 3D reconstruction of the object, two different methods were chosen using one scanning device. The 3D reconstruction of the object using the SfM photogrammetry method uses the integrated camera of the sensing device; the second method uses the LiDAR sensor and direct scanning in the Scaniverse mobile application to create 3D models. The goal of using these methods is to compare the quality of the 3D reconstruction of an object using a single image scanning device, as well as other attributes, such as the scanning speed or the process of turning the image into a 3D model.
Table 1 shows the attributes of the reference photos from the individual series from which the 3D models of the object were created using the SfM method. In the case of 3D reconstruction by LiDAR sensor scanning, individual photos were not the source of the digitization, so the image attributes are not listed. Table 1 lists the attributes and characteristics of the photo series that were used to create a realistic 3D model. While the resolution, bit depth, aperture, and focal length are identical, the file size, exposure, and ISO differ between the reference photos. These attributes affect the progress of the 3D model creation and the number of generated points in the point cloud carrying color information. Table 2 shows the numbers of generated points; for comparison, it also lists the points generated from the 3D models captured by the LiDAR sensor and converted in the 3D modeling software.
Table 2 lists the number of points generated for each type of 3D model of the artwork. The quality of the 3D reconstruction using the SfM photogrammetry method is described in Table 2 by the models labeled D1, D2, D3, G1, G2, and G3. The models marked with the letter D are based on series of images taken in daylight (D65), and the models marked with the letter G were taken in twilight. The 3D models created by LiDAR technology and the Scaniverse mobile application carry the same designations. However, this 3D modeling method cannot generate a Dense Cloud with enough points to identify points with the color value #758605, as shown in Table 2. The differences between the 3D reconstructions thus require different image processing.
Table 2 also shows that a higher number of photos in a series does not necessarily increase the quality of the 3D model or the number of generated points. Nor did the light intensity directly influence these attributes; there was no significant difference attributable to shooting at dusk, where the quality of the captured image reflects the use of the photographic flash. However, the type of light does affect the visual appearance of the resulting model, especially its color reproduction.

4.1. Visualization of 3D Models of the Artwork

Figure 10 visualizes the 3D models of the object in the individual digitization steps for both methods of realistic 3D reconstruction. The labeling of the models is identical to that used in Table 2; models marked with the letter D show the 3D reconstruction of the object in natural daylight. For each model, the individual images show the 3D point cloud model, the 3D Dense Cloud model, the textured 3D model, and the segmentation of points with the color value #758605. Figure 10 also shows an example of the final segmentation of points with the color value #758605 among the total cloud points in the G1 3D model.

4.2. Visualization of 3D Models in the VR Environment

In this section, the resulting presentation of the artwork is visualized in a VR environment. Figure 11 shows a visualization of a realistic 3D model of the work of art in a VR environment, capturing, from the left, the 3D Dense Cloud model, the 3D texture model created by the photogrammetry method, and the 3D texture model created by LiDAR scanning.
From the visualization of the realistic 3D models in the VR environment shown in Figure 11, it is clear that each model differs slightly in the reproduction of colors and tones. Figure 11a shows the 3D Dense Cloud model comprising the individual points carrying color information. Figure 11b shows the texture model created from the Dense Cloud 3D model, and Figure 11c shows the texture model created by direct LiDAR scanning, for which it was impossible to analyze the number of points with the color #758605. These three models represent the initial visualizations for further research into color reproduction and color analysis in the VR environment. The goal is to create digital twins identical to their original physical counterparts for virtual presentation with remote access.
This study does not consider the effect of structure and material on the quality of a realistic reproduction of an artwork. Together with other aspects, and especially with the application of other possibilities of color and image analysis, these issues will be the subject of further research aimed at accurately visualizing works of art in digital and virtual environments for the presentation of works of art and their creators.

5. Discussion and Conclusions

This article presents a partial issue in the realistic 3D digital reproduction of a work of art. The experiment was conducted in the interior of an art gallery, with image capture under the ambient lighting conditions of daylight and dusk. The object was photographed with the camera of an Apple iPad Pro 11″ mobile device and scanned with the LiDAR sensor of the same device. This work aimed to compare the effect of light on color reproduction, applying knowledge from the field of colorimetry. At the same time, this study is a starting point for further research into realistic color reproduction in digital and virtual environments and work with color models and spaces.
The experiment was conducted in two parts. In the first part (Section 2), the camera captured images to create a 3D model using the SfM photogrammetry method, and the object was also scanned using the LiDAR sensor. The obtained digital images were then used to create realistic 3D digital twins of the object. The 3D models of the artwork were made using the Agisoft Metashape Professional software (SW), first as a basic point cloud model. A Dense Cloud was subsequently generated from these points to create a polygonal mesh, from which the resulting 3D texture model was created. Six models of the artwork were created using this procedure. With the LiDAR sensor, the same object was scanned using the same sensing device under the same lighting conditions. This part of the research aimed to test the possibility of using a smart device for the 3D reconstruction of an object in a relatively short time under natural light conditions. Regarding LiDAR scanning, it is clear that 3D reconstruction can be performed in real time; however, the quality of the 3D object may not always be good. In the case of the Scaniverse mobile application, the possibilities of subsequent post-production to improve quality are minimal. However, rapid development of this method can be assumed, as can the possibility of using another application with more tools and options for working with point cloud models.
A comparison of the individual models and the numbers of generated Dense Cloud points is shown in Table 2 and in Figure 10, which visualizes the individual 3D models. Section 4 presents a visual analysis of the 3D texture models. As described in Section 4.1, the visual analysis made clear that the artwork shot in twilight using 17 and 29 photos was reproduced best; in Figure 10, these models are labeled G1 and G2. The visual analysis also makes clear that direct daylight is not suitable for this type of 3D modeling. Although the object was not exposed to direct sunlight, the resulting 3D models exhibited many undesirable optical-physical phenomena, such as gloss and the effects of the material and brushstrokes. Nevertheless, the reproduction quality can be very high, and subsequent research will be able to work with a detailed representation of the object’s material, structure, and color. Regarding the issue of forgeries in art, follow-up research is necessary.
Using the segmentation of individual points with the precisely defined color value #758605, the number of points with this value in the point cloud was determined. However, as can be seen in Table 2, the 3D models D3 and G3 were generated from only eight and nine photographs, respectively, and their Dense Clouds nevertheless contained hundreds of thousands of individual points. It is evident that, to reproduce a smaller object, the number of images should be at least 15. A minimal number of points was generated in the case of the 3D models made with the LiDAR sensor and the mobile application; the Dense Clouds of the individual models contained only 18–23 points, which was insufficient for further processing, and this methodology is inadequate for detailed image analysis. In the digitization of art, no similar research has measured the procedures presented in this experiment; the issue is usually examined from the point of view of machine learning for identifying parts of an object or other visual elements. However, the experiments in these investigations may benefit the next strand of research, which focuses on applying microscopic details to the entire picture.
There are currently no standards for color reproduction in VR/AR headset displays. Due to the different technologies used by manufacturers for this type of device and the different performance requirements, the experiment cannot be transferred directly to a headset other than the Oculus Quest 2 used in this study. Nonetheless, there is a good prospect of standardization building on the color vision standards introduced in 1931 (CIE 1931) and described in Section 3. These standards are still used and improved today; however, the new needs of digital technologies have also created new requirements in the field of sensing and, above all, display devices and the related problems of color vision.
This sub-experiment applying colorimetry methods to 3D modeling indicates the direction of research in the realistic digitization of art, with the aim of the most accurate reproduction of colors and their use for presentation in a virtual environment. Further research will investigate the influence of structure and material on image reproduction. Attention will also be paid to the lighting conditions during image capture and to their simulation when creating a virtual presentation of works of art. The experiment described in this text will also be the subject of further research, especially regarding other methodologies for the realistic 3D reconstruction of objects. The current study thus lays the foundation for further research and for the application of these results in forensic science and the fight against counterfeiting in digital and virtual environments.

Author Contributions

Conceptualization, I.D.; methodology, I.D.; validation, I.D.; formal analysis, I.D.; resources, I.D.; data curation, I.D.; writing—original draft preparation, I.D.; writing—review and editing, I.D.; visualization, I.D.; supervision, M.A.; project administration, I.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Internal Grant Agency of Tomas Bata University, supported under project No. IGA/CebiaTech/2024/004.

Data Availability Statement

The data presented in this study are available upon request from the author (I.D.).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

1. Näslund, A.; Wasielewski, A. Cultures of Digitization: A Historiographic Perspective on Digital Art History. Vis. Resour. 2020, 36, 339–359.
2. Gultepe, E.; Conturo, T.E.; Makrehchi, M. Predicting and grouping digitized paintings by style using unsupervised feature learning. J. Cult. Herit. 2018, 31, 13–23.
3. Castellano, G.; Vessio, G. Deep Convolutional Embedding for Painting Clustering: Case Study on Picasso’s Artworks. In Discovery Science, Proceedings of the 23rd International Conference, DS 2020, Thessaloniki, Greece, 19–21 October 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 68–78.
4. Berger Haladová, Z.; Bohdal, R.; Černeková, Z.; Štancelová, P.; Ferko, A.; Blaško Križanová, J.; Hojstričová, J.; Baráthová, K. Finding the Best Lighting Mode for Daguerreotype, Ambrotype, and Tintype Photographs and Their Deterioration on the Cruse Scanner Based on Selected Methods. Sensors 2023, 23, 2303.
5. Bialkova, S.; Van Gisbergen, M.S. When sound modulates vision: VR applications for art and entertainment. In Proceedings of the 2017 IEEE 3rd Workshop on Everyday Virtual Reality, Los Angeles, CA, USA, 19 March 2017.
6. Guarneri, M.; Ceccarelli, S.; Francucci, M.; Ferri de Collibus, M.; Ciaffi, M.; Gusella, V.; Liberotti, R.; La Torre, M. Multi-sensor analysis for experimental diagnostic and monitoring techniques at San Bevignate templar church in Perugia. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, XLVIII-M-2-2023, 693–700.
7. Zhao, G.; Thienmongkol, R.; Nimnoi, R. Cultural Relics Restoration Technology in Virtual Reality: Application of 3D Modeling and Rendering Algorithms in Cultural Relics Digitization. Int. J. Relig. 2024, 5, 1701–1718.
8. Abbas, S.F.; Abed, F.M. Evaluating the Accuracy of iPhone Lidar Sensor for Building Façades Conservation. In Recent Research on Geotechnical Engineering, Remote Sensing, Geophysics and Earthquake Seismology, Proceedings of the MedGU 2022, Marrakech, Morocco, 27–30 November 2022; Bezzeghoud, M., Ergüler, Z.A., Rodrigo-Comino, J., Jat, M.K., Kalatehjari, R., Bisht, D.S., Biswas, A., Chaminé, H.I., Shah, A.A., Radwan, A.E., et al., Eds.; Advances in Science, Technology & Innovation; Springer: Cham, Switzerland, 2024.
9. Drofova, I.; Adamek, M. Analysis of Counterfeits Using Color Models for the Presentation of Painting in Virtual Reality. In Software Engineering Application in Informatics, Proceedings of the CoMeSySo 2021, Online, 1 October 2021; Lecture Notes in Networks and Systems; Springer: Berlin/Heidelberg, Germany, 2021; pp. 609–620.
10. Xu, H. Study on the Color Design of Elderly Space based on Design Color Science. Acad. J. Sci. Technol. 2024, 11, 175–179.
11. Molada-Tebar, A.; Verhoeven, G.J.; Hernández-López, D.; González-Aguilera, D. Practical RGB-to-XYZ Color Transformation Matrix Estimation under Different Lighting Conditions for Graffiti Documentation. Sensors 2024, 24, 1743.
12. Shafiq, H.; Lee, B. Image Colorization Using Color-Features and Adversarial Learning. IEEE Access 2023, 11, 132811–132821.
13. Drofova, I.; Adamek, M. Analysis of Natural Lighting Condition for the Digitization of Artwork in an Art Gallery Interior. In Proceedings of the WSCG 2024 International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, Computer Science Research Notes, Prague, Czech Republic, 3–6 June 2024; pp. 391–394.
14. Cerasoni, J.; do Nascimento Rodrigues, F.; Tang, Y.; Hallett, E.Y. Do-It-Yourself digital archaeology: Introduction and practical applications of photography and photogrammetry for the 2D and 3D representation of small objects and artefacts. PLoS ONE 2022, 17, e0267168.
15. Guenther, M.; Heenkenda, M.K.; Morris, D.; Leblon, B. Tree Diameter at Breast Height (DBH) Estimation Using an iPad Pro LiDAR Scanner: A Case Study in Boreal Forests, Ontario, Canada. Forests 2024, 15, 214.
16. Liu, J. Virtual Reality: Challenges of VR Game Development for Emerging Game Studios. Highlights Sci. Eng. Technol. 2024, 93, 229–234.
17. Patonis, P. Comparative Evaluation of the Performance of a Mobile Device Camera and a Full-Frame Mirrorless Camera in Close-Range Photogrammetry Applications. Sensors 2024, 24, 4925.
18. Bruno, N.; Perfetti, L.; Fassi, F.; Roncella, R. Photogrammetric survey of narrow spaces in cultural heritage: Comparison of two multi-camera approaches. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2024, XLVIII-2/W4-2024, 87–94.
19. Drofova, I.; Guo, W.; Wang, H.; Adamek, M. Use of scanning devices for object 3D reconstruction by photogrammetry and visualization in virtual reality. Bull. Electr. Eng. Inform. 2023, 12, 868–881.
20. Vílchez-Lara, M.d.C.; Molinero-Sánchez, J.G.; Rodríguez-Moreno, C.; Gómez-Blanco, A.J.; Reinoso-Gordo, J.F. High Resolution 3D Model of Heritage Landscapes Using UAS LiDAR: The Tajos de Alhama de Granada, Spain. Land 2024, 13, 75.
21. Wang, F. Color Enhancement in Art Works Based on Image Processing Technology of Contourlet Domain. Mob. Inf. Syst. 2022, 2022, 7730157.
22. Tang, J. An Optimized Digital Image Processing Algorithm for Digital Oil Painting. Mob. Inf. Syst. 2022, 2022, 4956839.
23. Delle Foglie, A.; Felici, A.C. The Iridescent Painting Palette of Michelino da Besozzo: First Results of Non-Invasive Diagnostic Analyses. Heritage 2024, 7, 3013–3033.
24. Lv, G.; Liao, N.; Yuan, C.; Wei, L.; Feng, Y. A Study on the Color Prediction of Ancient Chinese Architecture Paintings Based on a Digital Color Camera and the Color Design System. Appl. Sci. 2024, 14, 5916.
25. Pelagotti, A.; Piva, A.; Uccheddu, F.; Shullani, D.; Alberghina, M.F.; Schiavone, S.; Massa, E.; Menchetti, C.M. Forensic Imaging for Art Diagnostics. What Evidence Should We Trust? IOP Conf. Ser. Mater. Sci. Eng. 2020, 949, 012076.
26. Zhang, M. Forensic Imaging: A Powerful Tool in Modern Forensic Investigation. Forensic Sci. Res. 2022, 7, 385–392.
27. Carew, R.M.; French, J.; Morgan, R.M. 3D forensic science: A new field integrating 3D imaging and 3D printing in crime reconstruction. Forensic Sci. Int. Synerg. 2021, 3, 100205.
28. Selivanova, K. 3D visualization of human body internal structures surface during stereo-endoscopic operations using computer vision techniques. Prz. Elektrotech. 2021, 2021, 30–33.
29. Lu, Y.; Ota, K.; Dong, M. An Empirical Study of VR Head-Mounted Displays Based on VR Games Reviews. ACM Games 2024, 2, 1–20.
30. Valmorisco, S.; Raya, L.; Sanchez, A. Enabling personalized VR experiences: A framework for real-time adaptation and recommendations in VR environments. Virtual Real. 2024, 28, 128.
31. Moreno-Lumbreras, D.; Robles, G.; Izquierdo-Cortázar, D.; Gonzalez-Barahona, J.M. Software development metrics: To VR or not to VR. Empir. Softw. Eng. 2024, 29, 42.
32. Gamin, s.r.o., Czech Republic. 2023. Available online: https://www.gamin.cz/kolorimetr-colorix-colorcatch-nano/ (accessed on 25 December 2023).
33. Keyence International (Belgium) NV/SA, Belgium. 2024. Available online: https://www.keyence.eu/cscz/products/microscope/laser-microscope/vk-x3000/ (accessed on 27 October 2024).
34. Apple Distribution International Ltd., Ireland. 2023. Available online: https://www.apple.com/cz/shop/buy-ipad/ipad-pro (accessed on 25 October 2024).
35. Niantic Labs. Scaniverse App. 2024. Available online: https://scaniverse.com/ (accessed on 15 October 2024).
36. G4D, s.r.o. Metashape, Prague, Czech Republic. 2024. Available online: https://www.agisoft.cz/ (accessed on 27 October 2024).
37. Moulon, P.; Monasse, P.; Marlet, R. Adaptive Structure from Motion with a Contrario Model Estimation. In Computer Vision—ACCV 2012, Proceedings of the 11th Asian Conference on Computer Vision, Daejeon, Republic of Korea, 5–9 November 2012; Lee, K.M., Matsushita, Y., Rehg, J.M., Hu, Z., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013; Volume 7727.
38. Meta. Oculus Quest 2. 2024. Available online: https://www.meta.com/quest/products/quest-2/ (accessed on 26 October 2024).
39. Le, H.; Jeong, T.; Abdelhamed, A.; Shin, H.J.; Brown, M.S. GamutNet: Restoring Wide-Gamut Colors for Camera-Captured Images. Color Imaging Conf. 2021, 29, 7–12.
40. Le, H.; Afifi, M.; Brown, M.S. Improving Color Space Conversion for Camera-Captured Images via Wide-Gamut Metadata. Color Imaging Conf. 2020, 28, 193–198.
41. Hemrit, G.; Finlayson, G.D.; Gijsenij, A.; Gehler, P.; Bianco, S.; Funt, B.; Drew, M.; Shi, L. Rehabilitating the ColorChecker Dataset for Illuminant Estimation. Color Imaging Conf. 2018, 26, 350–353.
42. Reinhard, E.; Ashikhmin, M.; Gooch, B.; Shirley, P. Color transfer between images. IEEE Comput. Graph. Appl. 2001, 21, 34–41.
Figure 1. Digitization of an art object: (a) 2D digitized object with the detail marked in red; (b) matrix of partial details in the yellow range of the image; and (c) visualization of the detail of the structure and color of a partial part of the object.
Figure 2. The basic principle of the Structure from Motion (SfM) method [37].
Figure 3. Creation of a 3D model using the SfM photogrammetry method: (a) digitized object; (b) positions of the 24 photos from which the basic point cloud is created; (c) Dense Cloud generation; (d) the resulting 3D texture model of the artwork.
Figure 4. Creating a 3D model using a LiDAR sensor: (a) digitized object; (b) 3D model generated by Scaniverse; (c) 3D texture model imported into the Agisoft 3D software; and (d) point cloud generated from the textured 3D model.
Figure 5. Generated Dense Cloud: (a) 3D SfM photogrammetry method and (b) LiDAR sensor.
Figure 6. Colorimetry: (a) RGB color model and (b) sRGB color space (gamut).
Figure 7. SfM—segmentation of points #758605: (a) Dense Cloud 3D model using SfM photogrammetry; (b) segmentation of points by the color #758605; (c) points #758605 in the Dense Cloud.
Figure 8. LiDAR—segmentation of points #758605: (a) 3D model using the LiDAR sensor; (b) segmentation of points by the color #758605; (c) detail of the points generated in the Dense Cloud.
Figure 9. CIE XYZ 1931 standardized color space: (a) basic ColorChecker standardized color gamut; (b) positions of the individual standardized colors in the CIE 1931 chromaticity diagram; (c) the L*a*b* color model; (d) CIE 1931 chromaticity diagram with the Rec.2020, sRGB, and L*a*b* gamuts.
Figure 10. Visual comparison of reproduction quality in the process of 3D modeling and color segmentation: 3D models using the SfM photogrammetry method and 3D models using the LiDAR sensor.
Figure 11. Visualization of a realistic 3D reconstruction of the artwork: (a) 3D Dense Cloud model; (b) 3D texture model by the SfM method; (c) 3D texture model by the LiDAR sensor.
Table 1. Attributes of the reference pictures for making the 3D SfM models.

SfM | Size (MB) | Resolution (dpi) | Bit Depth (bit) | Aperture | Exposure (s) | ISO | Focal Distance (mm)
--- | --- | --- | --- | --- | --- | --- | ---
D1 | 7.13 | 72 | 24 | f/1.8 | 1/122 | 20 | 29
G1 | 5.67 | 72 | 24 | f/1.8 | 1/60 | 125 | 29
D2 | 5.32 | 72 | 24 | f/1.8 | 1/122 | 16 | 29
G2 | 5.60 | 72 | 24 | f/1.8 | 1/60 | 200 | 29
D3 | 4.95 | 72 | 24 | f/1.8 | 1/122 | 16 | 29
G3 | 5.65 | 72 | 24 | f/1.8 | 1/53 | 125 | 29
Table 2. Values of the points in the 3D models.

SfM | Images | Point Cloud | Dense Cloud | Points #758605
--- | --- | --- | --- | ---
D1 | 24 | 13,828 | 383,170 | 236,418
G1 | 17 | 10,710 | 413,688 | 248,703
D2 | 13 | 9327 | 405,385 | 221,648
G2 | 29 | 16,219 | 310,913 | 178,275
D3 | 8 | 7100 | 325,107 | 194,873
G3 | 9 | 4617 | 363,348 | 208,232

LiDAR | Images | Point Cloud | Dense Cloud | Points #758605
--- | --- | --- | --- | ---
D1 | / | / | 21 | /
G1 | / | / | 21 | /
D2 | / | / | 23 | /
G2 | / | / | 18 | /
