1. Introduction
Digitization processes are currently reflected in all areas of human activity, and digital technologies are used across commercial, scientific, and artistic fields. In the field of art in particular, emphasis is often placed on the highly realistic quality of digital reproduction. Digital technologies are already a standard part of capturing and processing images, and digital image processing has itself become part of artistic creation [1]. Nevertheless, the digital reproduction of works of art remains an excellent area for applying new procedures, such as machine learning, especially in connection with new trends such as 3D visualization and virtual presentation in the online environment [2,3].
Realistic 3D digital reproduction of a work of art also brings many challenges and unsolved issues in image processing, some of which already arise during 2D digitization [4]. One of them is the high-quality reproduction of the object's texture and color, which often change under the influence of light and weather conditions; this problem is noted especially for exterior exhibitions and architecture [5]. The technology and methodology chosen for the 3D digitization process also greatly influence the quality of the digitized object [6]. The method and process of digitization depend mainly on the intended final output of the digitized object, which may be 2D or 3D printing, a 3D online presentation, or the use of the object in a virtual and augmented reality (VR/AR) environment in interaction with the user [7,8]. For a realistic digital 3D reproduction of a work of art, great emphasis is placed on the quality of the digital image and its realism; the reproduction should come as close as possible to the original image [9]. A key attribute for evaluating reproduction quality is color, which is directly related to light and human vision. These questions are the subject of colorimetry, the science of color and human vision [10,11,12].
This text responds to current trends in the digitization of art and the issue of realistic digital reproduction using the photogrammetry method, and it significantly expands the findings of a short paper presented at an international conference in Pilsen, Czech Republic [13,14]. The work also considers the use of LiDAR (Light Detection And Ranging) sensors for rapid image capture using a mobile application on a mobile device [15,16]. The presented experiment aims to determine the extent to which ambient lighting conditions can influence the 3D digitization of a work of art in connection with the chosen modeling method. Previous research in the field of image digitization used sensing devices such as compact and DSLR (Digital Single-Lens Reflex) cameras, mobile device cameras, laser scanners, and 360° cameras; a single sensing device is used in this experiment [17,18,19,20].
The following chapters describe the digitization of the artwork using the SfM (Structure from Motion) photogrammetry method and scanning with a LiDAR sensor in an interior. Image capture was performed indoors in daylight. The influence of lighting conditions on the color reproduction of the 3D model is analyzed in 3D point cloud models for one precisely defined color, #758605 (Hex Color Value), corresponding to RGB (117, 134, 5), with a measured CIELAB/L*a*b* value of (54.30, −8.46, 3.83). The findings from this experiment will subsequently be used to analyze the color visualization of realistic 3D digital reproductions of artworks in a VR environment. Current studies on color reproduction in artistic paintings do not focus on direct, realistic color reproduction based on a precisely defined color or a set of colors measured on an object [21,22,23].
Therefore, working with colors and color models is a suitable approach to the realistic reproduction of works of art in digital and virtual environments. Accurate color reproduction and work with the color light spectrum can be a suitable complementary process for determining the original color in a digital environment. The market for duplicates, not only in the field of art but also in the clothing and industrial sectors, represents a large area for the trade in counterfeits [24,25]. Forensic science is also applied here, mainly digital forensic art, whose subject includes the realistic 2D and 3D reproduction of objects [26,27]. In the digital reproduction of works of art, the color and texture of the material play important roles, and both attributes are affected by the type and nature of the lighting. This experiment represents first basic research in the field of realistic color reproduction in a virtual environment, with the aim of application in criminology and forensic investigation, where it would complement the methods currently applied in the forensic sciences.
2. Materials and Methods
As digital technologies and sensing devices gradually replaced analog image processing, the processes of image digitization and the graphic software used to process images according to the type and purpose of the final output were also developed and improved. This includes the development of digitization for print, digital output, and 3D printing, as well as 2D and 3D online presentations [25,28]. Currently, virtual and augmented reality (VR/AR) technologies are also being perfected and are now available to the general public, especially in the gaming industry [16,29,30,31]. The following section describes the digitization of the artwork. Section 2.3 and Section 2.4 describe image digitization using the SfM (Structure from Motion) photogrammetry method and LiDAR sensor scanning to create a realistic 3D digital model.
2.1. Digitization of Art: Art Painting with Acrylic Paints on Canvas
The artistic object for 3D digital reproduction was painted with acrylic paints on canvas. The artwork is dominated by green and brown acrylic paints, and the tones of these two colors were created by mixing them, as seen in Figure 1a. The base color space is marked in red in this figure. At this place, the color value was measured with a Colorcatch NANO colorimeter from the Swiss company Colorix SA (Neuchâtel, Switzerland), giving the direct green color RGB = (117, 134, 5) and L*a*b* = (54.30, −8.46, 3.83) [32]. These values were sufficient to express the color as the Hex value #758605 for subsequent color segmentation in a realistic 3D point cloud model of the object. The CIE Lab/L*a*b* color space model is applied to digital imaging across digital and display devices.
Figure 1a shows a photograph of the digitized object with the marked area in which the direct green color value RGB = (117, 134, 5) was measured by the Colorcatch NANO colorimeter. Figure 1b shows a detailed matrix of the color structure, with one matrix element of 5.563 µm × 4.171 µm marked in yellow. The detail of the material structure of the object at this marked matrix element is shown in Figure 1c. The aim of this essential visual inspection of the object's color at the level of its material structure (acrylic paint on canvas) was to determine the possible influence of the material structure on the generation of unwanted points during the creation of a realistic 3D model using the photogrammetry method. A slight deformation of the 3D model structure was likewise expected when scanning with the LiDAR sensor. A Keyence VK-X3000 3D laser scanning microscope (Keyence International NV/SA, Mechelen, Belgium) was used for the visual inspection of the material structure [33]. This device provides scanning adaptable enough to identify minor surface features of a material.
The object was scanned in daylight in the natural interior environment of an art gallery, with the aim of capturing the object's color as it is perceived by human vision in this environment. The lighting conditions met the D65 standard (Standard Illuminant) for interiors. Furthermore, the art object was photographed in the dark to evaluate whether its reproduction is significantly affected by the change in lighting conditions in the art gallery at dusk.
2.2. Digital Image Capture
Digital devices have gradually replaced analog sensing devices. Digital compact cameras and Digital Single-Lens Reflex (DSLR) cameras have gradually been supplemented by 360° cameras and scanners, RGB-D cameras, Light Detection And Ranging (LiDAR) sensor technology, and other types of digital devices. Currently, smart mobile devices, such as mobile phones and tablets, are already commonly used for these purposes, and emphasis is placed on low-cost methods and procedures [15,17,19,20]. An innovative mobile device with LiDAR technology was used in this experiment: an Apple iPad Pro 11″ smart tablet was used to capture and digitize the artwork. This smart device has a high-quality, high-resolution camera and a LiDAR sensor [34]. It was used for the 3D reconstruction of the work of art using ground-based image photogrammetry. At the same time, the free Scaniverse application from Niantic Labs, which is intended directly for the 3D digitization of objects and spaces using the LiDAR sensor, was used to compare the quality of the 3D model [35]. Both methods are described in the following sections. The professional 3D modeling software Agisoft Metashape Professional (online: agisoft.cz, 2021) from Agisoft (St. Petersburg, Russia) was used for the 3D reconstruction of the artwork and the analysis of its color 3D reproduction [36].
2.3. 3D Reconstruction by Photogrammetry Method
The SfM (Structure from Motion) photogrammetry method calculates the location of an object in 3D space from information obtained from individual images taken from multiple angles. In the 3D reconstruction of the specific object described below, 250 photos were used; an algorithm based on the principle of triangulation finds common points in the individual photos and calculates the individual camera positions around the object. By subsequently calculating the Dense Cloud (dense point cloud), each point obtains its own x, y, and z coordinates and thus carries basic information about the position, size, and geometry of the object in space.
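To illustrate the feature-matching step that underlies this camera pose estimation, the following minimal sketch detects and matches feature points between two photographs of a series. It is an illustration rather than the pipeline used in the experiment; it assumes OpenCV (with the built-in SIFT implementation of opencv-python ≥ 4.4), and the image file names are hypothetical:

```python
import cv2

# Load two overlapping photographs of the object (hypothetical file names).
img1 = cv2.imread("artwork_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("artwork_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute descriptors in both images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep only matches passing Lowe's ratio test;
# such common points are the input to triangulation and pose estimation.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} reliable point correspondences found")
```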
Figure 2 shows the principle of the photogrammetry method [14,37]. The underlying Direct Linear Transformation (DLT) relates the image coordinates $(x, y)$ of a point to its object-space coordinates $(X, Y, Z)$:

$$x = \frac{L_1 X + L_2 Y + L_3 Z + L_4}{L_9 X + L_{10} Y + L_{11} Z + 1}, \qquad y = \frac{L_5 X + L_6 Y + L_7 Z + L_8}{L_9 X + L_{10} Y + L_{11} Z + 1},$$

where $L_1$ to $L_{11}$ are the Direct Linear Transformation Parameters (DLTP). These coefficients are functions of the elements of exterior and interior orientation. The initial values of the external and internal orientation elements are not needed in the calculation, which is why the DLT equation can be used in photogrammetry with consumer-class digital cameras [19,37]. The basic generated point cloud of the digitized object is shown in Figure 3. In this case, the basic 3D point cloud was created from 24 images, from which 13,828 points were generated.
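As a worked illustration of the DLT relation above, the following sketch projects an object-space point into image coordinates. The parameter values are arbitrary and purely illustrative; in practice they would be estimated from point correspondences:

```python
# Hypothetical DLT parameters L1..L11 (illustrative values only).
L = [1.2, 0.1, -0.3, 250.0,   # L1..L4 (x numerator)
     0.05, 1.1, 0.2, 180.0,   # L5..L8 (y numerator)
     1e-4, 2e-4, -1e-4]       # L9..L11 (shared denominator)

def dlt_project(X, Y, Z):
    """Project an object point (X, Y, Z) to image coordinates (x, y)."""
    denom = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    x = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / denom
    y = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / denom
    return x, y

print(dlt_project(10.0, 5.0, 2.0))
```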
Figure 3a shows a 2D model of the artwork, and Figure 3b shows the 3D point cloud generated from the 24 photographs to which the SfM method was applied; the principle of this method is shown in Figure 2. The generated points provide information about the reconstructed object's position, geometry, and color. From this basic information in the primary point cloud, further calculation adds the points that form the Dense Cloud. This higher number of points renders the shape of the 3D object and its position in space more precisely, as shown in Figure 3c. This produces a complete point model, from which it can be determined where the calculation did not define points and where points need to be added. The resulting 3D model corresponds to the shape and structure of the physical object in real space, as shown in Figure 3d. The generated Dense Cloud 3D model contains 413,688 individual points. Agisoft Metashape Professional software was used for the 3D modeling; a minimal scripted sketch of this processing chain is given below. The method of 3D reconstruction using a smart device with a LiDAR sensor is then described and visualized in more detail in the following section.
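The sketch below outlines the photos-to-textured-model chain in the Agisoft Metashape Professional Python API. It is a sketch under assumptions, not the authors' script: it assumes an API version around 1.x, where the dense cloud step is exposed as buildDenseCloud (later releases renamed some calls), and the photo directory is hypothetical:

```python
import glob
import Metashape  # Agisoft Metashape Professional Python module

doc = Metashape.Document()
chunk = doc.addChunk()

# Add the series of photographs of the artwork (hypothetical directory).
chunk.addPhotos(glob.glob("photos/*.jpg"))

# Align cameras: feature matching and sparse point cloud (the SfM step).
chunk.matchPhotos()
chunk.alignCameras()

# Build depth maps and the Dense Cloud carrying per-point color.
chunk.buildDepthMaps()
chunk.buildDenseCloud()

# Mesh and texture for the final realistic 3D model.
chunk.buildModel()
chunk.buildUV()
chunk.buildTexture()

doc.save("artwork.psx")
```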
2.4. 3D Reconstruction by a LiDAR Sensor
When a LiDAR sensor and an image processing application are used, photographs are not the source of the primary point cloud reconstruction. The finished 3D digital model of the artwork was scanned by the sensor directly in the free Scaniverse application and then transferred to the 3D graphics software Agisoft Metashape Professional, where a point cloud was generated from the 3D model for subsequent image analysis. This method was chosen to compare the quality of digital reproductions of a work of art captured by the same capture device, the iPad Pro 11″ tablet.
Figure 4 shows the 3D reconstruction of the object using the LiDAR sensor. The Scaniverse application was used to create the 3D model; the resulting 3D model of the object is shown in Figure 4b. This textured 3D model was transferred to the Agisoft software to generate a point cloud. Figure 4c shows the 3D model exported into the Agisoft 3D software; in this case, a Dense Cloud of only about 23 points was generated. Figure 4d shows the details of the generated 3D points.
Figure 5 shows the details of the Dense Cloud structure of the digitized artwork. Figure 5a shows the details of the Dense Cloud and the individual points generated from the photographs and the primary point cloud produced by the SfM photogrammetry method; in total, 413,688 individual points were generated in this 3D model. Figure 5b shows the details of the roughly 23 points generated from the 3D texture model made by the LiDAR sensor. Each point carries color information, which is ultimately used to identify points with the color value #758605.
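The transfer from a textured LiDAR mesh to a colored point cloud can also be sketched outside the Agisoft environment, for example with the open-source Open3D library. This is a hypothetical alternative to the workflow above, not the procedure used in the experiment; the file name is illustrative, and texture-mapped colors would first need to be baked to vertex colors for the sampled points to carry them:

```python
import numpy as np
import open3d as o3d

# Load the textured mesh exported from the scanning application (hypothetical path).
mesh = o3d.io.read_triangle_mesh("artwork_lidar.obj", enable_post_processing=True)

# Sample a point cloud from the mesh surface; each point inherits an
# interpolated vertex color if the mesh carries per-vertex colors.
pcd = mesh.sample_points_uniformly(number_of_points=500000)

colors = np.asarray(pcd.colors)  # N x 3 array of RGB values in [0, 1]
print(f"{len(colors)} colored points sampled")
```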
3. Colorimetry and Color Analysis
Colorimetry is the science of color and light. The field deals with the color interpretation of objects and the environment, human vision, and color reproduction. Light is electromagnetic radiation whose spectrum includes visible light (approximately 400–700 nm), the range perceptible to the human eye. The perceived color of light is based on the wavelength λ and on subjective perception by the human eye, and color image processing is closely related to this. For these purposes, color models and gamuts define the possible color display of individual tones and their maximum range. To unify color reproduction, the international standard CIE 1931 was adopted in 1931, and current technologies and procedures across manufacturing and scientific fields still build on it.
3.1. Color Model and Gamut
In this experiment, in which a real object is transformed into digital form, the RGB (Red, Green, Blue) color model and the sRGB color space (gamut) are used; Figure 6 shows their graphical representation. Figure 6a shows the RGB color model, the basic color model of the RGB primary colors. The RGB color model operates with components of light and primarily targets digital sensing and imaging devices (DSLR cameras, displays). The secondary colors created by mixing the primary RGB colors are CMY (Cyan, Magenta, Yellow), and mixing all three primary RGB components creates white light (W). Figure 6b shows the CIE 1931 trichromatic triangle, which has standardized work with colors since 1931.
The sRGB gamut (color space) is represented within the trichromatic triangle indicated in Figure 6. This space represents the maximum color range in which a digital sensing or imaging device can operate, and most current digital imaging and sensing devices work within the sRGB gamut. Therefore, the RGB model was chosen as the default color model for this experiment. The values of the individual color components of light for the direct color #758605 (Hex Color Value) are as follows: R = 117; G = 134; B = 5. Following on from this section, Section 3.3 pays attention to the L*a*b* color space and gamut, which is suitable for subsequent work with color reproduction and display in the VR environment.
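To make the relationship between the color notations used here concrete, the following self-contained sketch converts the hex value #758605 to its sRGB components and then to CIE L*a*b* under the D65 illuminant, using the standard CIE formulas. Note that the L*a*b* value measured by a colorimeter on the physical object need not match a conversion of the digital color, since the measurement is device- and illumination-dependent:

```python
def hex_to_rgb(hex_value: str) -> tuple:
    """Parse '#758605' into 8-bit R, G, B components."""
    h = hex_value.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def srgb_to_lab(r: int, g: int, b: int) -> tuple:
    """Convert 8-bit sRGB to CIE L*a*b* (D65 white point, 2-degree observer)."""
    # 1. Linearize sRGB (inverse gamma).
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # 2. Linear RGB -> CIE XYZ (sRGB matrix, D65).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # 3. XYZ -> L*a*b* relative to the D65 white point (Xn, Yn, Zn).
    xn, yn, zn = 0.95047, 1.00000, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

r, g, b = hex_to_rgb("#758605")
print((r, g, b), srgb_to_lab(r, g, b))
```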
3.2. Color Value #758605 Segmentation
The Agisoft 3D modeling software environment can work with the color information of individual points or groups of points in the Dense Cloud in the RGB and HSV color models. This can be performed with a precisely defined Hex Color Value, as shown in Figure 7. Figure 7 shows the segmentation of points carrying the color value #758605. Figure 7a shows a 3D Dense Cloud model of individual points generated from photographs by the SfM photogrammetric method. Figure 7b visualizes the Agisoft software environment for working with the Dense Cloud and with the color value carried by each point; the 3D software works in this case with the RGB and HSV (Hue, Saturation, Value) color models, and values can be defined numerically in Hex format. It should be noted, however, that unlike the HSV color model, the RGB model does not express tonal attributes such as hue, saturation, and value. Figure 7c shows an example of the final segmentation of points with the color value #758605 among the total cloud points of the 3D model.
Figure 8 shows the process of segmenting points using the color information #758605. Figure 8a shows the 3D texture model captured by the LiDAR sensor in the Agisoft 3D software. A point cloud containing 17 points was generated from this texture model, as shown in Figure 8b. The segmentation process for points with the color information #758605 is identical to that used for the SfM method, shown in Figure 7b. As shown in Figure 8c, not a single point with the color value #758605 was identified among the total number of points. For this reason, the segmentation of points in the 3D models created by the LiDAR sensor is not shown in Section 4.1.
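The point selection performed interactively in the Agisoft environment can also be reproduced programmatically. The minimal sketch below assumes the Dense Cloud has been exported (e.g., to PLY) and loaded as a NumPy array of 8-bit point colors; the helper name and sample data are illustrative, and an optional per-channel tolerance is included because an exact color match is rare in real data:

```python
import numpy as np

def segment_by_hex(colors: np.ndarray, hex_value: str, tol: int = 0) -> np.ndarray:
    """Return a boolean mask of points whose 8-bit RGB color matches
    the given hex value within a per-channel tolerance."""
    h = hex_value.lstrip("#")
    target = np.array([int(h[i:i + 2], 16) for i in (0, 2, 4)])
    return np.all(np.abs(colors.astype(int) - target) <= tol, axis=1)

# Illustrative data: colors would be an N x 3 uint8 array from the exported Dense Cloud.
colors = np.array([[117, 134, 5], [118, 133, 6], [200, 10, 10]], dtype=np.uint8)

exact = segment_by_hex(colors, "#758605", tol=0)
near = segment_by_hex(colors, "#758605", tol=2)
print(exact.sum(), "exact matches;", near.sum(), "matches within tolerance 2")
```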
3.3. Color Model and Gamut in Virtual Reality
The previous sections present work with the RGB (Red, Green, Blue) color model and the sRGB color space, in which the most common sensing and display devices work. However, the type of VR headset and its technical parameters must be considered when viewing a VR environment. In particular, the type and resolution of the VR headset fundamentally determine the quality with which realistic scenes or models are displayed. It should be noted that the types of display devices for VR and their technical parameters differ significantly, and the color space or gamut needs to be precisely defined; these attributes have not yet been standardized for VR technology. When creating realistic 3D models and scenes, it is necessary to know in advance the type and parameters of the VR display headset and the final output presentation, and image processing must be adapted to these factors, which may differ. An Oculus Quest 2 VR headset was used in this study [38].
Almost all standard 2D display devices operate in the sRGB color space [39,40]. However, VR imaging techniques do not use the standard color models and gamuts, so the difference between classic displays and VR head-mounted displays (HMDs) must be considered. Different HMDs use different color spaces and specifications; the colors therefore differ from those of a standard display, and different HMDs interpret colors visually in different ways. It is thus necessary to consider the type of VR headset at the beginning of the virtual presentation process. Older Oculus Quest/Rift headsets interpreted the image in the Rec.709 color space, the color space most characteristic of Internet content. However, this color space does not support HDR (High Dynamic Range) technology, which enables extended reproduction of detail in the dark and light parts of captured scenes simultaneously.
Therefore, the Oculus Quest 2 VR headset was used in this study to visualize the artwork in a VR environment. This VR display device operates in the Rec.2020 (HDR) default gamut and includes a standardized D65 white point corresponding to daylight; this is a different color range from that of standard display devices. Given the different color spaces of the sensing and display devices in this experiment, it is appropriate to use the L*a*b* gamut (CIE 1976). The L*a*b* color space is derived from the first standardized CIE XYZ color space (CIE 1931) and is independent of any particular sensing or display device; it contains the full color range of the trichromatic triangle, as shown in Figure 6 and Figure 9. Also related to the CIE XYZ color space is the basic standardized ColorChecker Classic (X-Rite) color scale, used in color sensing and for the color calibration of display devices according to international standards [41]. It is evident from the above that the individual color models and gamuts need to be transformed into one another [42].
As shown in Figure 9, the choice of color models and color spaces must be considered with regard to the desired final output for realistic color reproduction. Figure 9a shows the standard ColorChecker (X-Rite) color scale used for image capture and device calibration. The positions of its individual colors in the chromaticity diagram are shown in Figure 9b. Figure 9c shows the L*a*b* color model. The individual color ranges of the Rec.2020 (HDR), sRGB, and L*a*b* gamuts are shown in the CIE 1931 chromaticity diagram in Figure 9d. It follows from the above that realistic 3D reconstruction of works of art and their visualization in a VR environment must pay attention to colors, color models, and gamuts in each individual step of the entire procedure.
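The relationship among these gamuts can be checked numerically by converting a color to CIE xy chromaticity coordinates and testing whether it lies inside a gamut triangle. The sketch below uses the published xy primaries of sRGB and Rec.2020 and a standard point-in-triangle test; it is illustrative code, not part of the experimental pipeline:

```python
# xy chromaticities of the gamut primaries (CIE 1931 diagram).
SRGB = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

def in_gamut(x: float, y: float, tri) -> bool:
    """Point-in-triangle test using the sign of cross products."""
    def cross(o, a, p):
        return (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0])
    signs = [cross(tri[i], tri[(i + 1) % 3], (x, y)) for i in range(3)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

# The D65 white point lies inside both gamuts; a highly saturated green
# can lie inside Rec.2020 but outside sRGB.
for name, point in [("D65 white", (0.3127, 0.3290)), ("saturated green", (0.2, 0.75))]:
    print(name, "sRGB:", in_gamut(*point, SRGB), "Rec.2020:", in_gamut(*point, REC2020))
```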
4. Results
As mentioned in the previous sections, the artwork was scanned using an Apple iPad Pro 11″ smart device with a LiDAR sensor. The artwork was shot indoors in natural daylight and then in natural twilight using the tablet camera. Two different methods using this single scanning device were chosen for the 3D reconstruction of the object. The 3D reconstruction using the SfM photogrammetric method uses the integrated camera of the sensing device, while the second method uses the LiDAR sensor and direct scanning in the Scaniverse mobile application to create the 3D models. The goal of using these methods is to compare the quality of the 3D reconstruction of an object with a single image-scanning device, as well as other attributes, such as the scanning speed or the process of turning the image into a 3D model.
Table 1 shows the attributes of the reference photos from the individual series from which the 3D models of the object were created using the SfM method. In the case of 3D reconstruction by LiDAR sensor scanning, individual photos were not the source of digitization, so no image attributes are listed for it. Table 1 lists the attributes and characteristics of the photo series applied to create the realistic 3D models. While the resolution, bit depth, aperture, and focal length are identical, the exposure, data size, and ISO differ among the reference photos. These attributes affect the progress of 3D model creation and the number of generated points in the point cloud carrying color information. Table 2 shows the number of generated points, including, for comparison, the points generated from the 3D models captured by the LiDAR sensor and converted in the 3D modeling software.
Table 2 lists the number of points generated for each type of 3D model of the artwork. The quality of the 3D reconstruction using the SfM photogrammetric method is described in Table 2 by the models marked D1, D2, D3, G1, G2, and G3. The models marked with the letter D were created from series of images taken in daylight (D65), and the models marked with the letter G were taken in twilight. The 3D models created by LiDAR technology and the Scaniverse mobile application on the mobile device carry the same designations. However, as Table 2 shows, this 3D modeling method could not generate a Dense Cloud with enough points to identify points with the color value #758605. These differences in 3D reconstruction require different image processing. Table 2 also shows that a higher number of photos in a series does not necessarily increase the quality of the 3D model or the number of generated points. Likewise, the light intensity did not directly influence these attributes; no significant difference was observed that could be attributed to shooting in the dark, where the quality of the captured image reflects the use of the photographic flash. However, the type of light does affect the visual appearance of the resulting model, especially its color reproduction.
4.1. Visualization of 3D Models of the Artwork
Figure 10 visualizes the 3D models produced by both methods of realistic 3D reconstruction in the individual digitization steps. The labeling of the images is identical to that given in Table 2: models marked D visualize the 3D reconstruction of the object in natural daylight, and models marked G the reconstruction in twilight. For each model, the individual images visualize the 3D point cloud model, the Dense Cloud 3D model, the textured 3D model, and the segmentation of points with the color value #758605. Figure 10 also shows an example of the final segmentation of points with the color value #758605 among the total cloud points in the G1 3D model.
4.2. Visualization of 3D Models in the VR Environment
In this section, the resulting presentation of the artwork is visualized in a VR environment. Figure 11 shows the visualization of realistic 3D models of the work of art in a VR environment: from the left, a 3D Dense Cloud (DC) model, a 3D texture model created by the photogrammetry method, and a 3D texture model created by LiDAR sensor scanning. From the visualization of the realistic 3D models in the VR environment shown in Figure 11, it is clear that each model differs slightly in the reproduction of colors and tones. Figure 11a shows the 3D Dense Cloud model comprising individual points carrying color information. Figure 11b shows the texture model created from the Dense Cloud 3D model, and Figure 11c the texture model created by direct LiDAR scanning, for which it was impossible to analyze the number of points with the color value #758605. These three models represent the initial visualizations for further research into color reproduction and color analysis in the VR environment. The goal is to create digital twins identical to their physical originals for virtual presentation with remote access.
This study does not consider the effect of structure and material on the quality of a realistic reproduction of an artwork. Together with other aspects, and especially with the application of further possibilities of color and image analysis, these issues will be the subject of further research, the aim of which is to visualize works of art accurately in digital and virtual environments for the presentation of artworks and their creators.
5. Discussion and Conclusions
This article addresses a partial problem in the realistic 3D digital reproduction of a work of art. The experiment was conducted in the interior of an art gallery, and the ambient lighting conditions of daylight and darkness were used for image capture. The object was photographed with the camera of an iPad Pro 11″ mobile device and scanned with the LiDAR sensor of the same device. This work aimed to compare the effect of light on color reproduction, and knowledge from the field of colorimetry was applied to achieve this goal. At the same time, this study is a starting point for further research into realistic color reproduction in digital and virtual environments and into work with color models and spaces.
The experiment was conducted in two parts. In the first part, described in Section 2, the camera captured images for creating 3D models using the SfM photogrammetric method, and the object was also scanned using the LiDAR sensor. The obtained digital images were then used to create realistic 3D digital twins of the object. The 3D models of the artwork were made using Agisoft Metashape Professional software (SW), first as a basic point cloud model; a Dense Cloud was subsequently generated from these points, from which a polygonal mesh and then the resulting 3D texture model were created. Six models of the artwork were created using this procedure. With the LiDAR sensor, the same object was scanned using the same sensing device under the same lighting conditions. This part of the research aimed to test the possibility of using a smart device for the 3D reconstruction of an object in a relatively short time under natural light conditions. Regarding LiDAR scanning, it is clear that 3D reconstruction can be performed in real time; however, the quality of the 3D object may not always be good. In the case of the Scaniverse mobile application, the possibilities of subsequent post-production to improve quality are minimal. However, this method can be expected to develop rapidly, and other applications with more tools and options for working with point cloud models may become available.
A comparison of the individual models and the number of generated Dense Cloud points is shown in Table 2 and in Figure 10, which visualizes the individual 3D models.
Section 4 presents a visual analysis of the 3D texture models. From the visual analysis described in Section 4.1, it was clear that the artwork shot in twilight using 17 and 29 photos was reproduced best; in Figure 10, these models are labeled G1 and G2. The visual analysis also makes clear that direct daylight is not suitable for this type of 3D modeling: although the object was not exposed to direct sunlight, the resulting 3D models exhibited many undesirable optical-physical phenomena, such as gloss and the effects of the material and brushstrokes. Otherwise, the reproduction quality can be very high, and subsequent research will be able to work with a detailed representation of the object's material, structure, and color. Regarding the issue of forgeries in art, follow-up research is necessary.
Using the segmentation of individual points with the precisely defined color value #758605, the number of points with this value in the point cloud was determined. However, it can be seen in Table 1 that the 3D models D3 and G3 were generated from only eight and nine photographs, respectively, and their Dense Clouds contained only a small number of points. It is evident that at least 15 images should be used to reproduce a smaller object. A minimal number of points was also generated for the 3D models made with the LiDAR sensor and the mobile application: the Dense Cloud of these models contained only 18–23 points, which was insufficient for further processing, and this methodology is therefore inadequate for detailed image analysis. In the digitization of art, no similar research has tested the procedures presented in this experiment; the issue is usually examined from the point of view of using machine learning to identify parts of an object or other visual elements. Nevertheless, the experience from these investigations may benefit the next strand of this research, which will focus on applying microscopic details to the entire picture.
There are currently no standards for color reproduction on VR/AR headsets. Due to the different technologies used by manufacturers of this type of device and the different requirements for their performance, the experiment cannot be transferred directly to a headset other than the Oculus Quest 2 used in this study. Nonetheless, a strong basis for standardization exists in the field of color vision, introduced in 1931 (CIE 1931) and described in Section 3. These standards are still used and improved today; however, the new needs of digital technologies have brought new requirements in the field of sensing and, above all, imaging devices and their treatment of color vision.
This sub-experiment, applying colorimetry methods to 3D modeling, indicates the direction of research in the realistic digitization of art, with the aim of reproducing colors as accurately as possible and using them for presentation in a virtual environment. Further research will investigate the influence of structure and material on image reproduction. Attention will also be paid to the lighting conditions during image capture and to their simulation when creating virtual presentations of works of art. The experiment described in this text will also be the subject of further research, especially regarding other methodologies for the realistic 3D reconstruction of objects. Nevertheless, the current study lays the foundation for further research and for the application of these results in forensic science and the fight against counterfeiting in digital and virtual environments.