Article

Photographic Reproduction and Enhancement Using HVS-Based Modified Histogram Equalization

1 Department of Electronic and Computer Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
2 Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
3 Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei 106, Taiwan
* Author to whom correspondence should be addressed.
Sensors 2021, 21(12), 4136; https://doi.org/10.3390/s21124136
Submission received: 5 May 2021 / Revised: 4 June 2021 / Accepted: 15 June 2021 / Published: 16 June 2021
(This article belongs to the Special Issue Advanced Sensing for Intelligent Transport Systems and Smart Society)
Figure 1. Preliminary comparison between parallel-based (top) and cascade-based (bottom) photographic reproduction methods, which illustrates the motivation of this study. In the proposed method, we utilized the HVS-based modified histogram equalization (HE) to avoid the fusion loss from blending two images, which was the main reason why we adopted the cascaded-architecture-type hybrid reproduction strategy. Detailed comparisons are provided in Section 4.
Figure 2. The overall framework of the proposed cascaded-architecture-type reproduction method, where WGIF indicates the weighted guided image filtering technique [24]. In this paper, two stages were designed to complement each other to achieve the advantages of both the local and the global operators.
Figure 3. Comparison of the effect between single-scale and multiscale detail extraction using the test image Cadik_Desk02 shown in Section 4.2. (a) Result of single-scale detail extraction (performed in [24]). (b) Result of multiscale extraction (performed in the proposed method). (c) The pixel values of the detail plane along the horizontal white line segments in (a) and (b). In (a) and (b), the right-side images are enlarged versions of the white rectangles, which are suggested to be closely examined by the reader.
Figure 4. Comparison of the intensity distribution with (and without) the pre-processed detail enhancement along the line segment shown at the bottom right of Figure 1. (a) Before normalization. (b) After normalization. As shown by the red curve in (b), the goal of Section 3.2 was to preserve the local features as much as possible while normally compressing the global features.
Figure 5. Illustration of the proposed bin width adjustment approach in two aspects. (a) Comparison in terms of the output bin width ratios. (b) Comparison in terms of the output histograms.
Figure 6. Self-evaluation of the proposed HVS-based modified histogram equalization. (a) Result and histogram before correction, whose bin widths are calculated by Equation (15). (b) Result and histogram after correction, whose bin widths are calculated by Equation (18). The widest bins in (a) take up too much range, so the resulting image is slightly noisy in the sky.
Figure 7. Comparison of multiple-exposure images and the results of our proposed method. The red rectangles indicate the areas which should be closely examined by the reader.
Figure 8. Results of the test image Spheron_NapaValley by (a) Khan et al. [9], (b) Gu et al. [11], (c) Gao et al. [28], (d) Ok et al. [23], (e) Yang et al. [21], and (f) the proposed method. The white rectangles indicate the areas which should be closely examined by the reader.
Figure 9. Results of the test image Cadik_Desk02 by (a) Khan et al. [9], (b) Gu et al. [11], (c) Gao et al. [28], (d) Ok et al. [23], (e) Yang et al. [21], and (f) the proposed method. The white rectangles indicate the areas which should be closely examined by the reader.
Figure 10. Results of the test image Tree_oAC1 by (a) Khan et al. [9], (b) Gu et al. [11], (c) Gao et al. [28], (d) Ok et al. [23], (e) Yang et al. [21], and (f) the proposed method. The white rectangles indicate the areas which should be closely examined by the reader.
Figure 11. Comparison of the results with close-up images: (a) Khan et al. [9], (b) Gu et al. [11], (c) Gao et al. [28], (d) Ok et al. [23], (e) Yang et al. [21], and (f) the proposed method. The white rectangles indicate the areas which should be closely examined by the reader.
Figure 12. Part of the test images from the dataset [29]. Images 1 to 4 (first row, left to right): SpheronNapaValley_oC5D, MtTamWest_o281, Montreal_float_o935, and dani_synagogue_o367. Images 5 to 7 (second row, left to right): rosette_oC92, rend11_o972, and rend08_o0AF. Image 8 (first from the right): memorial_o876. Image 9 (second from the right): bigFogMap_oDAA. All the images were processed using the proposed method.
Figure 13. Overall comparison of different methods using the test images in [29]. (a) Result of FSITMr_TMQI. (b) Result of FSITMg_TMQI. (c) Result of FSITMb_TMQI.

Abstract

Photographic reproduction and enhancement is challenging because it requires the preservation of all visual information during the compression of the dynamic range of the input image. This paper presents a cascaded-architecture-type reproduction method that can simultaneously enhance local details and retain the naturalness of the original global contrast. In the pre-processing stage, in addition to using a multiscale detail injection scheme to enhance the local details, the Stevens effect is considered to adapt to different luminance levels while normally compressing the global features. In the reproduction stage, we propose a modified histogram equalization method, where individual histogram bin widths are first adjusted according to the overall image content. In addition, the human visual system (HVS) is considered so that a luminance-aware threshold can be used to control the maximum permissible width of each bin. Then, the global tone is modified by performing histogram equalization on the modified histogram. Experimental results indicate that the proposed method outperforms five state-of-the-art methods in terms of visual comparisons and several objective image quality evaluations.

1. Introduction

The human visual system (HVS) is a delicate and complex system. To perceive real-world scenes, human eyes function as visual sensors to receive lights reflected from the surface of objects. Light enters the cornea and refracts; the amount of light entering is regulated by the iris by adjusting the size of the pupil. Then, the ciliary muscle changes the shape of the lens to make the light focus on the retina, where photoreceptors convert the light into electrical signals. Finally, these signals are transmitted to the brain and interpreted as visual images.
Modern people only need to take out their mobile phones from their pockets to capture memorable moments. However, before the camera was invented, people could only record the scenes they saw through words and paintings. As early as the middle of the sixteenth century, inventors began studying imaging technology to lay the foundation for the development of cameras. At the end of the nineteenth century, the Eastman Kodak Company produced film negatives and gradually popularized cameras, and in 1975, they designed the first digital camera that captured a real-world scene by using electronic photodetectors and stored it as a digitized file.
Since the invention of digital cameras, digital photography has evolved rapidly, and people's expectations of image quality continue to rise. Currently, some people choose to use high-dynamic-range (HDR) sensors to record the brightness information of the real world. HDR images use a 32-bit floating-point format to record the details and natural tones in a scene. However, although HDR camera technology is mature, it is limited by the technology of traditional displays. Screen manufacturers have introduced HDR displays, but their prices remain too high for widespread use; therefore, low-dynamic-range (LDR) or standard-dynamic-range (SDR) screens that can only display 256 brightness levels are still popular. Consequently, many studies have been conducted to develop photographic tone reproduction (or tone mapping) methods to address this problem. In addition, some interesting works on histogram and image enhancement have recently been proposed [1,2,3,4,5,6]. In this paper, we present a novel HDR-to-LDR reproduction method that can enhance local details and maintain global naturalness.

2. Related Works and Research Motivation

Currently, most photographic reproduction methods can be classified into three categories: global-based, local-based, and hybrid-based methods. Global-based photographic reproduction methods employ the typical mapping strategies, such as linear mapping, exponential mapping, and logarithmic mapping. To upgrade the quality of the subjective viewing experience, Lenzen and Christmann [7] focused on improving the contrast rather than improving the brightness because they thought the most essential part of reproduction is to increase global contrast. Jung and Xu [8] enhanced the overall contrast of the image by using a transfer function called perceptual quantization, which is based on the human contrast sensitivity that represents the human visual perception of luminance. Khan et al. [9] used an HVS-based optimization step to identify pixels in the histogram bins that are indistinguishable to the human eye and then combined the original histogram and the reconstructed histogram to create a new one for designing the mapping curve. Because the shape of the retinal response curve is asymmetric, Lee et al. [10] used the zone system (a classic photography technique) to obtain a new type of asymmetric sigmoid curve (ASC). By using ASC, the curvature of mapping curves can be determined, and the global contrasts of LDR images can be expanded.
Local-based photographic reproduction methods yield suitable transfer functions for individual pixels. Gu et al. [11] proposed three assumptions and designed a local edge-preserving filter that avoids gradient reversal to perform multiscale decomposition of images. Barai et al. [12] integrated a saliency map with the edge-preserving guided filter and also enhanced the detail layer that is rich in edge information. Then, they used HVS-based parameters to adjust both the saturation and the exposure. Mezeni et al. [13] focused on maximizing the available dynamic range. They performed tone compression in the logarithm domain to reduce drastic changes in the dynamic range. Then, in order to modify the appearances of the tone-mapped results, tone compression in the linear domain was also performed. Reproduction methods based on the gradient domain have also been developed. Fattal et al. [14] presented a reproduction method in which the degree of compression is increased as the gradient becomes larger. Their assumption was that by considering the gradient, the fine details could be preserved as the dynamic range is compressed drastically. Mantiuk et al. [15] also proposed a gradient-based method to enhance the contrast and maintain the polarity of the local contrast (i.e., avoid the artificial artifacts caused by gradient reversal) by imposing additional constraints during the gradient process. Unlike global-based methods, local-based methods tend to focus on adjusting the local contrast by considering adjacent pixels. Although details are thus preserved effectively, there is a high probability of generating artificial artifacts, especially for those pixels at salient edges.
In light of the disadvantages of using global- or local-based reproduction methods alone, studies are increasingly combining the properties of these two in hybrid frameworks. Most hybrid-based reproduction methods can be divided into two different types: cascaded architecture and parallel architecture.
In cascaded-architecture-type hybrid reproduction methods, global and local processes are connected in series. Reinhard et al. [16] applied traditional photography schemes to digital images. To overcome the dynamic issue, they proposed a dodging-and-burning technique; however, it tends to generate artifacts such as halos. Ferradans et al. [17] proposed a reproduction method that considers the characteristic of cones (i.e., photoreceptor cells) in the first global stage; in the subsequent stage, the loss of visual contrast was compensated locally. Although they tried to manipulate the saturation perceived by human eyes, the tones of resultant images were not sufficiently vivid. Benzi et al. [18] presented a hybrid reproduction method that reproduces the adaptation mechanism in the retina. They proposed a virtual retina model that takes pupil adaptation into account; unfortunately, some images tended to have a gray-like appearance.
In parallel-architecture-type hybrid reproduction methods, the modular technique is usually used to subdivide the framework into many small units that can be applied independently. Input images are substituted into different modules so that their characteristics can be considered from different aspects through a weighted fusion. Raffin et al. [19] presented a parallel-based method that uses a tone reproduction curve and a local contrast expansion scheme for detail-rich areas. Artusi et al. [20] applied local mapping at regions with high frequencies and global mapping at the remaining regions. However, the rendered image may be unsatisfactory in some cases, especially at the boundary between locally and globally tone-mapped regions. Yang et al. [21] applied adaptively generated gamma curves to regions with different brightness levels and then performed adaptive weight fusion. The tone-mapped results successfully render a balanced tone between lightness and darkness but tend to lose details. Miao et al. [22] presented a hybrid framework containing two parallel models, where the macro-model manipulates contrasts and the micro-model adjusts details. Although the global information is obtained adaptively, the tones of the resultant images are somewhat blurred because of the final fusion process.
  • Motivation for this study: Recently, the hybrid-based approach appears to be a promising solution to the photographic reproduction problem. However, as mentioned in the above two paragraphs, there is still room for improvement. As shown in Figure 1, the algorithm of [23] presents a typical parallel-architecture-type hybrid reproduction framework, in which the image information content is used to separately enhance the global contrast and the local details of each pixel to different extents, after which a weighted fusion is performed. However, if the tone reproduction process involves this type of parallel architecture and fusion, the resultant images might be biased toward either the global or the local characteristics. Consequently, the parallel-architecture-based method sacrifices, to some extent, either the naturalness of the global tone or the local details.
  • Contribution of this study: In view of this shortcoming of the parallel-architecture-based method, this work presents a cascaded-architecture-type reproduction method. Despite having the advantage of computational efficiency, photographic reproduction methods using a monotonic transfer function are typically vulnerable to detail loss (i.e., loss of the local features), especially in bright and dark areas. In this study, we present a practical reproduction method and demonstrate that, even though it applies a monotonic transfer function (i.e., the proposed HVS-based modified histogram equalization), it is able to preserve the global contrast and simultaneously enhance the local details in bright and dark areas. To adopt the histogram equalization scheme in photographic reproduction, the histogram configuration is reallocated according to two HVS characteristics: the just noticeable difference and the threshold versus intensity curve. The experimental results demonstrate the effectiveness of the proposed method in terms of different evaluations.

3. Proposed Approach

Figure 2 shows the overall framework of the proposed reproduction method. Unlike in the case of the parallel-architecture-based reproduction method, we prioritized regional features to preserve as much detail as possible in the first stage. This strategy may cause concerns over sacrificing the global tone; however, because the human eye is only sensitive to the regional contrast (i.e., distinguishing between relative bright and dark) and not to the absolute value of the luminance difference [14], we believed that retaining the regional characteristics of the image was more important than rendering a natural global tone. Therefore, in the first stage of the proposed method, we expand the local contrast of the input image by enhancing the local features. In the second stage, the dynamic range is allocated according to the composition of the entire image and the properties of the HVS to recover the natural tone adaptively. As a result, the re-rendered image is closer to the real scene, and the high contrast and regional details are maintained. We believe that the two stages of the proposed method can complement each other so that the advantages of both the local and the global operators can be achieved.

3.1. Luminance Extraction and Initial Log Compression

In photographic reproduction methods, it is typical to capture the important information of an image by extracting its luminance channel. To obtain the luminance channel, we convert the input image from the RGB color space to the XYZ color space:
$\begin{bmatrix} X_{in} \\ Y_{in} \\ Z_{in} \end{bmatrix} = \begin{bmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{bmatrix} \begin{bmatrix} HDR_R \\ HDR_G \\ HDR_B \end{bmatrix}$ (1)
where $HDR_R$, $HDR_G$, and $HDR_B$ represent the three RGB channels of the input HDR image. After the matrix transformation in Equation (1), $X_{in}$, $Y_{in}$, and $Z_{in}$ represent the input XYZ channels, where $Y_{in}$ contains the luminance information of the input image. Since human perception of brightness involves a non-linear logarithmic relationship, we then apply log compression to $Y_{in}$ and define the logarithmic luminance ($Y_{log}$) as:
$Y_{log}(i,j) = \log(Y_{in}(i,j) + \varepsilon_1)$ (2)
where $i$ and $j$ are the coordinates of the pixels in the image. A minimum value $\varepsilon_1$ (set at $10^{-6}$ empirically in this study) is added in Equation (2) to avoid a singularity during the compression process.
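To make this step concrete, the following minimal Python sketch implements Equations (1) and (2) under the assumption of a linear-light float RGB input of shape (H, W, 3); all function and variable names are illustrative rather than taken from the original implementation, which was written in MATLAB.

```python
import numpy as np

def extract_log_luminance(hdr_rgb, eps1=1e-6):
    """Eqs. (1)-(2): RGB -> XYZ luminance, then log compression."""
    # RGB -> XYZ matrix of Eq. (1)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = hdr_rgb @ M.T          # per-pixel linear transform
    y_in = xyz[..., 1]           # Y channel carries the luminance
    y_log = np.log(y_in + eps1)  # Eq. (2); eps1 avoids log(0)
    return y_in, y_log
```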

3.2. Pre-Processing for Detail Enhancement

Normally, the local contrast in bright and dark areas tends to be compressed and damaged severely during the reproduction process from HDR to LDR images. To address this problem, this study adopts a detail injection technique that contains two phases. In the first phase, three spatial filters with different radii are used to obtain multiscale feature information. In the second phase, a model of the Stevens effect [25] is integrated into our system to fully consider the correlation between each brightness level and its corresponding perceived contrast.
As shown in Figure 3a, the detail layer extracted using single-scale decomposition tends to lose multiscale characteristics and is vulnerable to high-frequency noise. To cope with this problem, we adopted the weighted guided image filter ($WGIF$) [24], an edge-preserving smoothing technique that is robust against halo artifacts, to obtain multiscale features. Two $WGIF$s with different radii were used: the one with a smaller radius ($r_1$) extracts micro-detail features, and the one with a larger radius ($r_2$) extracts macro-detail features. The procedure of micro- and macro-detail extraction is given by:
$B(i,j) = WGIF(Y_{log}(i,j), r_1, \varepsilon_2)$ (3)
$D_{micro}(i,j) = Y_{log}(i,j) - B(i,j)$ (4)
$D_{macro}(i,j) = B(i,j) - WGIF(B(i,j), r_2, \varepsilon_2)$ (5)
where $B$ is the base plane, and $\varepsilon_2$ is a regularization parameter for penalization. In this work, $r_1$, $r_2$, and $\varepsilon_2$ were empirically set as 15, 30 (double of $r_1$), and 0.01.
In Equations (4) and (5), $D_{micro}$ and $D_{macro}$, respectively, indicate the micro-detail plane and the macro-detail plane. The former contains delicate textures such as hair information, and the latter contains structural edges such as the outlines of objects. Figure 3b shows the result of merging the micro- and macro-detail planes. Compared with single-scale detail extraction (Figure 3a), multiscale micro- and macro-detail extraction (Figure 3b) amplifies more local details and thus avoids the unrealistic visual perception caused by excessive high-frequency noise.
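A minimal sketch of Equations (3)-(5) is given below. Since the WGIF of [24] is not reproduced here, a plain self-guided guided filter with box windows stands in for it; the radii and regularization value follow the settings stated above, and the remaining names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_smooth(y, r, eps):
    """Edge-preserving smoothing of y guided by itself (box window of radius r).
    Stand-in for the WGIF of [24]."""
    size = 2 * r + 1
    mean = uniform_filter(y, size)
    mean_sq = uniform_filter(y * y, size)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    a = var / (var + eps)                  # per-pixel linear coefficients
    b = mean - a * mean
    return uniform_filter(a, size) * y + uniform_filter(b, size)

def extract_details(y_log, r1=15, r2=30, eps2=0.01):
    base = guided_smooth(y_log, r1, eps2)            # Eq. (3): base plane
    d_micro = y_log - base                           # Eq. (4): fine textures
    d_macro = base - guided_smooth(base, r2, eps2)   # Eq. (5): structural edges
    return base, d_micro, d_macro
```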
Subsequently, we further apply the concept of the Stevens effect to modify the detail information. First, the merged detail plane ($D_{merge}$) is defined as:
$D_{merge}(i,j) = 2 \times (D_{micro}(i,j) + D_{macro}(i,j)) - WGIF(D_{micro}(i,j) + D_{macro}(i,j), r_3, \varepsilon_2)$ (6)
Instead of simply adding $D_{micro}$ to $D_{macro}$, a third $WGIF$ with the smallest radius $r_3$ (set as approximately half of $r_1$) is used to enhance the tiny textures and to improve the detail visibility of the merged detail plane. The color appearance phenomenon explains how lighting conditions affect human perception and the corresponding psychological state. Psychophysical experiments show that, despite having the same tristimulus values, human eyes may perceive different colors under inconsistent lighting conditions. For example, a black-and-white image shows relatively low contrast under low-lighting conditions. By contrast, when the same image is moved to a bright area, the white regions become perceivably (cognitively) brighter, and the black regions become perceivably darker. Therefore, the perceived contrast level substantially increases under a bright-lighting condition.
To account for this color appearance phenomenon, the Stevens effect is applied to obtain the injection detail plane ($D_{inj}$) as:
$D_{inj}(i,j) = 10^{\tau(i,j)}$ (7)
$\tau(i,j) = D_{merge}(i,j) \times (0.8 + F_L(i,j))^{0.25}$ (8)
In Equation (7), to emphasize the fineness of intensity variation in detail, the processed detail plane is converted back to a linear domain by a power function. In Equation (8), $\tau$ involves the merged detail plane and the luminance-dependent factor ($F_L$), which is used to adaptively model the Stevens effect at different luminance levels. The $F_L$ value is directly adopted from previous work [26] and can be expressed as:
$F_L(i,j) = 0.1 \times (L_A(i,j))^{0.33} \times \left[1 - \left(\dfrac{1}{L_A(i,j)+1}\right)^4\right]^2 + 0.2 \times L_A(i,j) \times (L_A(i,j)+1)^{-4}$ (9)
where $L_A$ is the luminance of the adapted field. Finally, we combine the injection detail plane and the logarithmic luminance plane as:
$I_{inj}(i,j) = Y_{log}(i,j) + D_{inj}(i,j)$ (10)
The intensity of $I_{inj}$ is further normalized through the following $nor$ function:
$I_{inj\_n}(i,j) = nor(I_{inj}(i,j)) = \dfrac{I_{inj}(i,j) - \min(I_{inj})}{\max(I_{inj}) - \min(I_{inj})}$ (11)
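The detail injection of Equations (6)-(11) can be sketched as follows, reusing guided_smooth() from the previous snippet as the WGIF stand-in. The adaptation-field luminance $L_A$ is passed in as l_adapt; how it is computed (for example, a low-pass version of the input luminance) is an assumption of this sketch rather than a detail stated in the text.

```python
import numpy as np

def inject_details(y_log, d_micro, d_macro, l_adapt, r3=7, eps2=0.01):
    d_sum = d_micro + d_macro
    # Eq. (6): unsharp-style boost of the merged detail plane (r3 ~ half of r1)
    d_merge = 2.0 * d_sum - guided_smooth(d_sum, r3, eps2)
    # Eq. (9): luminance-dependent factor F_L modelling the Stevens effect
    k = 1.0 / (l_adapt + 1.0)
    f_l = 0.1 * l_adapt**0.33 * (1.0 - k**4)**2 + 0.2 * l_adapt * (l_adapt + 1.0)**-4
    # Eqs. (7)-(8): luminance-aware exponent, back toward a linear-like domain
    d_inj = 10.0 ** (d_merge * (0.8 + f_l)**0.25)
    # Eqs. (10)-(11): inject into the log luminance and normalize to [0, 1]
    i_inj = y_log + d_inj
    return (i_inj - i_inj.min()) / (i_inj.max() - i_inj.min())
```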
Figure 4 shows the pixel intensity distribution in each step; it illustrates the underlying concept of the detail enhancement performed in Section 3.2. As indicated by the green line in Figure 4b, if the luminance channel is directly adjusted by a linear compression, most of the limited dynamic range is preferentially assigned to the regions where local contrasts are relatively high; by contrast, the remaining regions are compressed (to almost zero) severely and thus drastically lose details. Therefore, in view of the nonlinearity between the actual brightness and the brightness perceived by human eyes, we first converted the luminance channel into the logarithmic domain (blue dashed curve in Figure 4a). However, although the major coarse details (i.e., large-scale variations) in the image were maintained, the small-scale details tended to be lost after normalized compression.
To address the above problem, we proposed injecting the micro- and macro-detail planes into the logarithmic luminance plane (red dashed curve in Figure 4a). Moreover, the Stevens effect was applied to account for the color appearance phenomenon in which the perceived image contrast varies as the lighting condition changes. Through the detail injection procedure, the local details are strengthened and thus remain visible after normalization, as desired. Nevertheless, the global contrast of $nor(I_{inj})$ was sacrificed, as shown in Figure 4b. In the next step, we deal with this problem by using the HVS-based modified histogram equalization.

3.3. HVS-Based Modified Histogram Equalization

In the first stage, the proposed method prioritizes preserving local features. However, the dynamic ranges of images are decreased, and thus, the global contrast is low. To solve this problem, in the second stage, we proposed using the property of image histograms and the HVS characteristics to adjust the configuration of the dynamic range by stretching pixel intensities. Therefore, after reallocation, the overall tone appears in a high-contrast state without sacrificing detailed information.
A histogram is a discrete function that counts the total number of pixels at each intensity level. Therefore, we can use it to read the information contained in an image. For example, a dark image has mostly low-intensity pixels, so the peak of its histogram appears at a left-side (i.e., lower-intensity) level. Likewise, the pixels of a low-contrast image are distributed over closely spaced intensity levels, producing a concentrated and narrow histogram. In addition, traditional histograms usually accumulate $m$ equispaced bin widths, constructing the bin edges $Edge_k$ with the same spacing:
$Edge_k = \begin{cases} \min(I_{inj\_n}), & \text{if } k = 0 \\ Edge_{k-1} + \Delta\omega, & \text{if } k = 1, 2, \ldots, m \end{cases}$ (12)
where $I_{inj\_n}$ indicates the luminance channel after detail injection, $I_{max}$ and $I_{min}$, respectively, indicate the maximum and minimum of $I_{inj\_n}$, and $\Delta\omega = (I_{max} - I_{min})/m$ is the equispaced bin width. The parameter $m$ adjusts the total number of quantification levels in the histogram. A larger $m$ value means that more intensity levels are used to render a high-quality image, whereas a smaller $m$ value requires less computation time. Under this trade-off between time and quality, the value of $m$ was empirically set as 60. Moreover, treating the input image as an unknown signal, the probability $P(b_k)$ assigned to each bin can be expressed as:
$P(b_k) = \dfrac{N(b_k)}{Q}$ (13)
$I_{inj\_n}(i,j) \in b_k, \quad \text{if } Edge_{k-1} \leq I_{inj\_n}(i,j) < Edge_k$ (14)
where $b_k$ is the $k$-th bin and is defined as the interval between $Edge_{k-1}$ and $Edge_k$, that is, $b_k = [Edge_{k-1}, Edge_k)$. The bin count $N(b_k)$ is defined as the number of pixels within $b_k$, and $Q$ is the total number of pixels in the image.
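A short sketch of the histogram construction in Equations (12)-(14) follows, with $m = 60$ equispaced bins over the detail-injected luminance plane; the names are illustrative.

```python
import numpy as np

def build_histogram(i_inj_n, m=60):
    edges = np.linspace(i_inj_n.min(), i_inj_n.max(), m + 1)  # Eq. (12): m equispaced bins
    counts, _ = np.histogram(i_inj_n, bins=edges)              # N(b_k), membership per Eq. (14)
    probs = counts / i_inj_n.size                               # Eq. (13): P(b_k) = N(b_k) / Q
    return edges, counts, probs
```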
Traditional histogram equalization uses a uniform bin width to construct a histogram and subsequently performs histogram-based mapping to adjust the dynamic range. However, for histograms made from a uniform bin width, the bin counts may vary significantly: pixels belonging to bins with large counts have insufficient space for stretching their intensities to depict image details, whereas pixels belonging to bins with small (sometimes even zero) counts occupy too much dynamic range and thus limit the arrangement of the entire contrast scale. Based on this observation, we found that instead of stretching intensities with a fixed, equally spaced bin width, it is better to arrange each bin width dynamically according to the image characteristics.
In this study, two factors were considered to adjust the dynamic range through the reallocation of the histogram configuration. First, the limited dynamic range is assigned to the bins where sufficient pixels actually exist. Second, a psychophysical metric, the just noticeable difference (JND), is used to balance regional contrast and global contrast. Therefore, the bin width is initialized in proportion to $N(b_k)$, which can be expressed as:
$\omega_k = N(b_k)^{f}$ (15)
where $\omega_k$ represents the initial width of the $k$-th bin, and $f$ equals one minus the standard deviation of $P(b_k)$. When the probability of pixels appearing at each intensity level is more dispersed (i.e., the standard deviation is large), the difference between bin counts is larger. Moreover, if the gaps between individual bin widths are wide, the dynamic range is mostly occupied by the intensity levels corresponding to large bin counts; however, if the gaps are small, the differences among individual intensity levels become indistinguishable, leading to the loss of important image information. Therefore, we formulated Equation (15) as a power function and determine the degree of compression based on the degree of probability dispersion; that is, the more obvious the dispersion of the histogram, the smaller the $f$ value used.
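Equation (15) can be sketched in a few lines, with the exponent $f$ computed from the bin probabilities as defined above; the function name is illustrative.

```python
import numpy as np

def initial_bin_widths(counts, probs):
    f = 1.0 - np.std(probs)           # more dispersed histogram -> smaller exponent
    return counts.astype(float) ** f  # Eq. (15): width proportional to N(b_k)^f
```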
Cutting down the bin widths where bin counts are small and reallocating wider bin widths to the bins where bin counts are large can prevent most pixels from being confined to narrow intervals of the entire dynamic range. Nevertheless, this is not sufficient. Once an image contains large patches of similar colors, a great number of pixels with close intensities are assigned to certain bins, and the pixels of these bins dominate the dynamic range of the output image, thereby limiting the stretch range of other pixels. Therefore, from the aspect of perceived brightness, we further use the characteristics of the HVS to establish a mechanism for correcting $\omega_k$.
The background luminance affects the perception of human eyes. The JND metric represents such a characteristic of the HVS; it describes the minimum luminance difference between a target and the background that is noticeable by human eyes. At the beginning of the experiment, the observers fixate on a screen until they are adapted to the background luminance level (hereafter called the adaptation level, $L_a$). Then, the screen starts flashing a disc-shaped light spot, and the observers are asked to report whether the target disc can be recognized from the background. The experiment defines JNDs under different adaptation levels by adjusting the luminance, and as a result, the threshold versus intensity (TVI) curve can be obtained by combining the relationship between the detection threshold and the background luminance in the logarithmic domain. In this study, we directly adopted the JND/TVI model from [27], which can be expressed as:
$\log(\Delta L) = \begin{cases} -3.81, & \text{if } \log(L_a) < -3.94 \\ (0.405 \times \log(L_a) + 1.6)^{2.8} - 3.81, & \text{if } -3.94 \leq \log(L_a) < -1.44 \\ \log(L_a) - 1.345, & \text{if } -1.44 \leq \log(L_a) < -0.0184 \\ (0.249 \times \log(L_a) + 0.65)^{2.7} - 1.67, & \text{if } -0.0184 \leq \log(L_a) < 1.9 \\ \log(L_a) - 2.205, & \text{if } \log(L_a) \geq 1.9 \end{cases}$ (16)
where $\Delta L$ is the threshold value perceived by human eyes at each adaptation level, and the units of both $\Delta L$ and $L_a$ are $cd/m^2$.
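For reference, the piecewise JND/TVI model of Equation (16) can be sketched as follows, using the constants given above (adopted from [27]); both the adaptation level and the returned threshold are in cd/m^2.

```python
import numpy as np

def jnd_threshold(l_a):
    """Eq. (16): detection threshold DeltaL for adaptation luminance l_a (cd/m^2)."""
    x = np.log10(np.asarray(l_a, dtype=float))
    log_dl = np.piecewise(
        x,
        [x < -3.94,
         (x >= -3.94) & (x < -1.44),
         (x >= -1.44) & (x < -0.0184),
         (x >= -0.0184) & (x < 1.9),
         x >= 1.9],
        [-3.81,
         lambda t: (0.405 * t + 1.6) ** 2.8 - 3.81,
         lambda t: t - 1.345,
         lambda t: (0.249 * t + 0.65) ** 2.7 - 1.67,
         lambda t: t - 2.205])
    return 10.0 ** log_dl
```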
As depicted in Equation (16), the JND/TVI model is defined in a log-log domain. Although human eyes can capture a wide range of luminance intensities, two types of retinal cells actually work in cooperation: the rod cells function in dim-light conditions, and the cone cells function in well-lit conditions. Therefore, the JND value increases as the adaptation level increases, implying that bins covering different luminance intensities inherently require different bin widths; that is, the bins at higher intensity levels need more space for stretching. Considering this property, this study proposes a JND-based threshold ($T_k^{JND}$) to ensure that the limited dynamic range reaches the most effective arrangement:
$T_k^{JND} = \omega_k \times \dfrac{\Delta L_k}{L_k}$ (17)
where $T_k^{JND}$ represents the maximum permissible bin width of the $k$-th bin, and $\Delta L_k$ represents the threshold value of the $k$-th bin from Equation (16). Because the JND is proportional to the background luminance, the maximum intensity in the $k$-th bin is set as $L_a$ for the calculation of the corresponding $\Delta L_k$ so that all pixels in the bin are guaranteed to have sufficient stretched space. Moreover, for bins whose initial widths exceed $T_k^{JND}$, pixel distortion may occur in the output image because they initially obtain too much stretched space. Therefore, each initial bin width is corrected by:
$\omega'_k = \begin{cases} \omega_k, & \text{if } \omega_k \leq T_k^{JND} \\ T_k^{JND}, & \text{otherwise} \end{cases}$ (18)
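A sketch of the correction in Equations (17) and (18), reusing jnd_threshold() from the previous snippet, is given below. Taking the maximum luminance of each bin as its adaptation level follows the description above, while the array-based interface is an assumption of this sketch.

```python
import numpy as np

def correct_bin_widths(widths, bin_max_luminance):
    """Cap each initial width omega_k by the JND-based threshold T_k^JND."""
    delta_l = jnd_threshold(bin_max_luminance)      # DeltaL_k from Eq. (16)
    t_jnd = widths * delta_l / bin_max_luminance    # Eq. (17)
    return np.minimum(widths, t_jnd)                # Eq. (18): corrected widths
```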
In summary, Figure 5a indicates the variations in the bin width ratio arrangement in different stages, where the cyan bars indicate the equispaced bin widths used in traditional methods, the yellow bars indicate the initial bin widths from Equation (15), and the magenta bars indicate the corrected bin widths from Equation (18). Considering that dominant bins (bins with significantly wide bin widths) would produce unnatural colors due to the overemphasis of certain pixels, this work utilizes the JND model to define the maximum permissible bin width, i.e., the green curve. As shown in Figure 5a, the bins whose bin width ratios exceeded their corresponding JND threshold were corrected (i.e., the extra bin width was removed), while the other bins kept their initially allotted bin widths, maintaining the principle of assigning the dynamic range to bins that actually contain pixels. Figure 5b shows two output histograms. The cyan one was generated using the traditional approach, and the magenta one was generated using the proposed bin width adjustment approach, which automatically allocates bin widths and appropriately utilizes the dynamic range. Furthermore, the histogram generated using the proposed method not only covers wide intensity levels, which means that the global contrast has been visually expanded, but also helps generate natural tones that are close to the real scene.

3.4. Luminance Adaptation and Color Recovery

After bin width adjustment, all bin widths differ from one another, and each possesses a suitable range because both the properties of the HVS and the image content are considered. The limited dynamic range is preferentially assigned to places with abundant details by imposing restrictions on the bins where the probability of pixels appearing is low. Next, the modified bin edges ($Edge'_k$) can be calculated as:
$Edge'_k = \begin{cases} 0, & \text{if } k = 0 \\ Edge'_{k-1} + \dfrac{\omega'_k}{\sum_{k=1}^{m}\omega'_k}, & \text{if } k = 1, 2, \ldots, m \end{cases}$ (19)
From the information of the modified bin edges, a look-up table (LUT) is constructed by using the standard histogram equalization method and a linear interpolation scheme. The LUT is used to form the output luminance plane ($Y_{out}$). Because the LUT is a global monotonic mapping function, artificial artifacts such as blocking and halo effects are avoided when the pixel intensities are rearranged. Finally, the tone-mapped image is obtained as:
$LDR_c(i,j) = \left(\dfrac{HDR_c(i,j)}{Y_{in}(i,j)}\right)^{s} \cdot Y_{out}(i,j)$ (20)
where the subscript $c \in \{R, G, B\}$ represents the three RGB channels, and $s$ is set as 0.65 to control the saturation.
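The following sketch summarizes Section 3.4 under the reconstruction above: the corrected widths are accumulated and normalized into new bin edges (Equation (19)), a global monotonic LUT is obtained by linear interpolation from the old equispaced edges to the new ones, and color is recovered per channel with Equation (20). The normalization of the cumulative widths and the interpolation-based LUT are assumptions of this sketch rather than verbatim details of the original implementation.

```python
import numpy as np

def reproduce(hdr_rgb, y_in, i_inj_n, edges, corrected_widths, s=0.65):
    # Eq. (19): cumulative corrected widths, normalized so the edges span [0, 1]
    new_edges = np.concatenate(([0.0], np.cumsum(corrected_widths)))
    new_edges /= new_edges[-1]
    # Global monotonic LUT: linear interpolation from the old (equispaced)
    # bin edges to the reallocated ones gives the equalized output luminance
    y_out = np.interp(i_inj_n, edges, new_edges)
    # Eq. (20): per-channel color recovery with saturation exponent s
    ratio = (hdr_rgb / (y_in[..., None] + 1e-12)) ** s
    return np.clip(ratio * y_out[..., None], 0.0, 1.0)
```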

4. Experimental Results and Discussions

4.1. Self-Evaluation

To verify the effectiveness of our proposed algorithm, we compared it with five state-of-the-art photographic reproduction algorithms: a global-based method [9] (published in 2018), two local-based methods [11] (published in 2013) and [28] (published in 2020), and two parallel-architecture-type hybrid methods [21] (published in 2019) and [23] (published in 2017). The test images were obtained from public online resources [29,30,31]. For the comparison of computational performance, taking the image memorial_o876 (with a size of 768 × 512) as an example, the processing time required to generate a reproduced image was 0.6069 s in [9], 0.9511 s in [11], 3.8284 s in [21], 1.3471 s in [23], 1.0971 s in [28], and 0.9325 s in the proposed method. All the experiments were performed in MATLAB R2019b with an i7-4790 processor running at 3.60 GHz. In addition to the self-evaluation (Section 4.1) of the proposed method, subjective and objective comparisons with other methods are provided in Section 4.2 and Section 4.3, respectively.
First, we evaluated the most important property in this study, namely, the HVS-based modified histogram equalization approach. Unlike other methods that simply perform global compression, we proposed the use of a bin width adjustment scheme (and the corresponding histogram equalization) to reallocate the overall tone into a fixed dynamic range. Figure 6a,b show the histograms and the results before and after bin width correction, respectively, where the largest bin widths of each histogram are marked in yellow. In Figure 6a, a large number of pixels have similar luminance intensity; therefore, the yellow bin initially possesses a large proportion of the dynamic range. However, if too much dynamic range is allocated to the pixels with close intensities, the image contrast will be over-stretched and will thus over-amplify some noises, as shown in the sky in Figure 6a. To address this problem, we refer to the characteristic of the HVS and use the JND-based threshold to automatically correct the bin widths that will take up too much dynamic range. As shown in Figure 6b, after bin width correction, the global contrast was maintained, and the output result has a more natural appearance.
In Figure 7, we refer to images with different exposure levels (LDR images downloaded from [32]) to evaluate our proposed method from a different aspect. Generally, for comparison among images captured by a common camera, the overall tones of middle-exposed images were visually pleasing and close to the real scenes, whereas under- and over-exposed images clearly show the details of bright and dark areas, respectively. Although high-end HDR cameras can record a wider dynamic range of luminance intensities, considerable detailed information tends to be lost when an HDR image is directly displayed on an LDR monitor (second column from the right). As shown in the rightmost column, the results of our method not only maintain natural tones but also preserve the details of the bright and dark areas.

4.2. Subjective Analysis

In Figure 8, Figure 9, Figure 10 and Figure 11, we selected images under different conditions to verify whether the proposed method outperforms other methods in rendering natural tones and rich details. Figure 8 shows the tone-mapped results for the test image Spheron_NapaValley. In Figure 8a,e, although the natural tone of the scene was retained, the details of the dark areas can hardly be seen. In Figure 8d, the detail clarity problem was slightly improved; however, the weighted fusion process causes unnatural seams in the sky. In Figure 8b, the details are clearly visible; however, the global tone is faded. In Figure 8c, the method of [28] improved detail clarity and contrast; however, the global tone was over-saturated, resulting in a halo effect around the sunset. The result of our method is presented in Figure 8f, where the trade-off between local and global contrast is balanced so that clear details and the overall color information are retained simultaneously.
Figure 9 shows the tone-mapped results for the test image Cadik_Desk02. In Figure 9a,d, the global contrast was maintained; however, detailed information such as the text in the book was lost. In Figure 9b, the details are well preserved; however, artificial artifacts caused by gradient reversal appear around the lamp. In Figure 9c, the overall tone is clearly bright, and the text in the book is slightly visible; however, the details in the bottom-left dark area are barely visible. In Figure 9e, an adaptive gamma correction method was used to correct the tones of bright and dark areas separately; however, for a dim indoor scene like this example, an unnatural overall tone tends to be produced. In Figure 9f, the preservation of the natural tone results in a visually pleasing appearance; further, the details are clear, and no artifacts are present because of the proposed multiscale detail injection scheme. Clearly, the proposed method provided the best performance in terms of the coordination of global and local characteristics.
Unlike the indoor scene in Figure 9, Figure 10 shows the reproduced results of an outdoor scene with sufficient lighting: Tree_oAC1. In Figure 10a, the detailed textures of the trunk and the rear trees were not preserved and are thus obscured. In Figure 10b, the sky region and fallen leaves are clear; however, the colors are not sufficiently vivid and lack contrast. In Figure 10c, the details are clear, but the colors are oversaturated, leading to edge distortion and a less pleasing visual experience. In Figure 10e, the global chrominance is somewhat distorted, and thus the visual quality is degraded in terms of both global tone and local details. Moreover, in Figure 10b,c,e, the noise in the central tree-hole region was amplified. In Figure 10d, although the overall contrast was preserved, the global chrominance is faded (especially in the background). In Figure 10f, in addition to preserving naturalness and details, our method prevents the high-frequency noise in the tree hole from being amplified and thus provides a visually pleasing appearance.
Figure 11 presents three more examples, with magnified images of the dark and bright areas provided at the right-hand side of each image. An outstanding photographic reproduction method not only maintains the structural information of the input image but also produces natural and attractive results. In terms of structure, the proposed method could effectively preserve the details of bright and dark areas and avoid artificial artifacts that are usually produced by the gradient reversal of local-based photographic reproduction methods. In terms of visual attraction, image components were used to allocate a limited dynamic range dynamically, and furthermore, the characteristics of the HVS were considered. Our resultant images not only conformed to the human visual perception but also provided a good viewing experience for observers.

4.3. Objective Analysis

In addition to the subjective comparisons, objective evaluation results were obtained using all the images of the dataset in [29], where the dynamic range varies from 2.0 to 8.9, as shown in Table 1. As shown in Figure 12, the images of the dataset from [29] were obtained from various scenes, e.g., outdoor/indoor scenes, day/night scenes, country/urban scenes, and so on. The first objective quality metric is the tone mapping quality index (TMQI) [33]. It measures image quality in terms of the structural fidelity (TMQI-S) and statistical naturalness (TMQI-N) between the input HDR image and the output LDR result, as well as the overall quality (TMQI-Q) obtained by integrating TMQI-S and TMQI-N through weighted power functions.
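For orientation, TMQI-Q combines structural fidelity S and statistical naturalness N through weighted power functions; the sketch below uses the weighting constants reported in the TMQI paper [33], quoted here as an assumption rather than taken from the present article.

```python
def tmqi_q(s, n, a=0.8012, alpha=0.3046, beta=0.7088):
    """Overall TMQI-Q from structural fidelity s and naturalness n (both in [0, 1])."""
    return a * s**alpha + (1.0 - a) * n**beta
```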
Table 2, Table 3 and Table 4 present the TMQI data in terms of TMQI-S, TMQI-N, and TMQI-Q, where the highest and second-highest scores of each row are marked in green and yellow, respectively. The scores of these three evaluation standards all lie between 0 and 1; the higher the TMQI score, the better the quality of the reproduced image. Moreover, the total number of highest scores of each method is counted in the last row, and the method with the highest total is marked in bold. As shown in Table 2, Table 3 and Table 4, our method has, respectively, 16, 9, and 12 first-ranked images for the three quality indicators, thereby surpassing the other five algorithms in each table. The results listed in Table 2, Table 3 and Table 4 indicate the superiority of the proposed method in terms of the different TMQI metrics.
The second objective quality metric is called the feature similarity index for tone-mapped images (FSITM_TMQI) [34]. It claims to be an improved version of TMQI because it further considers the phase-derived features. As in TMQI, the score of FSITM_TMQI was between 0 and 1, and a higher one indicates better quality. Figure 13 presents the results of FSITMr_TMQI, FSITMg_TMQI, and FSITMb_TMQI obtained by each method, where the subscript indicates one of the RGB channels. Again, the proposed method exhibited better overall performance than other methods; specifically, it had the top-three scores for most of the 33 images.
The abovementioned indicators are full-reference image quality assessment (FRIQA) techniques that are formulated by referring to the undistorted images. Next, we provide a comparison using two no-reference image quality assessment (NRIQA) techniques: the blind/referenceless image spatial quality evaluator (BRISQUE) [35] and the blind tone-mapped quality index (BTMQI) [36]. BRISQUE refers to the pixel distribution of an image and uses the relationship between normalized luminance coefficients and adjacent pixels to obtain features. BTMQI analyzes the information, statistical naturalness, and structural gradient, which represent different types of image features. Both these NRIQA indicators measure image quality through the features of a tone-mapped image; the lower the score, the better the quality. Regarding the research topic of this paper, as far as we know, the TMQI metrics can be considered the most representative; for example, they were used in the studies of [9,10,13,18,21,22,23,28,33,34,36]. The remaining selected metrics are also commonly used to evaluate the performance of photographic reproduction methods: the FSITM metrics were used in [9,13,33], and the no-reference metrics (BRISQUE and BTMQI) were used in [13,23,28,35,36].
Table 5 presents the averaged scores obtained using the abovementioned FRIQA and NRIQA techniques, where the first and second places are marked in green and yellow, respectively. Among the eight objective quality indicators, our method achieved six first-ranked scores and one second-ranked score. Notably, the proposed method ranked only third in TMQI-N. This is because, in the pre-processing stage of our method, a detail injection scheme was utilized to enhance the local details; the details, especially in the highlight and dark regions, were indeed enhanced and provide visually pleasing results, as shown in Figure 8, Figure 9, Figure 10 and Figure 11, but the statistical naturalness of the image was slightly affected. Overall, the performance of our work remains remarkable, as shown in Table 5, thereby validating the effectiveness of the proposed method.

5. Conclusions

This study proposes a cascaded-architecture-type photographic reproduction method that prioritizes enhancing multiscale local features and then utilizes an HVS-based modified histogram equalization scheme to formulate a global tone adaptation curve. Unlike traditional methods that use single-scale decomposition, we used a multiscale micro- and macro-detail injection technique to improve the visibility of local features. Moreover, in parallel-architecture-type hybrid reproduction methods, the final weighted fusion normally acts as a balancing process; to prevent abrupt fusion results, either the clarity of details (the advantage of local-based reproduction methods) or the naturalness of tones (the advantage of global-based reproduction methods) is sacrificed. As a result, the resulting images from parallel-architecture-type hybrid reproduction methods tend to appear dull. To address this problem, we propose combining the advantages of global-based and local-based approaches in a cascaded architecture to ensure consistency between the dark and bright regions throughout the image and to provide a natural appearance. The experimental results of subjective visual comparisons (Figure 8, Figure 9, Figure 10 and Figure 11) and objective comparisons (Table 2, Table 3, Table 4 and Table 5) validate the effectiveness and superiority of our proposed method.

Author Contributions

Y.-Y.C. carried out the studies and drafted the manuscript. K.-L.H. participated in its design and helped to draft the manuscript. Y.-C.T. and J.-H.W. conducted the experiments and performed the statistical analysis. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Ministry of Science and Technology, TAIWAN, under Grant No. MOST 108-2221-E-027-095-MY2.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wang, J.; Huang, L.; Zhang, Y.; Zhu, Y.; Ni, J.; Shi, Y. An effective steganalysis algorithm for histogram-shifting based reversible data hiding. Comput. Mater. Contin. 2020, 64, 325–344.
2. Hu, J.; Lei, Z.; Li, X.; He, Y.; Zhou, J. Ultrasound speckle reduction based on histogram curve matching and region growing. Comput. Mater. Contin. 2020, 65, 705–722.
3. Mamoun, M.E.; Mahmoud, Z.; Kaddour, S. Efficient analysis of vertical projection histogram to segment Arabic handwritten characters. Comput. Mater. Contin. 2019, 60, 55–66.
4. Huang, D.; Gu, P.; Feng, H.; Lin, Y.; Zheng, L. Robust visual tracking models designs through kernelized correlation filters. Intell. Autom. Soft Comput. 2020, 26, 313–322.
5. Liu, K. The data classification query optimization method for English online examination system based on grid image analysis. Intell. Autom. Soft Comput. 2020, 26, 749–754.
6. Xia, Z.; Lu, L.; Qiu, T.; Shim, H.J.; Chen, X.; Jeon, B. A privacy-preserving image retrieval based on AC-coefficients and color histograms in cloud environment. Comput. Mater. Contin. 2019, 58, 27–43.
7. Lenzen, L.; Christmann, M. Subjective viewer preference model for automatic HDR down conversion. In Proceedings of the IS&T International Symposium on Electronic Imaging, Burlingame, CA, USA, 1–2 February 2017; pp. 191–197.
8. Jung, C.; Xu, K. Naturalness-preserved tone mapping in images based on perceptual quantization. In Proceedings of the 2017 IEEE International Conference on Image Processing, Beijing, China, 17–20 September 2017; pp. 2403–2407.
9. Khan, I.R.; Rahardja, S.; Khan, M.M.; Movania, M.M.; Abed, F. A tone-mapping technique based on histogram using a sensitivity model of the human visual system. IEEE Trans. Ind. Electron. 2018, 65, 3469–3479.
10. Lee, D.H.; Fan, M.; Kim, S.; Kang, M.; Ko, S. High dynamic range image tone mapping based on asymmetric model of retinal adaptation. Signal Process. Image Commun. 2018, 68, 120–128.
11. Gu, B.; Li, W.; Zhu, M.; Wang, M. Local edge-preserving multiscale decomposition for high dynamic range image tone mapping. IEEE Trans. Image Process. 2013, 22, 70–79.
12. Barai, N.R.; Kyan, M.; Androutsos, D. Human visual system inspired saliency guided edge preserving tone-mapping for high dynamic range imaging. In Proceedings of the 2017 IEEE International Conference on Image Processing, Beijing, China, 17–20 September 2017; pp. 1017–1021.
13. Mezeni, E.; Saranovac, L.V. Enhanced local tone mapping for detail preserving reproduction of high dynamic range images. J. Vis. Commun. Image Represent. 2018, 53, 122–133.
14. Fattal, R.; Lischinski, D.; Werman, M. Gradient domain high dynamic range compression. ACM Trans. Graph. 2002, 21, 249–256.
15. Mantiuk, R.; Myszkowski, K.; Seidel, H. A perceptual framework for contrast processing of high dynamic range images. ACM Trans. Appl. Percept. 2006, 3, 286–308.
16. Reinhard, E.; Stark, M.; Shirley, P.; Ferwerda, J. Photographic tone reproduction for digital images. ACM Trans. Graph. 2002, 21, 267–276.
17. Ferradans, S.; Bertalmio, M.; Provenzi, E.; Caselles, V. An analysis of visual adaptation and contrast perception for tone mapping. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2002–2012.
18. Benzi, M.; Escobar, M.; Kornprobst, P. A bio-inspired synergistic virtual retina model for tone mapping. Comput. Vis. Image Underst. 2018, 168, 21–36.
19. Raffin, M.; Guarnieri, G. Tone mapping and enhancement of high dynamic range images based on a model of visual perception. In Proceedings of the 10th IASTED International Conference on Computer Graphics and Imaging, Anaheim, CA, USA, 6 February 2008; pp. 190–195.
20. Artusi, A.; Akyuz, A.; Roch, B.; Michael, D.; Chrysanthou, Y.; Chalmers, A. Selective local tone mapping. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013; pp. 2309–2313.
21. Yang, K.; Li, H.; Kuang, H.; Li, C.; Li, Y. An adaptive method for image dynamic range adjustment. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 640–652.
22. Miao, D.; Zhu, Z.; Bai, Y.; Jiang, G.; Duan, Z. Novel tone mapping method via macro-micro modeling of human visual system. IEEE Access 2019, 7, 118359–118369.
23. Ok, J.; Lee, C. HDR tone mapping algorithm based on difference compression with adaptive reference values. J. Vis. Commun. Image Represent. 2017, 43, 61–76.
24. Li, Z.; Zheng, J.; Zhu, Z.; Yao, W.; Wu, S. Weighted guided image filtering. IEEE Trans. Image Process. 2015, 24, 120–129.
25. Stevens, J.C.; Stevens, S.S. Brightness function: Effects of adaptation. J. Opt. Soc. Am. 1963, 53, 375–385.
26. Kuang, J.; Johnson, G.M.; Fairchild, M.D. iCAM06: A refined image appearance model for HDR image rendering. J. Vis. Commun. Image Represent. 2007, 18, 406–414.
27. Ward, G. Defining dynamic range. In Proceedings of the ACM SIGGRAPH 2008, Los Angeles, CA, USA, 11 August 2008; pp. 1–3.
28. Gao, S.; Tan, M.; He, Z.; Li, Y. Tone mapping beyond the classical receptive field. IEEE Trans. Image Process. 2020, 29, 4174–4187.
29. Anyhere Database. Available online: http://www.anyhere.com/ (accessed on 30 August 2020).
30. Paris, S.; Hasinoff, S.W.; Kautz, J. Local Laplacian filters: Edge-aware image processing with a Laplacian pyramid. ACM Trans. Graph. 2011, 30, 68.
31. HDR-Eye Database. Available online: https://www.epfl.ch/labs/mmspg/downloads/hdr-eye/ (accessed on 18 August 2020).
32. Cai, J.; Gu, S.; Zhang, L. Learning a deep single image contrast enhancer from multi-exposure images. IEEE Trans. Image Process. 2018, 27, 2049–2062.
33. Yeganeh, H.; Wang, Z. Objective quality assessment of tone mapped images. IEEE Trans. Image Process. 2013, 22, 657–667.
34. Nafchi, H.Z.; Shahkolaei, A.; Moghaddam, R.F.; Cheriet, M. FSITM: A feature similarity index for tone-mapped images. IEEE Signal Process. Lett. 2015, 22, 1026–1029.
35. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708.
36. Gu, K.; Wang, S.; Zhai, G.; Ma, S.; Yang, X.; Lin, W.; Zhang, W.; Gao, W. Blind quality assessment of tone-mapped images via analysis of information, naturalness and structure. IEEE Trans. Multimed. 2016, 18, 432–443.
Figure 1. Preliminary comparison between parallel-based (top) and cascade-based (bottom) photographic reproduction methods, which illustrates the motivation of this study. In the proposed method, we utilized the HVS-based modified histogram equalization (HE) to avoid the fusion loss from blending two images, which was the main reason why we adopted the cascaded-architecture-type hybrid reproduction strategy. Detailed comparisons are provided in Section 4.
Figure 2. The overall framework of the proposed cascaded-architecture-type reproduction method, where WGIF indicates the weighted guided image filtering technique [24]. In this paper, two stages were designed to complement each other to achieve the advantages of both the local and the global operators.
Figure 3. Comparison between single-scale and multiscale detail extraction using the test image Cadik_Desk02 shown in Section 4.2. (a) Result of single-scale detail extraction (as performed in [24]). (b) Result of multiscale extraction (as performed in the proposed method). (c) Pixel values of the detail plane along the horizontal white line segments in (a) and (b). In (a) and (b), the right-side images are enlarged versions of the white rectangles and should be closely examined by the reader.
Figure 4. Comparison of the intensity distribution with (and without) the pre-processed detail enhancement along the line segment shown in the bottom right of Figure 1. (a) Before normalization. (b) After normalization. As shown in the red curve of (b), the goal of Section 3.2 was to preserve the local features as much as possible while normally compressing the global features.
Figure 5. Illustration of the proposed bin width adjustment approach in two aspects. (a) Comparison in terms of the output bin width ratios. (b) Comparison in terms of the output histograms.
Figure 6. Self-evaluation of the proposed HVS-based modified histogram equalization. (a) Result and histogram before correction, with bin widths calculated by Equation (15). (b) Result and histogram after correction, with bin widths calculated by Equation (18). The bins in (a) are too wide, so the resulting image is slightly noisy in the sky.
Figure 7. Comparison of multiple-exposure images and the results of our proposed method. The red rectangles indicate the areas which should be closely examined by the reader.
Figure 8. Results of the test image Spheron_NapaValley by (a) Khan et al. [9], (b) Gu et al. [11], (c) Gao et al. [28], (d) Ok et al. [23], (e) Yang et al. [21], and (f) the proposed method. The white rectangles indicate the areas which should be closely examined by the reader.
Figure 9. Results of the test image Cadik_Desk02 by (a) Khan et al. [9], (b) Gu et al. [11], (c) Gao et al. [28], (d) Ok et al. [23], (e) Yang et al. [21], and (f) the proposed method. The white rectangles indicate the areas which should be closely examined by the reader.
Figure 10. Results of the test image Tree_oAC1 by (a) Khan et al. [9], (b) Gu et al. [11], (c) Gao et al. [28], (d) Ok et al. [23], (e) Yang et al. [21], and (f) the proposed method. The white rectangles indicate the areas which should be closely examined by the reader.
Figure 11. Comparison of the results with close-up images: (a) Khan et al. [9], (b) Gu et al. [11], (c) Gao et al. [28], (d) Ok et al. [23], (e) Yang et al. [21], and (f) the proposed method. The white rectangles indicate the areas which should be closely examined by the reader.
Figure 12. A subset of the test images from the dataset in [29]. Images 1–4 (first row, left to right): SpheronNapaValley_oC5D, MtTamWest_o281, Montreal_float_o935, and dani_synagogue_o367. Images 5–7 (second row, left to right): rosette_oC92, rend11_o972, and rend08_o0AF. Image 8 (first from the right): memorial_o876. Image 9 (second from the right): bigFogMap_oDAA. All images were processed using the proposed method.
Figure 13. Overall comparison of different methods using the test images in [29]. (a) Result of FSITMr_TMQI. (b) Result of FSITMg_TMQI. (c) Result of FSITMb_TMQI.
Table 1. List of the 33 test images from the dataset in [29] and their dynamic ranges (D).
| No. | Name | D | No. | Name | D | No. | Name | D |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Apartment_float_o15C | 4.7 | 12 | StillLife_o7C1 | 6.1 | 23 | rend04 | 4.5 |
| 2 | AtriumNight_oA9D | 4.1 | 13 | Tree_oAC1 | 4.4 | 24 | rend05_o87A | 3.3 |
| 3 | Desk_oBA2 | 5.2 | 14 | bigFogMap_oDAA | 3.6 | 25 | rend06_oB1D | 3.6 |
| 4 | Display1000_float_o446 | 3.4 | 15 | dani_belgium_oC65 | 4.1 | 26 | rend07 | 8.9 |
| 5 | Montreal_float_o935 | 3.1 | 16 | dani_cathedral_oBBC | 4.1 | 27 | rend08_o0AF | 3.7 |
| 6 | MtTamWest_o281 | 3.4 | 17 | dani_synagogue_o367 | 2.0 | 28 | rend09_o2F3 | 3.9 |
| 7 | Spheron3 | 5.8 | 18 | memorial_o876 | 4.8 | 29 | rend10_oF1C | 5.0 |
| 8 | SpheronNice | 4.7 | 19 | nave | 6.0 | 30 | rend11_o972 | 4.1 |
| 9 | SpheronPriceWestern | 2.8 | 20 | rend01_oBA3 | 3.0 | 31 | rend12 | 8.9 |
| 10 | SpheronNapaValley_oC5D | 3.2 | 21 | rend02_oC95 | 4.1 | 32 | rend13_o7B0 | 4.1 |
| 11 | SpheronSiggraph2001_oF1E | 4.5 | 22 | rend03_oB12 | 3.2 | 33 | rosette_oC92 | 4.4 |
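The dynamic range D in Table 1 is expressed in orders of magnitude. The paper's exact measurement protocol is not restated here; as a loose illustration only, and assuming D is the log10 ratio of the largest to the smallest usable luminance value (in the spirit of Ward's discussion of dynamic range [27]), a minimal sketch could look as follows. The function name and the epsilon threshold are illustrative assumptions, not taken from the authors' code.

```python
import numpy as np

def dynamic_range_orders(luminance, eps=1e-6):
    """Illustrative estimate only: span of an HDR luminance map in orders of
    magnitude, i.e., log10 of the max-to-min ratio of non-negligible values."""
    lum = np.asarray(luminance, dtype=np.float64)
    lum = lum[lum > eps]          # drop zero / near-zero pixels before the ratio
    return float(np.log10(lum.max() / lum.min()))
```

A more robust estimate would typically also exclude a small percentile of the darkest and brightest pixels before taking the ratio.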
Table 2. TMQI-S scores of the images from the dataset in [29] and the total number of highest scores for each method.
| Image | Khan et al. [9] | Gu et al. [11] | Gao et al. [28] | Ok et al. [23] | Yang et al. [21] | Our Method |
| --- | --- | --- | --- | --- | --- | --- |
| Apartment_float_o15C | 0.8631 | 0.8132 | 0.7151 | 0.8543 | 0.6364 | 0.8850 |
| AtriumNight_oA9D | 0.9058 | 0.8809 | 0.8726 | 0.8982 | 0.8928 | 0.8972 |
| Desk_oBA2 | 0.8727 | 0.8526 | 0.8677 | 0.8700 | 0.8146 | 0.8885 |
| Display1000_float_o446 | 0.8676 | 0.8643 | 0.8325 | 0.8626 | 0.8375 | 0.8925 |
| Montreal_float_o935 | 0.8344 | 0.7532 | 0.7163 | 0.8416 | 0.5951 | 0.8297 |
| MtTamWest_o281 | 0.9275 | 0.8758 | 0.8228 | 0.8900 | 0.7278 | 0.9231 |
| Spheron3 | 0.8707 | 0.7543 | 0.7658 | 0.8160 | 0.6619 | 0.8144 |
| SpheronNice | 0.7029 | 0.6892 | 0.7090 | 0.7380 | 0.5530 | 0.8004 |
| SpheronPriceWestern | 0.8226 | 0.7663 | 0.9204 | 0.8088 | 0.6084 | 0.8321 |
| SpheronNapaValley_oC5D | 0.9228 | 0.8802 | 0.6868 | 0.9185 | 0.9226 | 0.9437 |
| SpheronSiggraph2001_oF1E | 0.8086 | 0.7950 | 0.6970 | 0.8329 | 0.6764 | 0.8251 |
| StillLife_o7C1 | 0.7907 | 0.6368 | 0.7063 | 0.7636 | 0.6164 | 0.8093 |
| Tree_oAC1 | 0.8760 | 0.7691 | 0.7115 | 0.8604 | 0.8512 | 0.9044 |
| bigFogMap_oDAA | 0.9391 | 0.8468 | 0.9290 | 0.9354 | 0.9058 | 0.9117 |
| dani_belgium_oC65 | 0.8974 | 0.8571 | 0.8381 | 0.8763 | 0.8065 | 0.8931 |
| dani_cathedral_oBBC | 0.8786 | 0.8775 | 0.8105 | 0.8963 | 0.9014 | 0.9098 |
| dani_synagogue_o367 | 0.9735 | 0.8617 | 0.7212 | 0.9677 | 0.4546 | 0.9263 |
| memorial_o876 | 0.8742 | 0.8575 | 0.8565 | 0.8709 | 0.9061 | 0.8949 |
| nave | 0.8678 | 0.8276 | 0.7522 | 0.8498 | 0.8632 | 0.8554 |
| rend01_oBA3 | 0.8030 | 0.7738 | 0.7610 | 0.7966 | 0.7197 | 0.7880 |
| rend02_oC95 | 0.8930 | 0.8569 | 0.8621 | 0.8772 | 0.8021 | 0.8961 |
| rend03_oB12 | 0.8624 | 0.8275 | 0.7643 | 0.8561 | 0.8165 | 0.8443 |
| rend04 | 0.8763 | 0.8177 | 0.8461 | 0.8619 | 0.8489 | 0.8782 |
| rend05_o87A | 0.8709 | 0.7872 | 0.7453 | 0.8666 | 0.7833 | 0.8730 |
| rend06_oB1D | 0.9303 | 0.9097 | 0.7550 | 0.9405 | 0.9158 | 0.9210 |
| rend07 | 0.7663 | 0.7567 | 0.6885 | 0.7513 | 0.7883 | 0.7725 |
| rend08_o0AF | 0.9016 | 0.8580 | 0.8294 | 0.8799 | 0.7656 | 0.9032 |
| rend09_o2F3 | 0.9246 | 0.8594 | 0.7642 | 0.9138 | 0.7668 | 0.8830 |
| rend10_oF1C | 0.7600 | 0.7147 | 0.6884 | 0.7459 | 0.6070 | 0.7914 |
| rend11_o972 | 0.8577 | 0.8156 | 0.7164 | 0.8594 | 0.7961 | 0.8814 |
| rend12 | 0.5836 | 0.6174 | 0.5058 | 0.6174 | 0.5517 | 0.6533 |
| rend13_o7B0 | 0.8426 | 0.7045 | 0.5385 | 0.7790 | 0.5367 | 0.7138 |
| rosette_oC92 | 0.8710 | 0.8782 | 0.7525 | 0.8754 | 0.8743 | 0.8898 |
| Total number of highest scores | 11 | 0 | 1 | 3 | 2 | 16 |
Green (and Yellow) numbers indicate the best (and the second-best) performing methods for each row, respectively.
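The last row of Tables 2–4 is a simple tally of how many of the 33 test images each method scores highest on. A minimal sketch of that tally is shown below; the method labels and the layout of the `scores` array are illustrative assumptions, not the authors' evaluation code.

```python
import numpy as np

# Illustrative method labels matching the table columns (assumed ordering).
METHODS = ["Khan et al. [9]", "Gu et al. [11]", "Gao et al. [28]",
           "Ok et al. [23]", "Yang et al. [21]", "Our Method"]

def count_highest_scores(scores):
    """scores: array of shape (n_images, n_methods) holding one metric
    (e.g., TMQI-S) per image and method. Returns the number of images on
    which each method scores highest; ties go to the first method listed."""
    scores = np.asarray(scores, dtype=np.float64)
    winners = scores.argmax(axis=1)            # index of the best method per image
    return {name: int(np.sum(winners == i)) for i, name in enumerate(METHODS)}
```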
Table 3. TMQI-N scores of the images from the dataset in [29] and the total number of highest scores for each method.
| Image | Khan et al. [9] | Gu et al. [11] | Gao et al. [28] | Ok et al. [23] | Yang et al. [21] | Our Method |
| --- | --- | --- | --- | --- | --- | --- |
| Apartment_float_o15C | 0.1924 | 0.5574 | 0.3266 | 0.2325 | 0.0057 | 0.5819 |
| AtriumNight_oA9D | 0.9231 | 0.7408 | 0.9952 | 0.7588 | 0.4333 | 0.8941 |
| Desk_oBA2 | 0.9384 | 0.2219 | 0.3414 | 0.9309 | 0.4161 | 0.8246 |
| Display1000_float_o446 | 0.9716 | 0.4511 | 0.5147 | 0.8960 | 0.6954 | 0.6767 |
| Montreal_float_o935 | 0.6796 | 0.5863 | 0.4827 | 0.8802 | 0.0239 | 0.7591 |
| MtTamWest_o281 | 0.4447 | 0.6260 | 0.7823 | 0.8542 | 0.3297 | 0.9689 |
| Spheron3 | 0.3273 | 0.3470 | 0.2161 | 0.4647 | 0.0439 | 0.6419 |
| SpheronNice | 0.2154 | 0.1818 | 0.3802 | 0.3833 | 0.0275 | 0.7614 |
| SpheronPriceWestern | 0.2384 | 0.2410 | 0.6440 | 0.3327 | 0.0479 | 0.8732 |
| SpheronNapaValley_oC5D | 0.9703 | 0.3161 | 0.2192 | 0.8007 | 0.9722 | 0.3494 |
| SpheronSiggraph2001_oF1E | 0.2263 | 0.7610 | 0.3065 | 0.3685 | 0.0315 | 0.3866 |
| StillLife_o7C1 | 0.6606 | 0.5661 | 0.2309 | 0.7384 | 0.9072 | 0.5474 |
| Tree_oAC1 | 0.9983 | 0.3090 | 0.7160 | 0.9311 | 0.6864 | 0.8580 |
| bigFogMap_oDAA | 0.5713 | 0.5591 | 0.5973 | 0.8172 | 0.0593 | 0.1685 |
| dani_belgium_oC65 | 0.9092 | 0.8164 | 0.4998 | 0.8853 | 0.3094 | 0.9810 |
| dani_cathedral_oBBC | 0.8907 | 0.4703 | 0.2326 | 0.9974 | 0.7204 | 0.6609 |
| dani_synagogue_o367 | 0.6486 | 0.3386 | 0.7619 | 0.7761 | 0.1128 | 0.5031 |
| memorial_o876 | 0.8038 | 0.2666 | 0.1767 | 0.5786 | 0.2629 | 0.4157 |
| nave | 0.8101 | 0.9358 | 0.1432 | 0.8677 | 0.2235 | 0.9027 |
| rend01_oBA3 | 0.9663 | 0.0343 | 0.3516 | 0.8414 | 0.1096 | 0.6820 |
| rend02_oC95 | 0.8984 | 0.7131 | 0.2555 | 0.9860 | 0.2098 | 0.7876 |
| rend03_oB12 | 0.9457 | 0.7657 | 0.5551 | 0.9428 | 0.1120 | 0.4781 |
| rend04 | 0.9364 | 0.7112 | 0.4196 | 0.8794 | 0.1153 | 0.2038 |
| rend05_o87A | 0.5791 | 0.9096 | 0.4179 | 0.6624 | 0.3368 | 0.7149 |
| rend06_oB1D | 0.3676 | 0.8107 | 0.0208 | 0.5004 | 0.0254 | 0.8606 |
| rend07 | 0.9799 | 0.8005 | 0.1296 | 0.8766 | 0.3345 | 0.9331 |
| rend08_o0AF | 0.8867 | 0.9878 | 0.8208 | 0.9087 | 0.4212 | 0.9290 |
| rend09_o2F3 | 0.7016 | 0.2394 | 0.2692 | 0.7260 | 0.0892 | 0.8307 |
| rend10_oF1C | 0.9318 | 0.9896 | 0.4474 | 0.9447 | 0.1939 | 0.9357 |
| rend11_o972 | 0.8177 | 0.7983 | 0.6267 | 0.7643 | 0.4156 | 0.8490 |
| rend12 | 0.0845 | 0.6519 | 0.0758 | 0.1048 | 0.0364 | 0.4317 |
| rend13_o7B0 | 0.1980 | 0.2979 | 0.1393 | 0.2174 | 0.0635 | 0.2142 |
| rosette_oC92 | 0.8591 | 0.8581 | 0.1490 | 0.9440 | 0.7351 | 0.5430 |
| Total number of highest scores | 8 | 7 | 1 | 6 | 2 | 9 |
Green (and Yellow) numbers indicate the best (and the second-best) performing methods for each row, respectively.
Table 4. TMQI-Q scores of the images from the dataset in [29] and the total number of highest scores for each method.
| Image | Khan et al. [9] | Gu et al. [11] | Gao et al. [28] | Ok et al. [23] | Yang et al. [21] | Our Method |
| --- | --- | --- | --- | --- | --- | --- |
| Apartment_float_o15C | 0.8279 | 0.8837 | 0.8133 | 0.8344 | 0.7033 | 0.9074 |
| AtriumNight_oA9D | 0.9653 | 0.9316 | 0.9667 | 0.9389 | 0.8839 | 0.9588 |
| Desk_oBA2 | 0.9587 | 0.8316 | 0.8601 | 0.9569 | 0.8595 | 0.9463 |
| Display1000_float_o446 | 0.9621 | 0.8795 | 0.8818 | 0.9499 | 0.9127 | 0.9246 |
| Montreal_float_o935 | 0.9094 | 0.8711 | 0.8424 | 0.9418 | 0.6981 | 0.9204 |
| MtTamWest_o281 | 0.8950 | 0.9121 | 0.9220 | 0.9511 | 0.8178 | 0.9763 |
| Spheron3 | 0.8582 | 0.8292 | 0.8058 | 0.8685 | 0.7283 | 0.8978 |
| SpheronNice | 0.8269 | 0.8113 | 0.8217 | 0.8422 | 0.7117 | 0.9125 |
| SpheronPriceWestern | 0.9764 | 0.8585 | 0.9267 | 0.9505 | 0.9766 | 0.9382 |
| SpheronNapaValley_oC5D | 0.7866 | 0.7747 | 0.7824 | 0.8311 | 0.6845 | 0.8815 |
| SpheronSiggraph2001_oF1E | 0.8203 | 0.9109 | 0.8038 | 0.8558 | 0.7284 | 0.8570 |
| StillLife_o7C1 | 0.8941 | 0.8311 | 0.7260 | 0.8984 | 0.8770 | 0.8809 |
| Tree_oAC1 | 0.9681 | 0.8261 | 0.8792 | 0.9543 | 0.9151 | 0.9554 |
| bigFogMap_oDAA | 0.9197 | 0.8933 | 0.9214 | 0.9574 | 0.8042 | 0.8352 |
| dani_belgium_oC65 | 0.9610 | 0.9366 | 0.8808 | 0.9520 | 0.8370 | 0.9702 |
| dani_cathedral_oBBC | 0.9534 | 0.8864 | 0.8222 | 0.9733 | 0.9338 | 0.9267 |
| dani_synagogue_o367 | 0.9409 | 0.8580 | 0.8892 | 0.9593 | 0.6725 | 0.9049 |
| memorial_o876 | 0.9393 | 0.8424 | 0.8225 | 0.9031 | 0.8546 | 0.8813 |
| nave | 0.9386 | 0.9460 | 0.7848 | 0.9422 | 0.8348 | 0.9489 |
| rend01_oBA3 | 0.9434 | 0.7592 | 0.8320 | 0.9235 | 0.7663 | 0.8967 |
| rend02_oC95 | 0.9583 | 0.9208 | 0.8414 | 0.9667 | 0.8149 | 0.9427 |
| rend03_oB12 | 0.9570 | 0.9208 | 0.8692 | 0.9548 | 0.7954 | 0.8788 |
| rend04 | 0.9594 | 0.9097 | 0.8689 | 0.9472 | 0.8052 | 0.8345 |
| rend05_o87A | 0.9032 | 0.9308 | 0.8397 | 0.9155 | 0.8357 | 0.9254 |
| rend06_oB1D | 0.8816 | 0.9498 | 0.7482 | 0.9081 | 0.7947 | 0.9601 |
| rend07 | 0.9348 | 0.9058 | 0.7617 | 0.9154 | 0.8367 | 0.9299 |
| rend08_o0AF | 0.9589 | 0.9618 | 0.9297 | 0.9563 | 0.8463 | 0.9654 |
| rend09_o2F3 | 0.9370 | 0.8372 | 0.8166 | 0.9379 | 0.7748 | 0.9457 |
| rend10_oF1C | 0.9261 | 0.9206 | 0.8275 | 0.9237 | 0.7503 | 0.9357 |
| rend11_o972 | 0.9370 | 0.9224 | 0.8665 | 0.9294 | 0.8541 | 0.9480 |
| rend12 | 0.7145 | 0.8385 | 0.6829 | 0.7319 | 0.6874 | 0.8134 |
| rend13_o7B0 | 0.8236 | 0.8044 | 0.7127 | 0.8099 | 0.6910 | 0.7897 |
| rosette_oC92 | 0.9467 | 0.9485 | 0.7863 | 0.9602 | 0.9289 | 0.9022 |
| Total number of highest scores | 9 | 3 | 1 | 7 | 1 | 12 |
Green (and Yellow) numbers indicate the best (and the second-best) performing methods for each row, respectively.
Table 5. Average scores of the different objective quality metrics for the dataset in [29].
| Metric | Khan et al. [9] | Gu et al. [11] | Gao et al. [28] | Ok et al. [23] | Yang et al. [21] | Our Method |
| --- | --- | --- | --- | --- | --- | --- |
| TMQI-Q | 0.912 | 0.880 | 0.834 | 0.916 | 0.807 | 0.912 |
| TMQI-S | 0.856 | 0.807 | 0.762 | 0.848 | 0.752 | 0.858 |
| TMQI-N | 0.684 | 0.572 | 0.401 | 0.721 | 0.288 | 0.671 |
| BRISQUE | 25.230 | 25.420 | 24.283 | 26.200 | 26.050 | 24.100 |
| BTMQI | 3.646 | 3.656 | 5.010 | 4.249 | 4.978 | 3.202 |
| FSITMr_TMQI | 0.860 | 0.840 | 0.820 | 0.860 | 0.806 | 0.864 |
| FSITMg_TMQI | 0.872 | 0.848 | 0.813 | 0.867 | 0.816 | 0.873 |
| FSITMb_TMQI | 0.863 | 0.847 | 0.819 | 0.861 | 0.807 | 0.865 |
Green (and Yellow) numbers indicate the best (and the second-best) performing methods for each row, respectively.
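Table 5 reports, for each metric, the mean score over all 33 test images. A minimal sketch of that aggregation is given below; the dictionary layout and the rounding to three decimals are assumptions for illustration, not the authors' code.

```python
import numpy as np

def average_scores(per_image_scores):
    """per_image_scores: dict mapping a metric name (e.g., "TMQI-Q") to an
    (n_images, n_methods) array of per-image values. Returns the per-method
    mean of each metric, rounded to three decimals as reported in Table 5."""
    return {metric: np.round(np.asarray(values, dtype=np.float64).mean(axis=0), 3)
            for metric, values in per_image_scores.items()}
```

Note that TMQI, FSITM, and their combinations are higher-is-better, whereas BRISQUE and BTMQI are lower-is-better; the averaging itself is identical in both cases.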
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

