Article

Color Consistency and Local Contrast Enhancement for a Mobile Image-Based Change Detection System

Advanced Visionics and Processing Group, French-German Research Institute of Saint-Louis, 5 rue du Général Cassagnou, 68300 Saint-Louis, France
*
Author to whom correspondence should be addressed.
J. Imaging 2017, 3(3), 35; https://doi.org/10.3390/jimaging3030035
Submission received: 30 June 2017 / Revised: 31 July 2017 / Accepted: 8 August 2017 / Published: 23 August 2017
(This article belongs to the Special Issue Color Image Processing)
Figure 1: Two images of the same scene acquired at different time points under different illumination conditions.

Figure 2: Intensity transfer function for the Retinex model using a gain/offset function with $A = 127.5$ and two contrast gains ($C_1$ and $C_2$). For $C_1 = C_2$, the transfer function corresponds to the gain/offset function with a single gain value.

Figure 3: Processed images using (a) the original Retinex [2] with a single gain and (b) Retinex [2] extended with the proposed two gains. Enlarged sections for the marked regions are shown at the bottom. (a) $C = 120$; (b) $C_1 = 80$, $C_2 = 120$.

Figure 4: Example of preventing color inversion issues for (a) an image containing saturated colors, (b) using our approach, compared to (c) using the original scheme [7] with the gain/offset function applied after the color processing function. (Image credit: J. L. Lisani, CC BY.)

Figure 5: Vehicle-mounted color camera system.

Figure 6: (a) Original and processed images applying (b–d) previous Retinex-based approaches and (e) our approach. Enlarged sections for the marked regions are shown on the right. (a) Unprocessed image; (b) Retinex [2]; (c) Gray World [15] extension; (d) hue-preserving Retinex [24]; (e) new combined Retinex/Gray World.

Figure 7: (a) Original and processed images applying (b) the Gray World extension and (c) our approach for a scene acquired at two different time points. The second image of each example has been registered w.r.t. the first image (top), and the two images have been combined using a checkerboard pattern (bottom). (a) Unprocessed images; (b) Gray World extension; (c) new combined Retinex/Gray World.

Figure 8: Mean RGB angular error over time for corresponding images of two image sequences for our approach and three previous Retinex-based approaches.

Figure 9: (a) Original and (b) processed images applying our approach for a scene with a change. The second image has been registered w.r.t. the first image. On the right, the magnitudes of the CIELAB color differences between the two images are visualized with the contour of the segmented change. (a) Original images; (b) new combined Retinex/Gray World.

Abstract

Mobile change detection systems allow for acquiring image sequences on a route of interest at different time points and display changes on a monitor. For the display of color images, a processing approach is required to enhance details, to reduce lightness/color inconsistencies along each image sequence as well as between corresponding image sequences due to the different illumination conditions, and to determine colors with natural appearance. We have developed a real-time local/global color processing approach for local contrast enhancement and lightness/color consistency, which processes images of the different sequences independently. Our approach combines the center/surround Retinex model and the Gray World hypothesis using a nonlinear color processing function. We propose an extended gain/offset scheme for Retinex to reduce the halo effect on shadow boundaries, and we employ stacked integral images (SII) for efficient Gaussian convolution. By applying the gain/offset function before the color processing function, we avoid color inversion issues, compared to the original scheme. Our combined Retinex/Gray World approach has been successfully applied to pairs of image sequences acquired on outdoor routes for change detection, and an experimental comparison with previous Retinex-based approaches has been carried out.

1. Introduction

Detecting changes in outdoor scenes is of high importance for video surveillance and security applications, allowing for identifying suspicious objects or threats such as IEDs (improvised explosive devices) in route clearance operations or detecting intrusions in secured areas. Image-based change detection systems on mobile platforms such as vehicles, unmanned aerial vehicles (UAVs), or unmanned ground vehicles (UGVs), can improve the detection of changes on a route of interest between a reference and the current time point. Based on the feature correspondences between the two time points, frames of a reference and a current image sequence that correspond to the same scene are aligned using a registration approach and can be alternately displayed on a monitor [1]. The alternate display of reference and current frames allows for visualizing changes, which can be evaluated by a human operator.
For the alternate display of color images in a change detection system, it is required to enhance local contrast to improve the visibility in image areas with low contrast (e.g., shadows), and to compensate lightness and color inconsistencies due to different illumination conditions (e.g., see Figure 1) in consecutive frames of an image sequence (i.e., achieving intra-sequence consistency), as well as in corresponding frames of a reference and a current image sequence that depict the same scene (i.e., achieving inter-sequence consistency). In particular, achieving inter-sequence lightness/color consistency allows unchanged areas of corresponding frames of a reference and a current image sequence to appear constant during the alternate display, whereas changes blink and are thus better detectable by the observer. In addition, natural rendition of the computed colors is required to facilitate scene understanding. Color rendition describes the fidelity of colors w.r.t. the scene and human color perception [2]. However, simultaneously achieving intra-sequence and inter-sequence color consistency as well as color rendition in real time is a challenging task. Color constancy approaches compensate the colors of a light source and can be used to achieve color consistency for images acquired under different illumination conditions. Most color constancy approaches (e.g., pre-calibrated, Gamut-based, statistics-based, or machine learning approaches) are either computationally expensive, unstable, or require sensor calibration [3]. On the other hand, certain Retinex-based approaches that have previously been used for lightness/color constancy, as well as for image enhancement, are automatic, robust, and suitable for real-time applications.
Retinex was proposed by Land as a model of the human vision system in the context of lightness and color perception [4]. The Retinex model was initially based on a random walk concept to compute the global maximum estimates of each chromatic channel and to determine the relative lightness of surfaces of an image [5]. In [6], Land proposed determining the relative lightness using a local center/surround approach that computes the ratio of the intensity of a pixel and the average intensity of the surrounding. Jobson et al. [2] extended the center/surround Retinex with a surround Gaussian function and a log operation, achieving local contrast enhancement and lightness/color constancy simultaneously; however, the color rendition was often poor. To improve color rendition, Jobson et al. [7] proposed a nonlinear color processing function to partially restore the original colors. In [8], Barnard and Funt presented a luminance-based Retinex model to preserve the chromaticity of the original image. However, the use of the original color information in [7,8] affects color constancy. Recent extensions of the Retinex model have been mainly proposed for color enhancement (e.g., [9,10]). For a detailed discussion on Retinex models, we refer to, for example, [11,12]. Moreover, variational approaches related to the original Retinex formulation have been proposed for local contrast enhancement and color enhancement or color correction (e.g., [13,14]).
In this work, we have developed an automatic real-time local contrast enhancement and lightness/color consistency approach for the display of images in a mobile change detection system. Our approach combines the center/surround Retinex model [2] and the Gray World hypothesis [15] using a nonlinear color processing function previously used for color restoration [7]. Frames of the different image sequences are processed independently, and thus an image with reference colors is not required. Our approach takes advantage of the combination of local and global color processing: Retinex allows for locally achieving lightness/color consistency in unchanged areas of corresponding images regardless of changes in the depicted scenes, and the global Gray World allows for improving color rendition compared to Retinex. We have extended the gain/offset function used by the center/surround Retinex to reduce the halo effect on shadow boundaries. In our approach, we have applied the gain/offset function before the color processing function, avoiding color inversion issues compared to the original scheme in [7]. For Gaussian convolution, we employed stacked integral images (SII) [16] to reduce computation times and to allow the display of more than 20 frames per second in the change detection system. A previous version of our approach has been presented in [17]. We have performed an experimental evaluation of our approach based on image sequences of outdoor routes acquired by a vehicle-mounted color camera. In addition, a comparison with previous Retinex-based approaches in the context of color rendition, inter-sequence color consistency, and change detection has been conducted.

2. Methods

In this section, we first describe the employed center/surround Retinex model and introduce an extended gain/offset function. Then, we describe the Gray World hypothesis and present the nonlinear color processing function combining Retinex and Gray World. Finally, we describe the stacked integral images (SII) scheme for efficient Gaussian convolution.

2.1. Center/Surround Retinex Using an Extended Gain/Offset Function

Our image processing approach is based on the single-scale center/surround Retinex model [2], which has previously been used for local image enhancement and lightness/color constancy, and processes each channel of an RGB image independently. For each channel $I_i$ ($i \in \{R, G, B\}$) and for each position $\mathbf{x} = (x, y)$ of an image, a Retinex value $R_i(\mathbf{x}, c)$ is computed:

$$R_i(\mathbf{x}, c) = \log I_i(\mathbf{x}) - \log\left[F(\mathbf{x}, c) * I_i(\mathbf{x})\right],$$ (1)

where $I_i(\mathbf{x})$ denotes the intensity of channel $I_i$ at position $\mathbf{x}$, $*$ is the symbol for spatial convolution, and $F(\mathbf{x}, c)$ is the surround function defined by a Gaussian function:

$$F(\mathbf{x}, c) = K\, e^{-(x^2 + y^2)/c^2},$$ (2)

where $c$ denotes the scale of the Gaussian function and $K$ is a normalization constant such that $\iint F(\mathbf{x}, c)\, d\mathbf{x} = 1$.
The computed Retinex value describes the logarithm of the ratio between the intensity of a pixel and the local mean intensity defined by the surround function, since Equation (1) can be rewritten as $R_i(\mathbf{x}, c) = \log\frac{I_i(\mathbf{x})}{F(\mathbf{x}, c) * I_i(\mathbf{x})}$. The intensity ratios allow for locally enhancing details, in particular in image areas with uniform intensities and low contrast (e.g., shadows). Simultaneously, the intensity ratios describe the reflectance of surfaces and can be used to compensate for the illumination in a scene.
The scale $c$ of the surround function in Equation (2) represents a trade-off between the level of local contrast enhancement and color rendition [2]. Decreasing $c$ enhances details in an image more strongly, whereas increasing $c$ improves color rendition. The value of $c$ cannot be determined automatically and should be chosen based on the image data and the application. For larger image areas with uniform colors, Retinex computes colors with low saturation (i.e., colors close to gray), regardless of the colors in the original image.
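As an illustration, the single-scale Retinex of Equations (1) and (2) can be sketched with NumPy as follows. This is a minimal sketch, not the authors' implementation: the function names (`gaussian_surround`, `retinex`) and the small offset `eps` used to avoid `log(0)` are our own choices, and the naive direct convolution stands in for the efficient SII scheme described later.

```python
import numpy as np

def gaussian_surround(channel, c):
    """Convolve one channel with the normalized surround function
    F(x, c) = K * exp(-(x^2 + y^2) / c^2), Eq. (2), by naive direct
    convolution (a real-time system would use a faster scheme)."""
    radius = int(3 * c)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = np.exp(-(xs**2 + ys**2) / c**2)
    kernel /= kernel.sum()  # K chosen so the kernel sums to 1
    padded = np.pad(channel, radius, mode='reflect')
    out = np.empty_like(channel, dtype=float)
    h, w = channel.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = np.sum(window * kernel)
    return out

def retinex(channel, c, eps=1.0):
    """Single-scale center/surround Retinex, Eq. (1):
    R_i(x, c) = log I_i(x) - log[F(x, c) * I_i(x)]."""
    channel = channel.astype(float) + eps  # avoid log(0); eps is our choice
    surround = gaussian_surround(channel, c)
    return np.log(channel) - np.log(surround)
```

On a uniform image the ratio of center to surround is 1 everywhere, so the Retinex output is zero; a pixel brighter than its surround yields a positive value.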
To map the computed Retinex values from the logarithmic domain to output intensity values $I_i^{Ret}(\mathbf{x})$, we employ a gain/offset function [2]:

$$I_i^{Ret}(\mathbf{x}) = A + C\, R_i(\mathbf{x}, c),$$ (3)

where $A$ denotes the offset and $C$ is the gain that regulates the contrast of the output image. For example, increasing $C$ increases the contrast; however, the halo effect is increased as well. The function in Equation (3), in conjunction with Equation (1), describes a transfer function that maps the local intensity ratios $\frac{I_i(\mathbf{x})}{F(\mathbf{x}, c) * I_i(\mathbf{x})}$ to intensities of the output image (see Figure 2). Note that negative values in Equation (3) are clipped to zero, and values larger than 255 are clipped to 255 (for 8-bit images).
In our approach, we use two different gain values $C_1$ and $C_2$. For each pixel, we choose between the two gain values based on the value of the intensity ratio $\frac{I_i(\mathbf{x})}{F(\mathbf{x}, c) * I_i(\mathbf{x})}$:

$$C = \begin{cases} C_1, & \text{if } \frac{I_i(\mathbf{x})}{F(\mathbf{x}, c) * I_i(\mathbf{x})} \geq 1, \\ C_2, & \text{otherwise}. \end{cases}$$ (4)

The use of two different gain values allows for regulating the steepness of the upper and the lower part of the intensity transfer function independently. A lower value for $C_1$ compared to $C_2$ reduces the halo effect on shadow boundaries (cf. Figure 3a,b), while retaining the number of bright pixels and thus avoiding a decrease of contrast. Note that the transfer function in Equation (3) is continuous at $\frac{I_i(\mathbf{x})}{F(\mathbf{x}, c) * I_i(\mathbf{x})} = 1$, since $I_i^{Ret}(\mathbf{x}) = A$ for any value of $C$.
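The two-gain mapping of Equations (3) and (4) can be sketched as below, assuming the gain/offset stage receives the Retinex values $R$ directly (so the ratio test $\frac{I_i}{F * I_i} \geq 1$ becomes $R \geq 0$). The function name and the default parameter values ($A = 127.5$, $C_1 = 80$, $C_2 = 120$, taken from the paper's figures) are illustrative.

```python
import numpy as np

def gain_offset_two_gains(R, A=127.5, C1=80.0, C2=120.0):
    """Map Retinex values to 8-bit intensities, Eqs. (3)-(4):
    I^Ret = A + C * R, with C = C1 where the local intensity ratio
    is >= 1 (equivalently R >= 0) and C = C2 otherwise."""
    C = np.where(R >= 0.0, C1, C2)     # pick the gain per pixel
    out = A + C * R
    return np.clip(out, 0.0, 255.0)    # negative -> 0, > 255 -> 255
```

Note that the mapping is continuous at $R = 0$, where the output equals $A$ for either gain, which is why the two branches can use different slopes without introducing a discontinuity.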

2.2. Gray World

In our approach, we also employ the Gray World hypothesis [15]. This global color constancy method is based on the assumption that the average reflectance in a scene is achromatic, and thus the average intensity values of the three RGB channels should be equal (gray) under a neutral illuminant (light source). Under this assumption, any deviation of the average colors of an image from gray is due to the colors of the illuminant. This deviation can be used to compute the colors $I_i^{GW}(\mathbf{x})$ of a scene under a neutral illuminant:

$$I_i^{GW}(\mathbf{x}) = \frac{I_{gray}}{\bar{I}_i}\, I_i(\mathbf{x}),$$ (5)

where $I_{gray}$ denotes the defined gray value and $\bar{I}_i$ is the average intensity of each channel.
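Equation (5) amounts to a per-channel rescaling. A minimal sketch (function name and the default $I_{gray} = 127.5$ are our own assumptions):

```python
import numpy as np

def gray_world(image, I_gray=127.5):
    """Gray World color correction, Eq. (5): scale each RGB channel
    so that its average intensity equals the defined gray value."""
    image = image.astype(float)
    means = image.reshape(-1, 3).mean(axis=0)  # per-channel averages
    return image * (I_gray / means)            # broadcast over channels
```

After the correction, each channel's mean equals $I_{gray}$, so a color cast that shifts the channel averages away from gray is removed globally.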

2.3. Color Processing Function Combining Retinex and Gray World

To combine the Retinex model with the Gray World hypothesis, we employ a nonlinear color processing function previously used for color restoration [7]. This function maps the colors computed using Gray World (Equation (5)) onto the Retinex result. For each channel and for each image position, we compute a modified Retinex value $R_i^{GW}(\mathbf{x})$:

$$R_i^{GW}(\mathbf{x}) = C_i^{GW}(\mathbf{x})\, I_i^{Ret}(\mathbf{x}),$$ (6)

where $I_i^{Ret}(\mathbf{x})$ denotes the computed Retinex value after applying the gain/offset function in Equation (3), and $C_i^{GW}(\mathbf{x})$ is a nonlinear color processing function defined by:

$$C_i^{GW}(\mathbf{x}) = \beta(\mathbf{x}) \left[ \log\!\left( \alpha\, I_i^{GW}(\mathbf{x}) \right) - \log \sum_{j \in \{R, G, B\}} I_j^{GW}(\mathbf{x}) \right],$$ (7)

where $\alpha$ ($\alpha > 1$) denotes the nonlinearity strength that controls the influence of Gray World, and $\beta(\mathbf{x})$ controls the brightness levels. Note that Equation (6) is not used for pixels with zero brightness, i.e., $\sum_i I_i^{Ret}(\mathbf{x}) = 0$, $i \in \{R, G, B\}$. We determine the value of $\beta(\mathbf{x})$ automatically for each pixel, such that the color processing function in Equation (6) preserves the brightness of the Retinex result in Equation (3), i.e., $\sum_i R_i^{GW}(\mathbf{x}) = \sum_i I_i^{Ret}(\mathbf{x})$:

$$\beta(\mathbf{x}) = \frac{\sum_i I_i^{Ret}(\mathbf{x})}{\sum_i \left[ \log\!\left( \alpha\, I_i^{GW}(\mathbf{x}) \right) - \log \sum_{j \in \{R, G, B\}} I_j^{GW}(\mathbf{x}) \right] I_i^{Ret}(\mathbf{x})}.$$ (8)
Our use of the color processing function in Equation (6) differs from the original scheme in [7] in three respects. First, we use the colors computed by Gray World instead of the original colors. Note that other global constancy approaches can also be used in Equation (7), for example, Shades of Gray [18] or Gray Edge [19]. Second, we compute the value of $\beta(\mathbf{x})$ in Equation (7) automatically for each pixel (via Equation (8)), instead of using a fixed value as in [7]. This guarantees that the color processing function in Equation (6) is brightness-preserving and only modifies the chromaticities compared to the result of the original Retinex [2] in Equation (3). Third, we apply the gain/offset function (Equation (3)) before the color processing function, in contrast to [7], where the order of the two operations is reversed (i.e., we use $I_i^{Ret}(\mathbf{x})$ in Equation (6) instead of $R_i(\mathbf{x}, c)$). We found that this prevents color inversion issues for images that contain saturated colors, compared to the original color processing scheme [7,20]. In Figure 4, we show an example of an image that contains pixels with saturated colors, for example, pixels of the blue balls with $I_R = 0$ (Figure 4a). For such pixels, the Retinex result in Equation (1) as well as the result of the color processing function in Equation (7) are negative. Following the original scheme and applying the color processing function directly to the Retinex result yields a positive value, which is mapped by the gain/offset function to a large value of $R_R^{GW}(\mathbf{x})$ (Figure 4c). In contrast, the gain/offset function in our approach maps the low negative Retinex values to negative image values in Equation (3), which are clipped to zero, and the subsequent application of the color processing function in Equation (6) yields a zero value $R_R^{GW}(\mathbf{x}) = 0$ (Figure 4b).
For image regions with homogeneous colors, the weighting in Equation (6) increases the color saturation compared to Retinex. The reason is that, for such pixels, the variation of the values of $C_i^{GW}(\mathbf{x})$ across the different $i \in \{R, G, B\}$ is larger than the variation of the respective values of $R_i(\mathbf{x}, c)$. Thus, the combination with Gray World and the exploitation of global image statistics allow for overcoming a main drawback of the center/surround Retinex, which uses local image information only.
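The combination in Equations (6)–(8) can be sketched as follows, given precomputed Retinex intensities (Equation (3)) and Gray World colors (Equation (5)) as H×W×3 arrays. This is a sketch under our own conventions: the function name, the small `eps` guarding the logarithms and the division, and the fallback for zero-brightness pixels are assumptions, not the authors' code.

```python
import numpy as np

def combine_retinex_gray_world(I_ret, I_gw, alpha=125.0, eps=1e-6):
    """Combine Retinex output with Gray World colors, Eqs. (6)-(8).
    beta is chosen per pixel so that the per-pixel brightness
    sum_i R_i^GW equals sum_i I_i^Ret (brightness preservation)."""
    I_gw = I_gw.astype(float) + eps  # avoid log(0)
    # bracketed term of Eq. (7): log(alpha * I_i^GW) - log(sum_j I_j^GW)
    weight = np.log(alpha * I_gw) - np.log(I_gw.sum(axis=-1, keepdims=True))
    denom = (weight * I_ret).sum(axis=-1, keepdims=True)
    brightness = I_ret.sum(axis=-1, keepdims=True)
    # Eq. (8): per-pixel beta, guarded against a vanishing denominator
    safe = np.where(np.abs(denom) > eps, denom, 1.0)
    beta = np.where(np.abs(denom) > eps, brightness / safe, 0.0)
    # Eq. (6); pixels with zero brightness are left untouched
    return np.where(brightness > 0, beta * weight * I_ret, I_ret)
```

By construction, summing the output over the three channels reproduces the Retinex brightness at every pixel, so only the chromaticities are modified.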

2.4. Stacked Integral Images (SII)

To determine the local mean intensity $F(\mathbf{x}, c) * I_i(\mathbf{x})$ in Equation (1), we employ stacked integral images (SII) [16]. This computationally efficient scheme for Gaussian convolution is based on an integral image, defined for each image position $\mathbf{x} = (x, y)$ as the cumulative sum of intensity values over all rows and columns from position $(0, 0)$ to position $(x, y)$ of an RGB channel:

$$II_i(x, y) = \sum_{x' \leq x,\; y' \leq y} I_i(x', y'),$$ (9)

where $II_i(x, y)$ denotes the computed integral image value for channel $I_i$. Constructing the integral image requires two additions per pixel. For a rectangular area $box$ defined by the corner positions $\mathbf{x}_1$, $\mathbf{x}_2$, $\mathbf{x}_3$, and $\mathbf{x}_4$, the sum $S_i(box)$ of intensities in $I_i$ can be computed using the integral image:

$$S_i(box) = II_i(\mathbf{x}_4) - II_i(\mathbf{x}_3) - II_i(\mathbf{x}_2) + II_i(\mathbf{x}_1),$$ (10)

requiring four pixel accesses and three additions/subtractions, regardless of the size of $box$. This significantly reduces computation times compared to iterating over all pixels in $box$ ($N_{box}$ pixel accesses and $N_{box} - 1$ additions, where $N_{box}$ denotes the number of pixels in $box$). The local mean intensity can be obtained by dividing $S_i(box)$ by $N_{box}$. Gaussian convolution using SII is based on the weighted sum of $K$ stacked boxes of different sizes [16]:

$$\bar{I}_i(\mathbf{x}, \sigma, K) = \sum_{k=1}^{K} w_k\, S_i(box_k),$$ (11)

where $\bar{I}_i(\mathbf{x}, \sigma, K)$ denotes the local mean intensity value of $I_i$ defined by a Gaussian kernel centered at $\mathbf{x}$ with standard deviation $\sigma$. Note that the weights $w_k$ and the size of each $box_k$ depend on $\sigma$ and on $K$. The local mean intensity $\bar{I}_i(\mathbf{x}, \sigma, K)$ in Equation (11) corresponds to the result of the spatial convolution $F(\mathbf{x}, c) * I_i(\mathbf{x})$ in Equation (1). SII yielded the lowest computation times among the different schemes for Gaussian convolution investigated in [21].
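The integral-image machinery of Equations (9)–(11) can be sketched as follows. The zero-padded border, the clamping of boxes at the image boundary, and the use of normalized box means with externally supplied `radii`/`weights` are our own simplifications; the actual radii and weights that approximate a given Gaussian come from [16] and are treated here as given.

```python
import numpy as np

def integral_image(channel):
    """Integral image, Eq. (9), with a leading zero row/column so that
    box sums near the origin need no special cases."""
    ii = np.zeros((channel.shape[0] + 1, channel.shape[1] + 1))
    ii[1:, 1:] = channel.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum over the inclusive box [y0..y1] x [x0..x1], Eq. (10):
    four accesses and three additions/subtractions, any box size."""
    return ii[y1 + 1, x1 + 1] - ii[y0, x1 + 1] - ii[y1 + 1, x0] + ii[y0, x0]

def stacked_box_mean(channel, radii, weights):
    """Weighted sum of centered box means in the spirit of Eq. (11);
    radii/weights approximating a Gaussian are assumed given."""
    ii = integral_image(channel)
    h, w = channel.shape
    out = np.zeros((h, w))
    for r, wk in zip(radii, weights):
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - r), min(h - 1, y + r)
                x0, x1 = max(0, x - r), min(w - 1, x + r)
                n = (y1 - y0 + 1) * (x1 - x0 + 1)  # clamped box area
                out[y, x] += wk * box_sum(ii, y0, x0, y1, x1) / n
    return out
```

With weights summing to one, a uniform image is reproduced exactly, which is a quick sanity check that the box bookkeeping is correct.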

3. Results

We have applied our combined Retinex/Gray World approach for color consistency and local contrast enhancement to color image sequences acquired on five outdoor itineraries using a vehicle-mounted camera system (Figure 5). For each itinerary, we recorded image sequences at two different time points under different illumination conditions. The five pairs of reference and current image sequences consist of 786 to 2400 corresponding frames. We aligned frames of the reference image sequences w.r.t. the corresponding frames of the current sequences using an affine homography-based registration approach [22]. For our combined Retinex/Gray World approach, the same parameter setting was used for all image data (see Table 1). We chose the values for $C_1$, $C_2$, and $\sigma$ taking into account the contrast in the computed images, the visibility within shadows, and the halo effect on shadow boundaries. Considering the computation times, we used $K = 3$, since increasing $K$ did not significantly improve the results. We used $\alpha = 125$ as suggested in [7], and the typical values for $A$ and $I_{gray}$. The computation time for processing images with a resolution of 1280 × 960 pixels on a workstation under Linux Ubuntu 14.04 with a Xeon E5-1650v3 CPU (3.5 GHz) is 35 ms using an OpenGL-based implementation, allowing the display of more than 20 frames per second in the change detection system. We investigated the performance of our approach w.r.t. color rendition, inter-sequence color consistency, and change detection. We also performed an experimental comparison with previous Retinex-based approaches: the original center/surround Retinex model [2], a Gray World [15] extension based on an intensity-based Retinex [23], and a hue-preserving intensity-based Retinex approach [24] (without shadow detection). We used the same values for the parameters of the previous Retinex-based approaches that are common with our approach, except for $C = 120$ in Equation (3).
The Gray World extension and the hue-preserving extension use an affine function [25] to map the colors computed by Gray World and the original colors, respectively, to the intensities computed by the intensity-based Retinex [23].

3.1. Color Rendition

First, we have evaluated the performance of our approach in the context of color rendition and the visibility in the processed images. In Figure 6, we show an example of processing an outdoor color image using our combined Retinex/Gray World approach as well as the three previous Retinex-based approaches. It can be seen that the original image (Figure 6a) contains large shadows with low local contrast, and thus visibility is poor. It can also be seen that the colors of the non-shadowed areas are strongly influenced by the warm colors of the light source. All Retinex-based approaches enhanced local contrast (Figure 6b–e), improving the visibility within shadows. Compared to the traditional gain/offset function ($C = 120$) used by the previous Retinex-based approaches, the extended gain/offset function ($C_1 = 80$, $C_2 = 120$) in our approach reduces the halo effect on the shadow boundaries. In Figure 6b, it can be seen that the original center/surround Retinex approach strongly compensated the color of the illuminant. However, color rendition is poor since the processed image appears grayish (low saturation), particularly in larger areas with homogeneous colors (e.g., the grass area), due to the local color processing. In contrast, the saturation is higher for smaller objects whose colors differ from the background, and thus the visibility of these objects is good (Figure 6b, right). The saturation of the processed image applying the Gray World extension is significantly increased (Figure 6c). However, colors in certain areas of the image that have been more strongly enhanced w.r.t. the original image (e.g., shadows) appear unnatural. The reason is that for images with spatially inhomogeneous illumination (e.g., images containing both shadowed and directly illuminated areas) or for images with an unbalanced color distribution (i.e., when the scene has a dominant color), the assumptions of the Gray World hypothesis do not hold.
The hue-preserving Retinex approach avoids unnatural colors by preserving the original color information; however, the color of the illuminant is partially preserved as well (Figure 6d). In addition, due to the global decrease of saturation, the visibility of objects of different color is relatively low (Figure 6d, right). The result of applying our combined Retinex/Gray World approach is shown in Figure 6e. It can be seen that the colors of the illuminant have been strongly compensated and that the color appearance represents a combination of the results of Retinex and Gray World. Compared to Retinex, saturation is increased and the computed colors appear more natural (cf. the grass area in the images). We have determined the mean saturation (based on the HSI color model) averaged over 4800 corresponding frames of two image sequences and found that our approach yields 0.0997, an increase of 28% compared to Retinex (0.0776). Note that the differences in contrast between our approach and Retinex (cf. Figure 6b,e) are due to the proposed scheme with two gains in Equation (3), and the differences in chromaticities are due to the color processing function (Equation (6)). Compared to the Gray World extension, our approach avoids strongly unnatural colors (cf. the shadowed areas and the non-shadowed road area in Figure 6c,e). Compared to the hue-preserving Retinex, saturation is increased, the colors of the illuminant are strongly compensated, and objects of locally distinctive colors are more visible due to the use of local color processing (cf. Figure 6d,e, right).

3.2. Inter-Sequence Color Consistency

We have also evaluated our combined Retinex/Gray World approach in the context of inter-sequence color consistency, that is, the color consistency between corresponding frames of a reference and a current image sequence recorded on the same itinerary at different time points. In Figure 7, we show an example of applying our approach to two corresponding frames of the same scene acquired at different time points with different illumination conditions (Figure 7a). It can be seen that our approach strongly compensates lightness and color inconsistencies between the two images (Figure 7c). In contrast, the Gray World extension yields images with visible color inconsistencies (Figure 7b).
To quantitatively assess inter-sequence color consistency, we have computed the mean RGB angular error and the mean rg chromaticity endpoint error between corresponding frames of pairs of reference and current image sequences. These error metrics are typically used in combination with ground truth images for the evaluation of color constancy approaches [26]. Note that, in addition to the different illumination conditions, changes in the scene and registration errors cause lightness and color differences as well, and thus influence the error values. However, we assume that the influence of these factors is similar for each investigated approach and can be ignored. In Figure 8, we show an example of the mean RGB angular error over time for corresponding frames of two image sequences of an outdoor route. It can be seen that for all time points our approach yields a lower error compared to the unprocessed images. The mean error averaged over all time points computes to 2.43, an improvement of 46% compared to the respective error (4.51) for the unprocessed images. It can also be seen that our approach outperforms the Retinex approach (3.21) as well as the Gray World extension (3.36). The worse performance of Retinex is due to inconsistencies caused by the stronger lightness halo effect on shadow boundaries and the color halo effect on edges of color change. There are two reasons for the worse performance of Gray World. First, the unnatural colors in certain areas of images with inhomogeneous illumination (see Section 3.1 above) introduce color inconsistencies. Second, changes in the scene influence the global statistics of an image and affect the computed colors for unchanged areas, introducing color inconsistencies as well. Our approach also yields a lower mean error compared to the hue-preserving Retinex approach (2.64); however, for certain time points the error is higher.
The reason is that for these time points the hue-preserving Retinex determines colors with low saturation due to the global saturation decrease, resulting in small color differences and, as expected, low error values. The mean RGB angular errors and the mean rg endpoint errors averaged over all time points and over the five pairs of corresponding image sequences are shown in Table 2. It turns out that our approach yields significantly lower errors compared to the unprocessed images, as well as compared to Retinex and Gray World. The hue-preserving Retinex yields the lowest errors; however, this is due to the lower saturation of the computed images.
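For reference, the mean RGB angular error between two registered frames measures the average per-pixel angle between the corresponding RGB vectors; it is invariant to intensity scaling, which is why it is a standard color constancy metric. A minimal sketch (function name and the `eps` guard are our own assumptions):

```python
import numpy as np

def mean_rgb_angular_error(img1, img2, eps=1e-8):
    """Mean per-pixel angle (in degrees) between the RGB vectors of two
    registered images; 0 means identical colors up to a brightness scale."""
    a = img1.reshape(-1, 3).astype(float)
    b = img2.reshape(-1, 3).astype(float)
    dot = (a * b).sum(axis=1)
    norm = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps
    cos = np.clip(dot / norm, -1.0, 1.0)  # guard against rounding
    return np.degrees(np.arccos(cos)).mean()
```

For example, an image compared against a uniformly brightened copy of itself yields an error near zero, while pure red against pure green yields 90 degrees.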
We have also used the Shades of Gray [18] and the Gray Edge [19] global color constancy approaches in Equation (7), as alternatives to Gray World. For both variants, we used $p = 5$ for the Minkowski norm. We applied these two variants to the two corresponding image sequences used for the example in Figure 8 and computed the mean RGB angular error over time. The errors for the combined Retinex/Shades of Gray and the combined Retinex/Gray Edge approaches compute to 2.6 and 2.85, respectively, which are increases of 6.9% and 17.3% compared to the proposed Retinex/Gray World approach. The reason for the better performance of Gray World on this image data is that the image mean used by Gray World is more stable w.r.t. changes in the scenes than the Minkowski norm and the image derivatives used by Shades of Gray and Gray Edge.

3.3. Change Detection

Finally, we have evaluated our combined Retinex/Gray World approach in terms of change detection. In Figure 9, we show an example of a pair of images corresponding to a changed scene at two different time points. It is difficult, however, to quantitatively evaluate the performance of image processing approaches for change detection systems that rely on a human operator. The reason is that several factors influencing change detection are related to the perception of the human operator as well as to the properties of the displayed images. Factors related to the images are lightness and color rendition, local contrast, and lightness and color consistency. In general, images with relatively natural lightness and color rendition help the operator understand a scene more quickly and focus on areas of interest, compared to, for example, overenhanced images containing too many details or images containing unnatural colors. Changes are better visible when the differences between the two images for the changed area are larger than the differences for the background (unchanged area) within a spatial neighborhood. Note that the latter criterion is used by image differencing approaches for automatic change detection [27]. Based on this, we computed the CIELAB color differences between corresponding images. Note that CIELAB color differences describe the sensitivity of the human eye to color differences better than RGB color differences. Then, we determined the ratio of the mean differences for the changed area to the mean differences for the background, within a spatial neighborhood around the change. The changed area and the background were obtained by manual segmentation.
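The ratio itself is straightforward once a per-pixel CIELAB difference map and a change mask for the neighborhood are available. A minimal sketch (the function name is our own; computing the $\Delta E$ map and the manual segmentation are assumed done beforehand):

```python
import numpy as np

def change_to_background_ratio(delta_e, change_mask):
    """Ratio of the mean CIELAB difference inside the (manually
    segmented) changed area to the mean difference over the background
    of the surrounding neighborhood. Ratios well above 1 indicate a
    change that should 'blink' clearly in the alternating display."""
    change = delta_e[change_mask]        # Delta-E values inside the change
    background = delta_e[~change_mask]   # Delta-E values of the background
    return change.mean() / background.mean()
```

For instance, a neighborhood with a mean difference of 20 inside the change and 2 over the background yields a ratio of 10.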
An example of the CIELAB color differences for a neighborhood around a change is shown in Figure 9a (right). It can be seen that the magnitudes of the differences for the changed area are similar to those for the background, and the ratio computes to 1.3. After applying our approach, the differences for the changed area increased while the differences for the background decreased (Figure 9b, right), compared to the respective differences for the unprocessed images. The ratio of the mean differences computes to 11.2, a significant increase compared to the unprocessed images; the change is therefore potentially better detectable by a human operator. In Table 3, we show the ratio of CIELAB color differences for six examples of changed objects in different scenes, applying our approach as well as the three previous Retinex-based approaches. Our approach yields the largest ratio for most changes, as well as the largest mean ratio.
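The ratio described above can be sketched as follows. This is an illustration under stated assumptions (sRGB input with a D65 white point, the CIE76 Delta E, and a binary change mask covering the spatial neighborhood); the function names and the exact conversion path are ours, not taken from the paper:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB image in [0, 255] to CIELAB (D65 white point)."""
    c = rgb.astype(np.float64) / 255.0
    # Undo the sRGB gamma to obtain linear RGB.
    c = np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    # Linear RGB -> XYZ, normalized by the D65 reference white.
    xyz = (c @ M.T) / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def change_to_background_ratio(img1, img2, change_mask):
    """Ratio of the mean CIELAB (CIE76 Delta E) differences inside the
    changed area to those in the surrounding background."""
    de = np.linalg.norm(srgb_to_lab(img1) - srgb_to_lab(img2), axis=-1)
    return de[change_mask].mean() / de[~change_mask].mean()
```

A ratio well above 1 means the change stands out from residual background differences, which is the behavior reported for the processed images in Figure 9b.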

4. Discussion

The results show that our combined Retinex/Gray World approach enhances local contrast and reduces inter-sequence color inconsistencies between corresponding images acquired under different illumination conditions, taking advantage of the combination of local and global color processing. The advantage of the local center/surround Retinex is that the computed color of a pixel is influenced neither by information from distant image regions with different illumination nor by changes in other regions of the image. The advantage of the global Gray World is the generally higher color saturation. At the same time, our approach overcomes typical issues of both local and global color processing.
The experimental comparison showed that our approach yields better results than previous Retinex-based approaches. Compared to the center/surround Retinex, which often computes grayish colors due to the use of local image information only, our approach increases the saturation and improves color rendition. Our approach also outperforms Retinex in terms of color consistency, since the color halo effect on edges of color changes is reduced. Compared to Gray World, which yields strongly unnatural colors in image areas with inhomogeneous illumination, our approach determines more natural colors. This improves color rendition and also color consistency, since unnatural colors introduce color differences between corresponding images. A second drawback of Gray World affecting color consistency is that changes in a scene alter the computed colors of the whole image. Compared to a hue-preserving Retinex approach, objects of locally distinctive color are better visible due to the local color processing. In addition, our approach yields images with higher saturation than the hue-preserving Retinex, which relies on a global saturation decrease.
Furthermore, the proposed gain/offset function allows for reducing the halo effect on shadow boundaries without decreasing the contrast in other areas of an image. In future work, we plan to further reduce the halo effect on shadow boundaries using, for example, an edge-preserving or guided filter (e.g., [9,10]). This is, however, a challenging task due to the high computational complexity of these processing schemes. Another alternative could be to extend a previous shadow-based Retinex approach [24] by improving the consistency of the employed shadow detection method.
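The two-gain idea behind Figure 2 can be illustrated with a piecewise-linear sketch: a lower gain C1 on the negative (shadow) side of the Retinex output and a higher gain C2 on the positive side, both around the offset A = 127.5. This is a plausible reading of the transfer function shown in the figure, not a reproduction of the paper's exact formula:

```python
import numpy as np

def gain_offset_two_gains(r, c1=80.0, c2=120.0, a=127.5):
    """Map the zero-centered Retinex output r to [0, 255] with two
    contrast gains: c1 below the offset (shadow side) and c2 above it.
    For c1 == c2 this reduces to the usual single-gain gain/offset
    function, consistent with the description of Figure 2."""
    r = np.asarray(r, dtype=np.float64)
    out = np.where(r < 0.0, c1 * r + a, c2 * r + a)
    return np.clip(out, 0.0, 255.0)
```

The lower gain on the shadow side compresses the dark end of the range, which is what reduces the lightness halo on shadow boundaries while the higher gain preserves contrast elsewhere.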

5. Conclusions

We have presented an automatic approach for local contrast enhancement and color consistency for the display of images in a mobile change detection system. Our approach is based on the center/surround Retinex model for local contrast enhancement and lightness/color constancy, and on the Gray World method for global color constancy. We employed a nonlinear color processing function to combine Retinex and Gray World, taking advantage of both local and global color processing. We proposed a gain/offset scheme that uses two gains in order to reduce the lightness halo effect on shadow boundaries and to improve visibility within shadows. The use of stacked integral images (SII) for Gaussian convolution yields low computation times and makes the proposed approach suitable for real-time applications. We have successfully applied our approach to color image sequences acquired on outdoor itineraries with a vehicle-mounted camera, and an experimental comparison demonstrated that our approach outperforms previous Retinex-based approaches and Gray World w.r.t. color rendition, inter-sequence color consistency, and change detection.
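The SII idea of replacing the Gaussian surround with box sums taken from an integral image can be sketched as follows. The radii and the uniform weights below are illustrative simplifications of our own; the original method [16] optimizes the weights and sizes of the stacked boxes to fit the Gaussian kernel:

```python
import numpy as np

def box_filter(img, r):
    """Mean filter over a (2r+1)x(2r+1) window via an integral image,
    i.e., O(1) arithmetic per pixel regardless of r."""
    p = np.pad(img.astype(np.float64), r, mode='edge')
    ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    ii[1:, 1:] = p.cumsum(0).cumsum(1)
    h, w = img.shape
    d = 2 * r + 1
    s = (ii[d:d + h, d:d + w] - ii[:h, d:d + w]
         - ii[d:d + h, :w] + ii[:h, :w])
    return s / (d * d)

def sii_gaussian(img, sigma=30.0 / np.pi, k=3):
    """Rough Gaussian surround as an average of k box filters of
    increasing radius (in the spirit of SII; radii/weights simplified).
    The defaults mirror the sigma and K values of Table 1."""
    radii = [max(1, int(round(sigma * (i + 1) * 1.5 / k))) for i in range(k)]
    return sum(box_filter(img, r) for r in radii) / k
```

Since each box sum costs a constant number of additions per pixel, the surround computation is independent of sigma, which is what enables the real-time operation mentioned above.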

Author Contributions

Marco Tektonidis developed the approach, performed the experiments, analyzed the data, and wrote the paper. David Monnin contributed to the design of the approach and the experiments, and proofread the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Monnin, D.; Schneider, A.L.; Bieber, E. Detecting suspicious objects along frequently used itineraries. In Proceedings of the SPIE, Security and Defence: Electro-Optical and Infrared Systems: Technology and Applications VII, Toulouse, France, 20 September 2010; Volume 7834, pp. 1–6.
2. Jobson, D.; Rahman, Z.; Woodell, G.A. Properties and performance of a center/surround Retinex. IEEE Trans. Image Process. 1997, 6, 451–462.
3. Agarwal, V.; Abidi, B.R.; Koschan, A.; Abidi, M.A. An overview of color constancy algorithms. J. Pattern Recognit. Res. 2006, 1, 42–54.
4. Land, E.H. The Retinex Theory of Color Vision. Sci. Am. 1977, 237, 108–129.
5. Land, E.H.; McCann, J.J. Lightness and retinex theory. J. Opt. Soc. Am. 1971, 61, 1–11.
6. Land, E.H. An alternative technique for the computation of the designator in the Retinex theory of color vision. Proc. Natl. Acad. Sci. USA 1986, 83, 3078–3080.
7. Jobson, D.; Rahman, Z.; Woodell, G.A. A multiscale Retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
8. Barnard, K.; Funt, B. Investigations into multi-scale Retinex. In Colour Imaging: Vision and Technology; John Wiley & Sons: Hoboken, NJ, USA, 1999; pp. 9–17.
9. Pei, S.C.; Shen, C.T. High-dynamic-range parallel multi-scale retinex enhancement with spatially-adaptive prior. In Proceedings of the IEEE International Symposium on Circuits and Systems, Melbourne, Australia, 1–5 June 2014; pp. 2720–2723.
10. Liu, H.; Sun, X.; Han, H.; Cao, W. Low-light video image enhancement based on multiscale Retinex-like algorithm. In Proceedings of the IEEE Chinese Control and Decision Conference, Yinchuan, China, 28–30 May 2016; pp. 3712–3715.
11. McCann, J.J. Retinex at 50: Color theory and spatial algorithms, a review. J. Electron. Imaging 2017, 26, 1–14.
12. Provenzi, E. Similarities and differences in the mathematical formalizations of the Retinex model and its variants. In Proceedings of the International Workshop on Computational Color Imaging, Milan, Italy, 29–31 March 2017; Springer: Berlin, Germany, 2017; pp. 55–67.
13. Bertalmío, M.; Caselles, V.; Provenzi, E. Issues about retinex theory and contrast enhancement. Int. J. Comput. Vis. 2009, 83, 101–119.
14. Provenzi, E.; Caselles, V. A wavelet perspective on variational perceptually-inspired color enhancement. Int. J. Comput. Vis. 2014, 106, 153–171.
15. Buchsbaum, G. A spatial processor model for object colour perception. J. Frankl. Inst. 1980, 310, 1–26.
16. Bhatia, A.; Snyder, W.E.; Bilbro, G. Stacked integral image. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 1530–1535.
17. Tektonidis, M.; Monnin, D. Image enhancement and color constancy for a vehicle-mounted change detection system. In Proceedings of the SPIE, Security and Defense: Electro-Optical Remote Sensing, Edinburgh, UK, 26 September 2016; Volume 9988, pp. 1–8.
18. Finlayson, G.D.; Trezzi, E. Shades of gray and colour constancy. In Proceedings of the Color and Imaging Conference, Scottsdale, AZ, USA, 9 November 2004; Society for Imaging Science and Technology: Springfield, VA, USA, 2004; pp. 37–41.
19. Van De Weijer, J.; Gevers, T.; Gijsenij, A. Edge-based color constancy. IEEE Trans. Image Process. 2007, 16, 2207–2214.
20. Petro, A.B.; Sbert, C.; Morel, J.M. Multiscale Retinex. Image Process. On Line 2014, 4, 71–88.
21. Getreuer, P. A survey of Gaussian convolution algorithms. Image Process. On Line 2013, 3, 286–310.
22. Gond, L.; Monnin, D.; Schneider, A. Optimized feature-detection for on-board vision-based surveillance. In Proceedings of the SPIE, Defence and Security: Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XVII, Baltimore, MD, USA, 23 April 2012; Volume 8357, pp. 1–12.
23. Funt, B.; Barnard, K.; Brockington, M.; Cardei, V. Luminance-based multi-scale Retinex. In Proceedings of the AIC Color, Kyoto, Japan, 25–30 May 1997; Volume 97, pp. 25–30.
24. Tektonidis, M.; Monnin, D.; Christnacher, F. Hue-preserving local contrast enhancement and illumination compensation for outdoor color images. In Proceedings of the SPIE, Security and Defense: Electro-Optical Remote Sensing, Photonic Technologies, and Applications IX, Toulouse, France, 21 September 2015; Volume 9649, pp. 1–13.
25. Yang, C.C.; Rodríguez, J.J. Efficient luminance and saturation processing techniques for color images. J. Vis. Commun. Image Represent. 1997, 8, 263–277.
26. Barnard, K.; Cardei, V.; Funt, B. A comparison of computational color constancy algorithms—Part I: Methodology and experiments with synthesized data. IEEE Trans. Image Process. 2002, 11, 972–984.
27. Radke, R.J.; Andra, S.; Al-Kofahi, O.; Roysam, B. Image change detection algorithms: A systematic survey. IEEE Trans. Image Process. 2005, 14, 294–307.
Figure 1. Two images of the same scene acquired at different time points under different illumination conditions.
Figure 2. Intensity transfer function for the Retinex model using a gain/offset function with A = 127.5 and two contrast gains ( C 1 and C 2 ). For C 1 = C 2 , the transfer function corresponds to the gain/offset function with a single gain value.
Figure 3. Processed images using (a) the original Retinex [2] with a single gain and (b) Retinex [2] extended with the proposed two gains. Enlarged sections for the marked regions are shown on the bottom. (a) C = 120 ; (b) C 1 = 80 , C 2 = 120 .
Figure 4. Example of preventing color inversion issues: (a) an image containing saturated colors, (b) processed using our approach, compared to (c) the original scheme [7] with the gain/offset function applied after the color processing function. (Image credit: J.L. Lisani, CC BY.)
Figure 5. Vehicle-mounted color camera system.
Figure 6. (a) original and processed images applying (bd) previous Retinex-based approaches and (e) our approach. Enlarged sections for the marked regions are shown on the right. (a) unprocessed image; (b) Retinex [2]; (c) Gray World [15] extension; (d) hue-preserving Retinex [24]; (e) new combined Retinex/Gray World.
Figure 7. (a) original and processed images applying (b) the Gray World extension and (c) our approach for a scene acquired at two different time points. The second image for each example has been registered w.r.t. the first image (top) and the two images have been combined using a checkerboard (bottom). (a) unprocessed images; (b) Gray World extension; (c) new combined Retinex/Gray World.
Figure 8. Mean RGB angular error over time for corresponding images of two image sequences for our approach and three previous Retinex-based approaches.
Figure 9. (a) original and (b) processed images applying our approach for a scene with a change. The second image has been registered w.r.t. the first image. On the right, the magnitudes of the CIELAB color differences between the two images are visualized with the contour of the segmented change. (a) original images; (b) new combined Retinex/Gray World.
Table 1. Parameter setting for our combined Retinex/Gray World approach.

| A in [3] | C1, C2 in [4] | I_gray in [5] | α in [7] | σ in [11] | K in [11] |
|----------|---------------|---------------|----------|-----------|-----------|
| 127.5    | 80, 120       | 127.5         | 125      | 30/π      | 3         |
Table 2. Mean errors averaged over corresponding time points of five pairs of image sequences for our approach and three previous Retinex-based approaches. Percentages indicate the change compared to the unprocessed image data.

| Approach                        | Mean RGB Angular Error | Mean rg Endpoint Error |
|---------------------------------|------------------------|------------------------|
| Unprocessed                     | 4.06°                  | 0.029                  |
| Retinex [2]                     | 3.32° (−18%)           | 0.027 (−8%)            |
| Gray World [15] extension       | 3.93° (−3%)            | 0.029 (−1%)            |
| Hue-preserving Retinex [24]     | 2.39° (−41%)           | 0.016 (−47%)           |
| New combined Retinex/Gray World | 2.56° (−37%)           | 0.020 (−31%)           |
Table 3. Ratios of the mean CIELAB color differences for changed areas to unchanged areas within a spatial neighborhood around the change. Our combined Retinex/Gray World approach and three previous Retinex-based approaches have been applied to six pairs of corresponding images of changed scenes.

| Approach                        | Traffic Pole | Stone | Stone 2 | Barrel | Fake Stone | Car | Mean |
|---------------------------------|--------------|-------|---------|--------|------------|-----|------|
| Unprocessed                     | 1.3          | 3.6   | 3.6     | 1.3    | 0.8        | 0.6 | 1.9  |
| Retinex [2]                     | 9.5          | 11.7  | 4.5     | 2.6    | 21.7       | 8.8 | 9.8  |
| Gray World [15] extension       | 8.0          | 11.3  | 4.7     | 2.1    | 20.3       | 5.5 | 8.6  |
| Hue-preserving Retinex [24]     | 11.1         | 13.1  | 4.5     | 4.0    | 17.9       | 6.6 | 9.6  |
| New combined Retinex/Gray World | 11.2         | 28.4  | 5.1     | 2.9    | 22.1       | 8.4 | 13   |
