IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 34, NO. 5, MAY 2012
Improving Color Constancy
by Photometric Edge Weighting
Arjan Gijsenij, Member, IEEE, Theo Gevers, Member, IEEE, and
Joost van de Weijer, Member, IEEE
Abstract—Edge-based color constancy methods make use of image derivatives to estimate the illuminant. However, different edge
types exist in real-world images, such as material, shadow, and highlight edges. These different edge types may have a distinctive
influence on the performance of the illuminant estimation. Therefore, in this paper, an extensive analysis is provided of different edge
types on the performance of edge-based color constancy methods. First, an edge-based taxonomy is presented classifying edge types
based on their photometric properties (e.g., material, shadow-geometry, and highlights). Then, a performance evaluation of edge-based color constancy is provided using these different edge types. From this performance evaluation, it is derived that specular and
shadow edge types are more valuable than material edges for the estimation of the illuminant. To this end, the (iterative) weighted
Gray-Edge algorithm is proposed in which these edge types are more emphasized for the estimation of the illuminant. Images that are
recorded under controlled circumstances demonstrate that the proposed iterative weighted Gray-Edge algorithm based on highlights reduces the median angular error by approximately 25 percent. In an uncontrolled environment, improvements in angular error of up to 11 percent are obtained with respect to regular edge-based color constancy.
Index Terms—Color constancy, illuminant estimation, Gray Edge, edge classification.
1 INTRODUCTION

Changes in illumination cause the measurements of
object colors to be biased toward the color of the light
source. The goal of color constancy is to provide invariance
with respect to these changes. Color constancy facilitates
many computer vision-related tasks like color feature
extraction [1] and color appearance models [2].
Many computational color constancy algorithms have
been proposed, see, e.g., [3], [4] for an overview. Traditionally, computational color constancy methods use pixel values
of an image to estimate the illuminant. Examples of such
methods include approaches based on low-level features,
e.g., [5], [6], [7], and gamut-based algorithms [8], [9], [10].
Recently, methods that use derivatives (i.e., edges) and even
higher order statistics are proposed [11], [12], [13], [14].
The underlying assumption of Gray-World and GrayEdge-based algorithms is that the distribution of the colors
and edges is directed toward the illuminant direction.
Hence, in order to accurately recover the color of the light
source, ideally only those pixels and edges should be used
. A. Gijsenij is with Alten PTS, Rivium 1E Straat, 2909 LE Capelle a/d
IJssel, The Netherlands. E-mail: arjan.gijsenij@gmail.com.
. T. Gevers is with the Intelligent Systems Laboratory Amsterdam (ISLA),
University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The
Netherlands, and also with the Computer Vision Center, Universitat
Autónoma de Barcelona, 08193 Bellaterra, Spain.
E-mail: th.gevers@uva.nl.
. J. van de Weijer is with the Computer Vision Center (CVC), Universitat
Autónoma de Barcelona, Edifici O, Campus UAB, Bellaterra 08193,
Cerdanyola, Spain. E-mail: joost@cvc.uab.es.
Manuscript received 20 Sept. 2010; revised 31 July 2011; accepted 25 Aug.
2011; published online 7 Oct. 2011.
Recommended for acceptance by D. Forsyth.
For information on obtaining reprints of this article, please send e-mail to:
tpami@computer.org, and reference IEEECS Log Number
TPAMI-2010-09-0721.
Digital Object Identifier no. 10.1109/TPAMI.2011.197.
0162-8828/12/$31.00 © 2012 IEEE
that coincide with the illuminant direction (highlights are
one example of such pixels, perfect reflectances are
another). Under the assumption of neutral interface reflection, it is known that highlights roughly correspond to the
color of the light source, making highlights particularly
suited for estimating the color of the light source [15], [16],
[17]. However, detecting highlight pixels has proven to be
very challenging without prior knowledge of the scene [15],
[18], [16], [19]. Edges, on the other hand, can automatically
be classified into different types without much prior
knowledge by using physics principles [20], [21], [22],
[23]. For example, edges can be classified into material
edges (e.g., edges between objects and object-background
edges), shadow/shading edges (e.g., edges caused by the
shape or position of an object with respect to the light
source), and specular edges (i.e., highlights). These edges
may have a distinctive influence on the performance of
illuminant estimation.
In this paper, the use of distinct edge types is exploited to
improve the performance of edge-based color constancy by
computing a weighted average of the edges. The weights
are computed using a photometric edge classification
scheme. Since such methods often assume that the scene
is illuminated by a white light source, the automatic
detection of such edges can become erroneous when the
color of the light source is not white. To this end, an iterative
weighting scheme is proposed that sequentially estimates
the color of the light source and updates the computed edge
weights. The rationale behind this approach is to fully
exploit the information that is enclosed in the image, and
simultaneously increase the accuracy of the illuminant
estimation and (specular) edge detection.
This paper is organized as follows: In Section 2, color
constancy is discussed, followed by the introduction of
color constancy by edge weighting in Section 3. Then, in
Section 4, the performance of edge-based color constancy is
analyzed with respect to different edge types. Finally, in
Sections 5 and 6, the method is evaluated and the obtained
results are discussed.
2 COLOR CONSTANCY
The image values $\mathbf{f} = (f_R, f_G, f_B)^T$ for a Lambertian surface depend on the color of the light source $I(\lambda)$, the surface reflectance $S(\mathbf{x}, \lambda)$, and the camera sensitivity function $\boldsymbol{\rho}(\lambda) = (\rho_R(\lambda), \rho_G(\lambda), \rho_B(\lambda))^T$, where $\lambda$ is the wavelength of the light and $\mathbf{x}$ is the spatial coordinate (e.g., [3], [10], [8]):

$$f_c(\mathbf{x}) = m(\mathbf{x}) \int_\omega I(\lambda)\, \rho_c(\lambda)\, S(\mathbf{x}, \lambda)\, d\lambda, \qquad (1)$$

where $\omega$ is the visible spectrum, $m(\mathbf{x})$ is the Lambertian shading, and $c \in \{R, G, B\}$. It is assumed that the scene is illuminated by uniform illumination and that the observed color of the light source $\mathbf{e}$ depends on the color of the light source $I(\lambda)$ as well as the camera sensitivity function $\boldsymbol{\rho}(\lambda)$:

$$\mathbf{e} = \begin{pmatrix} e_R \\ e_G \\ e_B \end{pmatrix} = \int_\omega I(\lambda)\, \boldsymbol{\rho}(\lambda)\, d\lambda. \qquad (2)$$
Color constancy can be achieved by estimating the color of the light source $\mathbf{e}$, given the image values of $\mathbf{f}$, followed by a transformation of the original image values using this illuminant estimate [24]:

$$\mathbf{f}^t = \mathcal{D}^{u,t}\, \mathbf{f}^u, \qquad (3)$$

where $\mathbf{f}^u$ is the image taken under an unknown light source, $\mathbf{f}^t$ is the same image transformed so that it appears as if it was taken under the canonical illuminant, and $\mathcal{D}^{u,t}$ is a diagonal matrix which maps colors that are taken under an unknown light source $u$ to their corresponding colors under the canonical illuminant $t$. The aim of this transformation is not to scale the brightness level of the image, since the color constancy methods proposed and compared in this paper only correct for the chromaticity of the light source. Since both $I(\lambda)$ and $\boldsymbol{\rho}(\lambda)$ are, in general, unknown, the estimation of $\mathbf{e}$ is an underconstrained problem that cannot be solved without further assumptions.
2.1 Pixel-Based Color Constancy
Two well-known algorithms that are often used are based on the Retinex theory proposed by Land [7]: the White-Patch and the Gray-World algorithms. The White-Patch
algorithm is based on the White-Patch assumption, i.e., the
assumption that the maximum response in the RGB-channels is
caused by a white patch. In practice, this assumption is
alleviated by considering the color channels separately,
resulting in the max-RGB algorithm. The Gray-World
algorithm [5] is based on the Gray-World assumption, i.e.,
the average reflectance in a scene is achromatic. Another class of algorithms comprises the gamut-based methods, originally proposed by Forsyth [10]. Gamut-based algorithms use more advanced
statistical information about the image, and are based on the
assumption that in real-world images, one observes, under a
given illuminant, only a limited number of different colors.
Another pixel-based algorithm that is related to the
current paper is proposed by Tan et al. [17]. This approach
is based on the dichromatic reflection model, and uses
specularities or highlights for the estimation of the
illuminant. By transforming the image to inverse intensity
chromaticity space, pixels are identified that have a low body
reflectance factor (effectively identifying pixels with a color
that is close or identical to the color of the light source).
However, the identification of such specular pixels remains
a problem.
2.2 Edge-Based Color Constancy
Extending pixel-based methods to incorporate derivative
information, i.e., edges and higher order statistics, resulted
in the Gray Edge [14] and the derivative-based Gamut
mapping algorithms [13].
The Gray Edge actually comprises a framework that
incorporates zeroth-order methods (e.g., the Gray-World
and the White-Patch algorithms), first-order methods (i.e.,
the Gray Edge), as well as higher order methods (e.g.,
second-order Gray Edge). Many different algorithms can be
created by varying the three parameters in
$$e_c^{n,p,\sigma} = \left( \int \left| \frac{\partial^n f_{c,\sigma}(\mathbf{x})}{\partial \mathbf{x}^n} \right|^p d\mathbf{x} \right)^{1/p} = k\, e_c, \qquad (4)$$

where $|\cdot|$ indicates the absolute value, $c \in \{R, G, B\}$, $n$ is the order of the derivative, $p$ is the Minkowski norm, and $k$ is a multiplicative scalar constant chosen such that the illuminant vector $\mathbf{e}$ has unit length. Further, the derivative is defined as the convolution of the image with the derivative of a Gaussian filter with scale parameter $\sigma$ [25]:

$$\frac{\partial^{s+t} f_{c,\sigma}}{\partial x^s\, \partial y^t} = f_c \otimes \frac{\partial^{s+t} G_\sigma}{\partial x^s\, \partial y^t}, \qquad (5)$$

where $\otimes$ denotes the convolution and $s + t = n$. Good results are obtained by using the instantiation $e^{1,1,\sigma}$, i.e., a simple average of the edges at scale $\sigma$, also called the Gray-Edge method [14].
Another pixel-based method which has been extended to
incorporate derivative information is the Gamut mapping
algorithm [13]. It can be proven that linear combinations of
image values also form gamuts, thereby extending the
Gamut mapping theory to incorporate image derivatives. In
this paper, we assess the influence of various edge types on
the performance of both the Gray-Edge method and the
derivative-based Gamut mapping method.
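The first-order instantiation of (4) can be sketched with Gaussian derivative filters (our illustrative code, not the authors'; SciPy's `gaussian_filter` supports a per-axis derivative order):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gray_edge(img, p=1, sigma=1.0):
    """First-order Gray-Edge: the Minkowski p-mean of Gaussian
    derivative magnitudes, per channel, normalized to unit length."""
    e = np.zeros(3)
    for c in range(3):
        # Gaussian derivatives along the two spatial axes at scale sigma.
        dx = gaussian_filter(img[..., c], sigma, order=(0, 1))
        dy = gaussian_filter(img[..., c], sigma, order=(1, 0))
        mag = np.sqrt(dx**2 + dy**2)
        e[c] = np.mean(mag**p) ** (1.0 / p)
    return e / np.linalg.norm(e)
```

On a synthetic scene whose channels are a common spatial pattern scaled by the illuminant, the derivative magnitudes scale by the illuminant as well, so the estimate points along the illuminant direction.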
3 COLOR CONSTANCY BY EDGE WEIGHTING
Consider the following simplified version of (4):

$$\left( \int \left| f_{c,x}(\mathbf{x}) \right|^p d\mathbf{x} \right)^{1/p} = k\, e_c, \qquad (6)$$

where $f_{c,x}(\mathbf{x})$ is the derivative of color channel $c \in \{R, G, B\}$ of image $\mathbf{f}$ at a certain scale. Then, the weighted Gray-Edge algorithm is given by
$$\left( \int \left( w(\mathbf{f}) \right)^{\kappa} \left| f_{c,x}(\mathbf{x}) \right|^p d\mathbf{x} \right)^{1/p} = k\, e_c, \qquad (7)$$

where $w(\mathbf{f})$ is a weighting function that assigns a weight to every value of $\mathbf{f}$, and $\kappa$ can be used to enforce the weights (a higher value of $\kappa$ results in more emphasis on higher weights).
Numerous different weighting schemes can be incorporated in (7), but in order to accurately estimate the color of
the light source, a suitable weighting scheme should enforce
relevant information about the color of the light source and
disregard irrelevant information. Since it is known that
highlights are a valuable cue for estimating the color of the
light source [15], [16], [17], an obvious choice would be to
compute weights using specular edge detection methods.
3.1 Edge Types and Classification
Various edge types are considered, i.e., material edges,
(colored) shadow or shading edges, specular edges, and
interreflection edges. Material edges are transitions between
two different surfaces or objects. Shading edges are
transitions that are caused by the geometry of an object,
for instance by a change in surface orientation with respect
to the illumination. Shadow edges are cast shadows caused
by an object that (partially) blocks the light source. Blocking
of the light source often results in merely an intensity
gradient, but sometimes a faint color gradient is introduced
(provided two illuminants are present in a scene). When we
refer to shadow edges in general, both intensity and colored
shadow edges are implied. Finally, in real-world images,
interreflection is an important aspect. Interreflection is the
effect of light reflected from one surface onto a second
surface. This effect changes the overall illumination that is
received by the second surface, and hence the color of this
surface. Finally, note that combinations of edge types (e.g., a
shadow or shading edge that coincides with a material
edge) can also occur, but are not handled explicitly here.
Generally, edge classification is based on photometric
information. For instance, Finlayson et al. [20] propose to
project a 2D log-chromaticity image onto the direction
orthogonal to the light source, resulting in a new image that
is invariant to light intensity and color. Shadow edges are
then detected by subtracting the derivatives of the invariant
image and the original image. Alternatively, Geusebroek et al.
[21] propose a rule-based approach in combination with a set
of color invariants derived from the Kubelka-Munk theory for
colorant layers. A slightly different rule-based approach is
proposed by van de Weijer et al. [23], which is based on the
same photometric invariant principles. Geometric information is mostly ignored for general edge classification methods,
although some recent advancements show that shadow edge
detection can benefit from geometric features [26], [27]. The
weighted Gray-Edge method given by (7) can incorporate
classifications based on both photometric and geometric
features, but in this paper the focus will be mainly on
photometric features. More specifically, the quasi invariants
[23] are used to design several different soft weighting
schemes, resulting in an elegant incorporation of weighting
schemes into edge-based color constancy.
3.2 Edge Weighting Schemes
Quasi invariants [23] are computed using the derivative of an image, $\mathbf{f}_x = (f_{R,x}, f_{G,x}, f_{B,x})^T$, and a set of three photometric variants. By removing the variants from the derivative of the image, a complementary set of derivatives is constructed, called quasi invariants. The edge energy contained in the three variant directions is indicative of the type of edge; e.g., if most energy is contained in the specular direction, the edge is most likely to be a highlight. Using
the quasi invariants, three different weighting schemes can
be derived (including all combinations of these schemes):
the specular weighting scheme, the shadow weighting
scheme, and the material weighting scheme.
Specular edge weighting scheme. The quasi invariants decompose a derivative image into three directions. The projection of the derivative on the specular direction (i.e., the color of the light source) is called the specular variant and is defined as

$$\mathbf{O}_x = (\mathbf{f}_x \cdot \hat{\mathbf{c}}_i)\, \hat{\mathbf{c}}_i, \qquad (8)$$

where $\hat{\mathbf{c}}_i = \frac{1}{\sqrt{3}} (1, 1, 1)^T$ is the color of the light source (assumed to be white here) and the dot indicates the vector inner product. The specular variant is that part of the derivative which could be caused by highlights. What remains after subtraction of the variant from the derivative is called the specular quasi invariant:

$$\mathbf{O}^t_x = \mathbf{f}_x - \mathbf{O}_x. \qquad (9)$$

The quasi invariant $\mathbf{O}^t_x$ only contains shadow-shading and material edges, and is insensitive to highlight edges. Since all derivative energy of an image $\mathbf{f}$ is contained in the three variant directions, the ratio of the energy in the specular variant to the total amount of derivative energy indicates whether an edge is specular or not. This ratio translates to the following specular weighting scheme:

$$w_{s,\mathrm{specular}}(\mathbf{f}_x) = \frac{\left| \mathbf{O}_x \right|}{\left\| \mathbf{f}_x \right\|}, \qquad (10)$$

where $\left| \mathbf{O}_x \right|$ is the absolute value of $\mathbf{O}_x$ and $\left\| \mathbf{f}_x \right\| = \sqrt{f_{R,x}^2 + f_{G,x}^2 + f_{B,x}^2}$.
Shadow edge weighting scheme. Using the same reasoning on the shadow-shading direction $\hat{\mathbf{f}}$ (i.e., the intensity direction), a shadow-shading variant and quasi invariant are obtained:

$$\mathbf{S}_x = (\mathbf{f}_x \cdot \hat{\mathbf{f}})\, \hat{\mathbf{f}}, \qquad (11)$$

$$\mathbf{S}^t_x = \mathbf{f}_x - \mathbf{S}_x, \qquad (12)$$

where $\hat{\mathbf{f}} = \frac{1}{\sqrt{R^2 + G^2 + B^2}} (R, G, B)^T$. It can be derived that the shadow-shading quasi invariant is insensitive to shadow edges. Translating this variant to a shadow weighting scheme yields the following result:

$$w_{s,\mathrm{shadow}}(\mathbf{f}_x) = \frac{\left| \mathbf{S}_x \right|}{\left\| \mathbf{f}_x \right\|}. \qquad (13)$$
Material edge weighting scheme. Finally, the shadow-shading-specular variant and quasi invariant can be constructed by projecting the derivative on the hue direction:

$$\mathbf{H}^t_x = (\mathbf{f}_x \cdot \hat{\mathbf{b}})\, \hat{\mathbf{b}}, \qquad (14)$$

$$\mathbf{H}_x = \mathbf{f}_x - \mathbf{H}^t_x, \qquad (15)$$

where $\hat{\mathbf{b}}$ is the hue direction, which is perpendicular to the previous two directions:

$$\hat{\mathbf{b}} = \frac{\hat{\mathbf{f}} \times \hat{\mathbf{c}}_i}{\left| \hat{\mathbf{f}} \times \hat{\mathbf{c}}_i \right|}. \qquad (16)$$

The quasi invariant $\mathbf{H}^t_x$, the projection on the hue direction, does not contain specular or shadow-shading edges, and can be used to assign higher weights to material edges as follows:

$$w_{s,\mathrm{material}}(\mathbf{f}_x) = \frac{\left| \mathbf{H}^t_x \right|}{\left\| \mathbf{f}_x \right\|}. \qquad (17)$$
To evaluate the influence of the edge-type classifier on
the color constancy results, one additional experiment in
Section 5 will make use of a different specular weighting
scheme, based on geometric features similar to [26].
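The three weighting schemes above can be sketched as per-pixel weights computed from the image and its derivative (an illustrative implementation of ours, assuming a white illuminant for the specular direction; the function name `edge_weights` is our own):

```python
import numpy as np

def edge_weights(f, fx, eps=1e-10):
    """Per-pixel specular, shadow, and material weights from the
    photometric variants. f: image values (H, W, 3); fx: its
    derivative, same shape. Assumes a white illuminant direction."""
    nfx = np.linalg.norm(fx, axis=-1) + eps

    # Specular direction: the (assumed white) illuminant color.
    c_hat = np.ones(3) / np.sqrt(3)
    w_spec = np.abs(fx @ c_hat) / nfx

    # Shadow-shading direction: the local intensity direction f_hat.
    f_hat = f / (np.linalg.norm(f, axis=-1, keepdims=True) + eps)
    w_shad = np.abs(np.sum(fx * f_hat, axis=-1)) / nfx

    # Hue direction: perpendicular to the previous two directions.
    b = np.cross(f_hat, c_hat)
    b_hat = b / (np.linalg.norm(b, axis=-1, keepdims=True) + eps)
    w_mat = np.abs(np.sum(fx * b_hat, axis=-1)) / nfx

    return w_spec, w_shad, w_mat
```

For example, a derivative pointing exactly along $(1,1,1)$ receives a specular weight of 1, while a derivative orthogonal to that direction receives a specular weight of 0.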
3.3 Iterative Weighted Gray Edge
From (8) and (16), it can be derived that the specular and shadow-shading-specular variants and quasi invariants depend on the color of the light source (the shadow-shading variant and quasi invariant do not). The underlying assumption that the scene is viewed under a white (or neutral) light source [23] is obviously not met for the images in the data sets used, prior to applying color constancy.
However, after the proposed algorithm is applied, the
illuminant should be neutral, at least in theory. Hence, we
propose to first correct the input image A with an estimated
illuminant I. Then, using this color corrected image B, we
can compute a weighting scheme W, which in turn is used
by the weighted Gray-Edge algorithm to compute an
updated estimate of the illuminant I. After some iterations,
the illuminant estimate will approximate a white light
source, at which point the accuracy will no longer increase
and the method has converged. Consequently, we propose to iteratively apply the weighted Gray-Edge algorithm, where a new instantiation of the weighting scheme is computed at every iteration from the color corrected image. The iterative weighted Gray Edge is given by the following pseudocode:
Algorithm 1. Iterative Weighted Gray-Edge
Input:
  input image: A
  initial illuminant estimate: I
  stopping criterion: C
Method:
  while (¬C) do
    B = color_correct(A, I)
    W = compute_weighting_scheme(B)
    I′ = weighted_GrayEdge(B, W)
    I = I · I′
    C = update_stopping_criterion()
  end while
For the sake of clarity, we will not change the type of
weighting scheme W (e.g., specular or shadow based)
throughout the iterations. Further, the initial illuminant
estimate I can either be a white light source ($(1, 1, 1)^T$) or it can be the result of any color constancy algorithm. Finally, the stopping criterion C can be defined as a fixed number of iterations, or as some measure of convergence (e.g., the distance between a white light source and the illuminant I′ at the end of each iteration is below some threshold).
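A toy version of Algorithm 1 can be sketched as follows (our own code, under simplifying assumptions: a first-order weighted Gray-Edge, the specular weighting scheme, a von Kries diagonal correction, weights raised to an exponent `kappa` to emphasize higher weights, and a fixed iteration count as the stopping criterion):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def weighted_gray_edge(B, W, p=1, sigma=1.0, kappa=2.0):
    """Weighted first-order Gray-Edge: per-pixel weights W (raised
    to kappa) multiply the derivative magnitudes of each channel."""
    e = np.zeros(3)
    for c in range(3):
        dx = gaussian_filter(B[..., c], sigma, order=(0, 1))
        dy = gaussian_filter(B[..., c], sigma, order=(1, 0))
        mag = np.sqrt(dx**2 + dy**2)
        e[c] = np.mean((W ** kappa) * mag**p) ** (1.0 / p)
    return e / np.linalg.norm(e)

def specular_weights(B, sigma=1.0, eps=1e-10):
    """Specular weighting scheme, assuming a white illuminant."""
    fx = np.stack([gaussian_filter(B[..., c], sigma, order=(0, 1))
                   for c in range(3)], axis=-1)
    c_hat = np.ones(3) / np.sqrt(3)
    return np.abs(fx @ c_hat) / (np.linalg.norm(fx, axis=-1) + eps)

def iterative_weighted_gray_edge(A, n_iter=5):
    """Algorithm 1: alternate illuminant estimation and re-weighting."""
    I = np.ones(3) / np.sqrt(3)        # start from a white illuminant
    for _ in range(n_iter):            # fixed-iteration stopping criterion
        B = A / (I * np.sqrt(3))       # diagonal (von Kries) correction
        W = specular_weights(B)
        I_prime = weighted_gray_edge(B, W)
        I = I * I_prime                # compose the two estimates
        I = I / np.linalg.norm(I)
    return I
```

Once the running estimate approaches the true illuminant, the corrected image becomes neutral, the new per-iteration estimate approaches white, and the composed estimate stops changing.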
4 PERFORMANCE USING DIFFERENT EDGE TYPES
In this section, the aim is to analyze which edge types have
the most influence on the accuracy of the illuminant
estimation. To this end, a spectral data set is first used to generate different edge types under controlled circumstances. On this data set, two different edge-based color constancy algorithms, i.e., the Gray Edge and the derivative-based Gamut mapping approach, are evaluated.
To evaluate the performance of color constancy algorithms, the angular error is widely used [28]. This measure is defined as the angular distance between the actual color of the light source $\mathbf{e}_l$ and the estimated color $\mathbf{e}_e$:

$$\epsilon = \cos^{-1}\left( \hat{\mathbf{e}}_l \cdot \hat{\mathbf{e}}_e \right), \qquad (18)$$

where $\hat{\mathbf{e}}_l \cdot \hat{\mathbf{e}}_e$ is the dot product of the two normalized vectors representing the true color of the light source $\mathbf{e}_l$ and the estimated color of the light source $\mathbf{e}_e$. To measure the performance of an algorithm on a whole data set, the median angular error is reported.
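The angular error of (18) amounts to a few lines (an illustrative helper of ours, reporting degrees):

```python
import numpy as np

def angular_error(e_true, e_est):
    """Angular error in degrees between the true and the estimated
    illuminant; both vectors are normalized first."""
    e_true = np.asarray(e_true, float) / np.linalg.norm(e_true)
    e_est = np.asarray(e_est, float) / np.linalg.norm(e_est)
    # Clip guards against rounding slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(np.dot(e_true, e_est), -1.0, 1.0)))
```

For instance, two parallel illuminant vectors give an error of 0 degrees, and two orthogonal ones give 90 degrees.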
4.1 Spectral Data
The first experiments are performed using the spectral data
set introduced by Barnard et al. [29]. This set consists of
1,995 surface reflectance spectra and 287 illuminant spectra,
from which an extensive range of surfaces (i.e., RGB-values)
can be generated using (1). As discussed before, for these
experiments, the following types of surfaces are created:
. material surface $m_i$:
$$m_{ik} = \int_\omega I_k(\lambda)\, \boldsymbol{\rho}(\lambda)\, S_i(\mathbf{x}, \lambda)\, d\lambda, \qquad (19)$$
. intensity shadow surface $p_i$:
$$p_{ik} = \int_\omega \frac{I_k(\lambda)}{\alpha}\, \boldsymbol{\rho}(\lambda)\, S_i(\mathbf{x}, \lambda)\, d\lambda, \qquad (20)$$
. colored shadow surface $q_i$:
$$q_{ikk'} = p_{ik} + \gamma \int_\omega I_{k'}(\lambda)\, \boldsymbol{\rho}(\lambda)\, S_i(\mathbf{x}, \lambda)\, d\lambda, \qquad (21)$$
. specular surface $h_{ik}$:
$$h_{ik} = m_{ik} + \beta \int_\omega I_k(\lambda)\, \boldsymbol{\rho}(\lambda)\, d\lambda, \qquad (22)$$
. interreflection surface $r_{ijk}$:
$$r_{ijk} = \int_\omega I_k^{\star}(\lambda)\, \boldsymbol{\rho}(\lambda)\, S_i(\mathbf{x}, \lambda)\, d\lambda, \qquad (23)$$

where $I_k^{\star}(\lambda) = I_k(\lambda) + \delta\, I_k(\lambda)\, S_j(\mathbf{x}, \lambda)$. Further, the subscripts $i$ and $j$ denote different surface reflectance spectra, and $k$ and $k'$ denote different illuminant spectra. The random variables $\alpha$ and $\beta$ are uniformly distributed between 1 and 4, and $\gamma$ and $\delta$ are uniformly distributed between 0 and 0.25.
Since the focus is on edge-based color constancy, the
following transitions (i.e., edges) between surfaces are
generated:
. material edge: $m_{ik} - m_{jk}$,
. intensity shadow edge: $m_{ik} - p_{ik}$,
. colored shadow edge: $m_{ik} - q_{ikk'}$,
. specular edge: $m_{ik} - h_{ik}$,
. interreflection edge: $m_{ik} - r_{ijk}$.

A material edge is generated by taking the difference between two different material surfaces, $m_i - m_j$. The difference between a material surface $m_i$ and the same surface under a weaker light source results in an intensity shadow edge, $m_i - p_i$. A colored shadow edge is defined as the difference between a material surface $m_i$ and a colored shadow surface, $m_i - q_i$. A specular edge is defined as the difference between a material surface $m_i$ and the brighter version of the same material, $m_i - h_i$. Finally, an interreflection edge is defined as the difference between a material surface $m_i$ and an interreflection surface $r_{ij}$, where surface $m_i$ interreflects onto a second surface $m_j$, hence $m_i - r_{ij}$. Note that these edges
a second surface mj , hence mi rij . Note that these edges
can be considered to be step edges. In real-world scenes,
transitions are likely to be more gradual. However, for the
purpose of the analysis performed in Sections 4.2 and 4.3,
these edges are used to give a best case relative assessment
of algorithm performance, comparing the different edge
types under the same conditions. Further, we would like to note that an intrinsic property of the data set used (i.e., the average of all surfaces is not gray) is a potential cause of error, but to avoid confusion we will ignore this in the remainder of this section.
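Under our reading of (19)-(23) (the placement of the scale factors $\alpha$-$\delta$ follows the ranges stated above and is an assumption of this sketch), the surface generation can be illustrated with sampled spectra, where the integrals become dot products over wavelength samples; all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_surfaces(I_k, I_k2, rho, S_i, S_j):
    """Generate the five surface types from sampled spectra.
    I_k, I_k2: illuminant spectra (L,); rho: camera sensitivities
    (L, 3); S_i, S_j: surface reflectances (L,)."""
    alpha = rng.uniform(1, 4)       # shadow attenuation
    beta = rng.uniform(1, 4)        # specular strength
    gamma = rng.uniform(0, 0.25)    # second-illuminant strength
    delta = rng.uniform(0, 0.25)    # interreflection strength

    m = (I_k[:, None] * rho * S_i[:, None]).sum(axis=0)
    p = (I_k[:, None] / alpha * rho * S_i[:, None]).sum(axis=0)
    q = p + gamma * (I_k2[:, None] * rho * S_i[:, None]).sum(axis=0)
    h = m + beta * (I_k[:, None] * rho).sum(axis=0)
    I_star = I_k + delta * I_k * S_j
    r = (I_star[:, None] * rho * S_i[:, None]).sum(axis=0)
    return m, p, q, h, r
```

Edges are then simply differences of these RGB triplets, e.g., a specular edge is $m - h$.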
4.2 Different Number of Edges
In the first experiment, the performance of two edge-based
color constancy algorithms is analyzed with respect to
different edge types. Using the spectral data set, a number
of random surfaces are created, including n material surfaces,
n intensity shadow surfaces, n colored shadow surfaces,
n specular surfaces, and n interreflection surfaces, resulting
in a total of 5n surfaces. Note that to create these surfaces, the
same illuminant is used. Using these surfaces, n material
edges, n intensity shadow edges, n colored shadow edges,
n specular edges, and n interreflection edges are created. Two
edge-based color constancy algorithms are evaluated (the
Gray-Edge algorithm and the derivative-based Gamut
mapping) by gradually increasing the number of edges. For
each value of $n$ ($n \in \{4, 8, 16, 32, 64, 128, 256, 512, 1{,}024\}$), the
experiment is repeated 1,000 times.
In Fig. 1 (top graph), the median angular error for the Gray-Edge algorithm is shown, differentiated by these five edge
types. The angular error when using material edges is
significantly higher than when using intensity shadow edges.
As expected, color constancy based on specular edges results
in a close to ideal performance. Further, the performance
using the colored shadow edges and the interreflection edges is similar to the performance when using the material edges.

Fig. 1. Median angular error of the Gray Edge (top) and the derivative-based Gamut mapping (bottom), including a 95 percent confidence interval, using several different edge types.
The performance of the derivative-based Gamut mapping,
see Fig. 1 (bottom graph), shows a similar trend. Using
specular edges results in near-perfect color constancy, and
intensity shadow edges are more favorable than the three
other types of edges.
4.3 Gamuts of Different Edge Types
To analyze the high error for the material edges and to
explain the differences in performance of the other types of
edges, we will go into detail on the underlying assumptions. First, consider the assumption of material edge-based
color constancy:
$$\lim_{N \to \infty} \frac{1}{N} \sum_{1}^{N} \mathrm{Rand}_{i,j}\left( \left| m_{ik} - m_{jk} \right| \right) = a \int_\omega I_k(\lambda)\, \boldsymbol{\rho}(\lambda)\, d\lambda, \qquad (24)$$

where $m_{ik}$ is a material surface as defined by (19), $N$ is the number of edges, $\mathrm{Rand}_{i,j}(\cdot)$ is a function that randomly selects two surfaces $i$ and $j$, and $a$ is a scalar value. Further, a material edge (the difference between two material surfaces) is computed as

$$m_{ik} - m_{jk} = \int_\omega I_k(\lambda)\, \boldsymbol{\rho}(\lambda)\, \left( S_i(\lambda) - S_j(\lambda) \right) d\lambda. \qquad (25)$$

Substituting (25) into (24) results in the following underlying assumption:

$$\lim_{N \to \infty} \frac{1}{N} \sum_{1}^{N} \mathrm{Rand}_{i,j}\left( \left| S_i(\lambda) - S_j(\lambda) \right| \right) = a. \qquad (26)$$
Fig. 2. Gamut in opponent color space of several edge types rendered under one illuminant, which is specified by the fourth axis. Shown are material edges in (a), intensity shadow edges in (b), colored shadow edges in (c), specular edges in (d), and interreflection edges in (e).
Under the assumption that the surface reflectances are normally distributed with mean $\mu$ and variance $\sigma^2$, subtracting two surfaces results in a new random variable with a larger variance ($2\sigma^2$). On the other hand, considering the assumption of (intensity) shadow edge-based color constancy, we have

$$\lim_{N \to \infty} \frac{1}{N} \sum_{1}^{N} \mathrm{Rand}_{i}\left( \left| m_{ik} - p_{ik} \right| \right) = a \int_\omega I_k(\lambda)\, \boldsymbol{\rho}(\lambda)\, d\lambda, \qquad (27)$$

where $p_{ik}$ is an intensity shadow surface as defined by (20). Substituting (20) into (27) results in the following underlying assumption:

$$\lim_{N \to \infty} \frac{1}{N} \sum_{1}^{N} \mathrm{Rand}_{i}\left( \left| S_i(\lambda) \right| \right) = a. \qquad (28)$$

Under the same assumption that the surface reflectances are normally distributed with mean $\mu$ and variance $\sigma^2$, it can be observed that the variance of the intensity shadow edges ($\sigma^2$) is lower than the variance of the material edges ($\sigma^2 < 2\sigma^2$). This implies that a larger number of (different) material edges is required to obtain the same accuracy as shadow-edge-based color constancy.
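The variance argument can be checked numerically in a few lines (a sketch of ours; the difference of two independent normals has twice the variance of a single draw):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, N = 0.5, 0.1, 200_000

# Pairs of surface reflectances drawn from N(mu, sigma^2).
S = rng.normal(mu, sigma, size=(N, 2))

material = S[:, 0] - S[:, 1]   # material edge: difference of two surfaces
shadow = S[:, 0]               # shadow edge: governed by a single surface

print(material.var(), shadow.var())   # ~2*sigma^2 vs ~sigma^2
```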
This analytical derivation is supported by the empirical
distribution of these two edge types. For the ease of
illustration of the physical properties of edge types, the edges are converted to the opponent color space:

$$O1_x = \frac{R_x - G_x}{\sqrt{2}}, \qquad (29)$$

$$O2_x = \frac{R_x + G_x - 2 B_x}{\sqrt{6}}, \qquad (30)$$

$$O3_x = \frac{R_x + G_x + B_x}{\sqrt{3}}, \qquad (31)$$

where $R_x$, $G_x$, and $B_x$ are the derivatives of the $R$, $G$, and $B$ channels, respectively.
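The transform of (29)-(31) is an orthonormal rotation of the derivative vector and can be sketched directly (our helper name):

```python
import numpy as np

def opponent_derivatives(fx):
    """Convert derivatives fx (last axis = R, G, B) to the opponent
    axes O1 (red-green), O2 (yellow-blue), O3 (intensity)."""
    Rx, Gx, Bx = fx[..., 0], fx[..., 1], fx[..., 2]
    O1 = (Rx - Gx) / np.sqrt(2)
    O2 = (Rx + Gx - 2 * Bx) / np.sqrt(6)
    O3 = (Rx + Gx + Bx) / np.sqrt(3)
    return np.stack([O1, O2, O3], axis=-1)
```

A purely achromatic (intensity) edge, e.g., a derivative of $(1, 1, 1)$, has all its energy on the $O3_x$ axis; since the transform is a rotation, the total edge energy is preserved.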
The distribution of edge responses in the opponent color
space is shown in Fig. 2. From these graphs, it can be
derived that the variation in edge color is much higher for
the material edges, Fig. 2a, than for shadow edges, Figs. 2b
and 2c, which is also analytically derived. Further, the
intensity shadow edges are more directed toward the color
of the light source (shown by the fourth axis) than the
colored shadow edges. The shape of the gamut of the colored shadow edges, which appears to be less directed toward the color of the light source than that of other edge types, can be explained by the influence of the second light source.
The gamut of interreflection edges, Fig. 2e, is similar to the
material edges. Finally, specular edges, Fig. 2d, all align
perfectly with the color of the light source (shown by the
fourth axis).
These graphs show that it is beneficial to use edges that
are aligned with the color of the light source. The specular
edges are all distributed on the diagonal representing the
color of the light source, and near-perfect color constancy
can be obtained using these edges. This observation is in accordance with pixel-based highlight analysis, where highlights contain valuable information about the color of the light source [15], [16]. Shadow edges are distributed more densely
around the color of the light source than material edges and
interreflection edges, resulting in a higher performance.
Color clipping. In practice, pixel values are often bound
to a certain maximum value. This effect is called color
clipping. Since the specular surfaces have the highest RGB-values, these surfaces (and consequently the specular edges) risk being affected by color clipping. To analyze this effect, a second experiment is performed in which the generated RGB-values are color clipped at a gradually decreasing value. The results of this experiment for the Gray-Edge algorithm are shown in Fig. 3. The derivative-based Gamut mapping reveals a similar trend (not shown here). The performance using the specular edges immediately starts to decrease significantly. The performance using the material and the shadow edges is less affected; the angular error does not significantly increase until 40 percent of the total number of surfaces are clipped. The effects of color clipping cause the gamuts of the specular edges to shift toward the intensity axis ($O3_x$); hence, the estimate of the illuminant will be biased toward white. Color clipping is a frequently occurring phenomenon and cannot be prevented in practice. To alleviate its effects on the performance of any color constancy algorithm, pixels that are potentially color clipped are often discarded. Practically, this means that pixels with a maximum response in any of the three color channels are not considered while estimating the illuminant.
Fig. 3. Mean angular error using material edges, shadow edges, and specular edges for different clipping values.

To conclude, from an analytical approach it can be derived that using specular edges for edge-based color constancy results in a close to ideal performance because
the specular edges align with the color of the light source.
However, in practice, color clipping may eliminate the
advantages of specular edges and cause a decrease in
performance. Shadow edges contain more variation than
specular edges but are still aligned with the color of the
light source. Consequently, the performance of edge-based
color constancy using shadow edges degrades only slightly
with respect to using highlights. However, since material edges vary more still, their performance degrades further. Although interreflection edges vary less than material
edges, they are not aligned with the color of the light and
hence their performance is the worst.
5 EXPERIMENTS
Experiments using the (iterative) weighted Gray-Edge
method are performed on several data sets. First, experiments are performed on a data set containing indoor images
that are recorded under controlled settings, see Section 5.1.
Then, in Section 5.2, results are reported on two unconstrained data sets with natural images.
5.1 SFU Controlled Indoor
The first data set, denoted by SFU Controlled Indoor [29],
consists of 31 different scenes, recorded under 11 different
light sources, resulting in a total of 321 images. Two
relevant subsets are distinguished to demonstrate the
robustness of the proposed algorithm; one subset contains
223 images with minimal specularities (denoted subset A),
another subset contains 98 images with nonnegligible
dielectric specularities (denoted subset B). The main
difference between these two subsets is the fact that the
images in subset A (some of which are flat Mondrian-like
compositions of colored paper) contain few or no specularities, while all images in subset B contain at least some
highlights. Some examples are shown in Fig. 6 (top row).
Weighting schemes. Results of applying the three
weighting schemes, emphasizing specular, shadow, or
material edges, are shown in Table 1. Interestingly, results
of assigning higher weights to specular edges initially, i.e.,
after one iteration, is worse than assigning higher weights to
shadow edges. The reason for this is explained in Section 3,
i.e., the specular quasi invariant assumes a neutral illumination while the shadow-shading quasi invariant does not.
Hence, the detection of highlights suffers from the colored
light sources.
Running multiple iterations increases the accuracy of the
specular edge detection, and hence the accuracy of the
weighted Gray Edge using the specular weighting scheme,
see Table 1b. Running multiple iterations has only a minor effect on the shadow weighting scheme, while the performance of the material weighting scheme considerably deteriorates; see also Fig. 4a. The latter can be explained by the misclassification rate of the specular edges: When running multiple iterations, fewer specular edges are misclassified as material edges. The fact that the
performance of the shadow weighting scheme is not
affected by running multiple iterations is expected because the shadow-shading quasi invariant is robust to illumination changes (see Section 3.2).
TABLE 1
Mean, Median, and Maximum Angular Errors on the SFU Controlled Indoor Set
All results in this table are obtained using the optimal parameter settings for the proposed iterative weighted Gray Edge (i.e., Minkowski norm = 7, σ = 5, and κ = 8). Table (a) shows the results of the weighted Gray Edge for three different weighting schemes, Table (b) shows the results of applying the iterative weighted Gray Edge for the same three weighting schemes, and Table (c) shows the results of the theoretical scenario, where we compute the weighting schemes from the color-corrected images.
Fig. 4. The effects of applying several iterations to the different weighting schemes in (a). (b) The effects of increasing the value of κ for different weighting schemes. Results are obtained on the SFU controlled indoor set.
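The iterative scheme evaluated in Table 1b alternates illuminant estimation and specular-edge re-detection. A simplified sketch (the Gaussian smoothing, the weight exponent κ, and the stand-in detector `specular_weights_fn` are our assumptions; this is not the exact implementation):

```python
import numpy as np

def gray_edge(img, weights, p=7):
    """Weighted Gray-Edge sketch: a Minkowski mean of weighted gradient
    magnitudes per channel yields the illuminant estimate (smoothing and
    the weight exponent of the full method are omitted for brevity)."""
    gy, gx = np.gradient(img.astype(float), axis=(0, 1))
    grad = np.sqrt(gx ** 2 + gy ** 2)                 # per-channel edge strength
    e = np.sum(weights[..., None] * grad ** p, axis=(0, 1)) ** (1.0 / p)
    return e / np.linalg.norm(e)

def iterative_weighted_gray_edge(img, specular_weights_fn, n_iter=5):
    """Alternate illuminant estimation and specular-edge re-detection: each
    estimate color corrects the image (von Kries), which makes the
    white-light-based specular quasi invariant more reliable."""
    est = np.ones(3) / np.sqrt(3)                     # start from a white light
    for _ in range(n_iter):
        corrected = img / (est * np.sqrt(3))          # diagonal (von Kries) correction
        w = specular_weights_fn(corrected)            # re-detect highlight edges
        est = gray_edge(img, w)                       # re-estimate on the original image
    return est
```

With a detector that is robust to the illuminant (e.g., shadow-shading weights), the loop converges after the first pass, which matches the stability observed for the shadow weighting scheme.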
Results shown in Table 1 are obtained using a relatively high value for κ (κ = 8). The effect of using lower values for κ is shown in Fig. 4b. It can be observed that the specular weighting scheme especially benefits from an increased value for κ, while the shadow weighting scheme is not affected at all.
Influence of edge classification accuracy. The results of
the iterative weighted Gray Edge show that the proposed
method benefits from accurate specular edge detection. An
illustration is presented in Fig. 5, showing the results and
corresponding weight maps for two individual images of
the first, second, and final iteration. It can be seen that the
accuracy of the edge classification and the illuminant
estimates increase simultaneously. These examples imply
that the proposed method can benefit from accurate
specular edge detection, while specular edges can be
detected more accurately on images with more accurate
color constancy. To further confirm that the improved
performance of the iterative weighted Gray Edge is directly
related to the accuracy of the specular edge detection, two
additional experiments are performed.
As the quasi invariants are based on neutral illumination, computing the specular weight map on the images
color corrected with the ground truth would result in the
highest edge detection accuracy. Using this weight map
together with the original uncorrected images to estimate the
color of the light source will give an indication of the
potential of the proposed method. The results of this
(theoretical) experiment are shown in Table 1c, where it can
be observed that the performance of the specular weighting
scheme can especially benefit from even more accurate
specular edge detection.
Influence of different edge classifiers. For the next
experiment, the system proposed in [26] is adapted to
specular edge detection. Patches from the same data set as
used in [26] are manually selected and annotated. Note that
these patches have no overlap with the color constancy data
set. Using features derived from these patches, a classifier (SVM) is learned that is able to distinguish patches containing specular edges from patches without specular edges.
Fig. 5. Two results of running multiple iterations of the specular-weighted Gray Edge. The first and second images show the result after the first and second iteration; the third image shows the result after the algorithm converged. The weight maps used in the corresponding iterations are color coded such that dark red indicates a high weight and dark blue indicates a low weight.
By applying this classifier to a full image using a
sliding window approach, a posterior probability is
obtained for every block of pixels in the image. Finally, a
smoothing filter is applied to reduce the inherent uncertainty of the block-based detection result. These smoothed
posterior probabilities are directly used as weights in (7).
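Converting the block-wise posterior probabilities into a smoothed per-pixel weight map might look as follows (the block size and the box-filter smoothing are our assumptions; the paper does not specify the filter):

```python
import numpy as np

def posteriors_to_weights(block_post, block=16, k=5):
    """Expand per-block classifier posteriors to a per-pixel weight map and
    smooth them with a separable box filter (filter and block size are
    assumptions; the paper only states that the block-based detections
    are smoothed)."""
    w = np.kron(block_post, np.ones((block, block)))  # nearest-neighbour upsampling
    kernel = np.ones(k) / k
    # separable box smoothing along rows, then columns
    w = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, w)
    w = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, w)
    return np.clip(w, 0.0, 1.0)
```

The smoothing blurs the hard block boundaries, so the weights fade gradually between detected and undetected regions.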
Three different classifiers are learned, each with a different accuracy. The accuracy is measured as the area under the ROC curve (AUC), and is determined using cross validation on the training patches. The first classifier uses the SIFT feature in combination with the quasi invariants (AUC = 0.83), the second classifier uses merely the quasi invariants (AUC = 0.78), and the third classifier uses the quasi invariants with suboptimal SVM parameters (AUC = 0.68). Results of the weight maps computed using these three
classifiers are summarized in Table 2. It can be seen that the classifier with the highest accuracy, i.e., the combination of SIFT and the quasi invariants, results in the best color constancy performance. Moreover, the median angular error of the suboptimal classifier using the quasi-invariant features is outperformed by the optimal classifier using the same features.
TABLE 2
Results of Using Weighting Schemes Computed by Applying Different Specular Edge Detection Classifiers on the SFU Controlled Indoor Set
The area-under-the-ROC-curve values are determined using cross validation on the training set.
TABLE 3
Comparison to Other Algorithms on the SFU Controlled Indoor
Note that cross-validation results are obtained using threefold cross validation (the reported results are averaged over 100 repetitions).
Finally, to verify whether the edge detection accuracy of
the training set corresponds to the accuracy on the test
images, we manually annotated highlights in the SFU
Controlled Indoor set. Using these manually labeled highlights and the output of the three classifiers, we are able to
relate the classifier performance to the color constancy
output. The accuracy of the classifier is dependent on the
threshold on the classifier posterior probability, e.g., a low
threshold will classify many edges as highlight and
consequently result in high recall but low precision, while
a high threshold will classify few edges as highlight,
resulting in low recall but high precision. The F-measure
is often used to summarize the tradeoff between precision
and recall, so we use this measure here. For each method,
we select the threshold which yields the highest F-measure.
The results of the three classifiers are 0.960 (SIFT-feature in
combination with the quasi invariants), 0.729 (quasi invariants), and 0.513 (quasi invariants with suboptimal SVM parameters). These results are in accordance with the color
constancy performance in Table 2 and confirm that the
proposed method benefits from more accurate specular
edge detection.
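Selecting the threshold that maximizes the F-measure can be sketched as a simple sweep over candidate thresholds (the toy posteriors and labels below are hypothetical):

```python
import numpy as np

def best_f_threshold(posteriors, labels):
    """Sweep the posterior threshold and return the (threshold, F-measure)
    pair maximizing F = 2PR / (P + R) for the highlight class."""
    best_t, best_f = 0.0, -1.0
    for t in np.unique(posteriors):
        pred = posteriors >= t
        tp = np.sum(pred & labels)
        if tp == 0:
            continue                      # F-measure is zero or undefined
        precision = tp / np.sum(pred)
        recall = tp / np.sum(labels)
        f = 2 * precision * recall / (precision + recall)
        if f > best_f:
            best_t, best_f = float(t), float(f)
    return best_t, best_f

# hypothetical posteriors and ground-truth highlight labels
post = np.array([0.1, 0.4, 0.35, 0.8, 0.9])
lab = np.array([False, False, True, True, True])
t, f = best_f_threshold(post, lab)
```

A low threshold yields high recall but low precision, a high threshold the reverse; the sweep finds the best tradeoff.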
Comparison to state-of-the-art. Results of state-of-the-art
methods are shown in Table 3. It should be noted that the
proposed method using specular weights, as well as several
other state-of-the-art methods, contains parameters that
considerably influence the results. Therefore, we report
the performance of applying cross validation to determine
the parameter settings automatically (denoted cross-val. in
Table 3). The automatic selection of parameters is performed by threefold cross validation, where the reported
results are the average of 100 runs.
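Applied to precomputed per-image angular errors for each candidate parameter setting, the threefold cross-validation protocol can be sketched as follows (a single random split; the paper averages 100 such runs):

```python
import numpy as np

def threefold_cv_error(errors_per_setting, rng):
    """Threefold cross validation over parameter settings: for each fold,
    pick the setting with the lowest mean angular error on the other two
    folds and report its error on the held-out fold.
    errors_per_setting: array of shape (n_settings, n_images)."""
    n = errors_per_setting.shape[1]
    folds = np.array_split(rng.permutation(n), 3)
    test_errs = []
    for k in range(3):
        train = np.concatenate([folds[j] for j in range(3) if j != k])
        best = errors_per_setting[:, train].mean(axis=1).argmin()
        test_errs.append(errors_per_setting[best, folds[k]].mean())
    return float(np.mean(test_errs))
```

Averaging the held-out errors over many random splits reduces the variance of the reported number.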
It can be derived that the proposed method is comparable to the state of the art. For the gamut mapping, the
implementation of [13] is used. All images recorded under
one light source (syl-50MR16Q) are used to construct the
canonical gamut, where we make sure that when testing
images from a particular scene (e.g., ball), the corresponding
training image of that scene (e.g., ball under syl-50MR16Q) is
not used for computation of the canonical gamut. The mean
angular error of the proposed method is slightly higher than
the gamut mapping and the general Gray World, but the
median is considerably lower. Further, using the Wilcoxon sign test at the 95 percent confidence level [28] (results are not visualized here), it is found that the proposed method is significantly better than all methods except the general Gray World. Finally, we would like to note that the proposed method outperforms the unweighted Gray Edge for all choices of σ and the Minkowski norm, provided an appropriate value for κ is selected.
Fig. 6. Some example images of the data sets used. The top row shows some examples of the SFU Controlled Indoor set, the middle row shows examples of the color-checker set, and the bottom row shows some examples of the SFU Gray-ball set.
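As a simplified stand-in for the significance test of [28], a plain two-sided sign test over per-image errors can be sketched as follows (the test actually used in the paper may differ in detail):

```python
import math

def sign_test_significant(errs_a, errs_b, alpha=0.05):
    """Two-sided sign test: does method A differ significantly from B,
    judged by the number of images on which it attains a lower angular
    error? Ties are discarded, per the usual sign-test convention."""
    wins = sum(a < b for a, b in zip(errs_a, errs_b))
    n = sum(a != b for a, b in zip(errs_a, errs_b))
    k = min(wins, n - wins)
    # two-sided p-value under Binomial(n, 1/2)
    p = min(1.0, 2 * sum(math.comb(n, i) for i in range(k + 1)) * 0.5 ** n)
    return p < alpha
```

The decision depends only on how often one method beats the other, not on the magnitude of the error differences.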
5.2 Uncontrolled Data Sets
The next experiments are performed on two uncontrolled
data sets containing a variety of scenes. The first data set,
denoted SFU Gray-ball, consists of 15 clips with a total of
11,346 images [30]. These images are stored in a nonlinear
device-RGB color space (NTSC-RGB), so to create linear
images we applied gamma correction with γ = 2.2 and
recomputed the ground truth using these linear images.
Further, since the correlation among the images is rather
high, video-based analysis was applied to decorrelate the
visual content of the data set [31], resulting in a smaller but
uncorrelated data set containing 1,135 images (the reported
results are obtained on the test set, consisting of 70 percent
of the uncorrelated data, as indicated by [31]). The second
data set, denoted Color checker, contains 568 images [32].
This data set uses a Macbeth Color Checker that is
carefully placed within the scene to capture the ground
truth. Note that the latter of the two data sets does not
need additional processing to acquire linear images.
Examples of these data sets are shown in Fig. 6 (middle
and bottom rows).
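The linearization applied to the SFU Gray-ball images is a simple inverse-gamma power law; a sketch (treating the set's transfer function as a pure power law is an assumption):

```python
import numpy as np

def linearize(img, gamma=2.2):
    """Undo a display gamma to approximate linear device RGB (simple power
    law; the exact transfer function of the NTSC-RGB images is assumed)."""
    x = np.clip(np.asarray(img, dtype=float) / 255.0, 0.0, 1.0)
    return x ** gamma

lin = linearize(np.array([0, 128, 255]))
```

Mid-gray values are pushed down considerably, which is why the ground truth has to be recomputed on the linearized images.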
The results on these two data sets mostly agree with the
previous experiments (see Table 4). The error when using
specular-based weights decreases as the algorithm is applied for multiple iterations, while the error when using shadow-based weights remains roughly stable. Although the increase
in performance on the SFU Gray-ball set is only marginal
with respect to the unweighted Gray-Edge (a decrease in
mean angular error of approximately 2 percent is not
perceptually significant [33]), the proposed method still
performs significantly better than all other methods according to the Wilcoxon sign test (at the 95 percent confidence
level). An overview of all results is shown in Table 5. A
possible reason for the apparent lack of improvement with
respect to the unweighted Gray Edge on this data set is the
number of color clipped pixels: On average, 5 percent of the pixels in this data set are possibly clipped (a pixel is possibly clipped if it has a maximum response in either of the three channels). The color-checker set, for instance, contains only 0.5 percent color-clipped pixels. Since these pixels are ignored during the estimation of the light source, they cannot contribute to more accurate estimations. In fact, as was shown in Section 4, the accuracy will rapidly degrade if a significant percentage of the pixels with high intensity (likely specular pixels) is ignored.
TABLE 4
Mean, Median, and Maximum Angular Errors on the Two Uncontrolled Data Sets
The results on the SFU Gray-ball set are obtained with κ = 10, the results on the color-checker set with κ = 50 (both sets are processed with σ = 1).
Applying the proposed method to the color-checker set
results in state-of-the-art performance, see Table 6. The
proposed method significantly outperforms the other
methods according to the Wilcoxon sign test, including
the gamut mapping. It should be noted that the Wilcoxon
sign test is based on the number of images on which one
method performs better than another. In the comparison
between the proposed method and the gamut mapping, it
was found that the proposed method performs better than
the gamut mapping on the majority of the images, resulting
in a statistically significant difference in favor of the proposed
method. Running multiple iterations using the specular
weighting scheme improves the performance considerably:
Compared to the unweighted Gray Edge, a decrease in
angular error of more than 11 percent can be obtained. This
difference is perceptually and statistically significant.
Some example results of both the SFU Gray-ball and the
color-checker set are shown in Fig. 7. Overall, it can be
concluded that the proposed method using specular edges
improves upon the traditional Gray-Edge method. Assigning higher weights to specular edges can lead to an
improvement of up to 11 percent (on the color-checker set).
However, from the experiments in this section, it becomes
clear that the proposed method introduces stronger (but not
more) outliers: the maximum angular error increases on all
data sets, but the median angular error decreases.
6 DISCUSSION
In this paper, it is shown that weighted edge-based color
constancy based on specular edges can significantly
improve unweighted edge-based color constancy. Further,
it is shown that shadow edges contain valuable information.
The reason for these conclusions can be derived as follows:
Edges that are achromatic when viewed under a white light
source will accumulate in a tight gamut and assume the
color of the light source when observed under colored
illumination. This will increase the saturation from 0 to the
saturation of the light source. Consequently, these edges are
well suited for estimating the color of the light source, as all
properties of the light source are contained in this edge.
This was shown in Section 4.
Colored edges, on the other hand, are edges that
correspond to the transition from one surface (e.g., a red
surface when viewed under a white light source) to another
surface (e.g., a blue surface when viewed under a white
light source). The saturation of this red-to-blue edge when
viewed under a white light source can be an arbitrary value.
Moreover, when this edge is observed under a colored light
source, the saturation and color will result in gamuts with
large variation that can take on unpredictable values, from
which it is extremely hard (if not impossible) to estimate the
color of the light source. In general, the hue of an edge is
more altered by the illuminant when the saturation of that
edge under a white light source is lower. More formally, a negative correlation exists between S^w and d^w_u, where S^w is the saturation of an edge under a white light source w and d^w_u is the distance between that edge under the white light source w and the same edge viewed under an unknown light source u.
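This negative correlation can be checked with a quick simulation: sample random edge colors under a white light, apply a von Kries illuminant change, and correlate saturation with the induced angular shift (the illuminant and the sampling below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
u = np.array([0.9, 0.7, 0.4])                # hypothetical colored illuminant
edges_w = rng.uniform(0.05, 1.0, (500, 3))   # edge colors under a white light
edges_u = edges_w * u                        # von Kries illuminant change

def saturation(c):
    # crude saturation measure: 1 - min/max over the RGB channels
    return 1.0 - c.min(axis=1) / c.max(axis=1)

def angular_shift(a, b):
    cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

s_w = saturation(edges_w)                    # S^w
d_wu = angular_shift(edges_w, edges_u)       # d^w_u
corr = np.corrcoef(s_w, d_wu)[0, 1]          # negative in this simulation
```

Near-achromatic colors rotate toward the illuminant direction, while strongly saturated colors barely change direction, producing the negative correlation.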
From this, it can be concluded that edges that are
unsaturated under a white light source are good candidates
for estimating the color of the light source. Specular and
shadow edges are examples of such edges. However,
specular edges are difficult to detect because of disturbances like a colored illumination.
TABLE 5
Comparison to Other Algorithms on the SFU Gray-Ball Set
TABLE 6
Comparison to Other Algorithms on the Color-Checker Set
Fig. 7. Some example results of several methods, applied to the uncontrolled data sets. Note that these images are gamma-corrected for better visualization, but the estimation and correction are performed on the linear images. The value reported in the bottom right corner indicates the angular error. The top two rows show examples of the SFU Gray-ball set with, from left to right, the original image, the result of correction with the ground truth, the proposed method, the first-order Gray Edge, the Gray World, and using I.I.C. Space. The bottom two rows show examples of these algorithms applied to the color-checker set.
Using a specular edge detector
that is dependent on the color of the illumination will
introduce the necessity of running multiple iterations. The
shadow edges can be detected regardless of the color of the
light source, but are of less value than the specular edges.
If we assume highlights are the most important cue for
the estimation of the color of the light source, it becomes
obvious that Gray-Edge-based methods gain more from this
knowledge than pixel-based methods. In a typical scene, the
number of pixels that coincide with the illuminant direction
is vastly outnumbered by the number of pixels that do not
coincide with this direction: Most pixels are not highlights
or perfect reflectances. For edges, a similar conclusion
holds, although the ratio of edges that coincide with the
illuminant direction versus edges that do not is likely to be
larger. Indeed, uniformly colored patches will only have
nonzero edge energy near the boundary of the patch, so
small patches will generate relatively more edges with
nonzero energy than large patches. Consequently, as highlights (or other pixels that coincide with the color of the
light source) are often small regions in an image, the edges
that are caused by these regions are likely to stand out
among the other edges more than highlight pixels among
other pixels. Note, however, even though edges are more
likely to coincide with the illuminant direction than pixels,
this does not mean pixels are inferior to edges at all times.
Moreover, even scenes with few or no edges or pixels that
coincide with the illuminant direction could result in
accurate illuminant estimates: The average of a scene can
still be an accurate cue for the color of the light source if
there is enough variation among the pixels or edges.
The main advantage of the proposed (iterative) weighted
Gray Edge over existing methods is that additional
information, which is provided by the different edge types,
is used. This information, when available, results in more
accurate illuminant estimates. Moreover, the proposed
method connects two theories involving color image
analysis, i.e., the Gray Edge for color constancy and the
Quasi Invariants for edge classification. The disadvantage
of the proposed method is that, in case of misclassification of the edge types, the method may yield lower performance.
The weighted Gray Edge inherits the weakness of the
regular Gray Edge: opaque parameter selection, i.e., the
optimal parameters are difficult to select without prior
knowledge of the input images.
7 CONCLUSION
In this paper, the influence of different edge types on the
performance of edge-based color constancy is analyzed.
It is shown that weighted edge-based color constancy
based on specular edges can result in accurate illuminant
estimates. Using a weight map that is based on shadow
edges performs slightly worse than specular edges, but
considerably better than using material edges. However, the
accuracy of the detection of specular edges is degraded by
failing assumptions of the quasi invariants (like the
assumption of a white light source). Iteratively classifying
specular edges and estimating the illuminant can considerably reduce the dependency of specular weights on the
intrinsic assumption of a white light source, and hence
result in a better performance. Weight maps that put more
emphasis on shadow edges are not dependent on the color
of the light source, and result in stable illuminant estimates.
Experiments on images that are recorded under controlled circumstances demonstrate that the proposed
iterative weighted Gray-Edge algorithm based on highlights reduces the median angular error by approximately 25 percent. Further, in experiments on images that are
recorded in an uncontrolled environment, improvements in
angular error up to 11 percent with respect to unweighted
edge-based color constancy are obtained.
ACKNOWLEDGMENTS
This work has been supported by the EU project ERGTSVICI-224737, by the Spanish Research Program Consolider-Ingenio 2010: MIPRCV (CSD2007-00018), and by the Spanish project TIN2009-14173. Joost van de Weijer acknowledges
the support of a Ramon y Cajal fellowship.
REFERENCES
[1] T. Gevers and A. Smeulders, "Pictoseek: Combining Color and Shape Invariant Features for Image Retrieval," IEEE Trans. Image Processing, vol. 9, no. 1, pp. 102-119, Jan. 2000.
[2] M. Fairchild, Color Appearance Models, second ed. John Wiley & Sons, 2005.
[3] S. Hordley, "Scene Illuminant Estimation: Past, Present, and Future," Color Research and Application, vol. 31, no. 4, pp. 303-314, 2006.
[4] M. Ebner, Color Constancy. Wiley-IS&T Series in Imaging Science and Technology, John Wiley & Sons, 2007.
[5] G. Buchsbaum, "A Spatial Processor Model for Object Colour Perception," J. Franklin Inst., vol. 310, no. 1, pp. 1-26, 1980.
[6] G. Finlayson and E. Trezzi, "Shades of Gray and Colour Constancy," Proc. Color Imaging Conf., pp. 37-41, 2004.
[7] E. Land, "The Retinex Theory of Color Vision," Scientific Am., vol. 237, no. 6, pp. 108-128, Dec. 1977.
[8] G. Finlayson, S. Hordley, and P. Hubel, "Color by Correlation: A Simple, Unifying Framework for Color Constancy," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 11, pp. 1209-1221, Nov. 2001.
[9] G. Finlayson, S. Hordley, and I. Tastl, "Gamut Constrained Illuminant Estimation," Int'l J. Computer Vision, vol. 67, no. 1, pp. 93-109, 2006.
[10] D. Forsyth, "A Novel Algorithm for Color Constancy," Int'l J. Computer Vision, vol. 5, no. 1, pp. 5-36, 1990.
[11] A. Chakrabarti, K. Hirakawa, and T. Zickler, "Color Constancy beyond Bags of Pixels," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2008.
[12] H. Chen, C. Shen, and P. Tsai, "Edge-Based Automatic White Balancing with Linear Illuminant Constraint," Visual Comm. and Image Processing, SPIE, 2007.
[13] A. Gijsenij, T. Gevers, and J. van de Weijer, "Generalized Gamut Mapping Using Image Derivative Structures for Color Constancy," Int'l J. Computer Vision, vol. 2/3, pp. 127-139, 2010.
[14] J. van de Weijer, T. Gevers, and A. Gijsenij, "Edge-Based Color Constancy," IEEE Trans. Image Processing, vol. 16, no. 9, pp. 2207-2214, Sept. 2007.
[15] K. Barnard, V. Cardei, and B. Funt, "A Comparison of Computational Color Constancy Algorithms; Part I," IEEE Trans. Image Processing, vol. 11, no. 9, pp. 972-984, Sept. 2002.
[16] K. Barnard and B. Funt, "Color Constancy with Specular and Non-Specular Surfaces," Proc. Color Imaging Conf., pp. 114-119, 1999.
[17] R. Tan, K. Nishino, and K. Ikeuchi, "Color Constancy through Inverse-Intensity Chromaticity Space," J. Optical Soc. Am. A, vol. 21, no. 3, pp. 321-334, 2004.
[18] K. Barnard, G. Finlayson, and B. Funt, "Color Constancy for Scenes with Varying Illumination," Computer Vision and Image Understanding, vol. 65, no. 2, pp. 311-321, 1997.
[19] G. Finlayson, B. Funt, and K. Barnard, "Color Constancy under Varying Illumination," Proc. IEEE Int'l Conf. Computer Vision, pp. 720-725, 1995.
[20] G. Finlayson, S. Hordley, C. Lu, and M. Drew, "On the Removal of Shadows from Images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 1, pp. 59-68, Jan. 2006.
[21] J. Geusebroek, R. van den Boomgaard, A. Smeulders, and H. Geerts, "Color Invariance," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 12, pp. 1338-1350, Dec. 2001.
[22] T. Gevers and H. Stokman, "Classifying Color Edges in Video into Shadow-Geometry, Highlight, or Material Transitions," IEEE Trans. Multimedia, vol. 5, no. 2, pp. 237-243, June 2003.
[23] J. van de Weijer, T. Gevers, and J. Geusebroek, "Edge and Corner Detection by Photometric Quasi-Invariants," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 625-630, Apr. 2005.
[24] J. von Kries, “Influence of Adaptation on the Effects Produced by
Luminous Stimuli,” Sources of Color Vision, D. MacAdam, ed.,
pp. 109-119, MIT Press, 1970.
[25] W. Freeman and E. Adelson, “The Design and Use of Steerable
Filters,” IEEE Trans. Pattern Analysis and Machine Intelligence,
vol. 13, no. 9, pp. 891-906, Sept. 1991.
[26] A. Gijsenij and T. Gevers, “Shadow Edge Detection Using
Geometric and Photometric Features,” Proc. IEEE Int’l Conf. Image
Processing, pp. 1-8, 2009.
[27] J. Zhu, K. Samuel, S. Masood, and M. Tappen, “Learning to
Recognize Shadows in Monochromatic Natural Images,” Proc.
IEEE Conf. Computer Vision and Pattern Recognition, pp. 223-230,
2010.
[28] S. Hordley and G. Finlayson, “Reevaluation of Color Constancy
Algorithm Performance,” J. Optical Soc. Am. A, vol. 23, no. 5,
pp. 1008-1020, 2006.
[29] K. Barnard, L. Martin, B. Funt, and A. Coath, “A Data Set for Color
Research,” Color Research and Application, vol. 27, no. 3, pp. 147-151, 2002.
[30] F. Ciurea and B. Funt, “A Large Image Database for Color
Constancy Research,” Proc. Color Imaging Conf., pp. 160-164, 2003.
[31] S. Bianco, G. Ciocca, C. Cusano, and R. Schettini, “Improving
Color Constancy Using Indoor-Outdoor Image Classification,”
IEEE Trans. Image Processing, vol. 17, no. 12, pp. 2381-2392, Dec.
2008.
[32] P. Gehler, C. Rother, A. Blake, T. Minka, and T. Sharp, “Bayesian
Color Constancy Revisited,” Proc. IEEE Conf. Computer Vision and
Pattern Recognition, pp. 1-8, 2008.
[33] A. Gijsenij, T. Gevers, and M. Lucassen, “A Perceptual Analysis of
Distance Measures for Color Constancy,” J. Optical Soc. Am. A,
vol. 26, no. 10, pp. 2243-2256, 2009.
Arjan Gijsenij received the MSc degree in
computer science from the University of Groningen in 2005, and the PhD degree from the
University of Amsterdam in 2010. He is currently
a scientific software consultant for Alten PTS.
Formerly, he was a postdoctoral researcher at the
University of Amsterdam. His main research
interests are in the field of color in computer
vision and psychophysics. He has published
papers in high-impact journals and conferences,
and served on the program committee of several conferences and
workshops. He is a member of the IEEE.
Theo Gevers is an associate professor of
computer science at the University of Amsterdam, The Netherlands, where he is also the
teaching director of the MSc of Artificial
Intelligence. He currently holds a VICI-award
(for excellent researchers) from the Dutch
Organization for Scientific Research. His main
research interests are in the fundamentals of
content-based image retrieval, color image
processing, and computer vision, specifically
in the theoretical foundation of geometric and photometric invariants.
He is chair of various conferences and he is an associate editor for the
IEEE Transactions on Image Processing. Further, he is a program
committee member for a number of conferences, and an invited
speaker at major conferences. He is a lecturer of postdoctoral courses
given at various major conferences (CVPR, ICPR, SPIE, CGIV). He is a member of the IEEE.
Joost van de Weijer received the MSc degree
from Delft University of Technology in 1998 and
the PhD degree from the University of Amsterdam in
2005. He was a Marie Curie Intra-European
fellow at INRIA Rhone-Alpes. He was awarded
a Ramon y Cajal Research Fellowship in
computer science by the Spanish Ministry of
Science and Technology. He has been based in
the Computer Vision Center in Barcelona, Spain,
since 2008. He is a member of the IEEE.