Edge-Aware Color Appearance
MIN H. KIM
Yale University, University College London
TOBIAS RITSCHEL
Télécom ParisTech, MPI Informatik
JAN KAUTZ
University College London
Color perception is recognized to vary with surrounding spatial structure,
but the impact of edge smoothness on color has not been studied in color
appearance modeling. In this work, we study the appearance of color under
different degrees of edge smoothness. A psychophysical experiment was
conducted to quantify the change in perceived lightness, colorfulness and
hue with respect to edge smoothness. We confirm that color appearance, in
particular lightness, changes noticeably with increased smoothness. Based
on our experimental data, we have developed a computational model that
predicts this appearance change. The model can be integrated into existing
color appearance models. We demonstrate the applicability of our model on
a number of examples.
Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/image generation—Display algorithms; I.4.0 [Image Processing and
Computer Vision]: General—Image displays
General Terms: Experimentation, Human Factors
Additional Key Words and Phrases: color appearance, psychophysics, visual
perception
ACM Reference Format:
Kim, M. H., Ritschel, T., and Kautz, J. 2011. Edge-Aware Color Appearance.
ACM Trans. Graph. 30, 2, Article 13 (April 2011), 9 pages.
DOI = 10.1145/1944846.1944853
http://doi.acm.org/10.1145/1944846.1944853
This work was completed while M. H. Kim was at University College
London with J. Kautz, and T. Ritschel was at MPI Informatik. Authors'
addresses: M. H. Kim, Yale University, 51 Prospect St, New Haven, CT
06511, USA; email: minhyuk.kim@yale.edu; T. Ritschel, Telecom ParisTech,
46 rue Barrault, 75013 Paris, France; email: ritschel@telecom-paristech.fr;
J. Kautz, University College London, Malet Place, Gower Street, London,
WC1E 6BT, UK; email: j.kautz@cs.ucl.ac.uk.
Permission to make digital or hard copies of part or all of this work for
personal or classroom use is granted without fee provided that copies are not
made or distributed for profit or commercial advantage and that copies show
this notice on the first page or initial screen of a display along with the full
citation. Copyrights for components of this work owned by others than ACM
must be honored. Abstracting with credit is permitted. To copy otherwise, to
republish, to post on servers, to redistribute to lists, or to use any component
of this work in other works requires prior specific permission and/or a fee.
Permissions may be requested from Publications Dept., ACM, Inc., 2 Penn
Plaza, Suite 701, New York, NY 10121-0701 USA, fax +1 (212) 869-0481,
or permissions@acm.org.
© 2011 ACM 0730-0301/2011/11-ART13 $10.00
DOI 10.1145/1944846.1944853
http://doi.acm.org/10.1145/1944846.1944853
1. INTRODUCTION
The appearance of color has been well-studied, especially in order to derive a general relationship between given physical stimuli
and corresponding perceptual responses. Common appearance studies use neatly-cut color patches in conjunction with a variety of
backgrounds or viewing environments and record the participants’
psychophysical responses, usually regarding lightness, colorfulness,
and hue [Luo et al. 1991]. The elements of the viewing environments
typically include the main stimulus, the proximal field, the background, and the surround [Fairchild 2005]. Although this categorization suggests that the spatial aspect of the viewing environment is
taken into account, previous appearance studies have only focused
on patch-based color appearance w.r.t. background and surround.
The spatial aspects of the main stimulus, such as its smoothness,
have not been considered.
Figure 1 presents two discs with different edge smoothness. The
right disc appears brighter than the left, even though the inner densities of these two discs are identical. The only difference between the
two is the smoothness of their edges. This indicates that our color
perception changes according to the spatial property of surrounding
edges.
Perceptual color appearance in the spatial context has been intensively researched in psychological vision [Bäuml and Wandell
1996; Brenner et al. 2003; Monnier and Shevell 2003]. Typically,
frequency variations of the main stimuli or the proximal field are
explored. The studies are usually set up as threshold experiments,
where participants are asked to match two stimuli with different
frequencies or to cancel out an induced color or lightness sensation.
Although threshold experiments are easy to implement and more accurate, this type of data is not directly compatible with suprathreshold measurements of available appearance data [Luo et al. 1991], which allow one to build predictive computational models of color appearance.

Fig. 1: The right patch appears brighter than the left, while the (inner) densities of the two are actually identical. The smooth edge of the right patch induces our lightness perception into the surrounding white, making it appear brighter. Note that the total amount of emitted radiance is the same for both, with and without blurred edges. We investigate the impact of edge gradation on color appearance in this paper.

ACM Transactions on Graphics, Vol. 30, No. 2, Article 13, Publication date: April 2011.
In this paper, we study the impact of perceptual induction of edge
smoothness on color appearance. This is motivated by Brenner et
al.’s work [2003], which has shown that the edge surrounding a
colored patch of about 1◦ is very important to its appearance. To
this end, we conducted a psychophysical experiment and propose a
simple spatial appearance model which can be plugged into other
appearance models. Our main contributions are:
— appearance measurement data of color with edge variation,
— a spatial model taking into account edge variations.
To appear in the ACM Transactions on Graphics
M. H. Kim et al.

2. RELATED WORK
This section summarizes relevant studies with respect to the perceptual impact of spatial structure.
Background CIE-based tristimulus values can represent the
physical quantity of color, but the perception of a color depends
on many parameters, such as the spatial structure of the surrounding background. For instance, identical gray patches will appear
differently on white and black backgrounds, which is the so-called
simultaneous contrast effect [Fairchild 2005]. In particular, perceived lightness and colorfulness are induced such that they are less
like the surrounding background. We investigate this phenomenon
and, in particular, how induction is influenced by the smoothness of
the edge between a color and its surrounding background.
Spatial Color Appearance Many experiments have been conducted to investigate the influence of spatial structure on color perception; for instance, using a vertical sine-wave luminance grating
that surrounds the test field [McCourt and Blakeslee 1993]. According to McCourt and Blakeslee [1993], the perceived contrast
induction decreases when the spatial frequency of the surrounding
structure is increased. Instead of vertical frequency stimuli, Brenner
et al. [2003] experimented with non-uniform surrounding checkerboard patterns to test chromatic induction. Interestingly, they found
that the directly neighboring surround (up to approx. 1◦ ) is more
influential than the remote surround. Monnier and Shevell [2003]
tested chromatic induction from narrow, patterned, ring-shaped surrounds, and found significant shifts.
Much research has been devoted to contrast, which is closely related
to edges. For instance, Bäuml and Wandell [1996] use a square-wave grating as the main stimulus to determine contrast sensitivity
thresholds. Border effects on brightness and contrast have been
studied by Kingdom and Moulden [1988]. These perceptual effects
have been exploited in unsharp masking (Cornsweet illusion) to
increase perceived contrast [Calabria and Fairchild 2003]. In this
paper, we do not focus on contrast (and contrast thresholds), but
rather on modeling appearance induction due to edge smoothness.
To the best of our knowledge, this is the first work to introduce
and use a color appearance model for counteracting lightness and
colorfulness shifts due to edge variations.
Appearance Modeling Luo et al. [1991] triggered intensive
research into color appearance modeling by providing publicly
available appearance data. More recently, CIECAM02 [Moroney
et al. 2002] established a standard appearance model. Although it
carefully accounts for viewing conditions such as background and
surround, it does not model any spatial properties of the surround.
Nonetheless, there are appearance models that include some spatial aspects. Zhang and Wandell [1997] introduced a simple image
appearance model which relies on straightforward spatial filtering
of the different channels before using CIELAB. Fairchild and Johnson [2002] provide a more advanced image appearance model based
on CIECAM02, which also employs spatial filtering. However, they
only account for the local change in contrast sensitivity, as they are
based on contrast sensitivity measurements [Bäuml and Wandell
1996]. In contrast, we focus on the more specific question of how
edge frequency (i.e., smoothness) changes color appearance w.r.t.
the surrounding background. We answer and quantify this through a
psychophysical magnitude-estimation experiment.
Edge-Aware Imaging There are a number of imaging techniques in computer graphics that rely on edges/gradients – usually
in the context of high-dynamic-range compression [Tumblin and
Turk 1999; Fattal et al. 2002]. However, these techniques are not
concerned with modeling color appearance with respect to edges.
3. PSYCHOPHYSICAL EXPERIMENT
In order to quantify the influence of spatial context on color appearance, we conducted two experiments. First, we conducted a
magnitude estimation experiment, where observers were presented
with a number of colored patches, for which lightness, colorfulness,
and hue values needed to be estimated. This magnitude experiment
explored the luminance interaction between the main stimulus and
the background; different phases were conducted, where the softness of the patch edges and background lightness level were varied
independently. Second, we conducted a hue cancellation experiment to test hue induction by a colored background w.r.t. edge gradation. In this experiment, observers were presented with a number of random color patches on differently colored backgrounds, for which hue needed to be adjusted to imaginary gray.
3.1 Stimuli
Edge-Varying Color Our basic setup for this experiment and
the data analysis were adapted from Luo et al. [1991] and Kim et
al. [2009]. However, our methodology focused on exploring the
impact of the spatial context on color appearance. Each participant
judged a color patch in terms of lightness (w.r.t. a reference white
patch) and colorfulness (w.r.t. a reference colorfulness patch), see
Fig. 3a. The pattern was observed at a 55 cm viewing distance, such
that the patch covered a 2◦ field-of-view. We varied the softness of
the patch edge (from hard to soft, see Fig. 2) but ensured that the
center area Φ always covered at least 1◦ , with the width of the edge
increasing up to ∆Φ = 1.33◦ . Prior to the main experiment, we ran
a pilot experiment to examine the appearance impact of the size of
the solid part. We found that the size of the patch did not affect color
appearance significantly. This is important, since we varied the size
of the solid inner part of our stimuli (see Fig. 2) to ensure that the
overall emitted radiance remains constant.
The smooth edges were created with a smooth-step function
(cubic Hermite spline), evaluated radially. Note that we represent the
smoothness of the edge by its angular width (covering the complete
edge, see Fig. 2) instead of gradient magnitude, as the angular width
can be used directly to build a perceptual model. Three different
background levels (0%, 50%, and 100% of the maximum luminance
level) were used.
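To make the stimulus construction concrete, the radial smooth-step profile can be sketched as follows. This is a minimal sketch in Python; the function names, the solid-radius parameterization, and the flux check are ours, not from the paper:

```python
import math

def smoothstep(t):
    """Cubic Hermite smooth step (3t^2 - 2t^3), clamped to [0, 1]."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def patch_profile(r, r_solid, edge_width):
    """Relative patch intensity at angular radius r (visual degrees):
    1.0 inside the solid centre, falling smoothly to the background (0.0)
    across an annulus of radial width edge_width."""
    if r <= r_solid:
        return 1.0
    return 1.0 - smoothstep((r - r_solid) / edge_width)

def total_flux(r_solid, edge_width, dr=1e-4):
    """Numerically integrate the profile over the disc (flux proportional
    to the integral of p(r) * 2*pi*r dr). Such a check can be used to
    shrink the solid core so that widening the edge keeps the overall
    emitted radiance constant, as in the experiment."""
    flux, r = 0.0, 0.5 * dr
    r_max = r_solid + edge_width + dr
    while r < r_max:
        flux += patch_profile(r, r_solid, edge_width) * 2.0 * math.pi * r * dr
        r += dr
    return flux
```

A hard-edged disc corresponds to a vanishing edge width; widening the edge at a fixed solid radius adds flux, which is why the experiment shrank the solid core as the edge grew.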
Colored Background The magnitude estimation experiment investigated induction by background luminance w.r.t. edge smoothness. We devised a second experiment to investigate chromatic induction from colored backgrounds w.r.t. edge smoothness. We hypothesized that if perceived lightness and colorfulness were influenced by the gradating background, perceived hue would also be affected. However, due to concerns over chromatic adaptation (perceived hue is adapted to the brightest stimulus [Fairchild 2005]), we decided against a magnitude experiment as in the first experiment. Instead, we opted for a hue cancellation experiment [Brenner et al. 2003]. A single patch is shown on a colored background, see Fig. 3c. The background is one of eight different hues (average luminance CIE L∗ = 40.79 and chrominance C∗ = 37.40, see Fig. 3d), and the patch smoothness is varied in the same manner as in the magnitude experiment (observer distance and patch size also remain the same).

Fig. 2: Edge smoothness variation of the test color patch. Participants performed a magnitude estimation experiment with different edges. The edge width values (∆Φ) for (a), (b), (c), and (d) are 0.08◦, 0.50◦, 0.92◦, and 1.33◦, respectively. Hence, the patch sizes (Φ + ∆Φ) varied from 2.08◦ to 3.33◦.
3.2 Experimental Procedure
Magnitude Estimation We conducted a series of magnitude
estimation experiments. To this end, the viewing patterns were presented on a calibrated computer monitor (23-inch Apple Cinema
HD Display, characterized according to the sRGB color standard
(gamma: 2.2); max luminance of 188.89 cd/m²). The spectral power distributions of all the color stimuli were measured in 10 nm intervals with a GretagMacbeth EyeOne spectrometer.
Six trained participants, who passed the Ishihara and City University vision tests, were shown twenty color patches in a dark viewing
environment in random order in each experimental phase (different
background luminance and edge smoothness). See Figure 3 for the
distribution of the color patches and Table I for the different phases.
Participants were asked to produce three integer scales for lightness
(0–100), colorfulness (0 – open scale), and hue (0–400) of the solid
center part of each color patch [Kim et al. 2009]. The participants
completed the twelve phases in approximately three hours without counting break times. They also completed a one-hour training
session in the same experimental setting.
We tested the reproducibility of the experiment by repeating the same phase with three different participants on different days. The coefficients of variation (CV), i.e., the RMS error w.r.t. the mean, between these two repetitions of the same stimuli (repeatability) were 11.74% for lightness, 22.47% for colorfulness, and 4.43% for hue. The CVs of inter-observer variance were 13.18% for lightness, 25.91% for colorfulness, and 6.56% for hue. This is in good agreement with previous studies [Luo et al. 1991; Kim et al. 2009].
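As a concrete reading of the reported numbers, the CV metric can be sketched as follows. We assume the common colour-science definition, RMS error normalised by the mean of the reference data; the paper's exact normalisation is taken to match:

```python
import math

def coefficient_of_variation(predicted, reference):
    """CV in percent: RMS difference between paired observations,
    normalised by the mean of the reference set. This is the usual
    colour-science convention; the paper's normalisation is assumed
    to be the same."""
    n = len(reference)
    rms = math.sqrt(sum((p - q) ** 2 for p, q in zip(predicted, reference)) / n)
    return 100.0 * rms / (sum(reference) / n)
```

A CV of 0% means perfect agreement; two identical repetitions of a phase would therefore score 0.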
Hue Cancellation Magnitude estimation experiments were followed by hue cancellation experiments. Viewing patterns, which contained only a random color patch on a colored background, were adjusted by the same participants on the same display, similar to [Brenner et al. 2003]. For a given colored background, participants were asked to adjust a patch (varying edge smoothness, see Fig. 2) with an initial random main color such that it appeared as neutral gray. The participants were able to control hue and chroma, but not luminance, to yield neutral gray. Note that no white point was shown, just the background and the patch itself, to avoid potential hue adaptation to any reference color. We varied the initial patch color to have five different luminance levels and the background to have eight different hues, but consistent chrominance and lightness. The same six participants completed the experiment in approximately two hours (see Table II).

Fig. 3: Viewing patterns and color distributions of the magnitude estimation experiments, (a) & (b), and the hue cancellation experiments, (c) & (d), as observed by participants. Each color patch was shown with four different levels of edge smoothness (random order, see Fig. 2), for each of the viewing conditions.
Phase  Edge Width (∆Φ)  Background Lumin.  Peak Lumin.
1      0.08◦            0.32 cd/m²         188.89 cd/m²
2      0.50◦            0.32 cd/m²         188.89 cd/m²
3      0.92◦            0.32 cd/m²         188.89 cd/m²
4      1.33◦            0.32 cd/m²         188.89 cd/m²
5      0.08◦            22.23 cd/m²        182.52 cd/m²
6      0.50◦            22.23 cd/m²        182.52 cd/m²
7      0.92◦            22.23 cd/m²        182.52 cd/m²
8      1.33◦            22.23 cd/m²        182.52 cd/m²
9      0.08◦            188.72 cd/m²       188.27 cd/m²
10     0.50◦            188.72 cd/m²       188.27 cd/m²
11     0.92◦            188.72 cd/m²       188.27 cd/m²
12     1.33◦            188.72 cd/m²       188.27 cd/m²

Table I: Summary of the twelve phases of our first appearance experiment with twenty color samples each. Each participant made a total of 720 estimations. This experiment was conducted in dark illumination conditions and took about three hours per participant.
Phase  Edge Width (∆Φ)  Back. Aver. L∗  Back. Aver. C∗
13     0.08◦            40.79           37.40
14     0.50◦            40.79           37.40
15     0.92◦            40.79           37.40
16     1.33◦            40.79           37.40

Table II: Summary of the four phases of our hue cancellation experiment with forty initial random color samples each (five different luminance levels (24/42/62/82/100) of the main patch with eight different background hues (3/45/84/133/178/239/266/313) at a fixed chroma of ∼37.40). Each participant performed a total of 40 hue cancellations, such that the patch color appeared neutral on the colored backgrounds (dark viewing conditions). This experiment took about two hours per participant.
Fig. 4: Comparison of the perceived lightness, colorfulness, and hue. The first two rows represent the differences of perceived lightness and
colorfulness on different background luminances (each column). Horizontal axis indicates the smoothness of the edge in terms of angular
edge width (∆Φ). The stimuli are grouped by their respective level of luminance: high (6 patches), middle (9 patches), and low (5 patches).
High luminance patches have higher values than CIE L∗ =70. Low luminance patches have lower values than L∗ =40. The dark background
has a lightness of L∗ =1.53; the mid-gray background has L∗ =41.50; the bright background has L∗ =100. The third row represents the color
difference in CIE ∆E between the colored background and the neutralized patch for three different luminance levels (each column). The given
patches are grouped by their luminance level: dark (L∗ 24), middle (L∗ 42 and 62), and bright colors (L∗ 82 and 100). These color differences
indicate the relative changes of perceived white against colored background.
3.3 Findings

Our experiment quantifies the change in perceived color appearance with respect to edge smoothness, as well as the luminance difference between the patch and the background. Our initial findings can be summarized as follows.

Lightness Perceived lightness is affected by the change of edge smoothness. A softer edge induces perceived lightness more strongly, i.e., perceived lightness is induced more towards the level of the background lightness. For instance, smoother edges on a dark background cause the perceived lightness of a patch to appear darker than on a mid-gray background; smoother edges on a bright background cause the lightness to appear brighter. See Figure 4(a)–(c).

Colorfulness In most phases, colorfulness – compared to lightness – shows only a subtle change with edge smoothness, see Fig. 4(d)–(f). In particular, high luminance colors on a dark background present a clear trend: the colorfulness of bright patches decreases with smoothness, see Fig. 4(d). We believe that in this case colorfulness is indirectly influenced by the decrease in perceived lightness (Fig. 4(a), blue line), which is known as the Hunt effect [Hunt 1994].

Hue In contrast to our initial hypothesis on hue induction, participants were able to adjust the initially colored patch to neutral gray (CIE x=0.3176 & y=0.3263) with a very small variation (average std. dev.: 0.0058). This is despite the fact that the backgrounds were colored and that there was no reference white. Perception of the neutral gray scales shows a small trend of luminance dependency (see Table III). At lower luminance levels, participants picked warmer grays as neutral, but at middle and high luminance levels, they chose colder grays as neutral. However, as shown in Fig. 4(g)–(i), no significant perceived hue changes against different color backgrounds or edge smoothness were observed in the hue cancellation experiment. Figure 5 presents a qualitative comparison of perceived color appearance (lightness/colorfulness/hue) with respect to the smoothness of the edge.

Fig. 5: Qualitative comparison of two different edges (∆Φ = 0.08 and ∆Φ = 1.33).
L∗    X       Y       Z       x       y       CCT
24    8.44    8.23    8.31    0.3376  0.3302  5260 K
42    23.18   23.70   26.91   0.3151  0.3224  6476 K
62    54.05   56.17   63.47   0.3119  0.3242  6628 K
82    106.69  111.51  124.46  0.3118  0.3261  6601 K
100   176.95  186.70  204.58  0.3116  0.3288  6571 K

Table III: Physical measurements of imaginary neutral gray scales in the hue cancellation experiment. The first column indicates the luminance levels of the given random color patches, and the remaining columns—CIE XYZ, xy, and correlated color temperature (CCT)—are the averaged physical readings of the neutral patches chosen by the participants against the colored backgrounds. The display was calibrated to x=0.3112 and y=0.3280 (CCT: 6587 K).
In summary, edge smoothness consistently affects the induction
of perceived lightness. With softer edges, the lightness of a patch
is induced more towards the background lightness. Colorfulness
shows subtle changes and hue seems unaffected.
4. MODELING
Classical perceptual models for color appearance, such as
CIECAM02 [Moroney et al. 2002], assume that the edge of a color
patch is sharp because their appearance measurements were based
on sharp-edged color samples [Luo et al. 1991]. However, our perceptual study found that perceived appearance is affected by the
smoothness of stimuli edges. We now present an appearance model
that takes edge smoothness into account.
As shown in the previous section, color appearance strongly
depends on the shape of the bordering edge, namely the lightness difference between the patch and the surrounding background
∆L = Lpatch − Lbackground , as well as the angular width of the edge
∆Φ. For instance, when a patch is shown with a background darker
than the patch, ∆L has a positive value; when a patch is surrounded
by a brighter background, ∆L has a negative value. The width of the
edge ∆Φ has a positive value only.
In order to model lightness and colorfulness induction, we choose
to modify existing color appearance models, namely CIECAM02
[Moroney et al. 2002], Kim et al. [2009], and CIELCH [CIE 1986],
instead of deriving a model from scratch. We will explain our model
in the context of standard CIECAM02, but it is essentially identical
when plugged into other appearance models – except for different
constants.
As can be seen in Figure 4(a)–(c), lightness induction is fairly linear with respect to edge width. Hence, we model the change in lightness Jδ due to induction as

Jδ = f(∆J, ∆Φ) = −k · ∆Φ · ∆J,    (1)

where ∆J = Jpatch − Jbackground, ∆Φ is the angular edge width, and k is a parameter that we fit based on our experimental data.¹
We group the majority of the phases of our experiment as the training set (phases 1, 2, 4, 5, 7, 8, 9, 11, and 12) and the remaining ones as the test set (phases 3, 6, and 10). Given the changes in appearance δ∆J,∆Φ due to lightness differences ∆J and edge widths ∆Φ of the training data set Ψ, we optimize the parameter k by minimizing the following objective function O:

O = ∑Ψ | f(∆J, ∆Φ) − δ∆J,∆Φ |²    (2)
¹ CIECAM02 denotes perceptual lightness as J; we change notation accordingly.
The optimization yields k = 0.0923 (CIECAM02). We perform a similar optimization for the other appearance models, yielding k = 0.1317 (CIELCH) and k = 0.0567 (Kim et al. [2009]).
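Because Eq. (1) is linear in k, the least-squares fit has a closed form. The following sketch illustrates this; the function name and the synthetic sample format are ours, while the reported k values come from the paper's measured data:

```python
def fit_k(samples):
    """Closed-form least-squares fit of k for the linear model
    f(dJ, dPhi) = -k * dPhi * dJ  (Eq. 1).

    samples: iterable of (dJ, dPhi, delta) tuples, where delta is the
    measured change in lightness. Writing x = -dPhi * dJ, minimising
    sum((k*x - delta)^2) gives k = sum(x*delta) / sum(x*x)."""
    num = den = 0.0
    for dj, dphi, delta in samples:
        x = -dphi * dj
        num += x * delta
        den += x * x
    return num / den
```

Running this on the paper's training phases yields the reported k = 0.0923 for CIECAM02; on perfectly linear synthetic data the generating slope is recovered exactly.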
The main difference between this model and the original CIECAM02 is that we need the perceptual background lightness level Jbackground. The original input parameter Yb for the background is a percentage ratio of the background luminance. We first compute background XYZ values by scaling the reference white point XWYWZW by Yb/100. From this XYZ, we compute the background lightness value Jbackground. The new perceptual lightness value J′ is calculated by adding Jδ to the original lightness Jpatch: J′ = Jpatch + Jδ. Note that colorfulness and chroma must then be computed with this new lightness J′.
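Putting the pieces together, the lightness correction can be sketched as follows. This is a sketch on J only; j_patch and j_background are assumed to come from a full CIECAM02 implementation, which is not shown:

```python
def edge_aware_lightness(j_patch, j_background, edge_width, k=0.0923):
    """Edge-aware lightness J' = J_patch + Jdelta, with
    Jdelta = -k * dPhi * dJ (Eq. 1).

    j_patch, j_background: CIECAM02 lightness J of the patch and the
    background. edge_width: angular edge width dPhi in visual degrees.
    k = 0.0923 is the paper's fit for CIECAM02."""
    delta_j = j_patch - j_background
    j_delta = -k * edge_width * delta_j
    return j_patch + j_delta
```

Consistent with Fig. 4(a), a bright patch on a dark background is predicted to darken as its edge softens, while a dark patch on a bright background is predicted to brighten.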
Colorfulness induction is more subtle, see Fig. 4(d)-(f). We note
that modeling lightness induction already models colorfulness induction to a degree, since a change in predicted lightness will also
change predicted colorfulness. For instance, prediction accuracy for
colorfulness does indeed improve for CIECAM02 (cf. Section 5).
We also experimented with modeling colorfulness induction using linear, quadratic, and cubic polynomials (similar to lightness);
however, prediction of colorfulness did not improve. Since no hue
changes were observed, hue prediction was left unchanged.
Our method is applicable to any color appearance model, e.g.,
CIELAB, RLAB, CIECAM02, or Kim et al. [2009]. As we will see
in the following section, it significantly improves the accuracy of
color appearance models that account for background luminance,
such as CIECAM02 and Kim et al.'s.
5. RESULTS
Figure 7 presents the CV error between the predicted and perceptual
appearance. The dashed red line indicates the result of the original CIECAM02 model. It fails to predict that perceived lightness
increases with edge smoothness, see Fig. 7(a). The solid red line
represents the CV error for CIECAM02 with our edge-aware model.
The lightness prediction is significantly better. An even better improvement is achieved for Kim et al.’s model (blue lines). There is
no improvement for either model for mid-gray backgrounds, which
is to be expected, since lightness perception does not change in that
case (see Fig. 4(b)). The improvement for CIELCH (orange lines)
is not significant for any kind of background, which is unsurprising, as the original model does not take into account background
luminance.
In Figure 7(b), the results for colorfulness prediction are shown.
Colorfulness prediction for CIECAM02 with dark backgrounds
improves with our model; this is also the only case where a clear
colorfulness induction was observed (blue line in Fig. 4(d)). The
colorfulness prediction of Kim et al.’s model does not improve,
as the colorfulness computation in their model does not directly
depend on relative lightness. Similarly, the chroma prediction of the
CIELCH model does not improve.
Fig. 6: Qualitative comparison of the results for ∆Φ = 1.33 on dark and bright backgrounds.
(a) Lightness prediction  (b) Colorfulness prediction
Fig. 7: Comparison between CIELCH, CIECAM02, Kim et al. [2009] and their edge-aware counterparts. In both subfigures, (a) and (b),
phases 1–4 are with a dark background; phases 5–8 are with a mid-gray background; phases 9–12 are with a bright background. Within these
phase groups, higher phase numbers present smoother edge gradation. In particular, lightness and colorfulness predictions are significantly
improved w.r.t. edge smoothness. Among them, the test data set includes phases 3, 6, and 10. For quantitative results of each model, see
Tables IV, V, and VI.
Fig. 8: Overall CV errors for different color appearance models (CAMs),
with and without our spatial enhancement. L∗ and J denote lightness; C∗ and
M denote colorfulness. As can be seen, the CV errors of background-aware
appearance models are considerably reduced, especially for lightness in
CIECAM02 and Kim et al.’s model.
Unfortunately, there is no other publicly available perceptual data for edge-based appearance, so we could not test our model against any external data. We used phases 3, 6, and 10 for cross-validation (they were not part of the training data), which also produced consistent results (see Fig. 7). The overall average results are presented in Fig. 8. Qualitative results for lightness and colorfulness are shown in Fig. 6.

5.1 Applications

In the following, we demonstrate how our edge-aware model (CIECAM02-based) can be used in practice. Note that all figures are optimized for a calibrated 23″ display with 190 cd/m² at a 55 cm viewing distance.

A blurring filter is a commonly used manipulation tool in image-editing software. As evident from our experiment (Fig. 4), blurring can lead to changes in perceived lightness and colorfulness, which we have formalized in our model. We can now use our model to counteract these changes. The logo in Figure 9(a) contains uniform blue and red colors. After applying a Gaussian filter, the colors in image (b) appear not only brighter but also more colorful, even though the actual color values in image (b) are the same as in (a). Before we can use our model, we have to relate the standard deviation σ of the (angular) Gaussian kernel to the (angular) edge width. We numerically derive a direct linear relationship between σ and the resulting edge width, σ = 2.656 ΔΦ, which also produces the same overall slope. We now apply our edge-aware CIECAM02 model (forward and inverse [CIE 2004]) so that the original color appearance is preserved even after blurring. The result can be seen in image (c), where the color appearance now matches image (a).

Figure 10 presents the perceptual impact of anti-aliasing fonts. Fig. 10(a) shows the Arial Italic font without anti-aliasing. The font appears high-contrast, albeit with visible jagged edges. Fig. 10(b) shows the same font but rendered with smooth anti-aliasing. The font now appears smoother, with reduced aliasing artifacts. However, the perceived lightness and contrast of the font are also altered. Note that
ACM Transactions on Graphics, Vol. 30, No. 2, Article 13, Publication date: April 2011.
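The relation σ = 2.656 ΔΦ above lets us map a Gaussian blur kernel to the angular edge width the model expects. A minimal sketch of that conversion follows; the 55 cm viewing distance comes from the text, while the pixel-pitch value is an assumed example, not a figure from the paper:

```python
import math

SIGMA_PER_DEGREE = 2.656  # sigma = 2.656 * delta_phi (numerically derived fit)

def pixels_to_degrees(px, pixel_pitch_mm=0.265, viewing_distance_mm=550.0):
    """Convert a length in pixels on the display to visual angle in degrees.
    pixel_pitch_mm is an assumed example value; 550 mm is the viewing
    distance stated in the text."""
    size_mm = px * pixel_pitch_mm
    return math.degrees(2.0 * math.atan(size_mm / (2.0 * viewing_distance_mm)))

def edge_width_from_blur(sigma_px, **kw):
    """Angular edge width (degrees) induced by a Gaussian blur of standard
    deviation sigma_px (pixels), via sigma = 2.656 * delta_phi."""
    sigma_deg = pixels_to_degrees(sigma_px, **kw)
    return sigma_deg / SIGMA_PER_DEGREE
```

The resulting ΔΦ would then be fed to the edge-aware model (forward and inverse CIECAM02) to compensate the appearance shift; that compensation step itself is omitted here.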
(a) (b) (c)
Fig. 9: The logo in (a) is blurred by a Gaussian filter, yielding (b). Image (b) appears brighter and more colorful than the original (assuming a 55 cm viewing distance and full-page view). Using our model, image (c) produces the same color appearance even after the blurring operation. Note that the actual pixel values in image (c) differ from those in the original image (a).
To appear in the ACM Transactions on Graphics
Edge-Aware Color Appearance
(a) (b) (c)
Fig. 10: Image (a) shows the Arial Italic font without anti-aliasing (30 pt font, zoomed in 200% using nearest-neighbor upsampling for display purposes). Image (b) shows the same font rendered with anti-aliasing. The font is rendered with the same pixel density, but appears lighter than the original due to edge smoothness. With our model, we can render anti-aliased fonts, see image (c), with the same appearance as the original font.
the pixel intensities of the characters are identical to Fig. 10(a). Fig. 10(c) shows the anti-aliased font with our edge-aware model applied, giving equally high contrast as in the original font. The contrast of Fig. 10(c) is physically different from that of Fig. 10(a), but perceptually identical to it.

In Figure 11, we successively downsample a blurry letter. In order to maintain the same lightness impression even for the downsampled letters, we apply our model. At the bottom we show the actual gray level that is used to maintain the same perceptual appearance.

[Figure 11: four successively downsampled letters; measured densities (bottom): 100%, 70%, 40%, 10%.]
Fig. 11: A blurred letter is successively downsampled, which reduces the smoothness of its edges accordingly. The four letters have the same lightness appearance when our edge-aware appearance model is applied. Comparing the actual densities of these letters (bottom) shows that smaller letters have lighter densities.

Figures 13, 14, and 15 show more complex examples. Image (a) in Figs. 13–15 is a sharp source image. We simulate a depth-of-field effect by directly applying a progressively stronger Gaussian blur (bottom to top), yielding image (b) in Figs. 13–15. Again, lightness and colorfulness increase in Fig. 13. In contrast, lightness and colorfulness decrease in Fig. 14, as the castle towers are surrounded by a dark background, compared to the original (assuming full-screen view of the images at a 55 cm distance). Image (c) in Figs. 13–15 shows that our model manages to preserve the original appearance. In Fig. 15(b), the electronic displays on the building at the junction of the two avenues appear darker due to blurring. The displays in Fig. 15(c) preserve the original brightness, compared to Fig. 15(b).

5.2 Validation

We conducted a user study to verify the perceptual applicability of our method. To this end, ten participants were shown five sets of two different stimuli (a standard blurred image and our edge-aware blurred image) on the calibrated LCD display at a 55 cm distance in dark viewing conditions. For each set, a source reference image without blur was inserted between the two stimuli to be compared with them. Participants were asked to choose which stimulus was closer to the reference in terms of color appearance (lightness, colorfulness, and hue). The five sets of images are all shown in this paper (see Figures 1, 9, 13, 14, and 15).

A one-way ANOVA test was employed to validate the statistical significance of our method. As shown in Figure 12, the result indicates that our model produces blurred images that are much closer to the original in terms of lightness and colorfulness than the standard blurred image is, and the difference in scores is statistically significant (p < 1.1 × 10⁻⁵; α = 0.05). This shows that there is a clearly perceptible difference between the original and the standard blurred image in terms of color appearance, whereas our method preserves perceptual color appearance.

[Figure 12: bar graph of similarity scores (0–5) for standard smoothing vs. edge-aware smoothing.]
Fig. 12: This graph from a one-way ANOVA test shows the mean (red line) and 95% confidence intervals (blue trapezoids) of the appearance similarity of the standard blur and the edge-aware blur to the reference. Scores range from 0 (different from the reference) to 5 (identical to the reference) in terms of color appearance. The mean similarity score of the standard blur is 1.0909; the mean of our edge-aware blur is 3.9091. The p value from this test shows statistically significant performance of our method w.r.t. perceptual similarity to the reference (p < 1.1 × 10⁻⁵).
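The one-way ANOVA behind Fig. 12 can be reproduced in a few lines. The sketch below computes the F statistic from scratch for two rating groups; the scores are hypothetical stand-ins (chosen only so the group means match the reported 1.0909 and 3.9091), not the actual study data:

```python
def one_way_anova_f(*groups):
    """F statistic of a one-way ANOVA: between-group variance
    over within-group variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical similarity scores (0-5 scale), NOT the actual study data:
standard = [1, 1, 2, 1, 0, 1, 2, 1, 1, 2, 0]      # mean 1.0909
edge_aware = [4, 4, 3, 4, 5, 4, 4, 3, 5, 4, 3]    # mean 3.9091
print(one_way_anova_f(standard, edge_aware))  # large F => significant difference
```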
5.3
Discussion and Limitations
Prior to the presented magnitude experiments, we conducted several
pilot experiments to determine if background patterns of different
frequencies cause noticeable appearance changes. We found that for
the same average background luminance but different frequencies,
patches appear quite consistent. Previous work [Brenner et al. 2003]
also found these appearance changes to be subtle (albeit measurable).
We therefore do not take spatial frequencies into account. Monnier
and Shevell [2003] found significant shifts for circular chromatic
patterns around a patch. We speculate that these shifts may in fact
be related to shifts due to edge smoothness.
Our stimuli provide a solid color at the center with varying edge
smoothness. Participants were asked to make their judgements by
exclusively considering the solid center part of the color patch. We
have experimented with different edge profiles (Gaussian vs. spline)
and conclude that induction depends on the overall edge width and
slope, but not on the exact shape of the fall off.
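The finding that induction depends on the overall edge width and slope, but not the exact fall-off shape, can be illustrated by comparing two edge profiles of equal width. The sketch below builds a cumulative-Gaussian edge and a cubic smoothstep (spline) edge over the same unit-width transition; both rise monotonically from 0 to 1 and differ only in fall-off shape. The profile parameters are illustrative choices, not the exact stimuli used in the experiment:

```python
import math

def gaussian_edge(t):
    """Cumulative-Gaussian edge profile on t in [0, 1] (sigma chosen so
    the transition effectively spans the unit interval)."""
    return 0.5 * (1.0 + math.erf((t - 0.5) / (0.15 * math.sqrt(2.0))))

def spline_edge(t):
    """Cubic smoothstep (spline) edge profile on t in [0, 1]."""
    return t * t * (3.0 - 2.0 * t)

xs = [i / 100.0 for i in range(101)]
g = [gaussian_edge(x) for x in xs]
s = [spline_edge(x) for x in xs]
# Both profiles ramp monotonically from ~0 to ~1 over the same width;
# only the shape of the fall-off differs.
```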
With regards to the hue cancellation experiment, we found that
participants consistently produced the same gray patches despite
being shown different backgrounds, as discussed before. This is
unexpected since chromatic adaptation depends on the brightest
stimulus as the reference white [Fairchild 2005]. Unfortunately, we
(a) (b) (c)
Fig. 13: The background of image (a) is softened by a Gaussian blur. Image (b) shows the naïve blurring result, where the dark red house appears brighter and the bricks appear more colorful than in the original. Image (c) shows our edge-aware smoothing result, with the appearance of the house maintained as in the source image. We assume each image is displayed full-screen. Image courtesy and copyright of Ray Daly [2010].
(a) (b) (c)
Fig. 14: A depth-of-field effect is simulated by directly applying a progressively stronger Gaussian blur (bottom to top). Image (b) shows the naïve blurring result, where the further castle towers appear less bright and less colorful. Image (c) applies our model to preserve the original appearance. Image courtesy and copyright of Rebekah Travis [2010].
(a) (b) (c)
Fig. 15: Image (a) shows the source image without any blur. Images (b) and (c) show the comparison between naïve blurring and our model. Although our model compensates for the perceptual difference induced by the blur, the change in perceptual luminance is subtle in this case. Image courtesy and copyright of Juan Sanchez [2010].
do not have a good hypothesis for why this is. However, based on our
experiment we were unable to observe hue induction, and therefore
excluded it from our model. Our model also excludes the multiple
surrounding effect, as presented by Monnier and Shevell [2003], but
focuses on lightness and colorfulness changes.
Lightness induction is often rather obvious (see Figs. 1, 9, and 13), but it can also be subtle (see Fig. 15); the latter seems to be the case for cluttered scenes on a dark background.
6. CONCLUSIONS

We have conducted a psychophysical experiment to determine and measure the influence of edge smoothness on color appearance. We found that edge smoothness significantly affected perceived lightness. Colorfulness was also affected, albeit mostly for dark backgrounds. Perceived hue was not influenced. Based on the experiment, we have developed a spatial model that can be used to enhance existing color appearance models, such as CIECAM02. We demonstrated the applicability of our model in imaging applications.

Acknowledgements

We would like to thank the participants for their tremendous effort with our experiments; Ray Daly, Rebekah Travis, and Juan Sanchez, who kindly gave us permission to use their photographs; and Patrick Paczkowski for proof-reading. We would also like to express gratitude to the anonymous reviewers for their valuable comments.
Appendices
Table IV: Quantitative comparison results in CV errors between CIELCH and its edge-aware application.

Phase   L*      C*      h*      L*+S.   C*+S.   h*+S.
1       14.61   31.51   14.15   14.86   31.50   14.13
2       13.46   33.51   13.51   14.44   33.49   13.50
3       12.22   32.37   15.73   13.44   32.36   15.72
4       12.34   33.25   16.63   13.83   33.23   16.62
5       18.74   28.78   14.43   18.78   28.77   14.44
6       15.05   29.56   17.63   15.13   29.55   17.63
7       16.79   30.82   14.16   16.83   30.81   14.15
8       16.94   35.33   16.31   16.99   35.32   16.30
9       13.06   36.94   13.61   13.24   36.94   13.61
10      13.13   31.29   17.03   13.38   31.28   17.03
11      12.18   35.17   20.33   11.76   35.15   20.31
12      11.87   34.59   14.35   10.49   34.58   14.35
Table V: Quantitative comparison results in CV errors between CIECAM02 and its edge-aware application.

Phase   J       M       H       J+S.    M+S.    H+S.
1       7.91    33.69   10.79   7.91    33.69   10.79
2       10.01   36.70   8.32    8.92    34.96   8.32
3       12.05   38.09   7.89    8.86    34.97   7.89
4       13.40   38.88   9.65    9.00    34.11   9.65
5       16.82   22.77   8.49    16.89   22.77   8.49
6       13.14   22.90   9.87    13.43   22.95   9.87
7       14.27   27.20   6.49    14.68   27.39   6.49
8       14.51   22.74   10.80   15.13   22.30   10.80
9       14.14   26.51   7.11    13.68   26.49   7.11
10      18.52   21.10   9.62    15.49   20.67   9.62
11      20.13   25.23   12.85   14.26   25.06   12.85
12      21.92   24.14   6.71    13.77   25.15   6.71
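The improvement reported for CIECAM02 can be checked directly from the table: averaging the lightness columns over the twelve phases shows that the "+S." variant lowers the mean CV error. The values below are transcribed from Table V:

```python
# Lightness (J) CV errors per phase, transcribed from Table V (CIECAM02).
J = [7.91, 10.01, 12.05, 13.40, 16.82, 13.14,
     14.27, 14.51, 14.14, 18.52, 20.13, 21.92]
J_S = [7.91, 8.92, 8.86, 9.00, 16.89, 13.43,
       14.68, 15.13, 13.68, 15.49, 14.26, 13.77]

mean = lambda v: sum(v) / len(v)
# The edge-aware ("+S.") variant reduces the average lightness error.
print(round(mean(J), 2), round(mean(J_S), 2))
```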
Table VI: Quantitative comparison results in CV errors between Kim et al. [2009] and their edge-aware application.

Phase   J       M       H       J+S.    M+S.    H+S.
1       12.43   23.02   11.03   11.56   23.02   11.03
2       17.52   19.99   9.30    11.50   19.99   9.30
3       21.35   20.00   9.57    10.20   20.00   9.57
4       23.07   20.02   11.12   8.77    20.02   11.12
5       15.07   20.11   8.40    15.08   20.11   8.40
6       12.09   22.37   10.34   11.77   22.37   10.34
7       13.38   22.12   7.36    12.82   22.12   7.36
8       13.60   28.39   10.95   13.06   28.39   10.95
9       17.82   19.06   7.28    16.88   19.06   7.28
10      22.11   14.96   9.68    16.52   14.96   9.68
11      24.17   21.16   12.82   14.06   21.16   12.82
12      25.34   20.60   6.82    11.37   20.60   6.82

Experimental Data: The psychophysical experimental data that was used to develop our model is available as an electronic appendix to this article, which can be accessed through the ACM Digital Library.

REFERENCES

Bäuml, K. H. and Wandell, B. A. 1996. Color appearance of mixture gratings. Vision Res. 36, 18, 2849–2864.
Brenner, E., Ruiz, J. S., Herráiz, E. M., Cornelissen, F. W., and Smeets, J. B. J. 2003. Chromatic induction and the layout of colours within a complex scene. Vision Res. 43, 13, 1413–1421.
Calabria, A. J. and Fairchild, M. D. 2003. Perceived image contrast and observer preference I: The effects of lightness, chroma, and sharpness manipulations on contrast perception. J. Imaging Science & Technology 47, 479–493.
CIE. 1986. Colorimetry. CIE Pub. 15.2, Commission Internationale de l'Eclairage (CIE), Vienna.
CIE. 2004. CIE TC8-01 Technical Report, A Colour Appearance Model for Color Management System: CIECAM02. CIE Pub. 159-2004, Commission Internationale de l'Eclairage (CIE), Vienna.
Daly, R. 2010. Brick house on a sunny day. http://www.flickr.com/photos/rldaly/4480673512/.
Fairchild, M. D. 2005. Color Appearance Models, 2nd ed. John Wiley, Chichester, England.
Fairchild, M. D. and Johnson, G. M. 2002. Meet iCAM: A next-generation color appearance model. In Proc. Color Imaging Conf. IS&T, 33–38.
Fattal, R., Lischinski, D., and Werman, M. 2002. Gradient domain high dynamic range compression. ACM Trans. Graph. (Proc. SIGGRAPH 2002) 21, 3, 249–256.
Hunt, R. W. G. 1994. An improved predictor of colourfulness in a model of colour vision. Color Res. Appl. 19, 1, 23–26.
Kim, M. H., Weyrich, T., and Kautz, J. 2009. Modeling human color perception under extended luminance levels. ACM Trans. Graph. (Proc. SIGGRAPH 2009) 28, 3, 27:1–9.
Kingdom, F. and Moulden, B. 1988. Border effects on brightness: A review of findings, models and issues. Spatial Vision 3, 4, 225–262.
Luo, M. R., Clarke, A. A., Rhodes, P. A., Schappo, A., Scrivener, S. A. R., and Tait, C. J. 1991. Quantifying colour appearance. Part I. LUTCHI colour appearance data. Color Res. Appl. 16, 3, 166–180.
McCourt, M. E. and Blakeslee, B. 1993. The effect of edge blur on grating induction magnitude. Vision Res. 33, 17, 2499–2507.
Monnier, P. and Shevell, S. K. 2003. Large shifts in color appearance from patterned chromatic backgrounds. Nature Neuroscience 6, 8, 801–802.
Moroney, N., Fairchild, M. D., Hunt, R. W. G., Li, C., Luo, M. R., and Newman, T. 2002. The CIECAM02 color appearance model. In Proc. Color Imaging Conf. IS&T, 23–27.
Sanchez, J. 2010. Times square at night. http://www.flickr.com/photos/10iggie74950/3996153072/.
Travis, R. 2010. A psychedelic fairytale. http://www.flickr.com/photos/bekahpaige/475267923/.
Tumblin, J. and Turk, G. 1999. LCIS: A boundary hierarchy for detail-preserving contrast reduction. In Proc. SIGGRAPH '99. 83–90.
Zhang, X. and Wandell, B. A. 1997. A spatial extension of CIELAB for digital color-image reproduction. J. Soc. Information Display 5, 1, 61–63.

Received November 2010; accepted January 2011