
2013 UKSim 15th International Conference on Computer Modelling and Simulation

Image Fusion Metrics: Evolution in a Nutshell

Mohammed Hossny, Saeid Nahavandi, Douglas Creighton, Asim Bhatti and Marwa Hassan
Centre for Intelligent Systems Research
Deakin University, Australia
mhossny@deakin.edu.au

Abstract—The image fusion process merges two images into a single, more informative image. Objective image fusion performance metrics rely primarily on measuring the amount of information transferred from each source image into the fused image. Objective image fusion metrics have evolved from image processing dissimilarity metrics. Additionally, researchers have developed many additions to image dissimilarity metrics in order to better value the local, fusion-worthy features in source images. This paper studies the evolution of objective image fusion performance metrics and their subjective and objective validation. It describes how a fusion performance metric evolves, starting with image dissimilarity metrics, their realization in image fusion contexts, their localized weighting factors, and the validation process.

Keywords-Image Fusion Metrics

I. INTRODUCTION

The process of image fusion aims to merge two or more images to produce a new image that is better than the original ones. The term 'better' differs from one context to another [1]. In some contexts, it means holding more information. In other contexts, it means obtaining a more accurate result or reading. In general, an image fusion system takes two or more source images as input and produces one fused image as output. The fusion process applies a fusion algorithm repeatedly on the source images and/or intermediate output images. Almost all present image fusion algorithms fuse only two images. However, there is a growing trend to employ multi-source image fusion to obtain images of better quality, such as High Dynamic Range (HDR) images.

In general, an image fusion performance metric measures the amount of information in the source images that has been transferred to the fused image. The generic fusion performance metric, for a two-image fusion system, simply measures the average distance between the fused image and each source image. However, this simple metric lacks many important properties of image fusion, such as the assumed local structural similarity between the fused image and each source image, and the information density maps of the source and fused images. These missing properties formed the path through which image fusion metrics evolved.

This paper surveys the phases through which image fusion metrics and their validation schemes evolved. Throughout this paper, a multisource (N images) fusion algorithm is applied to a set of images I. Localization and saliency functions will be demonstrated on visual-infrared pairs of images from the TNO image set [2]. Samples of this image set are shown in Fig. 1 and the full image set is available at [3]. The rest of this paper is organized as follows. Section II provides an overview of the image dissimilarity measures that have been employed in image fusion metrics. Section III discusses the localization of image dissimilarity metrics. Weighting and saliency functions are introduced in Section IV. Section V surveys subjective and objective validation schemes of fusion metrics. Finally, Section VI concludes.

II. IMAGE PROCESSING METRICS IN IMAGE FUSION

Image fusion performance metrics evolved from image processing objective quality measures. In the image processing literature, researchers have developed several quality assessment techniques, starting with statistical measures of dispersion (e.g. variance, covariance, and entropy maps), histogram based methods, mean-square-error (MSE), and signal-to-noise ratios (SNR) [4]. Researchers then adopted edge detection based methods and frequency domain methods, and finally arrived at the local structural dissimilarity measure (SSIM) founded in [5]. A comparative study of image dissimilarity measures is beyond the scope of this paper; interested readers may refer to [6] for a comparative study of image dissimilarity and quality metrics.

A. Quality Versus Performance Metrics

The term image quality metric was originally used in the image compression literature while developing lossy image compression algorithms. Image quality metrics were originally developed to measure how close a compressed image is to the original uncompressed image [7]. The term 'quality' refers to the distance between the compressed or transferred image and the original one, and emphasizes how good the compression algorithm is rather than how far the compressed image is from the original one. Today, the image processing field spans a wider range of applications than compression and transmission of images. Therefore, the term image quality metrics has shifted to image dissimilarity metrics. The term dissimilarity emphasizes how close the two images being tested are in the image space.

Figure 1. Visual-infrared image pairs from TNO UN Camp image set [2]. Image names from left to right (i- and v- suffixes refer to infrared and visual
images, respectively): 1801i, 1801v, 1808i, 1808v.

In the image fusion context, few researchers used the term image fusion quality metrics to refer to assessment methods for fused images. However, because there is no clear definition of what an ultimate or perfect fusion should look like, the term fusion quality metric has become less popular lately. Recently, image fusion researchers have used the term 'fusion performance metric' instead.

B. Realization into Image Fusion Context

The realization of image dissimilarity metrics into image fusion performance metrics is straightforward. The main idea is to measure the amount of information transferred from each source image separately into the resulting fused image. Therefore, image dissimilarity measures have been employed to measure the distance between the fused image and each source image. The simplest idea is to average the dissimilarity between the fused image and each of the source images. Assuming Δ_0 : I×I → R, averaging the distances between the fused image and each of the source images that participated in the fusion process will do the job. The general form for N source images is then

Δ(x_1, …, x_N, f) = (1/N) ∑_{i=1}^{N} Δ_0(x_i, f)    (1)

where f is the fused image and x_i, i = 1, …, N, are the source images in a multi-image fusion process. Cross entropy, the information measure [8,9], and the universal quality index [5] are examples of image dissimilarity metrics. The accuracy of image fusion metrics and their suitability to the application vary according to the employed dissimilarity metric, its tuned parameters, and how sensitive it is to the information presented. For example, standard deviation is very sensitive to noise. Entropy and cross-entropy are robust against noise but lack the ability to track abrupt changes. MSE and root-MSE (RMSE) are sensitive to L2 errors.
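As an illustration only (not code from the paper), Eq. (1) can be sketched in a few lines of NumPy, here with RMSE standing in as one hypothetical choice of Δ_0:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error: one possible choice for the dissimilarity Delta_0."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def generic_fusion_metric(sources, fused, delta0=rmse):
    """Eq. (1): average dissimilarity between the fused image and each source image."""
    return float(np.mean([delta0(x, fused) for x in sources]))
```

Any dissimilarity measure with the signature delta0(image, image) -> float could be swapped in.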
III. LOCALIZING IMAGE FUSION METRICS

Wang's idea of capturing the structural similarity locally [5] within images has moved image similarity metrics and image fusion performance metrics to a new level of localization. It has also motivated many researchers to derive new combinations of dissimilarity metrics and localization methods.

Localization of image dissimilarity metrics runs the metric following a moving-window scheme on the source images and the fused image to capture local structural dependencies, similarities, and features. The source and fused images are subdivided into overlapping blocks, and dissimilarities are estimated for corresponding blocks. Localization can be carried out on simple, overlapped, and convoluted blocks. A localized image dissimilarity between two images x_1 and x_2 is formulated as

Δ(x_1, x_2) = (1/|W|) ∑_{w∈W} Δ_0(x_1, x_2 | w)    (2)

where Δ_0(x, y | w) is the quality metric applied to a window w, W is the set of all windows, and |W| denotes the number of windows. The performance of an N-source fusion algorithm is then defined as Δ : I^{N+1} → R and is estimated as

Δ(x_1, …, x_N, f) = (1/(N|W|)) ∑_{i=1}^{N} ∑_{w∈W} Δ_0(x_i, f | w).    (3)
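A minimal sliding-window sketch of Eqs. (2)-(3), under the simplifying assumption of square windows and a dissimilarity function delta0 such as the rmse sketched above:

```python
import numpy as np

def iter_windows(shape, size, step=None):
    """Yield top-left corners of square windows; a step smaller than size gives overlapped blocks."""
    step = step or size
    rows, cols = shape[:2]
    for r in range(0, rows - size + 1, step):
        for c in range(0, cols - size + 1, step):
            yield r, c

def localized_fusion_metric(sources, fused, delta0, size=8, step=None):
    """Eq. (3): dissimilarity averaged over all source images and all windows."""
    windows = list(iter_windows(fused.shape, size, step))
    total = 0.0
    for x in sources:
        for r, c in windows:
            total += delta0(x[r:r+size, c:c+size], fused[r:r+size, c:c+size])
    return total / (len(sources) * len(windows))
```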

The main benefit of localizing a dissimilarity metric is capturing the structural information encoded in the image. For example, mutual information (MI) used to provide a questionable result that is biased towards additive fusion and thus embraces the contrast loss problem [8,9,12]. Localized MI, on the other hand, has overcome the contrast loss problem while valuing the structural hierarchy of the images [11].

Selecting the best window size is challenging. It differs according to the employed dissimilarity metric and the amount of information presented in the images and its spatial distribution. In general, a larger window size drives the fusion metric one step backwards towards a non-localized version. On the other hand, selecting a tiny window size loses the information shared with neighboring blocks, reduces the localized metric to a pixel-by-pixel comparison of downsampled images, and causes instabilities for some metrics, such as standard deviation and histogram-based methods, due to deriving single-color blocks. Figure 2 shows different levels of localization on dynamic range maps.

Figure 2. Localization levels on dynamic range maps of image 1808i (Fig. 1). From left to right: simple-, overlapped-, convoluted-blocks [10], and quadtree decomposition [11] with block sizes 4×4 (top), 8×8 (middle) and 16×16 (bottom). For quadtree decomposition, the localization level determines the minimum block size.

A. Equally-Sized Blocks

In the simplest scenario, the dissimilarity metric is run on the corresponding simple blocks in both images. However, simple blocks produce continuous but non-differentiable maps and lose track of the local structural information shared with adjacent blocks. Using overlapped blocks does capture shared structures between adjacent blocks, but it does not solve the non-differentiability problem. Finally, the most effective localization level, and also the most time consuming one, is using convoluted blocks via linear or non-linear filters to detect these features [5]. It produces smooth, differentiable and continuous maps without any abrupt changes. In [13], Chen and Blum proposed measuring the performance of image fusion algorithms using localized contrast sensitivity filters (CSF). Xydeas and Petrovic proposed another alternative in [14] by using localized gradient information in the source and fused images.

B. Regions

Performing localization with blocks of different sizes or with regions of the compared images has become a new trend in the localization of fusion metrics. The added value local regions provide is measuring the information density across the spatial dimensions of the examined images. In [11], Hossny et al. proposed localizing mutual information using quadtree decomposition to resolve its questionable results that favor additive fusion. In [15], Buntilov and Bretschneider proposed assessing fusion results based on segmented regions of the tested images. They proposed a histogram dependent fusion metric in [16]. Chen and Varshney [17] used CSF filters on local regions in source images to estimate how close each source image is to the resulting fused image.
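For the quadtree-based localization mentioned above, a generic variance-driven decomposition can be sketched as follows; the variance threshold and the split criterion are illustrative choices only, not the entropy-driven criterion used in [11]:

```python
import numpy as np

def quadtree_blocks(image, min_size=4, var_threshold=25.0):
    """Recursively split blocks whose local variance exceeds a threshold (illustrative only)."""
    blocks = []

    def split(r, c, h, w):
        block = image[r:r+h, c:c+w]
        if h <= min_size or w <= min_size or np.var(block.astype(float)) <= var_threshold:
            blocks.append((r, c, h, w))   # homogeneous or minimal block: stop splitting
            return
        h2, w2 = h // 2, w // 2
        split(r, c, h2, w2)
        split(r, c + w2, h2, w - w2)
        split(r + h2, c, h - h2, w2)
        split(r + h2, c + w2, h - h2, w - w2)

    split(0, 0, image.shape[0], image.shape[1])
    return blocks
```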
IV. SALIENCY FUNCTIONS

By definition, image fusion aims at neglecting less-informative regions in a source image where better features are present in the other source image. Therefore, source images contribute differently to the fusion process. This has motivated researchers to weight the contribution of each block in the metric according to the amount of information it holds.

A. Source Dependent Saliency

Some regions in source images add more information to the fused image. Therefore, these blocks should be valued more than other, less-informative regions. In [18], Piella and Heijmans introduced a weighted version of the localized image fusion performance metrics that is defined, for N source images, as

Δ(x_1, …, x_N, f) = (1/|W|) ∑_{w∈W} ∑_{i=1}^{N} λ_i^{s,w} Δ_0(x_i, f | w)    (4)

λ_i^{s∘,w} = s∘(x_i | w) / ∑_{j=1}^{N} s∘(x_j | w)    (5)

where s∘(x | w) and s∘(f | w) are local feature extraction functions of the images and λ_i^{s,w} is the weighting factor with a dynamic range of [0, 1]. The weighting factor is computed in a way that favors the blocks and regions in the source image that have more impact on the fused image. Standard deviation, dynamic range of colors, and entropy are examples of the features used as saliency functions [18].
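A sketch of the source-dependent weighting in Eqs. (4)-(5), assuming local variance as the saliency function s∘ and reusing the hypothetical iter_windows helper sketched earlier:

```python
import numpy as np

def variance_saliency(block):
    """Local variance as one possible saliency function s_o."""
    return float(np.var(block.astype(float)))

def source_weighted_metric(sources, fused, delta0, saliency=variance_saliency, size=8):
    """Eqs. (4)-(5): per-window weights normalized across the source images."""
    windows = list(iter_windows(fused.shape, size))
    total = 0.0
    for r, c in windows:
        fw = fused[r:r+size, c:c+size]
        blocks = [x[r:r+size, c:c+size] for x in sources]
        sal = np.array([saliency(b) for b in blocks])
        lam = sal / sal.sum() if sal.sum() > 0 else np.full(len(blocks), 1.0 / len(blocks))
        total += sum(l * delta0(b, fw) for l, b in zip(lam, blocks))
    return total / len(windows)
```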
B. Fusion Dependent Saliency

The weighting described in Eq. (5) takes into consideration the amount of information in the source images and identifies the blocks' contributions based on a linear mapping. However, the fused image changes according to the employed fusion algorithm and the tuning of its parameters. Equation (5) does not take into consideration the amount of information presented in the fused image and thus neglects the contribution of the fusion algorithm itself. Consequently, recent research has proposed changing the weighting factor in order to accommodate the contribution of fusion algorithms by measuring the relation between the fused image and each source image as

λ_i^{s,w} = s(x_i, f | w) / ∑_{j=1}^{N} s(x_j, f | w)    (6)

where s(x_i, f | w) are the saliency functions. The fusion dependent saliency factor measures the change in the local features, estimated by s∘(x_i | w) for each source image and s∘(f | w) for the fused image, with an arbitrary continuous function g as

s(x, f | w) = g(s∘(x | w), s∘(f | w))    (7)

Quadtree decomposition [19] and covariance [20] are famous examples of fusion dependent saliency functions. In [21], Yang et al. chose the dissimilarity maps between the source images as the arbitrary function g(·, ·) to give higher saliency weight to local features that do not spatially overlap with features in the other source image. Fusion dependent saliency can also be simplified to source dependent saliency by choosing s(x, f | w) = s∘(x | w).
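For illustration only, one hypothetical choice of g in Eq. (7) is the product of the local saliencies of a source block and the corresponding fused block; the resulting s(x, f | w) values can then be normalized across sources exactly as in Eq. (6):

```python
def fusion_dependent_saliency(source_block, fused_block, s0):
    """Eq. (7) with g chosen, purely for illustration, as a product of local saliencies s0."""
    return s0(source_block) * s0(fused_block)
```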

C. Intra-Image Saliency

Another source-dependent weighting factor was added in [18]. It affects the way the overall quality is calculated: it favors salient blocks and regions rather than averaging by the number of windows |W|. The overall performance metric for N-source fusion then becomes

Δ(x_1, …, x_N, f) = ∑_{w∈W} c_w ∑_{i=1}^{N} λ_i^{s,w} Δ_0(x_i, f | w)    (8)

c_w = C_w / ∑_{w′∈W} C_{w′}    (9)

where c_w represents the perceptual importance of each block in the overall quality estimation. Piella and Heijmans chose C_w = max{s(x, f | w), s(y, f | w)}. A thorough survey of local saliency features is presented in [22] and an evaluation mechanism for these saliency functions is presented in [23].
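A sketch of the window-importance weighting in Eqs. (8)-(9) for the two-source case, reusing the hypothetical helpers above and using the source-block saliencies as a simplified stand-in for s(x, f | w) when computing C_w:

```python
import numpy as np

def intra_image_weighted_metric(x, y, fused, delta0, saliency=variance_saliency, size=8):
    """Eqs. (8)-(9): windows weighted by their normalized importance C_w (two sources)."""
    windows = list(iter_windows(fused.shape, size))
    scores, importance = [], []
    for r, c in windows:
        xb, yb, fb = x[r:r+size, c:c+size], y[r:r+size, c:c+size], fused[r:r+size, c:c+size]
        sal = np.array([saliency(xb), saliency(yb)])
        lam = sal / sal.sum() if sal.sum() > 0 else np.array([0.5, 0.5])
        scores.append(float(lam[0] * delta0(xb, fb) + lam[1] * delta0(yb, fb)))
        importance.append(float(sal.max()))   # simplified stand-in for C_w
    cw = np.asarray(importance, dtype=float)
    cw = cw / cw.sum() if cw.sum() > 0 else np.full(len(windows), 1.0 / len(windows))
    return float(np.dot(cw, scores))
```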
V. VALIDATING IMAGE FUSION METRICS

Considering the growing interest in image fusion and the increasing number of sensors, modalities and applications that employ image fusion algorithms and metrics, fusion metrics must be examined for efficiency, accuracy and suitability to solve a certain problem. This section surveys validation techniques for image fusion metrics. It starts by highlighting the problems and disputes that motivated researchers to validate fusion metrics. Then subjective validation techniques are surveyed. Finally, objective validation techniques are surveyed.

A. Motivations

The MI dispute [8,9,11,24], Type-II fusion error [25,26], and application domains [25,27,28] are the main problems that triggered the need for validating image fusion metrics. The MI dispute was first highlighted when Qu et al. noticed that the mutual information (MI) metric suggested that additive fusion (simple averaging) is better than multi-resolution (MR) fusion [8]. This odd result was then generalized as a common drawback of MI [24], creating a clear contradiction with the fact that MR image fusion was introduced to minimize the luminance shift error resulting from additive fusion [11]. Fusion loss and artifacts were first introduced by Petrovic and Xydeas in an argument about the opportunity cost that materialized in terms of fusion artifacts in [26], highlighting an error source that fusion metrics do not measure and fusion algorithms do not minimize. There is currently no objective non-reference test to measure how far a fusion algorithm minimizes Type-II error [25].

Additionally, the application domain does affect the performance of fusion algorithms and metrics as well. It dictates the kind of images being captured, the resolution and color map of the captured images, and the limitations on processing power [28]. For example, a non-indexed color map can impose a serious challenge to common fusion algorithms and metrics due to its incompatibility with differential operators. Consequently, a color-map fusion algorithm [28] cannot operate on normal RGB and gray-scale images. These application-specific parameters highlight the need for a technique to measure and validate the suitability of a fusion algorithm/metric to be employed in a certain application domain.

The MI dispute, fusion artifacts, and the challenges of application domains raised concerns regarding how to validate image fusion metrics. Two general schemes were developed for fusion metric validation, namely subjective and objective validation. Subjective validation values the visual validation of the end user in the validation process [29,30]. Conversely, objective validation aims at deriving a cost function in order to automate fusion metric validation [27,31].

B. Subjective Validation

Subjective validation incorporates human subjects to measure the amount of information added from both source images into the fused image. Subjective validation allows a variety of customizable tests that suit different image fusion applications. The degrees of freedom of customization vary according to the rationale for using image fusion, the training images, the testing images, and the role a human subject plays in the testing scenario. In general, subjective tests have been categorized into two main families, namely active and passive subjective tests [30].

Active subjective tests, also known as "task-related" or "quantitative" subjective tests, evaluate the impact of an image fusion algorithm on the human subject's performance during a certain visually-dependent task. Tasks such as locating and identifying a certain object in a fused image and classifying false alarms are very good examples of active subjective tests [32,33]. Application domains where these tests are useful include industrial, military, medical and surveillance applications [34,35].

Passive subjective tests, also known as "descriptive" or "qualitative" subjective tests, value the subjects' impression of the observed fused image. In these tests, human subjects are asked to rank fused images according to their impression, measuring how comfortable the observers are with the fused images. Photography, computer graphics, and art are the main applications that benefit from qualitative subjective tests [36].

1) Subjective-Objective Correspondence: In [30], Vladimir Petrovic proposed a variation of passive tests to validate image fusion metrics subjectively using Toet's human perception preference [29]. Although Petrovic's work focuses mainly on perceptual preference evaluation rather than metric validation, it presents a solid methodology for binding perceptual preferences to objective metrics. He defined a Correct Ranking (CR) index to evaluate the ability of an objective fusion metric to predict the overall subjective preference for a particular image set. The CR index measures the average matching of every objective metric preference vector and subjective perceptual preference vector over all human subjects as

CR = (1/N) ∑_n q_n^T p_n    (10)

where q_n is a zero 3D vector with a 1 at the fusion scheme with the highest objective quality preference, defined as

q_n = [q_n^0, q_n^1, q_n^2]^T = [1, 0, 0]^T if Q_1 ≈ Q_2,  [0, 1, 0]^T if Q_1 > Q_2,  [0, 0, 1]^T if Q_1 < Q_2    (11)

and p_n is the preference vector of the human subjects, defined as

p_n = [p_n^0, p_n^1, p_n^2]^T = [1, 0, 0]^T if the subject has no preference,  [0, 1, 0]^T if the subject prefers scheme 1,  [0, 0, 1]^T if the subject prefers scheme 2.    (12)

Petrovic also introduced a confidence measure r for the correct ranking measure. This confidence measure values the normalized voting ratios t_n of the human subjects:

r = (1/N) ∑_n q_n^T t_n    (13)

where t_n is the normalized voting sum of all human subjects' perceptual preference vectors p_{m,n} in the experiment, defined as

t_n = ∑_m p_{m,n}    (14)

p_{m,n} = [1, 0, 0]^T if subject m has no preference,  [0, 1, 0]^T if subject m prefers scheme 1,  [0, 0, 1]^T if subject m prefers scheme 2.    (15)
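A sketch of the CR index and confidence measure in Eqs. (10)-(15), assuming the preferences are already encoded as one-hot 3-vectors (N×3 arrays) as in Eqs. (11)-(12), and that per-pair vote counts are supplied by the caller (hypothetical inputs):

```python
import numpy as np

def correct_ranking(q, p):
    """Eq. (10): mean agreement between objective preferences q[n] and subjective p[n]."""
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    return float(np.mean(np.sum(q * p, axis=1)))

def confidence(q, vote_counts):
    """Eqs. (13)-(14): objective preferences scored against normalized voting ratios."""
    q = np.asarray(q, dtype=float)
    t = np.asarray(vote_counts, dtype=float)
    t = t / t.sum(axis=1, keepdims=True)   # normalize the votes per image pair (assumption)
    return float(np.mean(np.sum(q * t, axis=1)))
```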
C. Objective Validation

Although subjective metric validation sounds like the most reliable option, it requires a very long and costly setup that includes equipment as well as signing consent, ethics and hazard estimation forms. It also requires preparing large training and testing image sets, and a very long experiment (which may reach a rate of one minute per user per image). These tasks render subjective validation of fusion metrics impractical for large datasets as well as for dynamic application domains. Therefore, researchers adopted objective validation of fusion metrics to facilitate testing on larger data sets and different application domains. Two main schemes have been developed for validating fusion metrics objectively. The first scheme aims at achieving an assumed ideal fusion [27], while the second scheme targets performing controlled fusion with zero and infinity images [31,37].

1) Ideal Fusion: Previously, the preferred method for validating fusion metrics relied on using a reference image [18]–[20]. The reference image is used to create two deformed complementary versions, fuse them, and compare the resulting fused image with the original image. However, this comparison is not objective, since the ultimate fusion algorithm does not really exist and neither does an ultimate performance metric. Therefore, it is analogous to comparing results that are subject to two sources of errors, namely fusion error and metric error.

In [27], Cvejic et al. proposed obtaining the 'ideal' fusion result using joint segmentation maps of the source images. In this method, Cvejic made two assumptions for the proposed ideal fusion. The first assumption dictates that all fusion-worthy information lies within the high frequencies of both source images. The other assumption states that the information in both source images is complementary and not spatially overlapping. Therefore, he proposed isolating visible objects from both source images and adding them into one single output image. Although the proposed ideally fused image does provide a better human-guided perception of how the fusion should work, it is lacking in two aspects. First, the produced ideally fused image does not feature a differentiable image. Second, this ideal fusion result is still subject to segmentation errors and cannot be considered as a control case for fusion [25].

2) Controlled Fusion: Cvejic's idea of ideal fusion [27] inspired Hossny et al. to investigate fusion control cases [37]. In [25] they suggested that the perfect fusion needs to be carried out on two different source images for which we certainly know what the fusion result should be. The perfect fusion requires identifying control cases for both Type-I and Type-II errors. In [31], Hossny and Nahavandi proposed a duality index to measure the suitability of image fusion metrics for different image fusion algorithms. They recommended using fusion with images that are completely uninformative (0-image) and completely informative (∞-image) as control cases, as illustrated on TNO's infrared images [3] in Fig. 3. The "controlled" image fusion test cases can then be formulated as follows:

∀x∈I   x ⊕ 0 = x    (16)

∀x∈I   x ⊕ ∞ = ∞    (17)

Figure 3. Image fusion control cases (0- and ∞-images) [25]. Left: source image [3]. Middle: 0-image (top) and ∞-image (bottom). Right: fusion result.

Adding image fusion metrics to Eqs. (16) and (17) maps the problem from the abstract image space into real numbers using

DII_0(⊕, Δ_0) = (1/|I|) ∑_{x∈I} Δ_0(x ⊕ 0, x)    (18)

DII_∞(⊕, Δ_0) = (1/|I|) ∑_{x∈I} Δ_0(x ⊕ ∞, ∞)    (19)

where ⊕ is the fusion algorithm (operator), Δ_0 is an image dissimilarity metric, I is the set of images acquired from a particular application domain, 0 is the completely non-informative zero image, and ∞ is the completely informative infinity image. According to Hossny and Nahavandi's normalized dissimilarity metric assumption, Δ_0 : I×I → [0, 1], both DII_0 and DII_∞ are normalized and feature the same range of [0, 1].

They concluded that testing the duality DII_∞ with an infinity image, if one can identify or approximate it, provides information on the ability of a fusion algorithm/metric configuration to capture important features from the source images. Additionally, in [38], they extended the controlled fusion with infinity images to measure fusion capacity. On the other hand, the zero-referenced duality index DII_0 measures the ability of a fusion algorithm/metric configuration to minimize the effect of non-informative features being added to the fused image. In [37], Hossny et al. discussed the constraints and equations guiding the selection of zero and infinity images for multiresolution fusion algorithms and metrics. Similar guidelines can be drawn to characterize the performance of other families of image fusion algorithms and metrics.
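A sketch of the duality indices in Eqs. (18)-(19), assuming a normalized dissimilarity delta0 in [0, 1], a two-input fusion operator fuse(x, y), and caller-supplied 0- and ∞-images (all hypothetical names):

```python
import numpy as np

def duality_indices(images, fuse, delta0, zero_img, inf_img):
    """Eqs. (18)-(19): zero- and infinity-referenced duality errors over an image set."""
    dii_zero = float(np.mean([delta0(fuse(x, zero_img), x) for x in images]))
    dii_inf = float(np.mean([delta0(fuse(x, inf_img), inf_img) for x in images]))
    return dii_zero, dii_inf
```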
Figure 4. Left: Evolution of image fusion metrics. Right: Fusion metric stages compared with the average algorithm/metric duality error in [31]. Zero-referenced duality errors (DII_0 ∈ [0, 1]) were calculated between the wavelet fusion algorithm [12] and Wang's SSIM metric [5]. Lower duality errors mean higher duality between fusion metric and algorithm configurations.

VI. CONCLUSIONS

This paper describes the evolution of image fusion performance metrics. It starts, as illustrated in Fig. 4-left, with simple image processing dissimilarity metrics, their realization in the image fusion context, and the capture of local structural dependencies, and concludes with saliency maps. According to Piella and Heijmans' work in [18], which is confirmed by Hossny and Nahavandi's experiments on the duality index in [31], fusion metrics do record lower average duality errors when equipped with localization and saliency functions, as shown in Fig. 4-right.

The evolution path highlighted in this paper forms a unified framework for image fusion metrics, as illustrated in Fig. 4-left. Future advancements in image fusion metrics can be summarized as minor and major advancements. Minor advancements may feature a new level of localization, a new strategy in saliency calculations, or a fundamental improvement in image dissimilarity metrics. Major advancements, on the other hand, will aim at subdividing a stage into two substages or introducing a novel addition to the framework.

The presented framework for image fusion metrics facilitates the design of image fusion performance metrics as a combination of an image dissimilarity function, a localization method, and saliency functions of each source image and the fused image. This framework facilitates customizing fusion metrics on the fly, thus allowing a semi-automated fusion system to choose the appropriate dissimilarity metric, localization method and saliency functions, as well as to tune their parameters. A semi-automated fusion system will provide promising add-ons for robots operating in hazardous environments (e.g. automated battlefields), where a common scenario is having limited resources such as bandwidth, memory storage, and processing power.

The future deployment of automated and semi-automated fusion systems on mobile robots highlights the challenge of fusing images from different illumination conditions and noise distributions with volatile variance, i.e. heteroskedastic noise [39]. Quantifying the noise level requires adding a second dimension to the 1D time series heteroskedasticity measures discussed in [40]–[42].

REFERENCES

[1] L. Wald, "Data fusion: A conceptual approach for an efficient exploitation of remote sensing images," Fusion of Earth Data, International Conference on, pp. 17–23, 1998.
[2] A. Toet, J. IJspeert, A. Waxman, and M. Aguilar, "Fusion of visible and thermal imagery improves situational awareness," Displays, no. 2, pp. 85–95, 1997.
[3] The Online Resource for Research in Image Fusion, http://imagefusion.org.
[4] F. Sadjadi, "Comparative image fusion analysis," Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, vol. 3, 2005.
[5] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, "Image quality assessment: from error visibility to structural similarity," Image Processing, IEEE Transactions on, vol. 13, no. 4, pp. 600–612, 2004.
[6] H. H. Barrett, C. K. Abbey, and E. Clarkson, "Objective assessment of image quality. III. ROC metrics, ideal observers, and likelihood-generating functions," Journal of the Optical Society of America A: Optics, Image Science and Vision, vol. 15, no. 6, pp. 1520–1535, 1998.
[7] Z. Wang and A. C. Bovik, Modern Image Quality Assessment. New York: Morgan & Claypool, 2006.
[8] G. Qu, D. Zhang, and P. Yan, "Information measure for performance of image fusion," Electronics Letters, vol. 38, no. 7, pp. 313–315, 2002.
[9] M. Hossny, S. Nahavandi, and D. Creighton, "Comments on 'Information measure for performance of image fusion'," Electronics Letters, vol. 44, no. 18, pp. 1066–1067, 2008.
[10] M. Hossny. [Online]. Available: http://www.deakin.edu.au/∼mhossny/fusion
[11] M. Hossny, S. Nahavandi, D. Creighton, and A. Bhatti, "Image fusion performance metric based on mutual information and entropy driven quadtree decomposition," Electronics Letters, vol. 46, no. 18, pp. 1266–1268, 2010.
[12] G. Qu, D. Zhang, and P. Yan, "Medical image fusion by wavelet transform modulus maxima," Opt. Express, vol. 9, no. 4, pp. 184–190, 2001.
[13] Y. Chen and R. S. Blum, "A new automated quality assessment algorithm for image fusion," Image and Vision Computing, vol. 27, no. 10, pp. 1421–1432, Sep. 2009.
[14] C. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.
[15] V. Buntilov and T. Bretschneider, "Objective content-dependent quality measures for image fusion of optical data," Geoscience and Remote Sensing Symposium, vol. 1, pp. 613–616, 2004.
[16] ——, "A fusion evaluation approach with region relating objective function for multispectral image sharpening," Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, vol. 4, pp. 2830–2833, 2005.
[17] H. Chen and P. K. Varshney, "A human perception inspired quality metric for image fusion based on regional information," Information Fusion, vol. 8, no. 2, pp. 193–207, Apr. 2007.
[18] G. Piella and H. Heijmans, "A new quality metric for image fusion," Image Processing (ICIP), IEEE International Conference on, pp. 137–176, 2003.
[19] M. Hossny, S. Nahavandi, and D. Creighton, "A quadtree driven image fusion quality assessment," 5th IEEE International Conference on Industrial Informatics, vol. 1, pp. 419–424, 2007.
[20] N. Cvejic, A. Loza, D. Bull, and N. Canagarajah, "A similarity metric for assessment of image fusion algorithms," International Journal of Signal Processing, vol. 2, no. 3, pp. 178–182, 2005.
[21] C. Yang, J.-Q. Zhang, X.-R. Wang, and X. Liu, "A novel similarity based quality metric for image fusion," Information Fusion, vol. 9, no. 2, pp. 156–160, Apr. 2008.
[22] M. Hossny, S. Nahavandi, and D. Creighton, "Feature-based image fusion quality metrics," Lecture Notes in Computer Science, vol. 5314, pp. 469–478, 2008.
[23] M. Hossny, S. Nahavandi, and D. Creighton, "An evaluation mechanism for saliency functions used in localized image fusion quality metrics," Computer Modelling and Simulation (UKSim), International Conference on, pp. 407–415, 2012.
[24] N. Cvejic, C. Canagarajah, and D. Bull, "Image fusion metric based on mutual information and Tsallis entropy," Electronics Letters, vol. 42, no. 11, pp. 626–627, 2006.
[25] M. Hossny, S. Nahavandi, D. Creighton, and A. Bhatti, "Towards autonomous image fusion," Control Automation Robotics Vision (ICARCV), International Conference on, pp. 1748–1754, Dec. 2010.
[26] V. Petrovic and C. Xydeas, "Objective image fusion performance characterisation," Computer Vision (ICCV), IEEE International Conference on, vol. 2, pp. 1866–1871, 2005.
[27] N. Cvejic, D. R. Bull, and C. N. Canagarajah, "Metric for multimodal image sensor fusion," Electronics Letters, vol. 43, no. 2, pp. 95–96, 2007.
[28] M. Hossny, S. Nahavandi, and D. Creighton, "Color map-based image fusion," Industrial Informatics, IEEE International Conference on, pp. 52–56, 2008.
[29] A. Toet and E. M. Franken, "Perceptual evaluation of different image fusion schemes," Displays, vol. 24, no. 1, pp. 25–37, 2003.
[30] V. Petrovic, "Subjective tests for image fusion evaluation and objective metric validation," Information Fusion, vol. 8, no. 2, pp. 208–216, Apr. 2007.
[31] M. Hossny and S. Nahavandi, "Image fusion algorithms and metrics duality index," Image Processing (ICIP), IEEE International Conference on, pp. 2193–2196, 2009.
[32] D. Ryan and R. Tinkler, "Night pilotage assessment of image fusion," Proc. SPIE, vol. 2465, pp. 50–67, 1995.
[33] P. Steele and P. Perconti, "Part task investigation of multispectral image fusion using gray scale and synthetic color night vision sensor imagery for helicopter pilotage," Proc. SPIE, vol. 3062, pp. 88–100, 1997.
[34] A. Toet, J. IJspeert, A. Waxman, and M. Aguilar, "Fusion of visible and thermal imagery improves situational awareness," Displays, vol. 18, no. 2, pp. 85–95, 1997.
[35] A. Toet, L. Ruyven, and J. Valeton, "Merging thermal and visual images by a contrast pyramid," Optical Engineering, vol. 28, no. 7, pp. 789–792, 1989.
[36] A. A. Efros and W. T. Freeman, "Image quilting for texture synthesis and transfer," Computer Graphics Proceedings SIGGRAPH, pp. 341–346, 2001.
[37] M. Hossny, S. Nahavandi, and D. Creighton, "Zero and infinity in multiscale image fusion," Image Processing (ICIP), IEEE International Conference on, pp. 2181–2184, 2009.
[38] M. Hossny and S. Nahavandi, "Measuring the capacity of image fusion," Image Processing Theory, Tools and Applications (IPTA), International Conference on, pp. 415–420, 2012.
[39] A. Foi, "Clipped noisy images: Heteroskedastic modeling and practical denoising," Signal Processing, vol. 89, no. 12, pp. 2609–2629, Dec. 2009.
[40] M. Hassan, M. Hossny, S. Nahavandi, and D. Creighton, "Heteroskedasticity variance index," Computer Modelling and Simulation, International Conference on, pp. 135–141, 2012.
[41] ——, "Quantifying heteroskedasticity using slope of local variance index," accepted in Computer Modelling and Simulation, International Conference on, 2013.
[42] ——, "Quantifying heteroskedasticity via binary decomposition," accepted in Computer Modelling and Simulation, International Conference on, 2013.
