Asian Journal of Research in Computer Science
3(3): 1-9, 2019; Article no.AJRCOS.49732
ISSN: 2581-8260
A Chinese Remainder Theorem Based Enhancement of
Lempel-Ziv-Welch and Huffman Coding
Image Compression
M. B. Ibrahim1* and K. A. Gbolagade1
1Department of Computer Science, Kwara State University, Malete, Nigeria.
Authors’ contributions
This work was carried out between both authors. Author MBI designed the study and wrote the
manuscript. Author KAG supervised the study and provided assistance during the study. Both authors
read and approved the final manuscript.
Article Information
DOI: 10.9734/AJRCOS/2019/v3i330096
Editor(s):
(1) Dr. Shivanand S. Gornale, Professor, Department of Computer Science, School of Mathematics and Computing Sciences,
Rani Channamma University, Vidyasangam, NH-4, Belagavi, India.
(2) Dr. Emanuel Guariglia, Assistant Professor, Department of Mathematics and Applications “R. Caccioppoli”,
University of Naples Federico II, Italy.
(3) Dr. Xiao-Guang Lyu, School of Science, Huaihai Institute of Technology, P. R. China.
Reviewers:
(1) Babatunde Gbadamosi, Landmark University, Nigeria.
(2) Ibrahim Goni, Adamawa State University Mubi, Nigeria.
Complete Peer review History: http://www.sdiarticle3.com/review-history/49732
Original Research Article
Received 25 April 2019
Accepted 29 June 2019
Published 06 July 2019
ABSTRACT
Data size minimization is the focus of data compression procedures, which alter the representation of information and reduce its redundancy into a more effective form. In general, the lossless approach is favoured by a number of compression methods for the purpose of maintaining the content integrity of the file afterwards. The benefits of compression include saving storage space, speeding up data transmission and retaining high data quality. This paper examines the effectiveness of a Chinese Remainder Theorem (CRT) enhancement in the implementation of the Lempel-Ziv-Welch (LZW) and Huffman coding algorithms for the purpose of compressing large images. Ten images from the Yale database were used for testing. The outcomes revealed that CRT-LZW compression saved more space and compressed (that is, removed redundancy from) the original images faster than CRT-Huffman coding, by 29.78% to 14.00% respectively. In terms of compression time, the CRT-LZW approach outperformed the CRT-Huffman approach by 9.95 sec. to 19.15 sec. For compression ratio, CRT-LZW also outperformed CRT-Huffman coding by 0.39 to 4.38, which is connected to the low quality and imperceptibility of the former. Similarly, CRT-Huffman coding (28.13 dB) offered better Peak-Signal-to-Noise-Ratio (PSNR) quality for the reconstructed images when compared to CRT-LZW (3.54 dB) and the 25.59 dB obtained in another investigated paper.

*Corresponding author: E-mail: imbamok@gmail.com
Keywords: LZW; Huffman coding; CRT; compression time; size reduction; image; compression.
1. INTRODUCTION
Data compression is the method of decreasing the size of information to be transmitted or stored by eliminating redundancy in the information without losing the ability to reconstruct the original data. Several file formats can be effectively compressed, including text, image, video, and audio [1]. Image compression addresses the difficulty of decreasing the volume of data needed to represent an image without major loss of information. In recent times, the Chinese Remainder Theorem (CRT) has seen growing use in the fields of coding theory, phase unwrapping, frequency estimation and distributed data storage. This is largely because CRT is well suited to isolating errors in residues caused by noise. The traditional CRT reconstructs a single integer from error-free co-prime moduli and residues [2]. Images see limited use in real-life situations such as medical, scientific, prepress and artistic applications, owing to their enormous sizes for broadcast or storage under low bandwidth [3]. In dealing with insufficient memory capacity, compression schemes have been deployed, thereby offering the prospect of broadcasting images/video under scarce bandwidth. The classical image compression scheme converts a spatial-domain representation to the frequency domain [3].
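As a minimal illustration of this spatial-to-frequency conversion (an illustrative Python sketch, not part of the authors' method), a naive 1-D DCT-II concentrates the energy of a smooth block into a few low-frequency coefficients:

```python
import math

def dct2(x):
    """Naive 1-D DCT-II: X_k = sum_n x_n * cos(pi/N * (n + 0.5) * k)."""
    n_len = len(x)
    return [sum(x[n] * math.cos(math.pi / n_len * (n + 0.5) * k)
                for n in range(n_len))
            for k in range(n_len)]

# A perfectly flat 8-sample block: all energy lands in the DC coefficient
# X_0 = 40, so the remaining coefficients can be stored very cheaply.
coeffs = dct2([5.0] * 8)
```

This energy compaction is why frequency-domain representations compress well: most coefficients of smooth image regions are near zero and can be coded with very few bits.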
2. RELATED STUDIES
The goal of image compression in a variety of applications is to decrease the quantity of bytes in a graphic file while retaining its quality. In their study, [3] considered several approaches for compressing images, especially in medicine. The forms of image compression involve spatial-to-frequency domain transforms. The main concept is that pictures are composed of neighbouring pixels that are correlated and therefore carry repetitive data. However, colour images require treatment of distinct colour components.
A novel prediction scheme for performing cloud-based compression was initiated by [4]. This approach utilizes a semi-local geometric and photometric prediction scheme to compensate, in a region-wise style, for the distortion between two images, rather than inter-coding schemes (such as high efficiency video coding). This is most useful for applications with highly correlated image content, such as traditional video coding, cloud gaming streaming and photo album compression, and it minimizes the redundancy arising from similar content already available in the cloud. Nevertheless, open issues remain, such as exploiting multiple frames in the cloud and determining the scalability of cloud-based image compression systems.
In the classical JPEG codec, images are encoded independently. The surge in cloud data storage has thrown up issues of content duplication and large redundancy, which must be considered. Inter-coding is one such solution, used in traditional video encoding to code consecutive frames from previous frames. Another method is to use the inter-prediction tools of video codecs to encode comparable images as pseudo-video arrangements [4].
Lossless compression schemes of LZW and Huffman coding were combined by [6]. The target was to make medical imagery suitable for storage, quality retention and broadcast. Huffman coding offered massive size decreases in a speedy manner, but poor compressed-image quality. Conversely, the LZW algorithm produced the finest quality with little size decrease. The combined compression scheme gave rise to a high compression ratio and high PSNR values. Again, compression algorithms such as Huffman coding give relatively good quality as well as good compression rates with images, but a blocky look in the reconstructed images [5]. The reverse is the case for LZW, in which compressed image data quality is retained at the expense of little size decrease [6]. In this paper, a CRT enhancement is proposed for independent implementation of the Huffman coding and LZW algorithms in image compression procedures.
A fresh algorithm was proposed by [7] for encoding, decoding and regenerating a replica of the encoded data. The first step applies the forward difference scheme to Huffman coding. Then, the values are regenerated into a fixed-length code representation with two's complement for further new probability computation along Huffman's algorithm. The new algorithm shows improvements in compression factors over traditional Huffman encoding.
The Residue Number System (RNS) was introduced into data encryption and decryption with the Shannon-Fano compression scheme by [8]. The outcomes revealed significant improvements in the security, speed and memory needed for existing information communication networks.
Typically, multimedia files (such as image data) are composed of redundancy and irrelevance, limiting their usage on a widespread basis. Since the advent of internetworks and communication infrastructure, there are possibilities of broadcasting or storing digital images seamlessly [4]. The sizes of multimedia data make them inefficient for broadcast or storage purposes [6]. The majority of lossless compression methods are founded on probability or dictionary and entropy, because they make use of the occurrence of identical strings or characters within the data in order to realize compression [6]. Researchers are focusing attention on removing redundancy and irrelevance in image data, which gave rise to data compression schemes such as Huffman coding and the Lempel-Ziv-Welch algorithm [6]. In general, the performance of compression schemes is estimated with standard metrics such as effectiveness (compression ratio) and efficiency (speedup or throughput) [7]. In this paper, these compression algorithms are highlighted in some detail.

One common entropy encoding algorithm deployed for lossless image compression is Huffman coding [9]. The encoding strategy commences with the calculation of each symbol's probability in the image. Thereafter, these symbol probabilities are placed in descending magnitude so as to create the leaf nodes of a tree. By individually coding the symbols, the Huffman code is built by combining the two lowest-probability symbols, and these steps are continued until only two probabilities of compound symbols are present. Eventually, a code tree is produced, and the labelling of the code tree generates the Huffman code [10].

The Huffman codes for the symbols are realised by analysing the branch digits in succession from the root node to the respective terminal node or leaf, using the symbols 0 and 1. Huffman coding is the most widely deployed method for redundancy or irrelevance minimization [9]. The operational principle of the Huffman code is based on these observations:

a) The more frequently occurring symbols are assigned shorter code words than the less frequently occurring ones;
b) The two least frequently occurring symbols are assigned code words of the same length.

On average, the code length is determined as the sum of the products of each symbol's probability and its number of encoding bits [11,12]. The Huffman code efficiency is calculated as the ratio of the entropy to the average code length. Huffman encoding creates the optimal code for a collection of symbols and probabilities whenever the symbols are coded one at a time [9].

The LZW algorithm is a popular lossless data compression scheme initiated by Abraham Lempel, Jacob Ziv, and Terry Welch in 1984 as an improvement over the LZ78 algorithm released in 1978 by Lempel and Ziv. It is easy to deploy, with the prospect of offering significantly high throughput in hardware applications. According to [13], the algorithm encodes sequences of 8-bit data as fixed-length 12-bit codes. The codes from 0 to 255 depict 1-character sequences composed of the matching 8-bit character, and the codes 256 through 4095 are created in a dictionary for sequences encountered in the data during the process of encoding [1,13,14].

2.1 The Concept of Chinese Remainder Theorem (CRT)

The basic operation of the Chinese Remainder Theorem (CRT) is to regenerate a single integer from its residues modulo the members of a moduli set [2]. CRT is an alternative to Mixed Radix Conversion (MRC), in which large modulo-M derivations are unnecessary. MRC admits a low complexity of O(n), unlike CRT, whose computational complexity is of order O(n^3). In CRT, arithmetic operations modulo M have to be executed, and CRT residue converters are much more complex. In contrast, the MRC procedure requires arithmetic operations modulo the individual m_i only, thereby simplifying all operations. In the MRC method, a number x is expressed in the mixed-radix system. Suppose that, for the moduli set m_1, m_2, ..., m_n, the RNS representation of
a number x is given by (x_1, x_2, ..., x_n). The number x can then be expressed in mixed-radix form as:

X = a_n \prod_{i=1}^{n-1} m_i + ... + a_3 m_1 m_2 + a_2 m_1 + a_1        (1)

The a_i's are the mixed-radix coefficients. These a_i's are determined sequentially, starting with a_1, in the following manner. Equation (1) is first taken modulo m_1; since all terms except the last are multiples of m_1, this gives

|x|_{m_1} = a_1        (2)

Hence, a_1 is just the first residue digit. To obtain a_2, a_1 is first subtracted from x. The quantity (x - a_1) is divided by m_1, and taking the result modulo m_2 gives

|(x - a_1)/m_1|_{m_2} = a_2

Similarly, for a_3, (a_2 m_1 + a_1) is subtracted from x. By dividing the quantity (x - a_2 m_1 - a_1) by m_1 m_2 and taking the result modulo m_3, we get

|(x - a_2 m_1 - a_1)/(m_1 m_2)|_{m_3} = a_3

In this way, by successive subtraction and division in residue notation, all the mixed-radix digits may be obtained.

Conversely, for an RNS number (x_1, x_2, x_3, ..., x_n) with respect to the moduli set (m_1, m_2, m_3, ..., m_n), the mixed-radix digits are given by:

a_1 = x_1        (3)

a_2 = |(x_2 - a_1) m_1^{-1}|_{m_2}        (4)

a_3 = |((x_3 - a_1) m_1^{-1} - a_2) m_2^{-1}|_{m_3}        (5)

Therefore, a general expression is given by:

a_n = |(((x_n - a_1) m_1^{-1} - a_2) m_2^{-1} - ... - a_{n-1}) m_{n-1}^{-1}|_{m_n}        (6)

With the mixed-radix digits a_i, 0 ≤ a_i < m_i, any positive number in the interval [0, \prod_{i=1}^{k} m_i - 1) can be uniquely represented. The major advantage of MRC, as can be seen from Equation (5) above, is that the calculation of a_i, i = 1, ..., k, can be done using arithmetic modulo m_i only, contrasting with CRT, which entails arithmetic modulo M, M being the system dynamic range, a rather large constant. It can be noted that Equations (5) and (6) are directly utilized only if the moduli set {m_1, m_2, m_3, ...} are relatively prime; the Euclidean algorithm is the common way to verify this, i.e., gcd(m_i, m_j) = 1 for i ≠ j.

The residue independence, carry-free operation and parallelism attributes of the RNS have been intensively used in a variety of areas, such as digital signal processing (DSP), digital filtering, digital communications, cryptography, and error detection and correction [15,16]. Addition, subtraction and multiplication are the dominant operations, while division, comparison, overflow detection and sign detection are needed only rarely. One key field of RNS-based applications is finite impulse response (FIR) filters. Likewise, digital image processing benefits from the RNS's features, which enhance digital image processing applications [16].

3. METHODOLOGY

In this image compression process, the implementation was coded from scratch using MATLAB R2015a. The paper studied the performance of the traditional compression schemes of LZW and Huffman coding with CRT. The purpose of employing CRT is to enhance their individual effectiveness as a lossless compression technique for image media. The arrangement of the planned enhancements of the compression approaches is illustrated in Fig. 1.

The input image is used to acquire the various formats of images for the complete data compression process. These input images are composed of diverse degrees of redundancy, which are expected to be removed or minimized during the planned compression processes. The data compression phase encompasses three distinct operations. Firstly, the input image is received at the Lempel-Ziv-Welch algorithm block to commence the data compression; similarly, the Huffman coding performs a preliminary compression operation on the input images. Secondly, the complete compression of the original image is achieved with CRT using the outcomes of the LZW and Huffman coding schemes. Finally, the output image is realized from the last compression process of the CRT computation, which enhanced the traditional image compressed format when compared to the input images.

Fig. 1. The layout for the enhancement of compression schemes
This paper considered four metrics in evaluating the effectiveness of the planned image compression schemes, including:

1) Compression Ratio (CR) is expressed as the uncompressed data size divided by the compressed data size. This provides the relative size of the compressed image data.
2) Compression Time (CT) calculates the time taken to compress the bits in the data, in seconds.
3) Peak Signal-to-Noise Ratio (PSNR) is used to estimate the amount of noise in the signals of the compressed data relative to the original data.
4) Imperceptibility calculates the rate of the bit distribution of the image data after complete compression. This infers on the appearance of the compressed image data.

The operational algorithm of the proposed LZW-CRT image compression scheme is presented below:

Step 1: Extract the first byte from the input STRING.
Step 2: Extract the next byte as the input CHARACTER.
Step 3: Look up the table for the STRING and CHARACTER stored.
Step 4: Generate a code for the STRING and update the lookup table.
Step 5: Output the code for the STRING.
Step 6: Set STRING = STRING + CHARACTER.
Step 7: Apply CRT on the resulting STRING.
Step 8: The moduli set is chosen to obtain the best redundancy in the data.
Step 9: The compressed image data is attained as the final encoded values.
Step 10: The reconstructed image is obtained by applying the decoding of LZW and CRT.
Step 11: Output the reconstructed image data.

Similarly, the steps for performing the CRT-Huffman coding image compression scheme are presented in Algorithm 2 below:

Step 1: INPUT the original image.
Step 2: Run the Huffman coding functions.
Step 3: Extract the symbols of the pixels from the input IMAGE.
Step 4: Create the probabilities of the pixel symbols, organize them in decreasing magnitude, and combine the smaller probabilities.
Step 5: Concatenate the Huffman codewords ready for CRT.
Step 6: Generate a code for the STRING and update the lookup table.
Step 7: Apply CRT on the resulting STRING.
Step 8: The moduli set is chosen to obtain the best redundancy in the data.
Step 9: The compressed image data is attained as the final encoded values.
Step 10: The reconstructed image is obtained by applying the decoding of Huffman coding and CRT.
Step 11: Output the reconstructed image data.

4. PRESENTATION OF RESULTS

The paper utilized 10 different image samples from the Yale database [17] for the purpose of validating the proposed concepts of CRT-LZW and CRT-Huffman coding compression. The outcomes of the compression procedures on the sampled images using CRT and LZW are shown in Table 1.

From Table 1, there is a significant decrease in the compressed images when compared to the original images, at 12477.4 kb and 3715.3 kb on average (that is, about 3:1) respectively. Consequently, about 1 kb is used to represent 3 kb of the original image after performing the compression processes on the sampled images. In the same vein, the average values for reduced image size, compression time, PSNR and CR were 29.78%, 9.95 sec, 3.54 dB and 0.39 respectively. The outlook of the compressed image is poor for the Human Visual System (HVS), as depicted in Fig. 2.
Table 1. CRT-LZW based image compression
Image sample   Size before        Size after         Compression    PSNR (dB)   Compression
               compression (kb)   compression (kb)   time (sec.)                ratio
1              12282              3524               10.10          3.11        0.38
2              12906              3571               8.77           3.17        0.40
3              12353              3522               12.28          3.21        0.39
4              12762              3410               10.18          3.22        0.36
5              12872              3606               9.99           3.31        0.38
6              12357              3548               8.61           3.13        0.38
7              12150              4266               9.08           4.09        0.41
8              12243              3889               10.45          3.99        0.40
9              12530              3882               9.76           4.09        0.39
10             12319              3935               10.27          4.05        0.40
Total          124774             37153              99.49          35.37       3.89
Average        12477.4            3715.3             9.95           3.54        0.39
Fig. 2. Original image against CRT-LZW compressed image
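The LZW stage behind the Table 1 results follows the dictionary scheme of [13]; a minimal, illustrative Python sketch of the encoder and decoder (not the authors' MATLAB code):

```python
def lzw_encode(data: bytes):
    """Emit dictionary codes: 0-255 are single bytes, 256-4095 learned strings."""
    table = {bytes([i]): i for i in range(256)}
    next_code, string, out = 256, b"", []
    for byte in data:
        cand = string + bytes([byte])
        if cand in table:
            string = cand            # keep growing the current match
        else:
            out.append(table[string])
            if next_code < 4096:     # fixed-length 12-bit codes, as in [13]
                table[cand] = next_code
                next_code += 1
            string = bytes([byte])
    if string:
        out.append(table[string])
    return out

def lzw_decode(codes):
    """Rebuild the dictionary on the fly; no side information is needed."""
    table = {i: bytes([i]) for i in range(256)}
    next_code = 256
    prev = table[codes[0]]
    out = [prev]
    for c in codes[1:]:
        entry = table.get(c, prev + prev[:1])  # c may name a just-created string
        out.append(entry)
        table[next_code] = prev + entry[:1]
        next_code += 1
        prev = entry
    return b"".join(out)

codes = lzw_encode(b"ABABABA")  # [65, 66, 256, 258]
```

Repetitive inputs shrink because repeated substrings collapse to single 12-bit codes, and decoding reverses this losslessly.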
Table 2. CRT-Huffman coding based image compression
Image sample   Size before        Size after         Compression    PSNR (dB)   Compression
               compression (kb)   compression (kb)   time (sec.)                ratio
1              12282              1853               18.40          28.29       4.13
2              12906              1819               18.35          28.34       4.14
3              12353              1745               16.86          27.73       4.28
4              12762              1766               18.56          28.27       4.15
5              12872              1854               21.05          27.85       4.02
6              12357              1783               18.02          28.29       4.28
7              12150              1593               14.91          28.22       4.92
8              12243              1638               31.48          28.07       4.59
9              12530              1614               17.53          28.10       4.63
10             12319              1672               16.37          28.09       4.65
Total          124774             17337              191.53         281.25      43.79
Average        12477.4            1733.7             19.15          28.13       4.38
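The Huffman stage behind the Table 2 results assigns shorter code words to more frequent pixel symbols, per the observations reviewed in Section 2; a minimal illustrative Python sketch (not the authors' implementation):

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes):
    """Build prefix codes by repeatedly merging the two least probable nodes."""
    freq = Counter(data)
    if len(freq) == 1:                     # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Each heap entry: (weight, [(symbol, code-so-far), ...])
    heap = [(w, [(sym, "")]) for sym, w in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w_lo, lo = heapq.heappop(heap)
        w_hi, hi = heapq.heappop(heap)
        merged = [(s, "0" + c) for s, c in lo] + [(s, "1" + c) for s, c in hi]
        heapq.heappush(heap, (w_lo + w_hi, merged))
    return dict(heap[0][1])

codes = huffman_codes(b"aaaabbc")
# Total encoded length = sum over symbols of (frequency x code length):
bits = sum(len(codes[b]) for b in b"aaaabbc")  # 10 bits vs 56 for raw 8-bit
```

The frequent symbol gets a 1-bit code and the rare ones 2-bit codes, mirroring the average-code-length formula of [11,12].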
In Fig. 2, the image on the left-hand side is the original image without compression operations. The image on the right-hand side is realized after performing compression with CRT and LZW. The compressed image looks washed out due to the uneven distribution of the bit composition to the HVS when matched with the original image. The outcomes of applying CRT to Huffman coding based compression using the sampled images are presented in Table 2.

In Table 2, the data compression procedure of CRT and Huffman coding revealed substantial improvements in terms of the resultant image sizes, PSNR and compression ratio. On average, CRT-Huffman based compression scored 14.00%, 19.15 sec, 28.13 dB and 4.38 for image size saved, compression time, PSNR and CR respectively. Again, the outlook of the reconstructed image when compared to the original images is depicted in Fig. 3.

In Fig. 3, the first image is the original sample image before applying the proposed compression algorithm. The second image is the output of the compression procedure with CRT-Huffman coding. The two images, that is, the original and the reconstructed images, show large similarities because of the even distribution of the bit compositions to the HVS.

The paper compared the performances of the compression procedures of CRT-LZW and CRT-Huffman coding, as shown in Table 3.

In Table 3 and Fig. 4, the introduction of CRT for LZW and Huffman coding based image compression showed significant performance, with LZW saving more space and giving speedier compression (or redundancy removal) of the original images. Conversely, Huffman coding offered better quality for the reconstructed images as against LZW [5].
Fig. 3. Original image against CRT-Huffman compressed image
[Bar chart comparing CRT-LZW and CRT-Huffman across the evaluation parameters: Size Reduction (%), Compression time (sec.), PSNR and Compression ratio]
Fig. 4. Graphical view for parameters evaluation of LZW and Huffman coding
Table 3. Comparisons of CRT-LZW and CRT-Huffman coding based image compression

Evaluation parameters         CRT-LZW   CRT-Huffman
Size Reduction (%)            29.78     14.00
Compression time (sec.)       9.95      19.15
PSNR (dB)                     3.54      28.13
Compression ratio             0.39      4.38
Compressed Imperceptibility   Low       High
5. CONCLUSION

The fundamental principles of data compression procedures ensure the minimization of data redundancy (or resized data), better data compression time, improved or retained data quality and a high compression ratio. The conventional compression algorithms, such as LZW and Huffman coding, have shortcomings which limit their widespread implementation, especially in image processing. One parameter for measuring the suitability of a compression procedure for images is the bit distribution, because it reveals the similarity or otherwise of the compressed image and the original images to the Human Visual System (HVS). This paper implemented CRT in LZW and Huffman coding in order to improve their individual performances. The outcomes revealed that more space saving (or redundancy removal) and faster compression time were offered by CRT-LZW. However, CRT-Huffman coding (28.13 dB) provided better quality (PSNR) for reconstructed images against CRT-LZW (3.54 dB) and the 25.59 dB in the study by Oswald and Sivaselvan [5]. There is need for further implementation of these concepts in other media files such as text, videos and audio.

COMPETING INTERESTS

Authors have declared that no competing interests exist.
REFERENCES

1. Alhassan A, Gbolagade KA, Bankas EK. A novel and efficient LZW-RNS scheme for enhanced information compression and security. International Journal of Advanced Research in Computer Engineering & Technology. 2015;4(11):4015-4019.
2. Xiao H, Huang Y, Ye Y, Xiao G. Robustness in Chinese remainder theorem for multiple numbers and remainder coding. IEEE Transactions on Signal Processing. 2018;1-16.
3. Thakur U, Rai SA. Study image compression techniques for the number of applications. International Journal of Research and Innovation in Applied Science. 2018;3(4):4-7.
4. Begaint J, Thoreau D, Guillotel P, Guillemot C. Region-based prediction for image compression in the cloud. IEEE Transactions on Image Processing. 2018;27(4):1835-1846.
5. Oswald C, Sivaselvan B. Text and image compression based on data mining perspective. Data Science Journal. 2018;17(12):1-12.
6. Ajala FA, Adigun AA, Oke AO. Development of hybrid compression algorithm for medical images using Lempel-Ziv-Welch and Huffman encoding. International Journal of Recent Technology and Engineering. 2018;7(4):1-5.
7. Garba AM, Zirra PB. Analysing forward difference scheme on Huffman to encode and decode data losslessly. The International Journal of Engineering and Science. 2014;3(6):46-54.
8. Aremu IA, Gbolagade KZ. RNS based on Shannon Fano coding for data encoding and decoding using {2n-1, 2n, 2n+1} moduli sets. Communications. 2018;6(1):25-29.
9. Pujar JH, Kadlaskar LM. A new lossless method of image compression and decompression using Huffman coding technique. Journal of Theoretical and Applied Information Technology. 2010;15(1):1-10.
10. Huffman DA. A method for the construction of minimum redundancy codes. Proceedings of IRE. 1952;40:1098-1101.
11. Gupta K, Verma RL, Alam S. Lossless medical image. Communication Technology. 2013;1(2):37-45.
12. Mishra K, Verma RL, Alam S, Vikram H. Minimum entropy analysis of lossless image compression. International Journal of Research in Electronics and Communication Technology. 2013;1(2):67-75.
13. Welch TA. A technique for high performance data compression. IEEE, Sperry Research Centre. 1984;8-19.
14. Jane H, Trivedi J. A survey on different compression techniques algorithm for data compression. International Journal of Advanced Research in Computer Science and Technology. 2014;2(3):1-5.
15. Mohan PVA. Residue number systems. The Springer International Series in Engineering and Computer Science, 677. Springer US. 2002;2-10.
16. Omondi A, Premkumar B. Residue number systems: Theory and implementation. London: Imperial College Press. 2007;900.
17. Yale Face Database. Available: http://cvc.cs.yale.edu/cvc/projects/yalefaces/yalefaces.html
_________________________________________________________________________________
© 2019 Ibrahim and Gbolagade; This is an Open Access article distributed under the terms of the Creative Commons
Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work is properly cited.
Peer-review history:
The peer review history for this paper can be accessed here:
http://www.sdiarticle3.com/review-history/49732