Quantization of Discrete Probability Distributions
Yuriy A. Reznik
Qualcomm Inc.
5775 Morehouse Drive, San Diego, CA 92121
yreznik@ieee.org
ABSTRACT
We study the problem of quantization of discrete probability distributions, arising in universal coding as well as in other applications. We show that in many situations this problem can be reduced to the covering problem for the unit simplex. Such a setting yields a precise asymptotic characterization in the high-rate regime. Our main contribution is a simple and asymptotically optimal algorithm for solving this problem. The performance of this algorithm is studied and compared with several known solutions.
1. INTRODUCTION

The problem of coding of probability distributions has surfaced many times in the history of source coding. The first universal codes, developed in the late 1960s, such as the Lynch-Davisson [21, 9], combinatorial [28], and enumerative codes [7], used lossless encoding of the frequencies of symbols in an input sequence. The Rice machine [24], developed in the early 1970s, transmitted a quantized estimate of the variance of the source's distribution. Two-step universal codes, developed by J. Rissanen in the 1980s, explicitly estimate, quantize, and transmit the parameters of a distribution as the first step of the encoding process [25, 26]. Vector quantization techniques for two-step universal coding were proposed in [30, 4]. In practice, two-step coding was often implemented by constructing a Huffman tree, then encoding and transmitting this code tree, and then encoding and transmitting the data. Such algorithms became very popular in the 1980s and 1990s, and were used, for example, in the ZIP archiver [17] and the JPEG image compression standard [16].

In recent years, the problem of coding of distributions has attracted a new wave of interest coming from other fields. For example, in computer vision, it is now customary to use histogram-derived descriptions of image features. Examples of such descriptors include SIFT [20], SURF [1], and CHoG [2], differing mainly in the way they quantize histograms. Several other uses of coding of distributions are described in [11].

To the best of our knowledge, most related prior studies were motivated by the optimal design of universal source codes [25, 26, 30, 4, 15]. In this context, quantization of distributions becomes a small sub-problem in a complex rate-optimization process, and the final solutions yield very few insights about it.

In this paper, we study quantization of distributions as a stand-alone problem. In Section 2, we introduce notation and formulate the problem. In Section 3, we study achievable performance limits. In Section 4, we propose and study an algorithm for solving this problem. In Section 5, we provide comparisons with other known techniques. Conclusions are drawn in Section 6.

1 Presented at the Information Theory and Applications (ITA) workshop, San Diego, CA, February 15, 2010, and at the Workshop on Information Theoretic Methods in Science and Engineering (WITMSE), Tampere, Finland, August 16-18, 2010.
Let $\Omega_m$ denote the set of all $m$-ary probability distributions (the unit simplex):
$$ \Omega_m = \left\{ [p_1, \dots, p_m] \in \mathbb{R}^m : p_i \geqslant 0, \; \textstyle\sum_i p_i = 1 \right\}. \qquad (1) $$
Let $p \in \Omega_m$ be an input distribution that we need to encode, and let $Q \subset \Omega_m$ be a set of distributions that we will be able to reproduce. We will call the elements of $Q$ reconstruction points or centers in $\Omega_m$. We will further assume that $Q$ is finite ($|Q| < \infty$), and that its elements are enumerated and encoded by using a fixed-rate code. The rate of such a code is $R(Q) = \log_2 |Q|$ bits. By $d(p, q)$ we will denote a distance measure between distributions $p, q \in \Omega_m$.
In order to complete the traditional setting of the quantization problem for distributions $p \in \Omega_m$, it remains to assume that $p$ is produced by some random process, e.g. a memoryless process with density $\omega$ over $\Omega_m$. Then the problem of quantization can be formulated as minimization of the average distance to the nearest reconstruction point (cf. [14, Lemma 3.1]):
$$ \bar d(\Omega_m, \omega, R) = \inf_{\substack{Q \subset \Omega_m \\ |Q| \leqslant 2^R}} \; \mathbf{E} \min_{q \in Q} d(p, q). \qquad (2) $$
However, we notice that in most applications the best possible accuracy of the reconstructed distribution is needed instantaneously. For example, in the design of a two-part universal code, the sample-derived distribution is quantized and immediately used for encoding of this sample [26]. Similarly, in computer vision / image recognition applications, histograms of gradients from an image are extracted and must be quantized right away. In such cases, it is more appropriate to minimize the worst-case distance to the nearest reconstruction point:
$$ d(\Omega_m, R) = \inf_{\substack{Q \subset \Omega_m \\ |Q| \leqslant 2^R}} \; \sup_{p \in \Omega_m} \, \min_{q \in Q} d(p, q). \qquad (3) $$
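To make the worst-case formulation concrete, here is a small numerical sketch (ours, not part of the original text) for the simplest case $m = 2$, where a distribution is determined by its first coordinate $p_1 \in [0, 1]$ and $d_\infty(p, q) = |p_1 - q_1|$:

```python
def minimax_error(centers, grid_size=1000):
    """Worst-case (sup over p) distance from p_1 in [0, 1] to the nearest center.

    For m = 2 the simplex reduces to the segment p_1 in [0, 1], so the
    covering problem behind (3) is just covering a segment with |Q| points.
    """
    grid = [i / grid_size for i in range(grid_size + 1)]
    return max(min(abs(p - c) for c in centers) for p in grid)

# R = 1 bit allows |Q| = 2 centers; placing them at 1/4 and 3/4
# covers [0, 1] with worst-case error 1/4 = (1/2) * 2^(-R).
print(minimax_error([0.25, 0.75]))
```

Placing $2^R$ centers uniformly this way achieves the $\tfrac12 2^{-R}$ behavior that the high-rate analysis below generalizes to arbitrary $m$.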
In the high-rate regime, the quantity (3) becomes the covering radius of the simplex $\Omega_m$, and classic covering arguments (cf. [18]) yield the asymptotic estimate
$$ d(\Omega_m, R) \sim C_{m-1} \left( \mathrm{Vol}\,\Omega_m \right)^{\frac{1}{m-1}} 2^{-\frac{R}{m-1}}, $$
where $C_{m-1} > 0$ is a constant known as the covering coefficient for the unit cube. The exact value of $C_{m-1}$ depends on the distance measure $d(p, q)$. In what follows we use distances induced by $L_r$ norms,
$$ d_r(p, q) = \left( \sum_i |p_i - q_i|^r \right)^{1/r}, \qquad (7) $$
with $d_\infty(p, q) = \max_i |p_i - q_i|$. For the $L_\infty$ norm it is known that $C_{m-1} = \frac12$. The unit simplex $\Omega_m$ is an $(m-1)$-dimensional body of volume $\mathrm{Vol}\,\Omega_m = \frac{\sqrt m}{(m-1)!}$ [29], and therefore
$$ d_\infty(\Omega_m, R) \sim \frac12 \, \sqrt[m-1]{\frac{\sqrt m}{(m-1)!}} \; 2^{-\frac{R}{m-1}}, \qquad (8) $$
where the leading factor is a decaying function of $m$. This highlights an interesting property and distinction of the problem of quantization of $m$-ary distributions, compared to quantization of the unit $(m-1)$-dimensional cube.
Our next task is to design a practical algorithm for solving this problem.
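Before turning to the algorithm, a quick numerical check (our illustration; the function name is ours) of how the leading factor in (8) decays with $m$:

```python
import math

def factor(m):
    """Leading factor (1/2) * (sqrt(m) / (m-1)!)^(1/(m-1)) of expression (8)."""
    return 0.5 * (math.sqrt(m) / math.factorial(m - 1)) ** (1.0 / (m - 1))

# The factor shrinks roughly like e/(2m) as m grows.
for m in [2, 3, 5, 10, 20]:
    print(m, round(factor(m), 4))
```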
We define the set of reproducible distributions as a lattice of fractions with a common denominator:
$$ Q_n = \left\{ q = \left[ \frac{k_1}{n}, \dots, \frac{k_m}{n} \right] : \; \textstyle\sum_i k_i = n \right\}, $$
where $n, k_1, \dots, k_m \in \mathbb{Z}_+$. The parameter $n$ serves as a common denominator of all fractions, and can be used to control the density and the number of points in $Q_n$.
By analogy with the concept of types in universal coding [8], we will refer to distributions $q \in Q_n$ as types. For the same reason we will call $Q_n$ a type lattice. Several examples of type lattices are shown in Figure 1.

4.1.2. Quantization

Given an input distribution $p \in \Omega_m$, we need to find the nearest type $q^* = \left[ \frac{k_1}{n}, \dots, \frac{k_m}{n} \right]$, i.e. the solution of $\min_{q \in Q_n} d(p, q)$. The following simple algorithm can be used:
1. Compute the values ($i = 1, \dots, m$): $k_i' = \lfloor n p_i + \frac12 \rfloor$, $n' = \sum_i k_i'$.
2. If $n' = n$, the answer is $k_i = k_i'$. Otherwise, compute the quantization errors $\delta_i = k_i' - n p_i$ and sort them such that $-\frac12 \leqslant \delta_{j_1} \leqslant \delta_{j_2} \leqslant \dots \leqslant \delta_{j_m} \leqslant \frac12$.
3. Let $d = n' - n$. If $d > 0$, decrement the $d$ values $k_{j_i}'$ with the largest errors; if $d < 0$, increment the $|d|$ values $k_{j_i}'$ with the smallest errors.

In order to encode a type with coordinates $k_1, \dots, k_m$, we compute its index
$$ \xi(k_1, \dots, k_m) = \sum_{j=1}^{m-2} \sum_{i=0}^{k_j - 1} \binom{n - i - \sum_{l=1}^{j-1} k_l + m - j - 1}{m - j - 1} + k_{m-1}. \qquad (13) $$
This formula follows by induction (starting with $m = 2, 3$, etc.), and it implements lexicographic enumeration of types. For example:
$$ \xi(0, 0, \dots, 0, n) = 0, $$
$$ \xi(0, 0, \dots, 1, n-1) = 1, $$
$$ \dots $$
$$ \xi(n, 0, \dots, 0, 0) = \binom{n+m-1}{m-1} - 1. \qquad (15) $$
Similar combinatorial enumeration techniques were discussed in [28, 7, 27]. With a precomputed array of binomial coefficients, the computation of an index by using this formula requires only $O(n)$ additions.
Once the index is computed, it is transmitted by using its direct binary representation at the rate
$$ R(n) = \left\lceil \log_2 \binom{n+m-1}{m-1} \right\rceil. \qquad (14) $$
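The quantization and enumeration procedures above can be sketched in Python as follows (our illustration; function and variable names are ours, and the tie-breaking in the sorting step is one possible choice):

```python
import math

def quantize_to_type(p, n):
    """Quantize a distribution p to the nearest type [k_1/n, ..., k_m/n]."""
    k = [math.floor(n * pi + 0.5) for pi in p]      # step 1: k_i' = floor(n p_i + 1/2)
    d = sum(k) - n                                  # excess (or deficit) of the sum
    if d != 0:
        # step 2: sort indices by quantization error delta_i = k_i' - n p_i
        order = sorted(range(len(p)), key=lambda i: k[i] - n * p[i])
        if d > 0:
            for i in order[-d:]:                    # step 3: undo the largest over-roundings
                k[i] -= 1
        else:
            for i in order[:-d]:                    # step 3: bump the smallest errors up
                k[i] += 1
    return k

def type_index(k):
    """Lexicographic index xi(k_1, ..., k_m) of a type, following (13)."""
    n, m = sum(k), len(k)
    idx, used = 0, 0
    for j in range(m - 2):                          # j = 1, ..., m-2 in the 1-based formula
        for i in range(k[j]):
            idx += math.comb(n - used - i + m - j - 2, m - j - 2)
        used += k[j]
    return idx + k[m - 2]
```

For $m = 3$, $n = 2$, the indices run lexicographically from $\xi(0,0,2) = 0$ up to $\xi(2,0,0) = \binom{4}{2} - 1 = 5$, and transmitting an index costs $\lceil \log_2 6 \rceil = 3$ bits, in agreement with (14).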
4.2. Analysis

4.2.1. Performance in terms of L-norms

The worst-case errors of the algorithm of Section 4.1.2 satisfy
$$ \max_{p \in \Omega_m} d_\infty(p, q^*) \leqslant \frac{1}{n}\left(1 - \frac{1}{m}\right), \qquad \max_{p \in \Omega_m} d_1(p, q^*) \leqslant \frac{1}{n}\,\frac{2a(m-a)}{m}, \qquad (19) $$
where $a = \lfloor m/2 \rfloor$. In terms of the rate, this translates into
$$ d_\infty[Q_n](\Omega_m, R) \lesssim 2^{-\frac{R}{m-1}}\, \frac{1 - \frac{1}{m}}{\sqrt[m-1]{(m-1)!}}\,, \qquad (20) $$
$$ d_1[Q_n](\Omega_m, R) \lesssim 2^{-\frac{R}{m-1}}\, \frac{2a(m-a)}{m\,\sqrt[m-1]{(m-1)!}}\,, \qquad (21) $$
$$ d_{KL}[Q_n](\Omega_m, R) \gtrsim \frac{1}{2\ln 2} \left( 2^{-\frac{R}{m-1}}\, \frac{2a(m-a)}{m\,\sqrt[m-1]{(m-1)!}} \right)^{\!2}. \qquad (22) $$
Proof. We first obtain an asymptotic (with $n \to \infty$) expansion for the rate of our code (14):
$$ R = (m-1)\log_2 n - \log_2 (m-1)! + O\!\left(\tfrac{1}{n}\right), $$
which, when inverted, gives
$$ n \approx 2^{\frac{R}{m-1}}\,\sqrt[m-1]{(m-1)!}\,. $$
Substituting this expression into the worst-case bounds (19) yields (20) and (21); the bound (22) then follows by Pinsker's inequality (see Section 4.2.2).

From (20)-(22) we observe that
$$ d[Q_n](\Omega_m, R) \propto 2^{-\frac{R}{m-1}}, $$
which matches the decay rate of the theoretical estimates. The only difference is in a constant factor. For example, under the $L_\infty$ norm, such a factor in expression (8) is
$$ \frac12\,\sqrt[m-1]{\sqrt m} \,=\, \frac12 + O\!\left(\frac{\log m}{m}\right), $$
after removing the term $\sqrt[m-1]{(m-1)!}$ common to both estimates. Our algorithm, on the other hand, uses a factor
$$ \frac12 \,\leqslant\, 1 - \frac1m \,<\, 1, $$
which starts with $\frac12$ when $m = 2$. This suggests that even in terms of the leading constant our algorithm is close to the optimal.
4.2.2. Performance in terms of KL-distance

All previous results are obtained using L-norms. Such distance measures are common in computer vision applications [20, 22, 1]. In source coding, the main interest is in the Kullback-Leibler (KL) distance:
$$ d_{KL}(p, q) = D(p \| q) = \sum_i p_i \log_2 \frac{p_i}{q_i}. \qquad (23) $$
It is not a true distance, so the exact analysis is complicated. Yet, by using Pinsker's inequality [23],
$$ d_{KL}(p, q) \geqslant \frac{1}{2 \ln 2}\, d_1(p, q)^2, \qquad (24) $$
together with the worst-case $L_1$ error (19), we obtain
$$ d_{KL}(p, q^*) \gtrsim \frac{1}{2\ln 2} \left( \frac{1}{n}\,\frac{2a(m-a)}{m} \right)^{\!2} \sim \frac{1}{2\ln 2} \left( 2^{-\frac{R}{m-1}}\,\frac{2a(m-a)}{m\,\sqrt[m-1]{(m-1)!}} \right)^{\!2}, $$
as claimed in (22). Observe also that the KL distance to a type containing zero components may be infinite. In order to keep it bounded, the reconstruction points can be biased away from the boundary of the simplex:
$$ q_i = \frac{k_i + \frac12}{n + \frac m2}\,, \quad i = 1, \dots, m. \qquad (25) $$
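A small sketch (ours) illustrating the KL distance (23) and the biased reconstruction points (25):

```python
import math

def kl_bits(p, q):
    """KL distance (23) in bits; requires q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def biased_type(k):
    """Biased reconstruction (25): q_i = (k_i + 1/2)/(n + m/2).

    All q_i are strictly positive and still sum to 1, so the KL distance
    to the reconstruction point stays finite even when some k_i = 0.
    """
    n, m = sum(k), len(k)
    return [(ki + 0.5) / (n + m / 2) for ki in k]

# A plain type with a zero component gives an infinite KL distance;
# the biased point keeps it finite:
q = biased_type([3, 1, 0])   # n = 4, m = 3 -> [3.5/5.5, 1.5/5.5, 0.5/5.5]
```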
By denoting by $\ell_1, \dots, \ell_m$ the lengths of prefix codes, recalling that they satisfy Kraft's inequality [6], and noting that $2^{-\ell_i}$ can be used to map lengths back to probabilities, we arrive at the following set:
$$ Q_{\text{tree}} = \left\{ [q_1, \dots, q_m] \in \mathbb{Q}^m \; : \; q_i = 2^{-\ell_i}, \; \textstyle\sum_i 2^{-\ell_i} \leqslant 1 \right\}. $$
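For small $m$, the set $Q_{\text{tree}}$ can be enumerated directly; the sketch below (ours; the depth cap max_len is an artificial parameter, since true code lengths are unbounded) lists the points $q_i = 2^{-\ell_i}$ with Kraft sum at most 1:

```python
from itertools import product

def tree_points(m, max_len):
    """Enumerate dyadic points [2^-l_1, ..., 2^-l_m] satisfying Kraft's inequality."""
    points = []
    for lengths in product(range(1, max_len + 1), repeat=m):
        if sum(2.0 ** -l for l in lengths) <= 1.0:    # Kraft: sum 2^-l_i <= 1
            points.append(tuple(2.0 ** -l for l in lengths))
    return points

# For m = 2 and lengths up to 2, all four pairs (1,1), (1,2), (2,1), (2,2)
# satisfy Kraft's inequality.
```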
Figure 3. Maximal L1 distances vs rate characteristics d1 [H](R), d1 [GM](R), d1 [Qn ](R) achievable by Huffman-,
Gilbert-Moore-, and type-based quantization schemes.
There are several specific algorithms that one can employ for the construction of such codes, producing different subsets of $Q_{\text{tree}}$. Below we only consider the use of the classic Huffman and Gilbert-Moore [13] codes. Some additional tree-based quantization schemes can be found in [11].
Proposition 3. There exists a set $Q_{GM} \subset Q_{\text{tree}}$, produced by Gilbert-Moore codes, such that
$$ d_{KL}[Q_{GM}](R_{GM}) \leqslant 2, \qquad (26) $$
$$ d_1[Q_{GM}](R_{GM}) \leqslant 2\sqrt{\ln 2}, \qquad (27) $$
$$ d_\infty[Q_{GM}](R_{GM}) \leqslant 1, \qquad (28) $$
where
$$ R_{GM} = \log_2 |Q_{GM}| = \log_2 C_{m-1} = 2m - \frac32 \log_2 m + O(1), \qquad (29) $$
and $C_n = \frac{1}{n+1}\binom{2n}{n}$ is the Catalan number.

Proposition 4. There exists a set $Q_H \subset Q_{\text{tree}}$, produced by Huffman codes, such that
$$ d_{KL}[Q_H](R_H) \leqslant 1, \qquad (30) $$
$$ d_1[Q_H](R_H) \leqslant \sqrt{2 \ln 2}, \qquad (31) $$
$$ d_\infty[Q_H](R_H) \leqslant \frac12, \qquad (32) $$
where
$$ R_H = \log_2 |Q_H| = m \log_2 m + O(m). \qquad (33) $$
The rate estimate (33) follows by bounding the number $T_m$ of distinct Huffman code trees for $m$ symbols. A lower bound follows by counting relabelings of a fixed tree, $T_m \gtrsim m!$; an upper bound is obtained by arbitrarily labeling all binary trees with $m$ leaves: $T_m < m!\, C_{m-1}$, where $C_{m-1}$ is the Catalan number. Combining both, we obtain
$$ -\frac{1}{\ln 2}\, m \;\lesssim\; \log_2 T_m - m \log_2 m \;\lesssim\; \left(2 - \frac{1}{\ln 2}\right) m. $$
5.2. Comparison
We present a comparison of the maximal L1 distances achievable by tree-based and type-based quantization schemes in Figure 3. We consider the cases of m = 5 and m = 10 dimensions. It can be observed that the proposed type-based scheme is more efficient and much more versatile, allowing a wide range of possible rate/distance tradeoffs.
6. CONCLUSIONS
The problem of quantization of discrete probability distributions is studied. It is shown that, in many cases, this problem can be reduced to the covering-radius problem for the unit simplex. A precise characterization of this problem in the high-rate regime is reported. A simple algorithm for solving this problem is also presented, analyzed, and compared to other known solutions.
7. ACKNOWLEDGMENTS
The author would like to thank Prof. K. Zeger (UCSD) for reviewing and providing very useful comments on an initial draft of this paper. The author also wishes to thank Prof. B. Girod and his students V. Chandrasekhar,
[15] T. S. Han and K. Kobayashi, Mathematics of Information and Coding. Boston: American Mathematical Society, 2001.
[16] ITU-T and ISO/IEC JTC1, Digital Compression and Coding of Continuous-Tone Still Images,
ISO/IEC 10918-1 ITU-T Rec. T.81, Sept. 1992.
[17] P. W. Katz, PKZIP. Commercial compression system, version 1.1, 1990.
[18] A. N. Kolmogorov and V. M. Tikhomirov, ε-entropy and ε-capacity of sets in metric spaces, Uspekhi Mat. Nauk, vol. 14, no. 2, pp. 3-86, 1959. (in Russian)
[19] R. E. Krichevsky and V. K. Trofimov, The Performance of Universal Encoding, IEEE Trans. Information Theory, vol. 27, pp. 199-207, 1981.
[20] D. Lowe, Distinctive Image Features from Scale-Invariant Keypoints, International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[21] T. J. Lynch, Sequence time coding for data compression, Proc. IEEE, vol. 54, pp. 1490-1491, 1966.
[22] K. Mikolajczyk and C. Schmid, Performance Evaluation of Local Descriptors, IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 10, pp. 1615-1630, 2005.
[23] M. S. Pinsker, Information and Information Stability of Random Variables and Processes, Problemy Peredachi Informacii, vol. 7, AN SSSR,
Moscow 1960. (in Russian).
[24] R. F. Rice and J. R. Plaunt, Adaptive variable-length coding for efficient compression of spacecraft television data, IEEE Trans. Comm. Tech., vol. 19, no. 1, pp. 889-897, 1971.
[25] J. Rissanen, Universal coding, information, prediction and estimation, IEEE Trans. Inform. Theory, vol. 30, pp. 629-636, 1984.
[26] J. Rissanen, Fisher Information and Stochastic Complexity, IEEE Trans. Inform. Theory, vol. 42, pp. 40-47, 1996.
[27] J. P. M. Schalkwijk, An algorithm for source coding, IEEE Trans. Inform. Theory, vol. 18, pp. 395-399, May 1972.
[28] Yu. M. Shtarkov and V. F. Babkin, Combinatorial encoding for discrete stationary sources, in Proc. 2nd Int. Symp. on Information Theory, Akademiai Kiado, 1971, pp. 249-256.
[29] D. M. Y. Sommerville, An Introduction to the Geometry of n Dimensions. New York: Dover, 1958.
[30] K. Zeger, A. Bist, and T. Linder, Universal Source Coding with Codebook Transmission, IEEE Trans. Communications, vol. 42, no. 2, pp. 336-346, 1994.